
[Proposal] Differentiate cold and hot contracts #97

Open MaksymZavershynskyi opened 4 years ago

MaksymZavershynskyi commented 4 years ago

Contract compilation is expensive. We have introduced caching for compilation, but unfortunately we currently cannot have different fees for contracts that are in the cache versus those that are not. This means that contract calls are priced based on the worst-case scenario -- when every call leads to a compilation. Unfortunately, we cannot predict whether a contract will need to be compiled, because different nodes that implement the protocol can have different cache settings. However, we can enforce it:

We would need to introduce 2 parameters for the runtime config:

We would need to store the list of the 200 hottest contracts in the trie the way we store delayed receipts. Note, they don't need to be ordered, we just need to store the entries: (code hash, number of times the code was called in the moving window).
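A minimal sketch of what those two pieces could look like; the type and field names below are illustrative assumptions, not actual nearcore definitions:

```rust
/// Illustrative runtime-config parameters (names are assumptions): the
/// compilation fee charged for contracts in the hot set vs. cold ones.
pub struct ContractCompileCosts {
    pub hot_compile_cost: u64,
    pub cold_compile_cost: u64,
}

/// One entry of the (unordered) hot-contract list stored in the trie,
/// next to delayed receipts.
pub struct HotContractEntry {
    /// Hash of the deployed contract code.
    pub code_hash: [u8; 32],
    /// Number of times this code was called within the moving window.
    pub calls_in_window: u64,
}
```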

evgenykuzyakov commented 4 years ago

On a side note, the idea of targeting 1 second of compute may not be correct. There should be extra time to propagate receipts and execution outcomes before starting the next chunk's compute. If we target a 1 second block time, then this has to be taken into consideration.

MaksymZavershynskyi commented 4 years ago

@evgenykuzyakov Good point. However, we are implicitly targeting 1/2 second because blocks that are more than 1/2 full will lead to a steady gas price increase.
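For context, the gas price adjustment has roughly this shape: the price moves up when a chunk is more than half full and down when it is less than half full. The sketch below only illustrates that shape; the exact formula and constants in nearcore may differ.

```rust
/// Rough shape of the per-block gas price update: the adjustment rate (in
/// basis points) scales how far the price moves around the half-full point.
/// Illustrative only, not the exact nearcore formula.
fn next_gas_price(price: u128, gas_used: u64, gas_limit: u64) -> u128 {
    const ADJUSTMENT_RATE_BP: u128 = 100; // 1%, illustrative
    const BP: u128 = 10_000;
    let used = gas_used as u128;
    let limit = gas_limit as u128;
    // multiplier = 1 + rate * (2 * fullness - 1), computed in fixed point
    let numerator = BP * limit + ADJUSTMENT_RATE_BP * 2 * used - ADJUSTMENT_RATE_BP * limit;
    price * numerator / (BP * limit)
}
```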

MaksymZavershynskyi commented 4 years ago

For the record, here are the formulas to compute our current TPS.

Transfer TPS:

    min(gas_limit / 2 / (action_receipt_creation_config.send_not_sir + transfer_cost.send_not_sir),
        gas_limit / 2 / (action_receipt_creation_config.execution + transfer_cost.execution))

which is 2.2k TPS as of 2020-07-20.

Contract call TPS for 300 KiB contracts:

    min(gas_limit / 2 / (action_receipt_creation_config.send_not_sir + function_call_cost.send_not_sir),
        gas_limit / 2 / (action_receipt_creation_config.execution + function_call_cost.execution
                         + contract_compile_base + contract_compile_bytes * 300 * 1024))

which is 200 TPS as of 2020-07-20.
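A small sketch of how that formula is applied; the constants below are placeholders, not the real runtime-config values:

```rust
/// TPS bound from the send and execution halves of a transaction, following
/// the formula above. Inputs are placeholders; real values come from the
/// runtime config.
fn tps(gas_limit: u64, send_cost: u64, exec_cost: u64) -> u64 {
    let send_tps = gas_limit / 2 / send_cost;
    let exec_tps = gas_limit / 2 / exec_cost;
    send_tps.min(exec_tps)
}

fn main() {
    // Hypothetical numbers, only to show how the formula is evaluated.
    let gas_limit = 1_000_000_000_000_000u64; // per-chunk gas limit
    let transfer_send = 200_000_000_000u64;   // receipt creation + transfer, send side
    let transfer_exec = 200_000_000_000u64;   // receipt creation + transfer, execution side
    println!("transfer TPS ~ {}", tps(gas_limit, transfer_send, transfer_exec));
}
```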

evgenykuzyakov commented 4 years ago

We would need to store the list of the 200 hottest contracts in the trie the way we store delayed receipts. Note, they don't need to be ordered, we just need to store the entries: (code hash, number of times the code was called in the moving window).

We don't need the hottest contracts in the trie, but we need all contract calls ordered by block number in a queue. Then we can order them in memory using BTreeSet<(num_calls, hash)> + HashMap<hash, num_calls>: the first is for ordering, the second for lookups and updates.
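A minimal sketch of that in-memory structure (illustrative, not nearcore code):

```rust
use std::collections::{BTreeSet, HashMap};

// Stand-in for the real CryptoHash type; illustration only.
type CryptoHash = [u8; 32];

/// Call counts per contract hash, ordered so the coldest entry is easy to evict.
struct HotContracts {
    capacity: usize,
    by_count: BTreeSet<(u64, CryptoHash)>, // for ordering
    counts: HashMap<CryptoHash, u64>,      // for lookup and updates
}

impl HotContracts {
    fn new(capacity: usize) -> Self {
        Self { capacity, by_count: BTreeSet::new(), counts: HashMap::new() }
    }

    /// Record one call to `code_hash`, keeping both structures in sync.
    fn record_call(&mut self, code_hash: CryptoHash) {
        let count = self.counts.entry(code_hash).or_insert(0);
        if *count > 0 {
            self.by_count.remove(&(*count, code_hash));
        }
        *count += 1;
        self.by_count.insert((*count, code_hash));
        // If we went over capacity, drop the least-called contract.
        if self.by_count.len() > self.capacity {
            let coldest = *self.by_count.iter().next().expect("non-empty set");
            self.by_count.remove(&coldest);
            self.counts.remove(&coldest.1);
        }
    }

    /// Whether the contract is currently in the hot set.
    fn is_hot(&self, code_hash: &CryptoHash) -> bool {
        self.counts.contains_key(code_hash)
    }
}
```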

Once a node syncs the trie for a shard, it has to parse the moving window and reconstruct the cache of the contracts. Once the cache is constructed, the node has to pre-compile contracts to avoid delaying blocks.

@bowenwang1996 Is there a callback when the sync is complete to finalize in-memory operations or do we update them on the fly? If we update them on the fly, then the node has to compile contracts on the fly from the moving window.

bowenwang1996 commented 4 years ago

Is there a callback when the sync is complete to finalize in-memory operations

There is no such thing. We can do this operation when we finalize state sync. How do you want to store the cache information in state?

evgenykuzyakov commented 4 years ago

We need to keep a history of successful calls in the trie, similar to delayed receipts, over a tracking window.

evgenykuzyakov commented 4 years ago

To simplify everything, we can store a singleton key-value record that keeps a Vec<CryptoHash> in LRU order without duplicates. At the beginning of the block you read it; at the end of the block you commit it.

No need for state sync changes. The in-memory cache doesn't need changes, but it has to be at least the size of the persistent cache.

The persistent cache will only be used for charging compilation fees.
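A minimal sketch of that singleton record and its update rule; the function name and the cap are illustrative:

```rust
// Stand-in for the real CryptoHash type.
type CryptoHash = [u8; 32];

/// Move `code_hash` to the front of the singleton LRU list, keeping it
/// duplicate-free and capped. The list is read once at the start of the
/// block and committed back to the trie at the end of the block.
fn touch(lru: &mut Vec<CryptoHash>, code_hash: CryptoHash, max_entries: usize) {
    if let Some(pos) = lru.iter().position(|h| *h == code_hash) {
        lru.remove(pos); // drop the old position so there are no duplicates
    }
    lru.insert(0, code_hash); // most recently used goes first
    lru.truncate(max_entries); // anything past the cap leaves the persistent cache
}
```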

evgenykuzyakov commented 4 years ago

@bowenwang1996 pointed out that it's too easily abusable by having 200 smaller contracts and calling/compiling them.

Another suggestion is to create a time-based cache (based on block height), but without expiration for previous calls; instead, let them decay by half every epoch.

A simple version is to increase the weight of a contract hash on every call by

    coef ** block_height

We then need to maintain the top 200 based on weight.
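Equivalently, the weight of a past call, as seen from the current height, decays by half every epoch. A small sketch of that view; the function name and parameters are illustrative:

```rust
/// Weight of a call made at `call_height`, as seen at `current_height`:
/// it halves every `epoch_length` blocks. This gives the same ordering as
/// adding `coef ** block_height` per call with `coef = 2^(1/epoch_length)`,
/// just normalized to avoid huge exponents.
fn decayed_weight(call_height: u64, current_height: u64, epoch_length: u64) -> f64 {
    let age = current_height.saturating_sub(call_height) as f64;
    0.5f64.powf(age / epoch_length as f64)
}
```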

The issue is that smaller contracts (143 bytes) can kick contracts out of the top 200 much more cheaply than it costs to put 300 KiB contracts into it, which makes it abusable.

We can switch the top-200 cache to a 128 MB cache weighted by the input contract size, but this requires us to properly maintain this cache in the store.

evgenykuzyakov commented 4 years ago

@bowenwang1996 and @mikhailOK suggested another option. Before, we thought compilation was fast relative to disk read/write, so we relied on the in-memory cache. But looking at our contract sizes and the time it takes a single-pass compiler to compile a contract, we should consider the alternative of always keeping the compiled version locally: instead of dropping it from the in-memory cache, we can rely on the disk cache to hold the pre-compiled version.

We can do this at deploy time and increase the cost of the deploy operation. It will be a one-off event and will not affect future function calls. Function calls will assume the contract is already pre-compiled and pre-processed, so the only extra cost is reading the cached version from disk. This assumes you've tracked the shard from the beginning of time, which obviously might not be the case. The disk cache can be shared across shards, so if you tracked a shard, then when it splits you still have all of its contracts pre-compiled. But when you sync to a new shard, you have to start pre-compiling all contracts that you don't have, or try to do this on demand.
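A rough sketch of that flow under the stated assumptions; the trait and function names are made up for illustration, not nearcore APIs:

```rust
// Stand-in for the real CryptoHash type.
type CryptoHash = [u8; 32];

/// Hypothetical persistent, code-hash-keyed cache of compiled artifacts,
/// shared across shards.
trait CompiledCodeStore {
    fn put(&mut self, code_hash: CryptoHash, artifact: Vec<u8>);
    fn get(&self, code_hash: &CryptoHash) -> Option<Vec<u8>>;
}

/// Deploy time: compile once, charge the increased deploy cost, and persist
/// the artifact so later function calls never pay for compilation again.
fn on_deploy(store: &mut dyn CompiledCodeStore, code_hash: CryptoHash, wasm: &[u8]) {
    let artifact = compile(wasm);
    store.put(code_hash, artifact);
}

/// Call time: assume the artifact is on disk; the only extra cost is the read.
/// The cold path covers a node that synced the shard later and compiles on demand.
fn on_call(store: &mut dyn CompiledCodeStore, code_hash: CryptoHash, wasm: &[u8]) -> Vec<u8> {
    if let Some(artifact) = store.get(&code_hash) {
        return artifact;
    }
    let artifact = compile(wasm);
    store.put(code_hash, artifact.clone());
    artifact
}

// Placeholder for the real single-pass compiler.
fn compile(wasm: &[u8]) -> Vec<u8> {
    wasm.to_vec()
}
```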

Pros:

Caveats:

bowenwang1996 commented 4 years ago

Potential disruptions and new vectors of attack due to cold cache after node shard sync.

I suggest that we not consider state sync done until the contracts are compiled, to avoid potential cold-cache attacks. We can spawn several threads to parallelize the process. In fact, I don't think this is a concern for validators because they start catching up in the previous epoch, and I think one epoch is for sure enough time for them to compile all the contracts.
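A rough sketch of the parallel precompilation step (not nearcore code); `compile_and_cache` is a hypothetical helper standing in for the real compile-and-persist logic:

```rust
use std::sync::Arc;
use std::thread;

// Stand-in for the real CryptoHash type.
type CryptoHash = [u8; 32];

/// Pre-compile every contract found in the synced state across `num_threads`
/// workers before declaring state sync complete.
fn precompile_all(code_hashes: Vec<CryptoHash>, num_threads: usize) {
    let hashes = Arc::new(code_hashes);
    let mut handles = Vec::new();
    for worker in 0..num_threads {
        let hashes = Arc::clone(&hashes);
        handles.push(thread::spawn(move || {
            // Each worker handles every `num_threads`-th contract.
            for hash in hashes.iter().skip(worker).step_by(num_threads) {
                // Hypothetical helper: compiles the wasm behind `hash` and
                // stores the artifact (or the failure) in the disk cache.
                compile_and_cache(hash);
            }
        }));
    }
    for handle in handles {
        handle.join().expect("compilation worker panicked");
    }
}

// Placeholder for the real compile-and-cache step.
fn compile_and_cache(_code_hash: &CryptoHash) {}
```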

evgenykuzyakov commented 4 years ago

We can spawn several threads to parallelize the process. In fact, I don't think this is a concern for validators because they start catching up in the previous epoch, and I think one epoch is for sure enough time for them to compile all the contracts.

But it means you have to inspect all accounts and extract code that you need to prepare and compile. Some code compilation will fail, but we still need to cache the result.

bowenwang1996 commented 4 years ago

But it means you have to inspect all accounts and extract code that you need to prepare and compile

We can store the hashes of contracts in state so that it is easier to look them up.

Some code compilation will fail, but we still need to cache the result.

Do you mean that people maliciously submit binary that cannot be compiled? If so why do we need to cache the result?

evgenykuzyakov commented 4 years ago

We can store the hashes of contracts in state so that it is easier to look them up.

We already have it on every account. Otherwise we have to do ref-counting per contract hash, but it's complicated during resharding.

Do you mean that people maliciously submit binary that cannot be compiled?

Yes, you need to remember the attempt so you don't retry it.
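A minimal sketch of remembering failed attempts so the same bad code is never re-prepared; the names and the `compile` placeholder are illustrative:

```rust
use std::collections::HashMap;

// Stand-in for the real CryptoHash type.
type CryptoHash = [u8; 32];

enum CacheEntry {
    /// Successfully compiled artifact bytes.
    Compiled(Vec<u8>),
    /// The preparation/compilation error, remembered so we don't retry.
    Failed(String),
}

struct CompiledContractCache {
    entries: HashMap<CryptoHash, CacheEntry>,
}

impl CompiledContractCache {
    /// Look up the cached result, compiling (and caching the outcome,
    /// success or failure) only on the first attempt.
    fn get_or_compile(&mut self, code_hash: CryptoHash, wasm: &[u8]) -> &CacheEntry {
        self.entries.entry(code_hash).or_insert_with(|| match compile(wasm) {
            Ok(artifact) => CacheEntry::Compiled(artifact),
            Err(err) => CacheEntry::Failed(err),
        })
    }
}

// Placeholder for the real single-pass compiler invocation.
fn compile(_wasm: &[u8]) -> Result<Vec<u8>, String> {
    Ok(Vec::new())
}
```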

ilblackdragon commented 4 years ago

What is the current speed difference between the best WASM interpreter and executing compiled code?

Also can we save compiled code somewhere on disk? What is the difference in time between loading compiled code from disk vs loading WASM + compiling?

Ideally we should compile on deployment and charge gas for that at deployment time (during state sync we would also need to recompile, but there is time for that), and store the already-compiled code in a separate storage location.

SkidanovAlex commented 4 years ago

What is the difference in time between loading compiled code from disk vs loading WASM + compiling?

WASM is also loaded from disk. We can measure precisely, but generally the speed of reading from a random location doesn't depend that much on the size that is being read.

Also can we save compiled code somewhere on disk?

That is the current plan I believe.

ilblackdragon commented 4 years ago

That is the current plan I believe.

Not based on the proposal outlined in this issue, as far as I understand

SkidanovAlex commented 4 years ago

https://github.com/nearprotocol/NEPs/issues/97#issuecomment-674271581

Before, we thought compilation was fast relative to disk read/write, so we relied on the in-memory cache. But looking at our contract sizes and the time it takes a single-pass compiler to compile a contract, we should consider the alternative of always keeping the compiled version locally: instead of dropping it from the in-memory cache, we can rely on the disk cache to hold the pre-compiled version.

MaksymZavershynskyi commented 4 years ago

FYI, @bowenwang1996 and @mikhailOK's proposal is still a protocol-level change that will also affect the app layer, because contract calls will no longer be able to return preparation/compilation errors.

MaksymZavershynskyi commented 4 years ago

Discussed it with @evgenykuzyakov. I agree that the modified proposals would work. I did a quick computation: compiling 100 contracts of 200 KiB each takes approximately 8 seconds, i.e. roughly 80 ms per contract.

@evgenykuzyakov also has a good idea for how to retrofit it with error messages without breaking our protocol too much.