Closed: vbuterin closed this issue 2 years ago.
It's worth noting that the input format specification is exactly the ABI encoding of (bytes32, bytes32[], bytes); would it be clearer to specify it as such?
Added a note.
I like this EIP (and more generally the concept) as it is useful for easily implementing Turing-complete state channels, e.g. channels where the destination of the funds depends on the execution of a counter-signed string of EVM bytecode, and/or on the application of several such strings for more complex channels. Also useful for dynamic programming tasks. :+1:
I like this EIP a lot, but I'm a bit uneasy about the implications of this specific proposal: it fixes many details of the protocol that have so far been largely hidden from the EVM. Smart contracts could use blockhash to access some past block hashes, but that was about it. If we change anything in the encoding of transactions in the future, that change will affect the semantics of this precompile.
Also, concerning some of the specific use-cases: I don't think you can use this precompiled contract to verify a Plasma chain and still have a hard fork on the main chain. The fork times would have to be synchronized, and it would be impossible to verify a Plasma transaction that happened before the fork (which is required at least for some transition period).
The only solution I see here is to add some kind of version field as another parameter, which has the drawback that dropping support for an ancient version either has to be explicitly specified or can never be done.
On the TRIE_GET and TRIE_SET topic:
One idea that I have had recently that would be interesting to move Ethereum towards is the "stateless client, first-class witness data" paradigm. In this paradigm, miners/validators would not store state, and it would be the responsibility of the transaction senders to attach a "witness" to their transaction - essentially, a Merkle branch that provides and proves the state of any accounts accessed by the transaction. For a full-scale implementation, see https://github.com/ethereum/EIPs/issues/726, but we can also consider a more limited implementation applying to just storage roots.
Note that the witness would be sent by the transaction sender, but it would not be committed to (ie. signed) by the sender; it would be sent alongside the transaction (yes, this is segwit for ethereum, heh heh heh). Actually committing to the witness, by PoW'ing or signing the block that contains the entire transaction package, would be the responsibility of the miner/block proposer. The reason is that one or more blocks could have appeared since the sender sent the transaction, and so the storage root that the transaction would actually apply to would be different from the storage root that the sender saw when first packaging the transaction. However, a miner/proposer that saw the original witness, and that was active as a full node in the intervening blocks, would see all the instances where storage roots were changed, and so would easily be able to get the new Merkle tree nodes from a cache and thereby reconstruct the witness appropriate for inclusion into a new block. This cache could expire after, say, 5000 blocks, so if a transaction does not get in within that time it would need to be resent (actually, there's a usability rationale for making the cache last even longer, possibly 3+ months).
The only requirement for this is that the specific set of storage keys that could get accessed is part of the data signed by the transaction sender, and that attempts to access other keys throw an exception. Otherwise, a miner could run a transaction, only to discover that it attempts to access data that is not available; the miner would then have wasted cycles processing a transaction that it cannot validly include, and so would earn no revenue for it.
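As a rough sketch of what such a transaction package could look like (the field names access_keys and witness here are invented purely for illustration; nothing below is specified anywhere):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SignedTxBody:
    # Fields covered by the sender's signature
    to: bytes
    value: int
    data: bytes
    # Storage keys the transaction is allowed to touch; accessing anything
    # outside this set throws, so a miner can tell up front whether the
    # attached witness is sufficient.
    access_keys: List[bytes]

@dataclass
class TxPackage:
    body: SignedTxBody
    signature: bytes
    # Merkle branches proving the values of the accessed keys against the
    # current state root. NOT signed, so the miner can refresh the branches
    # from its node cache if the state root has moved on since signing.
    witness: List[bytes]
```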
At first glance (thanks @chriseth for bringing this up), it seems harder to extend this to state roots provided by TRIE_GET and TRIE_SET. The reason is that state roots for these opcodes could come from anywhere, and so it may well be possible that a contract could try running TRIE_SET with a witness that does not exist or is not available.
The versions of TRIE_GET and TRIE_SET provided above get around the problem by requiring the witness to be committed to in consensus, and fail if they are not there, but this is very bad for usability, because it means that if a contract does use these "dynamic tries" to store application state (eg. balances of an ERC20 token), then it would be very hard for two senders to interact with the contract at the same time.
These "dynamic tries" do not have a static location, so one cannot make a "stable pointer" to them in signed transaction data the same way that one can with storage roots. One could solve the front-running problem by moving witness data outside of the signed tx data, but then the lack of "stable pointers" means that either (i) the locations are still in the signed tx data, in which case miners may try to include a tx but get a "witness data not found" error halfway through execution, or (ii) the locations are not in the signed tx data, in which case a miner can make arbitrary state queries fail by taking them out of the transaction when including it.
There may be a clever solution to this, but I am still thinking about it.
The "stateless client, first-class witness data" idea was also suggested in the Least Authority analysis:
Further reductions could be obtained by only storing the root of the entire account tree, and require senders to include the Merkle-tree "uncle" nodes necessary to verify the claimed dataset's presence in that tree.
Is anyone working on this or alternatively, on a language that supports recursive nesting/sandboxing?
I've started https://github.com/Magicking/solidity-evm for https://ethresear.ch/t/state-channel-toy-implementation/1495. I have to find time to work more on it.
This proposal has some interesting implications, one of which is that it becomes necessary to support these calls recursively (VM executing VM executing VM... etc.). Another interesting property is this:
As of now, the EVM only supports "complete" transactions: if it runs out of gas, everything is reverted. Such an instruction could only run subcomputations with fewer resources (gas limit) than it has itself. I'd call this objective/absolute behaviour. Example:
100->30->10
A system supporting partial execution, state serialization and task resumption could support subjective/relative behaviour: a node in the execution chain/tree can run a child with a higher gas limit, but control reverts back to the parent of the earliest tree node that ran out of gas. The VM can be built in a way such that a node doesn't notice it ran out of resources. Example:
Square brackets denote the node which currently has control.
Child of child is currently running:
100->50->[80]
Control reverts back to parent of node:
[50]->0->30
Parent of node decides to refuel child; control is immediately transferred back to where it was:
50->40->[30]
At first, this approach seems inefficient, since all parent nodes have to be checked every instruction step, but this can be optimized by ordering them and only checking the one with the lowest resources.
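A minimal Python sketch of that rule (this is not RarVM itself; the Frame/step names and the one-gas-per-step charging are made up for illustration):

```python
class Frame:
    def __init__(self, gas, parent=None):
        self.gas = gas
        self.parent = parent

    def ancestors(self):
        # Chain from the root down to this frame.
        node, chain = self, []
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain[::-1]

def step(current):
    """Charge one instruction step to the running frame and all of its
    ancestors, then return the frame that should hold control next."""
    chain = current.ancestors()
    for frame in chain:
        frame.gas -= 1
    # Control reverts to the parent of the earliest (root-most) frame that
    # ran out; a real implementation would only check the ancestor with the
    # lowest remaining gas instead of scanning the whole chain every step.
    for frame in chain:
        if frame.gas < 0:
            return frame.parent   # None would mean the root itself is out
    return current

# Mirrors the 100->50->[80] example: the grandchild's limit is higher than
# its parent's, so the middle frame exhausts first and its parent silently
# regains control, free to refuel the subtree and hand control back.
root = Frame(100)
child = Frame(50, parent=root)
grandchild = Frame(80, parent=child)
```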
I figured this would be interesting, which is why I spent a few months creating a resource aware recursive virtual machine that shows that this proposal is not only possible, but opens up a wide range of further semantics: https://esolangs.org/wiki/RarVM
At first, I specified and implemented it recursively (https://gist.github.com/void4/ccbfbf5293c474b971c2a9559cbc0a53), which resulted in a tree traversal (root->current node) on every instruction step. The current implementations (RPython, Rust) are optimized and only switch between nodes when there is a transfer of control.
I'm using separate resource limits/resource vectors: gas for instruction steps and a memory limit (alternatively, memory*gas=memory seconds).
Admittedly, it would be very difficult to implement this on the current EVM, so I'm hoping that the EWASM environment will enable it some day.
Could anyone explain how we could achieve this? I mean, how could we recover an old state from "stateRoot + txs"? Does the witness data include a snapshot of the previous state?
User wants to create a contract that queries the historical state of other contracts, or possibly itself, without having to store this entire historical data in storage. The user does so by simply making the query calls use EXECTX, with the right ABI data to make the call specified inside the transaction, and requiring the user to pass in (i) the header of the historical block, which contains the stateroot and can be verified against the hash provided by the BLOCKHASH opcode, and (ii) the witness data.
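A sketch of that flow in Python-flavoured pseudocode: blockhash and exectx stand in for the BLOCKHASH opcode and the proposed precompile and are passed in only to keep the sketch self-contained; the header indices follow the standard block header field order.

```python
import rlp
from eth_utils import keccak   # keccak-256, i.e. the EVM's "sha3"

def query_historical_state(header_rlp, witness_rlp, query_tx_rlp, blockhash, exectx):
    header = rlp.decode(header_rlp)
    state_root = header[3]                       # stateRoot field
    number = int.from_bytes(header[8], "big")    # block number field
    # (i) the supplied header must hash to something the EVM already trusts
    assert keccak(header_rlp) == blockhash(number)
    # (ii) replay the read-only query against that historical root,
    # resolving all state reads through the supplied witness
    output = exectx(state_root, witness_rlp, query_tx_rlp)
    post_state_root, return_data = output[:32], output[32:]
    return return_data
```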
There has been no activity on this issue for two months. It will be closed in a week if no further activity occurs. If you would like to move this EIP forward, please respond to any outstanding feedback or add a comment indicating that you have addressed all required feedback and are ready for a review.
This issue was closed due to inactivity. If you are still pursuing it, feel free to reopen it and respond to any feedback or request a review in a comment.
Introduces a new precompile EXECTX at address TBD, which accepts input in the following format:
Note that this is the ABI encoding of [bytes32, bytes, bytes], though here we are spelling it out in much more detail because there are as yet no standards for exactly how "strict" ABI verification should be.
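For illustration, a caller could construct the input like this (assuming a recent version of the eth_abi package; nothing in this snippet is part of the proposal itself):

```python
from eth_abi import encode

def build_exectx_input(pre_state_root: bytes, witness_rlp: bytes, tx_rlp: bytes) -> bytes:
    # pre_state_root: 32-byte state root the transaction is executed against
    # witness_rlp:    RLP-encoded list of trie nodes / preimages
    # tx_rlp:         RLP-encoded transaction to execute
    return encode(["bytes32", "bytes", "bytes"],
                  [pre_state_root, witness_rlp, tx_rlp])
```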
The precompile validates that:

- len(input) >= max(witness_start_position + witness_length, tx_start_position + tx_length)
- input[witness_start_position: witness_start_position + witness_length] is a valid depth-1 RLP list (ie. it's a list of binary blobs, NOT a single blob, a list of lists, a mixed list or invalid RLP)
- input[tx_start_position: tx_start_position + tx_length] is a valid RLP-encoded transaction.

For simplicity of specification, oddities such as extraneous data or the witness and tx overlapping are not disallowed; anything that satisfies the above rules is okay.
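A rough Python sketch of these checks, assuming the pyrlp package; decode_transaction is a stand-in for a client's existing transaction decoder and is passed in to keep the sketch self-contained:

```python
import rlp
from rlp.exceptions import DecodingError

def validate_exectx_input(input, witness_start_position, witness_length,
                          tx_start_position, tx_length, decode_transaction):
    # 1. both claimed regions must fit inside the input
    if len(input) < max(witness_start_position + witness_length,
                        tx_start_position + tx_length):
        return False
    # 2. the witness region must decode to a depth-1 RLP list of blobs
    try:
        witness = rlp.decode(input[witness_start_position:
                                   witness_start_position + witness_length])
    except DecodingError:
        return False
    if not isinstance(witness, list) or not all(isinstance(item, bytes) for item in witness):
        return False
    # 3. the tx region must decode to a valid transaction
    try:
        decode_transaction(input[tx_start_position: tx_start_position + tx_length])
    except Exception:
        return False
    return True
```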
The precompile then applies the specified transaction to the specified pre_state_root. When a state query is required, it uses {sha3(x): x for x in witness} as a database. The precompile exits with an error if the transaction is invalid. If execution succeeds, it returns 32+N bytes as output containing the post_state_root in the first 32 bytes and the return data of the call in the remaining N bytes.
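A sketch of the witness-database part in Python; keccak from eth_utils stands in for the EVM's sha3, and the trie and transaction-application machinery are assumed to come from an existing client:

```python
from eth_utils import keccak   # keccak-256, i.e. the EVM's "sha3"

class MissingWitnessError(Exception):
    """A state read reached a node that the witness does not contain."""

def make_witness_db(witness):
    # The spec's {sha3(x): x for x in witness}: the only database the
    # re-executed transaction is allowed to read trie nodes from.
    return {keccak(node): node for node in witness}

def db_get(db, node_hash):
    node = db.get(node_hash)
    if node is None:
        # Missing from the witness: the precompile must exit with an error;
        # it can never fall back to the "current" state.
        raise MissingWitnessError(node_hash.hex())
    return node
```

A client would then point its trie implementation at this database, seed it with pre_state_root, apply the transaction, and concatenate the resulting post_state_root with the call's return data.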
The precompile expects 1000 + len(input data) // 10 gas to perform the initial pre-execution checks, then if those checks pass it expects an additional amount of gas equal to the transaction's startgas to attempt to execute the transaction. If the initial pre-execution checks pass, but the tx execution fails, then all gas is charged; if the tx execution succeeds then 1000 + len(input data) // 10 + gasused is charged.

Note that this precompile is a pure function; it operates entirely on the "state" provided by the given state root and witness data, NOT on the "current" state.
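As a worked example of the charging rule above (assuming the pre-execution checks passed; the concrete numbers are made up):

```python
def exectx_gas_charged(input_len: int, startgas: int, gasused: int, tx_succeeds: bool) -> int:
    precheck = 1000 + input_len // 10
    if tx_succeeds:
        return precheck + gasused    # "1000 + len(input data) // 10 + gasused"
    return precheck + startgas       # failed execution: all gas is charged

# e.g. a 5000-byte input whose tx succeeds after using 21000 gas:
# exectx_gas_charged(5000, 200000, 21000, True) == 22500
```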
Rationale
This allows the EVM to be executed inside the EVM, opening the door to a number of applications: the Ethereum blockchain verifying other EVM-based blockchains, a simpler implementation of read-only cross-shard calls, an interface for making calls to historical state, and similar use cases. It particularly makes the EVM friendly to verifying Plasma chains whose consensus rules, including transaction execution and the chain's PoW/PoS/PoA, are themselves EVM-based.
Currently any of these applications would require essentially creating a full ethereum client inside EVM code, which would add an unacceptably high amount of overhead.
More detailed user stories include:
Alternative approaches
An alternative and more thorough approach would be to make "witness data" be a first-class concept in Ethereum, perhaps being part of each transaction, and adding other opcodes that take advantage of it. For example, we can imagine TRIE_GET and TRIE_SET as precompiles that use witness data, as well as the simpler GET_SHA3_PREIMAGE. In the most extreme version of this paradigm, the entire state of the ethereum blockchain would always be a 32 byte root hash and everything would be implemented with first-class witness data and opcodes such as EXECTX, TRIE_GET and TRIE_SET. However, this would require much more research to create an optimal model.
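To make the idea concrete, here is a sketch of what the simplest of these, GET_SHA3_PREIMAGE, could look like over a first-class witness map; this is purely illustrative, and none of these precompiles are specified here:

```python
from eth_utils import keccak   # keccak-256, i.e. the EVM's "sha3"

def get_sha3_preimage(witness_db, h):
    # witness_db is the transaction's/block's {sha3(x): x} witness map
    preimage = witness_db.get(h)
    if preimage is None:
        raise Exception("preimage not in witness")
    return preimage

# TRIE_GET(root, key) would walk Merkle-Patricia nodes fetched the same way,
# and TRIE_SET(root, key, value) would rebuild the path and return the new
# root, with both reading nodes only from the witness map.
```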