Open · saketh-are opened 9 months ago
Can we link the PRs for the listed tasks to this tracking issue?
I have a couple of draft PRs I'm continuing to iterate on:
I don't anticipate having separate PRs for each subtask, since it won't make sense to merge this until we have all the details right.
Here are some thoughts I had today while thinking about estimation and costs for this feature.
`promise_await_data` seems quite straightforward in isolation too; `promise_and` is where it gets interesting: you can join a bunch of `promise_await_data` dependencies together, and some of the backing structures will have to live for as long as the longest `promise_await_data`! If implemented naively, this combination can result in a surprising amount of gas charged for the amount of work done.

`promise_submit_data` will need to write the submitted data to storage, but that will most likely happen deep in the outskirts of the transaction runtime where gas tracking is no longer conducted, so it is necessary to account for this at the same time the base action cost is charged.
`cost(promise_submit_data) = action_base + storage_base + storage_per_byte`, perhaps? Much like what happens when `promise_then` is invoked.
`promise_submit_data` actually sounds like the most appropriate place to charge gas for storage of the data being submitted, as this is the first point in the logic flow at which both the number of blocks to store the data for and the amount of data being stored are known.

I have addressed the concern of the data being read out multiple times throughout the life of an unresolved promise in the PR referenced just above. Now we are only going to do a simple check for key existence, which simplifies the cost model significantly. In particular, the model no longer needs to account for the period of time between when `promise_submit_data` is called and when the future gets resolved, at least in terms of compute cost. Furthermore, I hear that we're now looking at making the timeout a constant system-wide parameter, rather than a user-controllable one, which is probably a slight simplification as well.
In my mind a correct cost model in the context of these changes looks like this:

`cost(promise_await_data) = action_base + max_timeout * ready_check_cost`

The `action_base` covers the compute resources to set up the action receipt; the `max_timeout * ready_check_cost` part covers the compute resources of checking whether the promise is ready to go, in the worst case on every block. `ready_check_cost` should be roughly equivalent to a single check of key existence in the database.
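To get a feel for the magnitudes involved, the formula can be evaluated with made-up numbers. Everything below is an illustrative sketch; none of these values are real or proposed protocol parameters:

```rust
// Hypothetical sketch of the proposed promise_await_data cost model.
// All numeric values are illustrative placeholders, NOT real protocol
// parameters.

/// `action_base` covers setting up the action receipt; in the worst case a
/// readiness check (a key-existence lookup) runs on every block until the
/// timeout, hence the `max_timeout * ready_check_cost` term.
fn cost_promise_await_data(action_base: u64, max_timeout: u64, ready_check_cost: u64) -> u64 {
    action_base + max_timeout * ready_check_cost
}

fn main() {
    let action_base = 100_000_000_000; // placeholder gas cost
    let max_timeout = 200;             // placeholder system-wide timeout, in blocks
    let ready_check_cost = 30_000_000; // placeholder, ~one key-existence read
    println!("{}", cost_promise_await_data(action_base, max_timeout, ready_check_cost));
}
```

With these placeholder numbers the readiness checks add up to only a few percent of the base cost, which is consistent with the point below that the per-check reads are cheap.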
Compared to other `promise_` operations, the `ready_check_cost` term is unique to this function, owing to it giving resolution timing control away to the end users. For reference, the relevant parameter costs today are `"storage_read_key_byte": 30952533` and `"storage_read_value_byte": 5611005`. These are low enough that I think we could afford not to worry about refunds for unused delay in case the future gets resolved early.

`cost(promise_submit_data) = action_base + cost(storage_write(data)) + cost(storage_read(data)) + cost(storage_for_max_timeout(data))`
Here, the storage terms account for the fact that one can only call `submit_data` successfully once, and that the data needs to be kept around in case the `await_data` this is paired with is part of a `promise_and` and cannot be immediately resolved. If we did want refunds for early resolution, the refund would be the difference between `cost(storage_for_max_timeout(data))` and `cost(storage_for_n_blocks(data, actual_blocks_of_delay))`.
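The refund just described can be sketched in a few lines. This is a simplified illustration, not the runtime's actual fee logic: the function names mirror the formulas above, a flat per-byte-per-block price is assumed, and all numbers are placeholders:

```rust
// Hypothetical sketch of the early-resolution storage refund discussed above.
// Gas is charged up front for storing the data for max_timeout blocks; if the
// future resolves after actual_blocks_of_delay blocks, the unused portion
// could be refunded. All names and numbers are illustrative placeholders.

/// Cost of storing `data_len` bytes for `blocks` blocks, assuming a flat
/// per-byte-per-block price (a simplification).
fn storage_for_n_blocks(data_len: u64, blocks: u64, per_byte_per_block: u64) -> u64 {
    data_len * blocks * per_byte_per_block
}

/// refund = cost(storage_for_max_timeout) - cost(storage_for_n_blocks).
fn early_resolution_refund(
    data_len: u64,
    max_timeout: u64,
    actual_blocks_of_delay: u64,
    per_byte_per_block: u64,
) -> u64 {
    storage_for_n_blocks(data_len, max_timeout, per_byte_per_block)
        - storage_for_n_blocks(data_len, actual_blocks_of_delay, per_byte_per_block)
}

fn main() {
    // 1 KiB of data, 200-block max timeout, resolved after 42 blocks.
    println!("{}", early_resolution_refund(1024, 200, 42, 1_000));
}
```

Even this toy version shows why refunds complicate the model: the refund amount depends on state (actual delay) that is only known long after the original charge was made.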
Refunds raise tricky questions, though: what if the `await_data` continuation is never executed at all? Whom do we refund to -- the contract? The person who originally invoked the function call that then executed `promise_submit_data`? How do we ensure we use the same gas:NEAR price ratio as was used when creating the `submit_data`?

The `gas` argument for `promise_yield_create` should be accompanied by a `gas_weight` argument. I added this to the task list.
Status update, @walnut-the-cat: Fixed-length timeouts are implemented now. Work continues on gas costs and bounding congestion (thanks mainly to @nagisa), as well as on the misc. smaller implementation details documented in this tracking issue.
Very excited about this idea! Thank you, contributors
NEPs: near/NEPs#516 near/NEPs#519
The following branches contain a basic prototype of yield execution supporting the chain signatures use case:
To test out the chain signatures contract:

1. Build `neard` and run localnet.
2. Build `mpc_contract` from `near-sdk-rs/examples`.
3. Create the account `mpc.node0` on localnet: `env NEAR_ENV=localnet near create-account "mpc.node0" --keyPath ~/.near/localnet/node0/validator_key.json --masterAccount node0`.
4. Create the accounts `requester.node0` and `signer.node0`.
5. Deploy the contract: `env NEAR_ENV=localnet near deploy "mpc.node0" <path/to/mpc_contract.wasm>`
6. Call `env NEAR_ENV=localnet near call mpc.node0 sign '{"payload" : "foo"}' --accountId requester.node0`. Observe that the request will hang.
7. Call `env NEAR_ENV=localnet near call mpc.node0 log_pending_requests --accountId signer.node0` to see the data id for the pending request. In a real use case, the signer node would monitor the contract via indexers to get this information.
8. Call `env NEAR_ENV=localnet near call mpc.node0 sign_respond '{"data_id":"<data id here>","signature": "sig_of_foo"}' --accountId signer.node0`.

Note that steps 6-8 are a bit time-sensitive at the moment. If the call to `sign` in step 6 doesn't receive a response from step 8 within roughly a minute, you'll eventually see the message `Retrying transaction due to expired block hash`.

Remaining work includes: