near / NEPs

The Near Enhancement Proposals repository
https://nomicon.io

Runtime triggers #516

Open bowenwang1996 opened 12 months ago

bowenwang1996 commented 12 months ago

It is sometimes useful for a smart contract to subscribe to some event and perform an action based on its result. One simple example is a cron job. Currently smart contracts have no way of scheduling an action that is periodically triggered (here the triggering event is time), so anyone who wants this has to rely on offchain infrastructure that periodically calls the smart contract, which is quite cumbersome. A more complex use case is chain signatures, where execution needs to resume once the signature is generated by validators.

In terms of effect, this should be roughly equivalent to the smart contract having a method that constantly calls itself:

fn trigger(&self) -> Promise {
  if condition {
     do_something();
  }
  Self::ext(env::current_account_id()).trigger()
}

However, a function like this is not practical as is because of the gas limit attached to each function call. Instead, we can extend the callback mechanism so that a callback is not triggered immediately, but only when a specific condition is met. More specifically, we can introduce a new type of promise, DelayedPromise, that generates a postponed receipt, similar to what callbacks do today. The postponed receipt, however, is stored globally in the shard rather than under a specific account (at least conceptually; in practice a separate index could be created if that helps), along with the condition that triggers the delayed promise. Then, during the execution of every chunk, the runtime first checks whether any delayed promise should be triggered and executes those whose triggering condition is met. Roughly, this would allow us to rewrite the example above as

fn trigger_condition(&self) -> bool {
   // some condition specified by the contract
   condition
}

fn trigger(&self) -> Promise {
  if self.trigger_condition() {
     do_something();
  }
  // Specify the trigger condition, the callback and the callback's arguments.
  // This is almost a call to self with a callback, except that it also
  // specifies the trigger condition.
  DelayedPromise::new(env::current_account_id(), trigger_condition, trigger, {})
}

For this idea to work, a few issues need to be resolved:

DavidM-D commented 12 months ago

Thanks for writing up this proposal Bowen; it appears to me to be very sensible and to solve our problem. I've got a few suggestions for how I think we can make this simpler and potentially more powerful using some well-understood primitives.

If we consider an external action of type Input -> Output, the typed extended host function API I propose is:

yield : Input -> ()

resume : ActionReceipt -> (Output -> b) -> Output -> ()

Or rendered slightly closer to the WASM API

yield : Bytes -> ()

resume : ActionReceiptId -> String -> [u8] -> ()

Each yield emits a new receipt type YieldedReceipt.

type YieldedReceipt = {
    data: [u8]
}

Resume continues the execution in the ActionReceipt context, consuming the gas of the original caller.

Resume calls can be executed from the first yield call until value_return, panic or abort is called during any resumption. In the interim the contract runtime needs to store no data beyond what it stores when there is an outstanding promise. Resume and yield do not have to be called the same number of times.
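These lifecycle rules can be modelled as a small state machine. The sketch below is plain Rust with invented names (`Execution`, `ExecState`) and only illustrates the assumed semantics: resume is rejected before the first yield and after termination, and yield and resume need not be paired:

```rust
#[derive(Debug, PartialEq)]
enum ExecState {
    Running,
    Yielded,
    Finished,
}

struct Execution {
    state: ExecState,
    resumed_with: Vec<Vec<u8>>,
}

impl Execution {
    fn new() -> Self {
        Execution { state: ExecState::Running, resumed_with: Vec::new() }
    }

    // Suspends replying to the caller. Named with a trailing underscore
    // because `yield` is a reserved word in Rust.
    fn yield_(&mut self, _data: Vec<u8>) {
        assert!(self.state != ExecState::Finished, "cannot yield after termination");
        self.state = ExecState::Yielded;
    }

    // Legal only between the first yield and termination; may be called a
    // different number of times than yield.
    fn resume(&mut self, output: Vec<u8>) -> Result<(), &'static str> {
        if self.state == ExecState::Yielded {
            self.resumed_with.push(output);
            Ok(())
        } else {
            Err("no outstanding yield")
        }
    }

    // value_return (like panic or abort) terminates the execution; no
    // further resume calls are accepted.
    fn value_return(&mut self) {
        self.state = ExecState::Finished;
    }
}

fn main() {
    let mut e = Execution::new();
    assert!(e.resume(vec![1]).is_err()); // resume before any yield fails
    e.yield_(vec![0]);
    assert!(e.resume(vec![1]).is_ok()); // multiple resumes are allowed
    assert!(e.resume(vec![2]).is_ok());
    e.value_return();
    assert!(e.resume(vec![3]).is_err()); // resume after termination fails
    assert_eq!(e.resumed_with.len(), 2);
}
```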

To sketch out how this would be used in the sign endpoint:

signer.near:
  // Caller calls and pays
  sign_start : (payload, derivation) {
    let caller = caller();
    // MPC service listens for these yields and calls resume with the result
    let sign_request_id = generate_request_id();
    yield((sign_request_id, payload, derivation, caller))
    store.insert(sign_request_id, (payload, derivation, caller))
  }

  // MPC service calls and pays
  sign_resume : (signature, sign_request_id, action_receipt_id) {
    let (payload, derivation, caller) = store.get(sign_request_id)
    assert_valid_sig!(signature, payload, caller)
    resume(action_receipt_id, "sign_finish", signature)
  }

  // Resume calls and caller pays
  sign_finish : (signature) {
    // Returns value back along the call tree
    value_return(signature)
  }

More generally, when the in-contract state is included, this amounts to an untyped effect system. Potential use cases are:

bowenwang1996 commented 12 months ago

@DavidM-D sorry I don't think I fully understand your proposal. A couple of questions:

nagisa commented 12 months ago

Correct. The complication arises from the fact that WASM execution state is quite intertwined with the native state, so we can't just define some explicit data structures and save those; we would need to save things like the stack, machine registers and such as well. Fortunately, since the spots where the saving and restoring might happen are well defined (by virtue of being specific user-invoked functions), some of the concerns about adapting codegen to deal with being suspended and relocated at a later date aren't as prominent.

akhi3030 commented 12 months ago

There is some prior work where it is possible to save some execution context. Essentially it relies on the async system provided by Rust to enable this support. https://internetcomputer.org/docs/current/developer-docs/backend/rust/intercanister is a simple example of how something like this can be used to call another smart contract, yield execution until the response comes back, and then continue execution. https://github.com/dfinity/cdk-rs/blob/main/src/ic-cdk/src/futures.rs is the SDK implementation of this feature.

itegulov commented 12 months ago

I don't think we need to save the execution context for this feature, although it is an interesting topic in and of itself that potentially deserves its own NEP (I remember briefly discussing how this would look in the Rust SDK last year). Let me rewrite @DavidM-D's code in a way that is representable with the existing tooling:

pub struct Signer {
  ...
}

impl Signer {
  /// Caller wants to sign payload using a key derived with the given path
  pub fn sign_request(payload: Vec<u8>, derivation_path: String) -> Promise<Signature> {
    let sign_request_id = generate_request_id();
    store.insert(sign_request_id, (payload, derivation_path, env::signer_account_id()));
    // Yield to self (current account id), meaning only this contract can resume. Pre-allocates gas for the resume
    // transaction.
    YieldPromise::new(env::current_account_id(), {sign_request_id, payload, derivation_path, caller}, 30k TGas)
      .then(Promise::new(env::current_account_id(), "sign_on_finish"))
  }

  /// Caller wants to fulfill a signature request and resume the chain of promises by the given receipt id
  pub fn sign_response(signature: Signature, sign_request_id: RequestId, action_receipt_id: ActionReceiptId) -> Promise<()> {
    assert_is_allowed_to_respond!(env::signer_account_id());
    let (payload, derivation, caller) = store.get(sign_request_id);
    assert_valid_sig!(signature, payload, caller);
    // Resumes a yielded promise from the corresponding `sign_request` call and consumes gas preallocated for
    // the resume transaction, thus refunding caller the gas they have spent so that ideally calling `sign_response`
    // did not cost anything.
    ResumePromise::new(env::current_account_id(), action_receipt_id, {sign_request_id, signature})
  }

  #[private]
  pub fn sign_on_finish(sign_request_id: RequestId, signature: Signature) -> Signature {
    store.remove(sign_request_id);
    signature
  }
}

The way I see it, runtime triggers are essentially polling the contract's state to see if it has changed in a specific way, but that can very straightforwardly be replaced by scheduling a ResumePromise at the point where you change the contract state in that specific way. I am not sure if cross-contract polling was ever considered or requested (i.e. being able to return Promise<bool> in trigger_condition), but even then the polling can be done offchain through free view calls and then just "proved" once at the end when scheduling the ResumePromise. Happy to provide a more concrete example of the above if unclear.

akhi3030 commented 11 months ago

I just had a call with @itegulov to understand their proposal above a bit better. I like it very much; I think it is much simpler than what was proposed originally.

In the original proposal, we need a way to call into the contract regularly to allow it to implement polling to decide if the desired condition is met. However, as @itegulov points out, if the polling function is inspecting some state in the smart contract that is going to be updated over time by some other calls, then instead of polling, the contract can realise from those other calls that the condition is met and then resume execution.

The biggest challenge though is to figure out how to delay execution until the condition is met. More specifically, we have the following case:

There is some related work that we can explore. This work is trying to come up with a clean way to implement what are effectively asynchronous host functions. It works as follows:

The above solves the problem of having multiple call trees that need to be conflated and solves the problem of delaying responding to a request till some condition is met.

The problem with this solution as I see it is that it will move many bits of the signature aggregation into the protocol.

bowenwang1996 commented 11 months ago

@akhi3030 on your proposal with virtual smart contract, how exactly does it solve the issue mentioned in the previous approach that is related to the ability to trigger a callback? You still need a way for the virtual smart contract to signal that the action is complete and the callback can be executed.

akhi3030 commented 11 months ago

@bowenwang1996: I chatted with @DavidM-D earlier and am in line with what @itegulov is proposing above. I think it is simpler to implement in the protocol and also a more general framework.

Current situation

When a contract A calls B, B can either respond to A or it needs to call [another] contract. In other words, there is currently no way for B to say: I am not ready to respond to A yet, but hopefully I will be in the near future. It either has to produce a result for A or keep calling another contract to delay replying to A. And due to gas limits, it can only delay replying to A for so long before the gas limit is exhausted and an error is returned to A.

Proposal

Introduce a yield / resume API that allows B to delay replying to A for arbitrarily long.

When A calls B, we introduce a host function: yield that B can call to suspend replying to A. Then in future, another contract C (or A, it doesn't matter) calls into B and B decides that it is now ready to respond to the original call from A. At this point, it can call resume which will now generate a response for A.

How this will work for the fastauth project?

A smart contract calls the signer contract to get a payload signed. The signer contract initiates the signing process and calls yield. The indexers prepare the signature and call into the signer contract. After the signer contract has aggregated enough signatures and verified that they are correct, it calls resume to return the signature to the original caller.

I think @itegulov's proposed API above shows how this would look.

Changes needed in the protocol

I imagine that the changes needed in the protocol are as follows: when yield is called by a contract, we need to create some state to store the "delayed execution", and when resume is called, we need to look up the respective "delayed execution" and continue it.

We will have to figure out how to charge for the additional state created when yield is called. One idea is to put a time limit on how long this state is kept around. Something that is yielded can be kept around for a fixed amount of time, and if it is not resumed during that time, it is resumed with an error. We could set the time limit ourselves, or we could allow the contract to specify it (with an upper bound). Having such a time limit allows us to figure out how much gas should be charged for this storage.
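As a back-of-the-envelope illustration of this charging model, the prepaid storage gas could scale linearly with the requested limit of N blocks, with the unused portion refundable on early resume (an idea raised later in this thread). All constants below are invented for illustration; they are not protocol parameters:

```rust
// Illustrative assumptions, not protocol parameters.
const BYTES_PER_YIELD: u64 = 500;       // assumed upper bound on stored state
const GAS_PER_BYTE_PER_BLOCK: u64 = 10; // assumed storage gas rate

// Gas prepaid for keeping a yielded execution alive for up to `n_blocks`.
fn yield_storage_fee(n_blocks: u64) -> u64 {
    BYTES_PER_YIELD * GAS_PER_BYTE_PER_BLOCK * n_blocks
}

// If the execution is resumed after k < n blocks, the gas for the remaining
// n - k blocks of storage can be refunded.
fn refund_on_resume(n_blocks: u64, resumed_after: u64) -> u64 {
    yield_storage_fee(n_blocks.saturating_sub(resumed_after))
}

fn main() {
    let prepaid = yield_storage_fee(100);
    assert_eq!(prepaid, 500 * 10 * 100);
    // Resumed after 40 blocks: 60 blocks' worth of storage gas comes back.
    assert_eq!(refund_on_resume(100, 40), 500 * 10 * 60);
    // Timed out (never resumed early): nothing to refund.
    assert_eq!(refund_on_resume(100, 100), 0);
}
```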

akhi3030 commented 11 months ago

@bowenwang1996, @itegulov, @DavidM-D, @saketh-are, @walnut-the-cat: just so we are all on the same page, we are now planning on moving forward with the proposal in the comment above. There are still a couple of open questions for me about the API that I will make subsequent posts to clarify. Saketh and I will then write up a draft NEP so that we are all agreed on the precise API and then start working on an implementation in nearcore.

akhi3030 commented 11 months ago

@itegulov, @DavidM-D, @saketh-are: I have a question about the API for yield and resume.

In order for the API to be generic enough, it must be possible for a given contract to have multiple outstanding yielded "executions" that can later be resumed. It must also be possible for them to be ready for resumption in a different order than the one in which they were yielded. As such, the contract needs to specify which "execution" to resume. Does this concern make sense?

So what I am thinking is that yield needs to return a "token", an opaque identifier that the contract needs to save and pass in to resume to indicate which yielded execution to resume.

@itegulov: Looking at your comment, the closest thing I see resembling this identifier is sign_request_id. From your example, it seems like it is something that the application is generating internally. Any thoughts on how we could solve the above issue?

itegulov commented 11 months ago

@akhi3030 yes, this is what I meant by action_receipt_id. sign_request_id is indeed a domain-specific thing here. The idea is that sign_request(...) results in a Promise<Signature>, which is what the Rust SDK uses to denote a receipt id that is supposed to resolve to a value of a specific type. And then we reuse that receipt id to reference the yielded execution.

akhi3030 commented 11 months ago

@itegulov: cool, thanks, then we are in agreement on this.

A follow-on question is that we need some sort of "time limit" for how long a yielded execution can stay alive. This is because each yielded execution will create some state in the protocol that needs to be paid for. Otherwise, if a contract keeps forgetting to call resume on yielded executions, that state will accumulate in the protocol.

My high-level idea is that when some execution is yielded and it is not resumed within N blocks, the protocol will resume it with a timeout error. Something like this could then also be used as a feature if a contract just wants a callback sometime in the future.

There are two options for an API here. The first is that the N blocks specified above is a protocol constant that cannot be changed, and the gas fees are calculated based on it. But we could consider generalising this a bit more and say that the contract can specify N (with some upper bound if needed), and then the gas fee is calculated accordingly. Does anyone have opinions on which approach to take here?

An additional API we could offer is that when N is about to run out, the contract could request to extend the time that the execution is yielded for. I think we should keep this as future work: we can have the simpler API for now and build on it in the future if needed.

DavidM-D commented 11 months ago

Do we have to have an opaque reference type? Is it not possible to do it all at the application level? Surely if the contract has to verify the response it can also associate it with the correct callback?


akhi3030 commented 11 months ago

Do we have to have an opaque reference type. Is it not possible to do it all on the application level? Surely if the contract has to verify the response it can also associate it with the correct callback?

@DavidM-D: I am not sure how that would work. For each yielded execution, the protocol has to create and store some state. And when the application wants to resume some execution, it has to tell the protocol which yielded execution to continue.

itegulov commented 11 months ago

@akhi3030: having a timeout mechanism seems very reasonable to me. We can have a retry mechanism at the application level and even potentially subsidize it (e.g., set up a relayer that, given proof that you sent a transaction that resulted in a timeout, funds a new retry transaction for you).

There are two options for an API here. First is that the N blocks specified above is constant of the protocol that cannot be changed and then the gas fees are calculated based on that. But we could consider generalising this a bit more and say that the contract can specify N (which some upper bounds if needed) and then the gas fee will be calculated accordingly. Does anyone have opinions on which approach to take here?

Let me think this through; in the meantime I have a couple of counter-questions. Let's say the execution got resumed after K blocks. Do you think it would be possible to refund the unspent gas for the N-K blocks of not storing the yielded execution? Also, what sort of magnitude would this block limit be: 10s, 100s, 1000s of blocks?

An additional API we could offer is that when N is about to run out, the contract could request to extend the time that the execution is yielded for. I think that we should keep this as future work. I think we can have the simpler API for now and build this in future if needed.

Agreed, this can be kept as a potential future extension. I don't see this being particularly useful for our use case, but a need might arise from somewhere else.

akhi3030 commented 11 months ago

Let me think this through, in the meantime I have a couple of counter-questions. Let's say the execution got resumed in K blocks. Do you think it would be possible to refund unspent gas for N-K blocks of not storing the yielded execution?

Good point. Yes, it would make sense to refund the remaining gas for N-K blocks.

Also, what sort of magnitude would this block limit be - 10s, 100s, 1000s of blocks?

Hmm... I don't have a good intuition for this yet. It will depend on how much state we need to store in the protocol. I imagine we are probably talking about storing less than around 500 bytes per yielded execution. So we will have to do some gas estimations to figure out how much gas we should charge per block for that much storage. Maybe you have some intuition for this?

The other thought I just had is that we probably do not need to specify an upper limit in the protocol on how big N can be. There will be an implicit maximum value for N based on how much gas the account has left to burn, and maybe that is a good enough upper limit.

walnut-the-cat commented 11 months ago

We will have to figure out how to charge for the additional state created when yield is called. One idea is that there is actually a time limit on how long this state is kept around for. Something that is yielded can be kept around for a fixed amount of time and if it not resumed during that time, it is resumed with an error.

This sounds very much like the previous discussion on charging delayed receipts for their storage usage.

My high level idea is that when some execution is yielded and it is not resumed within N blocks, then the protocol will resume it with a timeout error. Something like this could then also be used a feature if a contract just wants a callback sometime in the future.

What's the realistic feasibility of this? Currently, we do not guarantee when delayed receipts will be executed in the future, and it seems that for this type of timeout to work, we need to provide some guarantee such as 'resume() will be executed one block after the condition is met'.

akhi3030 commented 11 months ago

What's the realistic feasibility of this? Currently, we do not guarantee when delayed receipts will be executed in the future, and it seems that for this type of timeout to work, we need to provide some guarantee such as 'resume() will be executed one block after the condition is met'.

I don't think we need to provide any guarantee on when something will execute. Once the timeout passes, we just need to mark the receipt ready to be executed. It doesn't matter when it actually executes.

akhi3030 commented 11 months ago

This sounds very much like the previous discussion on charging delayed receipts for their storage usage..

Yes. This is like the single trigger model which should work well with the current gas model.

walnut-the-cat commented 11 months ago

I don't think we need to provide any guarantee on when something will execute. Once the timeout passes, we just need to mark the receipt ready to be executed. It doesn't matter when it actually executes.

But doesn't that mean users cannot control or estimate how much gas will be refunded or what 'N (which some upper bounds if needed)' should be?

I am afraid this will result in a situation similar to the one we have with gas attachment for transaction calls (where users always use the highest/largest value).

akhi3030 commented 11 months ago

No, that is not how I envision the API working. If the contract requests that the execution be yielded for up to N blocks, then in the worst case the amount of gas it pays will be a function of N. However, just because the contract requested that the callback happen at most N blocks later doesn't mean that the protocol has to provide that guarantee. The protocol will provide the guarantee that the call will not take place before N blocks; however, due to congestion and being busy, the protocol can choose to arbitrarily delay when the callback happens.

akhi3030 commented 11 months ago

I have created a draft NEP for this work.

walnut-the-cat commented 11 months ago

just because the contract requested that the callback happen at most N blocks later, doesn't mean that the protocol has to provide that guarantee.

I agree that the protocol doesn't have to provide such a guarantee, but in the case where a callback is yielded for N blocks and times out because it couldn't get executed within the time limit, will the contract end up wasting gas for nothing?

The protocol will provide the guarantee that the call will not take place before N blocks

Just to be clear, the N used here is different from the N used in the previous sentence, as we are talking about a 'minimum' delay instead of a 'maximum' delay?

saketh-are commented 11 months ago

I agree that the protocol doesn't have to provide such a guarantee, but in the case where a callback is yielded for N blocks and times out because it couldn't get executed within the time limit, will the contract end up wasting gas for nothing?

Suppose that a contract requests that execution be yielded for up to N blocks.

The protocol will provide the guarantee that the call will not take place before N blocks

This guarantee is referring to the fact that the timeout case won't be triggered by the protocol within N blocks.

saketh-are commented 11 months ago

Following a discussion with @bowenwang1996, I propose removing the timeout feature for a couple of reasons:

Without a timeout, we would take a storage deposit to pay for storing the yielded computation indefinitely.

DavidM-D commented 11 months ago

Will storage deposits be denominated in gas or NEAR? NEAR storage deposits make for complex interactions with relayers.

bowenwang1996 commented 10 months ago

Will storage deposits be denominated in gas or NEAR? NEAR storage deposits make for complex interactions with relayers.

Normally it should be in NEAR. However, I agree that the interactions would be more complex. Given that such a receipt will be quite small (a few hundred bytes), I think we can also consider burning gas (similar to zero-balance accounts).

akhi3030 commented 10 months ago

I have some more questions about how things would work if we do not have timeouts. I mentioned them in a Slack thread but am also mentioning them here in case someone else is following the conversation.

If we do not have timeouts, then the following situation can happen:

This can create some problems such as:

bowenwang1996 commented 10 months ago

Summary of a meeting with @DavidM-D @itegulov @saketh-are:

encody commented 10 months ago

Re: storage costs.

Lightweight Yields

Couple of questions: is it reasonable to assume that a YieldReceipt may only be resumed by the contract that created it? It seems like it, given the examples being discussed, but I don't think it has been explicitly said. If so, storage costs can be reduced, even bounded. (If not, ignore the rest of this, I guess.) It should not drastically change the usability of the API: external actors can still resume yields via a cross-contract call to this contract, which then resumes the requested yield.

First, it obviates the need to specify a resuming contract ID, and it could eliminate the need to specify a resuming function name if a standard one is agreed upon (not required for this optimization, and it leads to slightly worse devx imo).

However, it would also, and more importantly, eliminate the need to support promise chaining (.then-ability), which is a source of dynamic memory consumption, because all promise chains can be calculated and submitted as normal from the resumed callback (and functionally equivalently, since the YieldPromise and its resumption are technically still part of the same call chain).

An API similar to that described by @itegulov above:

// Yield to self (current account id), meaning only this contract can resume. Pre-allocates gas for the resume
// transaction.
YieldPromise::new(env::current_account_id(), {sign_request_id, payload, derivation, caller}, 30k TGas)
  .then(Promise::new(env::current_account_id(), "sign_on_finish"))

Can therefore be simplified to have an upper-bound in storage cost. The arguments struct in particular can be replaced in favor of a contract-generated resume promise ID (e.g. sign_request_id in this case). Let us say that this is a u128 (16 bytes). Additional data can be stored in normal storage, indexed by that ID, incurring normal storage costs, and retrieved by the resuming function as needed. This also would allow the contract to clean up some associated state for never-resumed yields.

| Element | Byte Cost |
| --- | --- |
| Resume Promise ID (contract-generated) | 16 |
| Gas Amount | 16 |
| Receipt ID (vm-internal) | 32 |
| Resume Function Name (optional) | 256 |
| Total | 320 |
| Total (without function name) | 64 |

The near_sdk could provide helpers or method wrappers that support an API similar to the above.

Of course, this is not a complete picture (e.g. I assume that keeping an action receipt around unresolved, like the one referenced in the YieldPromise, costs more than just storing the bytes of its ID), but it may simplify some calculations.
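For what it's worth, the totals in the table can be recomputed mechanically. The sketch below (plain Rust, field sizes taken directly from the table, function names invented) just confirms the arithmetic behind the claimed upper bound:

```rust
// Field sizes from the byte-cost table above.
const RESUME_PROMISE_ID_BYTES: u64 = 16; // contract-generated u128
const GAS_AMOUNT_BYTES: u64 = 16;
const RECEIPT_ID_BYTES: u64 = 32;        // vm-internal
const RESUME_FN_NAME_BYTES: u64 = 256;   // optional

// Every field is fixed-size, so the per-receipt storage cost is bounded
// regardless of the size of the application's arguments.
fn total_with_fn_name() -> u64 {
    RESUME_PROMISE_ID_BYTES + GAS_AMOUNT_BYTES + RECEIPT_ID_BYTES + RESUME_FN_NAME_BYTES
}

fn total_without_fn_name() -> u64 {
    RESUME_PROMISE_ID_BYTES + GAS_AMOUNT_BYTES + RECEIPT_ID_BYTES
}

fn main() {
    assert_eq!(total_with_fn_name(), 320);
    assert_eq!(total_without_fn_name(), 64);
}
```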

akhi3030 commented 10 months ago

Thanks for the feedback, @encody. The plan is indeed that only the contract that yielded a receipt can then resume it.

saketh-are commented 10 months ago

Here's how the proposed yield_create/yield_resume API could fit into the MPC contract. Thanks @itegulov for some helpful input on this already.

pub struct MpcContract {
    protocol_state: ProtocolContractState,
    // Maps yielded promise id to the requested payload
    pending_requests: LookupMap<PromiseId, [u8; 32]>,
}

impl MpcContract {
    // Called by the end user; accepts a payload and returns a signature.
    pub fn sign(&mut self, payload: [u8; 32]) -> Promise {
        let promise = yield_create(
            // Callback with return type Option<Signature>
            Self::ext(env::current_account_id()).sign_on_finish(),
            YIELD_NUM_BLOCKS,
            STATIC_GAS_ALLOTMENT,
            GAS_WEIGHT
        );

        self.pending_requests.insert(env::promise_id(promise), payload);
        promise
    }

    // Called by an MPC node to submit a completed signature
    pub fn sign_respond(&mut self, request_promise_id: PromiseId, signature: Signature) {
        assert_is_allowed_to_respond!(env::current_account_id());

        if let Some(payload) = self.pending_requests.get(&request_promise_id) {
            assert_valid_sig!(payload, signature);

            // The arg tuple passed here needs to match the type signature of the callback function
            yield_resume(request_promise_id, (request_promise_id, Ok(signature),));
        } else {
            env::panic_str("Unexpected response");
        }
    }

    // Callback made after the yield has completed, whether due to resumption or due to timeout
    pub fn sign_on_finish(
        &mut self,
        request_promise_id: PromiseId,
        signature: Result<Signature, PromiseError>
    ) -> Option<Signature> {
        self.pending_requests.remove(request_promise_id);
        signature.ok()
    }
}

akhi3030 commented 10 months ago

Thank you @saketh-are! Some questions:

saketh-are commented 10 months ago

Thanks, I have edited above:

akhi3030 commented 10 months ago

Thanks for making the change. I think the following change might still be needed to make the types match up:

-             yield_resume(request_promise_id, (request_promise_id, signature,));
+            yield_resume(request_promise_id, (request_promise_id, Ok(signature),));

Otherwise, this looks good to me.

saketh-are commented 9 months ago

After some tinkering on implementation I think it makes more sense to design the host functions in the following way:

promise_await_data(account_id, yield_num_blocks) -> (Promise, DataId);

promise_submit_data(data_id, data);

Simply, promise_await_data creates a Promise which will resolve to the data passed through promise_submit_data.

We can rely on the composability of promises to attach a callback and consume the data as desired. MpcContract::sign in the example shared above would instead look like:

impl MpcContract {
    pub fn sign(&mut self, payload: [u8; 32]) -> Promise {
        let (promise, data_id) = promise_await_data(
            env::current_account_id(),
            YIELD_NUM_BLOCKS
        );

        self.pending_requests.insert(data_id, payload);

        promise.then(Self::ext(env::current_account_id()).sign_on_finish())
    }
}

akhi3030 commented 9 months ago

@saketh-are: I like this simplification very much! Question: why do we need to include the account_id when calling promise_await_data()? Should it not always be the current account id?