Closed: ielashi closed this PR 2 weeks ago
🤖 Here's your preview: https://oov64-qqaaa-aaaak-qcnsa-cai.icp0.io/docs
@mraszyk I didn't make any changes to the formal model in this PR yet, as I first wanted to get your feedback that the proposal as-is looks good.
@ielashi @mraszyk I assume that we're fine with the current status of this PR, so can the formal model be updated so that we can move this to "FINAL" and be ready to start implementation at our convenience?
@mraszyk Is the formal model something that you'd update yourself or should we look into that on our side?
> I assume that we're fine with the current status of this PR

How about the call_on_cleanup analogy that I raised in the unresolved comment above?
@mraszyk Doesn't this answer your question?
@ielashi I replied on that thread:

> The reason for that is that call_on_cleanup, unlike this hook, cannot make outbound calls.
> As long as it can schedule a one-off timer, I don't see a major issue with this limitation.

You gave it a thumbs up, so I thought you had seen it.
I did, and my impression was that the issue was resolved; did I misunderstand?
We discussed this PR in yesterday's spec meeting together with @dsarlis. We concluded that it makes more sense to guarantee that enough cycles are reserved for the hook, and that the hook runs as the first message after the wasm memory limit threshold is crossed. It is then fine to treat it as a system task (rather than, e.g., like call_on_cleanup).
@mraszyk @ielashi @dsarlis Question 1: When do you think this hook should be executed?

Question 2: If a canister executes multiple messages (and/or tasks) in a single round, should the hook be invoked before every execution?
As described above, we suggest keeping the old semantics (specifying a limit on allocated heap memory rather than on "remaining" memory). In that case the scheduling is also much clearer: execute the hook just once, immediately after the execution that causes the canister to exceed the limit.
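The suggested scheduling can be sketched as a toy single-round loop. This is a hypothetical model, not the replica's code; the names (`run_round`, `execute`, `hook`) and the "fire once, immediately after the crossing execution, before any remaining messages" behavior are assumptions taken from the discussion above.

```python
def run_round(messages, usage, limit, execute, hook, hook_ran=False):
    """Toy scheduler sketch: after each message execution, if wasm memory
    usage has just crossed `limit`, run the hook right away, before the
    remaining messages of the round, and at most once per crossing."""
    for msg in messages:
        usage = execute(msg, usage)
        if usage > limit and not hook_ran:
            hook_ran = True
            hook()  # runs immediately after the execution that crossed the limit
    return usage, hook_ran
```

With three messages growing memory by 10, 50, and 10 bytes against a limit of 40, the hook fires exactly once, after the second execution.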
@Dfinity-Bjoern @mraszyk If we start keeping the hook's state as one of {"Condition is not satisfied", "Ready to be executed", "Executed"} and use it to determine whether to run the hook, do you think we should persist that information in snapshots?
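A minimal sketch of that three-state bookkeeping, assuming the transitions implied by the discussion (condition becomes satisfied: "Ready"; hook runs: "Executed"; condition stops holding: back to "Condition is not satisfied"). The enum and function names are illustrative, not the replica's.

```python
from enum import Enum, auto

class HookStatus(Enum):
    CONDITION_NOT_SATISFIED = auto()
    READY = auto()      # condition holds, hook not yet run
    EXECUTED = auto()   # hook already ran for this crossing

def update(status, condition_satisfied):
    """Advance the hook status given the current condition (illustrative)."""
    if not condition_satisfied:
        return HookStatus.CONDITION_NOT_SATISFIED
    if status == HookStatus.CONDITION_NOT_SATISFIED:
        return HookStatus.READY  # condition newly crossed: schedule the hook
    return status                # READY and EXECUTED are unchanged while it holds
```

Note that if this status were persisted in snapshots, restoring a snapshot taken in the `EXECUTED` state would keep the hook from re-firing until the condition clears and is crossed again.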
The implementation only considers stable memory and ignores, e.g., the chunk store and snapshot size, when deriving the wasm memory capacity.
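One plausible reading of that comment, as a sketch only (the exact formula is an assumption here, not quoted from the implementation): the wasm memory capacity is the wasm memory limit, further capped by whatever the memory allocation leaves after stable memory usage, while chunk-store and snapshot sizes are ignored.

```python
def wasm_memory_capacity(wasm_memory_limit, memory_allocation, stable_memory_usage):
    """Hypothetical capacity derivation: the wasm memory limit, capped by
    the memory allocation net of stable memory usage. Chunk store and
    snapshot sizes are deliberately ignored, per the comment above."""
    if memory_allocation is None:  # no reserved allocation set
        return wasm_memory_limit
    return min(wasm_memory_limit, max(memory_allocation - stable_memory_usage, 0))
```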
The hook should not run after an upgrade/reinstall/uninstall/install if the condition is not satisfied after the upgrade (even if the condition was satisfied before the upgrade and the hook had not yet executed).
Superseded by https://github.com/dfinity/portal/pull/3761
This proposal is based on this forum post and has already been approved by motion proposal 106146.
Canister developers have to actively monitor their canisters to know if they are low on wasm memory. If detected too late, a canister can be completely stuck and forever un-upgradable.
To address this, we introduce a hook called `on_low_wasm_memory`. When defined, it is triggered whenever the canister's memory usage exceeds some developer-defined threshold.