Commit 8459514233 reverted the GC strategy to copying; this is the commit that needs to be backed out.
Methods with pros/cons
So I propose to do the dumbest thing that works, for now. From what I understand, we have two alternatives:
- allocation-time check with a 3 GB limit (pros: simple; cons: cycles overhead on every allocation, dead data may trigger rejection, queries could be penalised unnecessarily)
- post-GC check with a 3 GB limit (pros: cheap, dead data not counted, fair to queries; cons: the stable variables' footprint may still overshoot the 1 GB (de)serialisation limits when streaming). A sketch of both checks follows this list.
Hazards
Several hazards lurk in the upgrade process:
- DAG-like stable variable data gets serialised to trees, blowing up its size (see the sketch after this list)
- the serialisation buffer may overflow due to a huge volume of stable variable data (FIXED: #3149)
- stable memory may get exhausted (if many regions are already allocated)
- the deserialisation buffer eats up too much heap space to unpack successfully
- streaming stable variables back overshoots the heap
The danger of overshooting the serialisation buffers is informally tallied in the following table.
TODO: a similar table for cycle exhaustion