Open r3v4s opened 10 months ago
Based on discussions from our call today, the suggestions were:
I'm curious if you can provide us some more information:
Aside from what Milos said, I should point out that the real solution is not necessarily increasing the allocation limit, but rather addressing synchronous GC: #266.
This way, the "allocation" of a contract reflects how much memory it has actually stored, rather than the running sum of every allocation it has ever attempted.
> I should point out that the real solution is not necessarily increasing the allocation limit, but rather addressing synchronous GC
I also believe it's more generalizable in the long term to address this problem directly at the language level.
While "external solutions" (I'm not sure this is the right term), such as optimizations within the smart contract itself, are still practical and valuable and should be considered during development, some of these approaches may be cumbersome or require a distinct strategy for each project, which can be restrictive.
BTW, has there been any further discussion about GC-related things?
@petar-dambovaliev Do you know what the latest status on the GC efforts is? I vaguely remember us discussing it; it was temporarily tabled.
This should be addressed by sync GC, which should have a reasonable default. We can also expect some users to run special RPC nodes with higher values. In other words, it makes sense to make it configurable while providing a reasonable default.
Description
In PR #267, the max memory cap for qeval was set to 1.5 GB, with the comment `// higher limit for queries`.
I'm aware that increasing this limit can break the RPC service (similar research against Ethereum => paper & video).
However, the current limit may not be enough for certain use cases.
Question
cc @mconcat @notJoon @dongwon8247 @zivkovicmilos