gnolang / gno

Gno: An interpreted, stack-based Go virtual machine to build succinct and composable apps + gno.land: a blockchain for timeless code and fair open-source.
https://gno.land/

handling memory allocation limit for query eval #1506

Open r3v4s opened 10 months ago

r3v4s commented 10 months ago

Description

In PR #267, the max memory cap was set to 1.5 GB for qeval, with the comment `// higher limit for queries`.
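For reference, here is a minimal sketch of the idea of separate allocation budgets for transactions and queries. The names, values, and the tiny `allocator` type below are illustrative only, not the actual gnovm API:

```go
package main

import "fmt"

// Illustrative constants only; the real names and values in gnovm may differ.
const (
	maxAllocTx    = 500 * 1000 * 1000  // hypothetical per-transaction cap
	maxAllocQuery = 1500 * 1000 * 1000 // "higher limit for queries" (~1.5 GB)
)

// allocator is a hypothetical byte-budget tracker, not the gnovm Allocator.
type allocator struct {
	max   int64
	bytes int64
}

func (a *allocator) allocate(n int64) {
	a.bytes += n
	if a.bytes > a.max {
		panic("allocation limit exceeded")
	}
}

func main() {
	q := &allocator{max: maxAllocQuery}
	q.allocate(1 << 20) // a 1 MiB allocation fits comfortably under the query cap
	fmt.Println("allocated:", q.bytes, "of", q.max)
}
```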

I'm aware that increasing this limit can break the RPC service (there is similar research on Ethereum => paper & video).

One countermeasure suggested by the authors is performance anomaly detection plus a security deposit: for a client (or dApp) to use the RPC, it has to deposit a certain amount of money with the RPC provider, and if abnormal behavior is detected, the deposit is confiscated.

However, the current limit may not be enough for certain use cases.

For example, to get the best result in a DEX (DeFi), the contract needs to search over all existing positions. To give the user an estimated result, the interface calls DrySwap over RPC using qeval. This is where the current limit can be insufficient: if positions are spread sparsely over the whole range, a single qeval request can end up doing many iterations, which can result in panic: allocation limit reached. (See the sketch below.)
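To make the failure mode concrete, here is a hypothetical, simplified DrySwap-style loop. The names and structure are invented for illustration and are not the actual DEX contract; the point is that each position visited allocates intermediate values inside the VM, so the allocation count grows with the number of positions rather than with the memory actually needed at any single moment:

```go
package dex

// Position is a hypothetical liquidity position; the real contract's types differ.
type Position struct {
	LowerTick int32
	UpperTick int32
	Liquidity uint64
}

// DrySwap sketches why a read-only estimate can still exhaust the allocation
// budget: every iteration builds temporaries, and under a cumulative
// allocation counter (no GC credit) the total keeps growing across the loop.
func DrySwap(positions []Position, amountIn uint64) uint64 {
	var amountOut uint64
	for _, p := range positions {
		// Hypothetical per-position math; each step allocates temporaries
		// inside the VM, all of which count toward the qeval limit.
		amountOut += simulateStep(p, amountIn)
	}
	return amountOut
}

func simulateStep(p Position, amountIn uint64) uint64 {
	if p.Liquidity == 0 {
		return 0
	}
	return amountIn / p.Liquidity // placeholder arithmetic
}
```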

Question

  1. Is the current limit (1.5 GB) a well-known number, or was it calculated from a certain formula?
  2. Does it have to be a static value? Can't it be dynamic?

cc @mconcat @notJoon @dongwon8247 @zivkovicmilos

zivkovicmilos commented 10 months ago

Based on discussions from our call today, the suggestions were:

I'm curious if you can provide us with some more information:

thehowl commented 10 months ago

Aside from what Milos said, I should point out that the real solution is not to increase the allocation limit, but rather to address synchronous GC: #266.

This way, the "allocation" of a contract reflects how much memory it has actually stored, rather than a sum of all of the times it has attempted to store memory.
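As a rough illustration of that difference (hypothetical types, not the gnovm allocator): with only a cumulative counter, freed values still count against the budget, whereas GC-aware accounting tracks only live memory.

```go
package main

import "fmt"

// cumulativeAlloc mimics the current behaviour described above: every
// allocation attempt adds to the total and nothing is ever credited back.
type cumulativeAlloc struct{ total int64 }

func (a *cumulativeAlloc) allocate(n int64) { a.total += n }

// gcAwareAlloc mimics what synchronous GC (#266) would enable: releasing a
// value returns its bytes to the budget, so the counter tracks live memory.
type gcAwareAlloc struct{ live int64 }

func (a *gcAwareAlloc) allocate(n int64) { a.live += n }
func (a *gcAwareAlloc) release(n int64)  { a.live -= n }

func main() {
	c, g := &cumulativeAlloc{}, &gcAwareAlloc{}
	for i := 0; i < 1000; i++ {
		c.allocate(1 << 20) // 1 MiB temporary per iteration
		g.allocate(1 << 20)
		g.release(1 << 20) // the temporary is dead after the iteration
	}
	fmt.Println("cumulative:", c.total) // ~1 GB counted against the limit
	fmt.Println("gc-aware:  ", g.live)  // ~0 bytes actually live
}
```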

notJoon commented 10 months ago

> the real solution is not to increase the allocation limit, but rather to address synchronous GC

I also believe it's more generalizable in the long term to address this problem directly at the language level.

While 'external solutions' (I'm not sure if that's the right expression), such as optimizations in the smart contract itself, are still practical and valuable and should be considered during development, some approaches may be cumbersome or necessitate distinct strategies for each project, which can be restrictive.

BTW, has there been any further discussion about GC-related things?

zivkovicmilos commented 10 months ago

> BTW, has there been any further discussion about GC-related things?

@petar-dambovaliev Do you know what the latest status of the GC efforts is? I vaguely remember us discussing it and that it was temporarily tabled.

moul commented 1 month ago

This should be addressed by sync GC, which should have a reasonable default. We can also expect some users to run special RPC nodes with higher values. In other words, it makes sense to make it configurable while providing a reasonable default.
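A minimal sketch of what a configurable limit with a sensible default could look like; the flag name and default constant below are hypothetical, not an existing gno.land config:

```go
package main

import (
	"flag"
	"fmt"
)

// defaultQueryMaxAlloc is a hypothetical default, mirroring the current 1.5 GB cap.
const defaultQueryMaxAlloc = int64(1500 * 1000 * 1000)

func main() {
	// Operators of "special RPC nodes" could raise the limit at startup,
	// while everyone else keeps the reasonable default.
	maxAlloc := flag.Int64("query-max-alloc", defaultQueryMaxAlloc,
		"maximum VM allocation budget (bytes) for query eval")
	flag.Parse()

	fmt.Println("query eval allocation limit:", *maxAlloc, "bytes")
}
```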