spoonincode opened this issue 2 months ago
We think that prior to leap v5.0.0 it was likely capped by time. So it is possible this was introduced in v5.0.0.
Kevin mentioned an alternative approach that may be easiest: always finish out a scope once it is started. This could present potential memory issues, though.
Also, it is worth discussing whether this can/should be superseded by a solution based on read-only transactions.
The chosen solution targeting 1.1.x should be a non-breaking change, or, if a breaking change is preferred, we should target 2.0.0.
`get_table_by_scope` returns 4 `name`s (`uint64_t`) and 1 `uint32_t`. Since this is bounded, is there really any harm in always including the complete scope? Current RAM cost is 0.009547. Overhead billed per code,scope,table is 108 bytes. 100K rows would cost over 100K. Returning 100K rows does not seem like a lot. The conversion to JSON is done in the http thread pool. This would only tie up the main thread for creating the 100K entries into a `vector<uint64_t, uint64_t, uint64_t, uint64_t, uint32_t>`.
I think 256MiB of RAM is around 1 million tables though. Do we know the CPU time and memory overhead of dealing with something on that scale?
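As a partial answer on the memory side, here is a rough back-of-envelope sketch, assuming each row is the 4 `name`s (`uint64_t`) plus the `uint32_t` discussed above; the ~120 bytes per serialized JSON row is a guess for illustration, not a measurement:

```python
# Back-of-envelope memory estimate for returning ~1 million rows.
# Row layout from the discussion above: 4 names (uint64_t) + 1 uint32_t.
ROW_BYTES = 4 * 8 + 4                  # 36 bytes per row
rows = 1_000_000
vector_mib = rows * ROW_BYTES / 2**20  # ~34 MiB built on the main thread
# Serialized JSON is larger: 5 keys plus up-to-13-char name strings per
# row; ~120 bytes/row is an assumed figure, not a measurement.
json_mib = rows * 120 / 2**20          # ~114 MiB in the http thread pool
print(f"vector ~{vector_mib:.0f} MiB, JSON ~{json_mib:.0f} MiB")
```

So even at the 1-million-table scale the main-thread vector itself stays in the tens of MiB; the larger (but still bounded) cost is the JSON serialization, which happens off the main thread.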
`get_table_by_scope` interface has a deficiency in how it handles its `more` pagination scheme. Without a table filter, the returned `more` will cause the next request to start at the first table in that scope again. This means if a scope contains over `limit` tables (which is capped at 1000), the `more` returned will always be the `lower_bound` requested. Repeat forever.

To illustrate this, consider scopes and tables of:

| scope  | tables          |
|--------|-----------------|
| apple  | balance         |
| banana | balance, foo, … |
Now consider requests are made with `limit=2`. The first request (with no `lower_bound`) will return apple:balance and banana:balance with `more=banana`. The next response will return banana:balance[^1] and banana:foo with `more=banana`. And.. repeat. It's impossible to make further progress.

Using a table filter as part of the request doesn't entirely resolve the problem. Even in the above example,
`limit=2,lower_bound=banana,table=balance` will cause an endless pagination loop returning `more=banana` each time. This is because the limit (appropriately) applies to the number of walked tables, not the number of matched tables.

This might be fixable in the currently exposed API by having `more` expose a longer (possibly opaque) value representing scope+table, and then having `lower_bound` understand that value as being a scope+table instead of just a scope. `lower_bound` and `more` are strings, so this is allowed from that standpoint; but it's possible clients out there are treating these as names instead.

[^1]: Notice how banana:balance was returned twice: once in the first request and once in the second request. That's undesirable behavior of the pagination scheme as well, but a client would need to be prepared for changes between pagination requests anyways due to possible state changes between requests.
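To make the failure mode concrete, here is a minimal Python model of the walk described above, plus a hypothetical variant of the scope+table cursor fix. This is a sketch of the behavior, not the actual C++ implementation, and `grape` is a hypothetical third banana table standing in for whatever makes the scope span a `limit` boundary:

```python
# Model of the current pagination behavior. "grape" is a hypothetical
# table name; the loop needs banana to hold more than `limit` tables.
TABLES = [
    ("apple", "balance"),
    ("banana", "balance"),
    ("banana", "foo"),
    ("banana", "grape"),
]

def get_table_by_scope(lower_bound="", limit=1000, table=""):
    # `lower_bound` is only a scope, so a resumed request restarts at the
    # first table of that scope.
    i = 0
    while i < len(TABLES) and lower_bound and TABLES[i][0] < lower_bound:
        i += 1
    rows, count = [], 0
    while i < len(TABLES) and count < limit:
        scope, tbl = TABLES[i]
        if not table or tbl == table:
            rows.append((scope, tbl))
        count += 1  # limit counts walked tables, not matched ones
        i += 1
    more = TABLES[i][0] if i < len(TABLES) else ""  # scope only!
    return rows, more

rows, more = get_table_by_scope(limit=2)
# -> [('apple', 'balance'), ('banana', 'balance')], more='banana'
rows, more = get_table_by_scope(lower_bound=more, limit=2)
# -> [('banana', 'balance'), ('banana', 'foo')], more='banana' -- stuck

def get_table_by_scope_v2(lower_bound="", limit=1000, table=""):
    # Hypothetical shape of the proposed fix: `more`/`lower_bound` carry
    # an opaque "scope:table" cursor so pagination can resume mid-scope.
    lb_scope, _, lb_table = lower_bound.partition(":")
    i = 0
    while i < len(TABLES) and TABLES[i] < (lb_scope, lb_table):
        i += 1
    rows, count = [], 0
    while i < len(TABLES) and count < limit:
        scope, tbl = TABLES[i]
        if not table or tbl == table:
            rows.append((scope, tbl))
        count += 1
        i += 1
    more = ":".join(TABLES[i]) if i < len(TABLES) else ""
    return rows, more

rows, more = get_table_by_scope_v2(lower_bound="banana:foo", limit=2)
# resumes mid-scope: banana:foo and banana:grape, then more='' -- done
```

Since both fields already travel as strings, a cursor like `banana:foo` fits the existing wire format; the open question remains whether existing clients parse `more` as a `name`.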