Closed: warner closed this issue 2 years ago
A further take on @warner's analysis and design above:
I now understand the precise source of the size limit of 243 that Atredis found.
LMDB's default maximum key size is 511 bytes. However, the JavaScript language binding for LMDB stores keys in JavaScript's native string encoding, which is UTF-16, so 511 bytes is 255 characters (510 bytes, because we can't store half a character). The limit in our world would therefore be 255 characters, but the library tacks a NUL onto the end, leaving 254 characters.
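The arithmetic behind that limit can be checked directly. A minimal sketch (the 511-byte figure is LMDB's default compile-time maximum key size):

```javascript
// LMDB's default maximum key size, in bytes.
const MDB_MAXKEYSIZE = 511;

// The JS binding stores keys as UTF-16, so each character costs 2 bytes,
// and we can't store half a character: floor(511 / 2) = 255 characters.
const maxChars = Math.floor(MDB_MAXKEYSIZE / 2); // 255

// The library appends a NUL terminator, costing one more character slot.
const usableChars = maxChars - 1;

console.log(usableChars); // 254
```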
Brian's analysis is otherwise basically on target:

They generated a 243-character string (243 'A' characters) which they used as a key in a map store. The key encoding is `sAAAA...`, thus 244 characters. This is stored into, in this case, store 2, so the vatstore key is prefixed with `vc.2|`, thus 249 characters. The vat is, in this case, `v2`, so the vatstore syscall prefixes the key with `v2.vs.`, thus 255 characters. Boom.
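That prefix arithmetic can be reproduced in a few lines; the prefix strings below are taken from the analysis above, and everything else is illustrative:

```javascript
// Reproduce the key-length arithmetic from the Atredis report.
const userKey = 'A'.repeat(243);        // attacker-chosen map-store key
const encoded = 's' + userKey;          // string-key encoding: 244 chars
const vatstoreKey = 'vc.2|' + encoded;  // collection-2 prefix: 249 chars
const lmdbKey = 'v2.vs.' + vatstoreKey; // vat-v2 syscall prefix: 255 chars

console.log(lmdbKey.length); // 255 -- one over the usable 254-character limit
```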
I've done an exhaustive analysis of all the keys that get written to the vatstore. The only open-ended keys that can potentially include user-provided values of arbitrary size are:
All other keys are bounded and will not, in practical terms, exceed the limits. (Well, keys that incorporate vrefs could, in principle, also overflow, but this would require the generation of something on the rough order of 2^120 vrefs (half of the roughly 240 available characters, since there is one variety of key which has 2 vrefs in it), which I'm not going to worry about during this century. You could also overflow with super-large vat IDs, but you'd need something like 2^240 vats, which I'm not concerned about either.)
This means that we need to put guards on store keys and the vatstore vat power. Attempts to use these keys will always be direct user-code actions, not under-the-hood system operations, so simply throwing some kind of "hey, your key is too big" error should be fine.
We should reserve some space for the vat ID and the collection ID, since these can grow, plus a bit more for the fixed length portion of the encoding (currently 9 characters, but I'm thinking a bit more for future flexibility). So, say, 40 characters for overhead, thus put a limit of 215 characters for a store key encoding. Or we could say 200 just because it's a nice, easy to remember round number, and since we're going to have to replace how we store stores anyway I don't see this limit being a long term thing.
The vatstore power uses less overhead, so we could get away with reserving, say, just 15 characters for overhead in this case, but I don't see much practical benefit to providing an extra 25 characters for user keys here just because we can. If you're bumping up against the limit using the vatstore power you're probably doing something wrong anyway.
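A guard along these lines could be as simple as a length check at the entry points. A hypothetical sketch (the helper name is illustrative, not the shipped API; the limits are the ones discussed above):

```javascript
// Hypothetical key-length guards; numbers match the discussion above.
const STORE_KEY_LIMIT = 215;    // 255 minus ~40 characters of overhead
const VATSTORE_KEY_LIMIT = 240; // vatstore power needs only ~15 chars of overhead

function assertKeyFits(key, limit, what) {
  if (key.length > limit) {
    // These are direct user-code actions, so a thrown (catchable) error is fine.
    throw new RangeError(`${what} key too big: ${key.length} > ${limit} characters`);
  }
}

assertKeyFits('A'.repeat(215), STORE_KEY_LIMIT, 'store'); // ok
try {
  assertKeyFits('A'.repeat(216), STORE_KEY_LIMIT, 'store');
} catch (e) {
  console.log(e.message); // store key too big: 216 > 215 characters
}
```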
I'm not sure there is value in putting guards on the other uses of vatstore keys (or on direct LMDB accesses from the kernel), since LMDB will already complain if we hit one of those millennial limits -- either way you're dead at the same point due to an error being thrown, so why bother with the extra effort?
After a brief discussion, @warner and I decided that the best and easiest way to deal with the issue with respect to the vatstore vat power is simply to remove it from the vat API entirely. There's no functionality it provides that is not already better provided by map stores and the baggage (which didn't exist when the vatstore vat power was introduced), and no code currently uses this feature aside from its own tests.
What is the Problem Being Solved?
Our friends at Atredis pointed out that LMDB has some surprisingly small key-length limits, and there are ways for userspace to trigger these, causing an LMDB error as the block buffer is committed (which would crash the host application).
With the framing we put around Store keys, the largest key that can be stored in a low-numbered collection (`collectionID = '2'`) for a low-numbered vat (`vatID = 'v2'`) is 242 characters long. That `Store.init` key expands to a `syscall.vatstoreSet` key of `vc.2.sAAAAAA..`, which expands to an LMDB key of `v2.vs.vc.2.sAAAA..`. That sounds like a key-size limit of 254-ish bytes.

Since Stores are frequently used to hold remotely-provided data (e.g. names), this could probably be triggered remotely.
Description of the Design
The long-term fix is to move away from LMDB to something like SQLite, where key lengths are much less restricted.
The short-term fix is to have `Store.init` throw an error if given a key that is too long. We can choose a safe limit like 128 characters, giving us plenty of headroom to guard against long vatIDs and collectionIDs (whose identifiers are included in the prefix, and thus count against the limit). The other methods should treat oversized keys as not existing in the table (so `has` returns `false`; `get` and `delete` throw a "no such key" error), without encoding the key or submitting it to a `vatstore` operation.

We can impose the same limit on userspace `vatstore.get()/set()/etc` calls, if those are enabled by `options.enableVatstore` (which defaults to `false`). Both limitations are imposed by liveslots. By throwing a catchable error, userspace can deny specific requests by abusive clients without losing the ability to provide good service to well-behaved clients.

At the `syscall.vatstoreSet` layer, we can impose a larger limit (maybe 200 bytes, giving some headroom for the vatID), and make it a vat-fatal error to exceed the limit. This protects the kernel (really the host application) against a renegade liveslots or a compromised worker process, which has full syscall access.

The kernel should start `initializeSwingset` by writing a DB record with a 250-byte key. If the kernel is given a SwingStore that cannot handle keys of this size, this check will let us find out early. It would be nice to delete this key after the check is complete, but given the single initial DB commit, I'm not sure there's a good time to do it (and we can't delete it during `initializeSwingset`, because then it won't actually test the DB's abilities).

Later, if/when we move to a more capable database, we can enlarge or remove the syscall limit, and release a new version of liveslots (used only by new vats) which has a correspondingly larger internal limit. (Moving to SQLite is likely to change the `syscall.vatstore` API significantly anyway.)
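The init-time probe described above might look something like this sketch, where the `kvStore.set` interface and the probe key are assumptions for illustration, not the actual SwingStore API:

```javascript
// Hypothetical probe: fail fast if the backing store can't take large keys.
function probeKeySize(kvStore) {
  const probeKey = 'x'.repeat(250); // 250-character probe key, near the limit we rely on
  try {
    kvStore.set(probeKey, 'probe');
  } catch (err) {
    throw new Error(`SwingStore cannot handle 250-byte keys: ${err.message}`);
  }
  // Per the discussion above, the record is deliberately left in place:
  // deleting it before the first commit would defeat the test.
}

// Usage with an in-memory stand-in for the kvStore:
const fakeStore = new Map();
probeKeySize({ set: (k, v) => fakeStore.set(k, v) });
console.log(fakeStore.size); // 1
```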
Alternate Designs
I briefly considered a more detailed approach, in which the SwingStore's `kvStore` has an additional method named `getMaxKeySize()`, and the kernel reads that to decide what limit to impose upon `syscall.vatstore` operations (where long vatIDs would get a slightly shorter limit), and it tells liveslots about the per-vat limit, so liveslots can compute the per-collection limit (again, slightly shorter for long `collectionID` prefix strings) and impose it upon Store operations (which know their own key-encoding overhead). That's far too much work for the flexibility it buys.
Security Considerations
For MN-1, it's critical that we do not allow remote callers to crash the entire kernel by passing a surprisingly large name string. For MN-2/3, it will be equally critical that we protect the kernel against individual vats deliberately using a large key.
The limit must be enforced identically on all validators.
Test Plan
Unit tests