ggreif opened 4 years ago
Funny, I was thinking about similar things this week. Good to collect ideas.
But I would argue that we should not do any such optimization before we have a decent benchmark suite to measure the effect in the VM used by the platform (or maybe using wasmtime already). And ideally with some profiling to go for the big wins first.
(Nothing wrong with implementing something nice and not too complicated for fun and mental sanity.)
Currently, object field access is compiled to a linear search for the corresponding field hash in an association vector.
The code contains comments about performing a binary search to find the hash faster.
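To make the two strategies concrete, here is a minimal Python sketch. It assumes (hypothetically) that an object's fields are stored as a vector of `(field_hash, value)` pairs kept sorted by hash; the function names and layout are illustrative, not the compiler's actual representation.

```python
from bisect import bisect_left

def linear_lookup(fields, h):
    # What the current code does: scan the vector until the hash matches.
    for fh, v in fields:
        if fh == h:
            return v
    raise KeyError(h)

def binary_lookup(fields, h):
    # The binary search the code comments suggest; relies on the
    # vector being sorted by field hash.
    hashes = [fh for fh, _ in fields]
    i = bisect_left(hashes, h)
    if i < len(hashes) and hashes[i] == h:
        return fields[i][1]
    raise KeyError(h)
```

For the small field counts typical of objects, the linear scan's simpler branch structure can beat the binary search in practice, which is why benchmarking should come first.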
While binary search is not always faster than a linear one, I'd like to mention two more techniques that could be used:
Check the statically declared type's field count against the object's dynamic one. If they match, the field's position in hash order is known exactly, so we can use random access. A small surplus in the dynamic count means we only need to check a few candidates around the expected random-access position.
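A sketch of that count-comparison idea, under the same hypothetical hash-sorted `(hash, value)` layout; `static_pos` and `static_count` stand for the compile-time constants the compiler would bake into the access site. Extra dynamic fields can only shift the target rightwards, which is what bounds the probe window.

```python
def lookup_with_static_info(fields, h, static_pos, static_count):
    # static_pos: index of the accessed field in the static type's
    # hash-ordered field list; static_count: number of statically
    # declared fields. Both would be compile-time constants.
    surplus = len(fields) - static_count
    if surplus == 0:
        # Counts match: the field sits exactly at its static position.
        return fields[static_pos][1]
    if 0 < surplus <= 3:
        # Small surplus: extra fields can shift the target right by at
        # most `surplus` slots, so probe just that window.
        for i in range(static_pos, static_pos + surplus + 1):
            if fields[i][0] == h:
                return fields[i][1]
    # Fallback: plain linear scan.
    for fh, v in fields:
        if fh == h:
            return v
    raise KeyError(h)
```

The threshold of 3 probes is an arbitrary placeholder; picking it well is exactly the kind of question a benchmark suite should answer.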
Linear (resp. binary) search should not hard-code the start index as 0; instead it should take the start index from the number of fields declared in the static type before the field being accessed (in hash order). Similarly for the end index.

Finally, if we can (someday) perform some analysis to determine whether an object type lives in a closed scope, then we will be able to compute static (min, max) bounds on the number of object fields, and this will narrow down the set of possible access strategies to pick from.
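The bounded-search idea above can be sketched as follows, again assuming the hypothetical hash-sorted `(hash, value)` vector; `fields_before` and `fields_after` would be compile-time constants derived from the static type.

```python
def bounded_linear_lookup(fields, h, fields_before, fields_after):
    # fields_before: number of statically declared fields whose hash
    # precedes the accessed field's hash; fields_after: number whose
    # hash follows it. At least that many entries must sit on each
    # side of the target in the sorted vector, so the search range
    # shrinks from [0, len) to [fields_before, len - fields_after).
    lo = fields_before
    hi = len(fields) - fields_after
    for i in range(lo, hi):
        if fields[i][0] == h:
            return fields[i][1]
    raise KeyError(h)
```

When the static and dynamic field counts coincide, the window collapses to a single slot and this degenerates into the random-access case above.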