Open HerrCai0907 opened 2 weeks ago
What's the use case for this?
On a high performance embedded device that has more RAM than a traditional embedded device, but still not much (>1 MB). In this case, a granularity of 64 kB per page is still too coarse.
Then it can use lowMemoryLimit to control the memory usage.
Does this do anything that --maximumMemory doesn't?
maximumMemory is specified in 64 kB pages, while lowMemoryLimit has 16 B granularity. But I agree we could probably unify them.
Hmm, I think I'm confused. If you have more than 64 KiB of memory on your device, wouldn't --maximumMemory work?
Or are you restricted to an amount of memory that's not a multiple of 64 KiB (for instance, 68 KiB, 123 KiB, etc.)?
Yes. For example, I have a device with 150 kB of memory but want to run 2 wasm jobs. It is impossible to fit 3 x 64 kB pages of linear memory into that. With this option, each job can use 75 kB.
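For illustration, compiling the two jobs with a byte-granular limit might look like the following. This is a hypothetical invocation: the exact flag syntax and whether the value is passed in bytes are assumptions, not confirmed by this PR.

```shell
# Hypothetical: cap each job's linear memory at 75 kB (76800 bytes),
# assuming --lowMemoryLimit accepts values above 64 kB after this PR.
asc job1.ts --lowMemoryLimit 76800 -o job1.wasm
asc job2.ts --lowMemoryLimit 76800 -o job2.wasm
```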
Some background:
Many runtimes use virtual memory and do not commit the linear memory from memory.grow until it is actually used. So memory.grow will / can succeed even when that much physical memory is not available.
When WASM wants to write data into the linear memory, the runtime commits the touched pages and makes them writable.
But AS's allocator writes bookkeeping information at the end of the linear memory, which forces the runtime to commit the whole linear memory. When the runtime then tries to commit that memory, it triggers OOM and is killed.
The original lowMemoryLimit only supports memory limits of less than 64 kB (1 wasm page). This PR extends lowMemoryLimit to support memory limits above 64 kB.