Currently, for PVF execution, we use a rather naive algorithm for stack metering.
See:
However, this simple model does not actually represent the underlying behavior of an optimizing compiler. Right now we try to compensate by generously allocating native stack space for a relatively small number of logical items. What I had missed is that the register allocator will generate spill slots based on the number of active live ranges.
See this discussion https://bytecodealliance.zulipchat.com/#narrow/stream/217126-wasmtime/topic/deterministic.20stack.20usage
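To make the current scheme concrete, here is a minimal sketch of the naive model. The names and constants (`LOGICAL_STACK_LIMIT`, `NATIVE_BYTES_PER_LOGICAL_ITEM`) are illustrative assumptions, not the actual values or code used by the executor:

```rust
// Illustrative constants; the real PVF limits differ.
const LOGICAL_STACK_LIMIT: u32 = 65_536; // max logical stack items allowed (assumed)
const NATIVE_BYTES_PER_LOGICAL_ITEM: u64 = 128; // "generous" per-item allowance (assumed)

/// Native stack to reserve for an execution that stays under the logical limit.
fn native_stack_reservation(logical_limit: u32) -> u64 {
    // This is where the model breaks down: the register allocator emits spill
    // slots proportional to the number of simultaneously active live ranges,
    // which is not a function of the logical stack height alone.
    logical_limit as u64 * NATIVE_BYTES_PER_LOGICAL_ITEM
}

fn main() {
    println!(
        "reserved native stack: {} bytes",
        native_stack_reservation(LOGICAL_STACK_LIMIT)
    );
}
```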
Introducing these additional parameters into the mix is rather annoying, and it would only give a very rough upper bound that I am not sure is useful.
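For illustration, a bound that folds spill slots in might look roughly like the sketch below. Every parameter and constant here is an assumption; the spill term has to assume the worst frame at every level of the call stack, which is why the result ends up so loose:

```rust
// Hypothetical refinement of the bound; all parameters and constants are assumed.
fn rough_upper_bound(
    max_call_depth: u64,
    max_value_items: u64,
    max_spill_slots_per_frame: u64,
) -> u64 {
    const VALUE_SIZE: u64 = 16; // worst-case bytes per wasm value (assumed)
    const FRAME_OVERHEAD: u64 = 64; // return address, saved registers, alignment (assumed)
    const SPILL_SLOT_SIZE: u64 = 16; // bytes per spill slot (assumed)

    // Charges the worst-case spill count of any function to every frame on the
    // deepest possible call chain, hence "very rough".
    max_value_items * VALUE_SIZE
        + max_call_depth * (FRAME_OVERHEAD + max_spill_slots_per_frame * SPILL_SLOT_SIZE)
}
```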
As an alternative, Chris F. has suggested looking into having a virtually unlimited native stack and throwing a stack overflow trap based on logical consumption. It's not trivial to implement, though. There are certain considerations, e.g. how exactly to provide such a stretchable native stack: if done naively, we could introduce performance cliffs (such as the ones Go's segmented stacks had).
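A minimal sketch of the logical-consumption side of that alternative, assuming prologue/epilogue instrumentation of the wasm code (`LogicalStack` and `Trap` are illustrative names, not an existing API). The open question is the native side: how to give it effectively unlimited room (e.g. one large guarded reservation) without the hot-split-style cliffs mentioned above:

```rust
// Per-instance logical stack counter; the native stack is assumed to be large
// enough that only this counter can trigger a stack overflow trap.
struct LogicalStack {
    height: u32,
    limit: u32,
}

#[derive(Debug)]
enum Trap {
    StackOverflow,
}

impl LogicalStack {
    /// Called from the instrumented prologue of each wasm function with its
    /// statically known logical cost (locals plus maximum operand stack height).
    fn push_frame(&mut self, cost: u32) -> Result<(), Trap> {
        self.height = self.height.checked_add(cost).ok_or(Trap::StackOverflow)?;
        if self.height > self.limit {
            return Err(Trap::StackOverflow);
        }
        Ok(())
    }

    /// Called from every function epilogue, including on unwind.
    fn pop_frame(&mut self, cost: u32) {
        self.height -= cost;
    }
}
```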