AbsurdlySuspicious opened 3 years ago
Also, this is a purely theoretical question. I don't currently have any real case related to this, but I wonder what the options are if the use of "already-async" code is unavoidable.
Hi @AbsurdlySuspicious! That's a good question.
An async function from Rust's perspective is a function returning a Future that exposes a poll function. The executor calls this function until it returns Poll::Ready(T). In your example, the Lunatic runtime's executor running inside the host would need to call a wasm guest's poll function. This would be an issue, because the guest code would suddenly be executed in the host environment, escaping the sandbox.
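The poll contract described above can be sketched in plain Rust. This is a std-only illustration, not Lunatic code: `Countdown` and the busy-poll loop in `main` are made up for the example, and a real executor would only re-poll after being woken rather than spinning.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Illustrative future: reports Pending a few times, then Ready.
struct Countdown(u32);

impl Future for Countdown {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.0 == 0 {
            Poll::Ready(42) // the executor stops polling here
        } else {
            self.0 -= 1;
            Poll::Pending // a real future would register the waker first
        }
    }
}

// A waker that does nothing; enough to drive a future by busy-polling.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(Countdown(2));
    // This loop is exactly what an executor does: call poll until Ready.
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => {
                println!("ready: {v}");
                break;
            }
            Poll::Pending => println!("pending"),
        }
    }
}
```

The sandboxing problem is visible here: whoever owns this loop executes the future's code, so if the host's executor polled a guest future directly, guest code would run outside the sandbox.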
The correct way of doing this would be to expose a lunatic::block_on function that acts as an executor inside the guest code, but still works together with the host scheduler to minimise any overhead.
Currently the WASM/WASI specification doesn't expose any low-level primitives for real async code. It's still left to be defined what a truly async function inside the guest will mean. From Lunatic's perspective, it should be possible to support something like this once it's ready.
So it's hypothetically possible, that's reassuring.
I also completely forgot that conventional runtimes won't run inside wasm. I've seen some lightweight "pollers" designed specifically for wasm that allow blocking execution of async functions. If I used one of these, would lunatic be able to insert preemption points between polls? Maybe it would be relatively easy to expose a function that hints to the Lunatic VM that this point is preferred for switching to another process (analogous to std::thread::yield_now) and build a simple executor on top of it?
If I used one of these, would lunatic be able to insert preemption points between polls?
I believe so, if you link one of these implementations I can take a look at it and maybe provide more feedback.
In general, Lunatic will preempt if you are waiting on any IO or if a compute-heavy task runs longer than allowed. There is one edge case that is currently not handled: if you have a long-running loop without any function calls inside it, Lunatic can't currently preempt it. However, it will be able to do so in the future.
Maybe it would be relatively easy to expose a function that hints to the Lunatic VM that this point is preferred for switching to another process (analogous to std::thread::yield_now) and build a simple executor on top of it?
Lunatic actually has such a function: https://docs.rs/lunatic/0.2.0/lunatic/fn.yield_.html. You can use it to build bigger abstractions. I just want to note that this exposed API is a bit of an ad-hoc implementation and will probably change in the future, so the function may end up under a different module.
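An executor built on such a yield hint could look roughly like this. This is a std-only sketch: the `yield_hint` closure stands in for a call like lunatic::yield_ (which is not available outside a Lunatic process), and `YieldOnce` and the waker details are illustrative.

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing; the loop below re-polls unconditionally.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Drive a future to completion, calling `yield_hint` between polls.
/// Inside a Lunatic process the hint would be the yield_ host call,
/// giving the scheduler a chance to switch to another process.
fn block_on_with_yield<F: Future>(fut: F, mut yield_hint: impl FnMut()) -> F::Output {
    let mut fut = pin!(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => yield_hint(), // preferred preemption point
        }
    }
}

// Illustrative future that is Pending exactly once before completing.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            Poll::Pending
        }
    }
}

fn main() {
    let mut yields = 0;
    let result = block_on_with_yield(
        async {
            YieldOnce(false).await;
            7
        },
        || yields += 1,
    );
    println!("result = {result}, yielded {yields} time(s)");
}
```

Every Poll::Pending becomes one yield to the scheduler, which is the cooperation between a guest-side executor and the host scheduler sketched earlier in the thread.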
Looking a bit ahead into the future, I can imagine Lunatic being a direct target for Rust (aka wasm32-lunatic), where threads are mapped onto processes that share memory and something like std::thread::yield_now is mapped to our internal yield. This would allow us to have much greater support for Rust primitives than what, for example, WASI currently provides.
https://github.com/richardanaya/executor - this one, for example. Here's its main loop:
https://github.com/richardanaya/executor/blob/efebe660862d6a76a4a6a5ba3541a68aa00fd9f8/executor/src/lib.rs#L60
Lunatic actually has such a function
Oh, I somehow missed it in the docs. I guess it shouldn't be a big deal to use it in an executor like the one above. With this, I don't think there's any urgent need to expose an executor, since lunatic itself covers the features of bigger runtimes, and with some wrapper around Process::spawn I see it as convenient as usual. Thank you for the explanation!
How about wasmtime-fiber?
Wasmtime seems to have gained some really interesting async support: https://docs.rs/wasmtime/0.26.0/wasmtime/struct.Config.html#method.async_support .
(that's only relevant for the host side for now, and doesn't cover the wasm side)
Hi @krircc & @theduke,
Wasmtime's fiber implementation and async support are inspired by Lunatic. We had support for running async host functions from the start.
This issue is about running async Rust code inside the guest, not the host, and how such async guest functions would interact with the async host functions. Currently, the Wasm spec has no support for async. Even though we fully utilise async in Lunatic, from the perspective of the guest code these calls appear blocking. This makes it impossible to compile Rust libraries that use async code to Wasm and run them inside Lunatic processes.
I wonder if we could take some inspiration from how the browser handles async WASM. I don't understand it fully, but I've seen something called "microtasks" that might have something to do with it.
I think the browser's async WASM might not be perfect, but it is possible to use wasm_bindgen_futures::spawn_local to kick off a non-blocking async block on the same thread in WASM.
Hi @zicklag, everything that is supported in the browser should also work in Lunatic. Executing simple Rust Futures should not be too difficult to implement.
The bigger issue is: how do you compile something that depends on, let's say, tokio.rs to Wasm? You would need to expose low-level operating system primitives (epoll, io_uring, IOCP, ...) as host functions to the Wasm module so that tokio would compile. This is currently not possible, neither in the browser nor in Lunatic.
Are there any plans to support Rust async functions inside processes? Like the ability to pass an async function to Process::spawn, or some other way of properly exposing an async runtime? Or is setting up the runtime yourself (like calling smol::block_on inside the process function) the right way? Actually, I guess it's practically impossible to implement, since the process function likely needs to be a raw pointer to the actual process entry point, and given the fact that processes don't share any memory. So I'd like to know: is spawning a runtime per process efficient enough, or should it be avoided?