Open luciusmagn opened 7 years ago
Would something like `Engine::register_async_fn` be possible? Most futures would probably come from host-provided functions, such as interfacing with an HTTP client. Even though the Rhai script would still make API calls sequentially, the host is not bogged down by a blocking script. (More relevant if running multiple scripts inside a server, for example.)
Well, in that case, it would make sense to make the `eval` methods `async` as well. You don't do `async` half-way...
Register an `async` function, and if the `Engine` blocks on the call, then it is essentially blocking the thread running the `Engine` as well. In that case, you gain nothing, since you're blocking a thread no matter what. Better to put the `.await` inside the function itself and make the function non-`async`.
You mean putting the `.await` in the host function? That would only be allowed if the function is `async`. Moving back into the sync world would require `block_on`, which puts us back at the start.
I'm not familiar at all with the engine internals, and assumed it runs in the same thread as the host calling `eval`. `register_async_fn` would at least allow the server to do something else, as the engine would yield.
> as the engine would yield.

Exactly. That's the thing. The `Engine` needs to yield. So, something like `eval_async`.
Might be easy to make such a method, but I'm not quite sure about the way to store an `async` function pointer, and whether it is essentially the same as storing a normal function pointer... I'm not very familiar with `async` myself, so I don't know how to go there yet...
I'll have a look at this sometime next week. The register macro is a bit arcane, so will need to do some reading. :)
Would Lua/Wren-style fibers/coroutines work? If you're implementing a language runtime, it doesn't require too much plumbing to add, and would probably integrate quite neatly with Rust's existing async system. Your Rhai script could call an `http_request()` function and that'd yield to Rust, which at some point would yield control back to the Rhai engine when the request is done, "blocking" the Rhai code from proceeding without actually blocking any threads. You could even make the Rust-side function callbacks async themselves, and just have code execution continue once the future completes with its result?
In such a case, the Rhai script would be blocking, but the Engine will be async such that it doesn't occupy a thread until someone (the application) yields a value back to it. How to do that and how to make the application aware that the Engine is waiting for something (and yielding the correct value back) is going to be tricky. I'll have to research on this some more, as I haven't written a lot of async Rust yet to be very proficient in it...
Wonder if you could literally just... have a `register_async_fn` taking a function returning an `impl Future<Output = Result<T, EvalAltResult>>`. Add an async variant of the eval functions and just `await` those callbacks when the interpreter hits them. The `Engine` itself and the code inside it would be "single-threaded" and "blocking", but not with respect to the program as a whole, as it would just defer to Rust's task scheduler. This would require "async-ifying" the entire interpreter, at least up to the point where registered functions are executed, but it could be hidden behind a feature flag or otherwise refactored to make it less of a big deal, somehow.
Instead, how about an `eval_async` that'll wait for the `async` function to finish? I'm not sure if that requires `Engine` to be `Pin`-able or something...
I don't think we'd have to change too much about the `Engine` internals. `Pin` is only required if we're implementing the `Future` trait by hand, which we are not. Storing `impl Future<Output = Result<T, EvalAltResult>>` should do the trick.
I have given it some more thought. It is not as easy as it sounds...
1) `register_async_fn` for functions returning `Future<Output = Result<Dynamic, Box<EvalAltResult>>>`.
2) Since function calls can now be async, `eval_stmt` and `eval_expr` need to be async.
3) The entire evaluation stack needs to be async from top to bottom.
4) There is no way to detect whether a script calls an async function (remember `eval`), so all evals now need to be async, essentially making Rhai async.
5) `Engine::eval` would call `block_on` with `eval_async` to run a script.
It is not particularly difficult per se, but it requires a change of operation mode for the entire library. I am not sure whether making Rhai async will make normal usage much slower...
If we compile Rhai into bytecode, however, this suddenly becomes much simpler. That's because a bytecode interpreter is basically one giant `match` statement plus a loop. That function can be made `async` (possibly behind a feature gate) and Bob's your uncle.
@schungx The `Engine` could then just spawn tasks; that'll remove the overhead of `block_on`, which is basically heavy because it needs to spawn threads. BTW, the `actix` actor library can wait on a future synchronously; basically, its context is a mini executor. I think you can look at how it implements this.

Eventually, I think the following could be interesting, not sure if it's OK for Rhai:
```rust
engine.register_fn(
    "async_call",
    move |arg1: ImmutableString, ctx: rhai::Context| {
        // wait on a future, returning Future::Output
        ctx.wait(reqwest::get(&arg1).send())
    },
);
engine.eval_with_scope("let dom = async_call('https://rust.org');")
```
Ref: https://github.com/actix/actix/blob/master/src/contextimpl.rs https://docs.rs/actix/0.10.0-alpha.3/actix/trait.AsyncContext.html#tymethod.wait
That would make the function call blocking... so it is still blocking within the `Engine` itself.

The idea of this issue is to make the `Engine` yield, meaning that it can register a function that returns a future, and will yield out to the calling environment once it hits that function call, to be resumed later on when the future resolves. But that means storing all the states plus the stack of the evaluation up to that point; this will be much easier with a bytecode system.
So technically speaking you can do:

```rust
async fn do_work() -> Result<i64, Box<EvalAltResult>> {
    engine.eval::<i64>("http.call(url)").await
}
```
@schungx I think the important point is: when we call `ctx.wait`, it spawns a task immediately to handle this future. If that is done, it's non-blocking; the only blocking code is just waiting for a response.

We can also have `spawn` to not wait for the response, but that's a little bit useless:
```rust
engine.register_fn(
    "async_call",
    move |arg1: ImmutableString, ctx: rhai::Context| {
        // just spawn the task, without waiting for the response
        ctx.spawn(reqwest::get(&arg1).send())
    },
);
engine.eval_with_scope("let dom = async_call('https://rust.org');")
```
I like Rhai, the language and its Rust integration, and having async support would be a significant feature, especially in backend, API, and network services. Rhai is not the fastest embedded scripting language but I’m confident that it will get better over time and that it’s probably already fast enough for some mixed Rust/Rhai use cases.
I think having a fully-async `Engine` could also be another safety feature, as it would allow making the Rhai tasks `Abortable` or running them with a timeout (`tokio::time::timeout`). Maybe this is already possible with `on_progress`, but having it async would make it much easier and more straightforward.
I could only find `mlua` as a Rust-embedded scripting environment that supports async/await. Its API with `call_async` matches what was described above, and it is implemented using Lua's interruptible coroutines: https://docs.rs/mlua/0.4.1/mlua/#asyncawait-support. While I like Lua, I would prefer to have a safe alternative (and the author admits that Lua's and LuaJIT's runtimes probably make it impossible to exclude all the unsafe side effects).
There's also `mun`, and while it's designed to be extremely fast using LLVM's byte code, I cannot find any hint of an async API. And compared to Rhai, the Rust API is not so well documented and not so nice 😬
I myself haven't written much async Rust so far... so I can't really judge how easy or difficult it'll be to add async to Rhai.
The reason why you'd want something async is that you'd like to do something constructive while waiting for something else to come back. This almost always means interacting with the system or hardware. Rhai, however, is sandboxed, meaning that it deliberately cuts itself off from the environment. This means that there really is not much use for something async...
Except for the use case where you register external functions that do interact with the system, and you would want the script to return a future when it gets to that function call, so you can `await` on it.
All this doesn't mean having Rhai scripts that are concurrent will not be useful. In fact, long-running script operations can be broken up into tasks and run concurrently. Nevertheless, there is nothing stopping you from spawning one independent, single-threaded `Engine` per task; engine instantiation can be made extremely cheap if you wrap up all your external logic into a custom package.
So, my point is, how can having async/concurrency support benefit Rhai in a significant manner?
> The reason why you'd want something async is that you'd like to do something constructive while waiting for something else to come back. This almost always means interacting with the system or hardware.

As I've mentioned, my main use case is anything related to networking and the web of some kind. Once it involves networking, "the system or hardware" is every networking call that you make.
> Rhai, however, is sandboxed meaning that it deliberately cuts itself off from the environment. This means that there really is not much use for something async...

Having it safely sandboxed is the nice feature of Rhai over Lua ;-) I have written some privsep'ed / sandboxed networking daemons in C and Rust, and I'm aware of the necessity of it.
> Except for the use case where you register external functions that do interact with the system and you would want the script to return a future when it gets to that function call, so you can await on it.

Exactly. I think I always interact with such callbacks. For example: use a language like Rhai or Lua to parse/prepare the request and apply some logic according to a configuration, call into C or Rust for the fast path to send/receive I/O, then use the script to parse the result.
If we forget the performance difference for a moment, I think the most prominent example for something like this is nginx + Lua(JIT). More exotic examples would involve Tcl and some custom hardware APIs.
> Nevertheless, there is nothing stopping you from spawning one independent, single-threaded Engine per task.

It is possible, and I previously tried that approach out of pure curiosity: call the engine from the async code path with tokio's `task::spawn_blocking`, use a registered callback that gets the runtime handle of the current thread and calls `tokio::spawn` to perform an async Rust networking operation from within the script, then parse the returned result in the script.
There are three problems with that approach:
1) `spawn_blocking` puts the task on a new thread from a separate pool, which is hard-limited to a defined number of threads/tasks (512 by default) and shouldn't be much higher than a reasonable factor of the available CPU cores. Async tasks are much more scalable, as there can be thousands per thread.
2) The spawned blocking tasks are not `abortable` in the Futures/tokio sense of it.

> So, my point is, how can having async/concurrency support benefit Rhai in a significant manner?
I think it all boils down to the one fact that it would allow using Rhai from async code. While it is possible to call long-running blocking code from async and vice versa, it is very unpopular and comes with many problems.
> I think it all boils down to the one fact that it would allow using Rhai from async code. While it is possible to call long-running blocking code from async and vice versa, it is very unpopular and comes with many problems.
You're right. It'll open up another dimension of usage scenarios.
However, I am not sure I know how to save the execution state of the `Engine` during an evaluation in order for `async` to stop it and return said state as a `Future`.

I'll need to read up on async Rust to find out...
All the execution states of an `Engine` are wrapped up in a number of data structures. There are quite a few, but not too many. Since the `Engine` is re-entrant, it does not by itself hold any state at all.
Is there any plan to implement async some time soon? I'm building a platform right now which leverages Rhai in the frontend (wasm) and the backend. For the backend, I can use blocking functions for I/O and bypass the limitation of Rhai. For the frontend, I cannot get around async, because the browser has only one thread, which I cannot block in Rhai...
There is such a plan, but it is going to be a large task. It would essentially duplicate the entire evaluation code base for async, plus all the function registrations (to support functions returning futures). In other words, a large part of Rhai would be duplicated in an async manner.
It is usually possible to use Rhai in an async manner by moving all the async stuff outside of scripting. This is usually the preferred method, because a Rhai engine can be made to spin up so cheaply, you can just spin one up to continue. But essentially, you'll be writing continuation-passing code.
As for wasm using one thread: I believe multi-threading is coming to wasm soon. In the meantime, maybe you can check out an experimental multi-threading wasm executor.
As others have pointed out, there is also the option to bundle an async executor (split into the `tokio` and `async-std` camps these days) with Rhai; that would simplify a whole lot of matters, because the Rhai engine would then become its own async execution environment.

However, doing so has a cost, which is to deviate from Rhai's original design to be a small engine, not to mention locking users into one particular async executor.

Another user has pointed out that it may be possible to make Rhai executor-neutral and only write to the `Future` trait, but I have not investigated that yet.
Still, essentially we're back to doing either:
1) building in async support, which makes 99.9% of use cases run slower (because they don't need async), or
2) duplicating the non-async code stream into an async version.
Is it possible currently to use callbacks? I have a simple use case where I just need to be able to `sleep` in async tasks, so that I can handle multiple jobs concurrently while not blocking when a task wants to `sleep` purely for timing reasons.

Even without sleep, though: in JavaScript, even before `async` was a language feature, they used `setTimeout` or `setInterval` combined with callbacks to accomplish non-blocking tasks. Would that be a possible workaround in Rhai, or are callbacks not supported for some reason?
> Another user has pointed out that it may be possible to make Rhai executor-neutral and only write to the `Future` trait, but I have not investigated that yet.
I think that would be possible. I think a lot of the reason that libraries are executor-specific nowadays is the need to `spawn_blocking`, or to use a specific async I/O trait that is tied to `tokio`, and stuff like that. These are all things I think Rhai would not be specific to or need to use directly.
> Is it possible currently to use callbacks?

Yes, callbacks are simple. Search the Book and you'll find samples.
> `setTimeout` or `setInterval`

Yes, you can do your own `setTimeout` and pass a Rhai callback to it. What you cannot do is pass a "continuation" that encapsulates the call state up to that point (including multiple layers of function calls). So think of it as ES5-style callbacks, not an `await` which automatically generates the continuation based on an implicit state machine.
> Search the Book and you'll find samples.
Oh, yeah, should have searched that first! :sweat_smile:
> Yes, you can do your own setTimeout and pass a Rhai callback to it... So think of it as ES5-style callbacks and not an await which automatically generates the continuation based on an implicit state machine.
OK, great, that will work for what I need right now then, thanks! :+1:
However, you have to be careful about the scope. In most situations, you'd want to keep the scope of the previous invocation alive and encapsulate it into your callback.
actix-lua is a relevant project here
`ctx.spawn` and similar shenanigans are not available in a WASM environment, which would be one of the best use cases for Rhai. We are actually facing this issue, and it seems impossible to fix with Rhai, i.e. to register an async function in the script.
Yes, Rhai is not async. Therefore, it cannot register async functions. This is by design, a choice to avoid adding async overheads to all code paths for the majority of use cases that do not require it.
I am also very much in need of Rhai supporting async. My use case is to be able to do async networking, for example making an HTTP call. I am using Rhai in a library which is fully async. I would need Rhai to support something like this:

```rust
let response = await run_action('fetch', {url: "https://google.com", timeout: "5s"})
```
...
I do, of course, understand that technically I could register a blocking `fetch` function with Rhai. The issue is that basically all my "actions" live inside a registry and they are all async by design. If Rhai supported async, I could just register a `run_action` function and automatically be able to re-use all of my available functionality out-of-the-box, without having to provide a blocking variant for each of them.

(classic Rust async problem here :cry:)
@timo-klarshift Have a look at https://github.com/rune-rs/rune it's like rhai but with async.
> @timo-klarshift Have a look at https://github.com/rune-rs/rune it's like rhai but with async.

Yes, Rune compiles to bytecode (unlike Rhai, which is an AST walker), so it is much easier to stop-the-world by saving the bytecode execution context.
If async is your core requirement, then go with an async engine.
Add futures to Rhai (exact design TODO)