MageSlayer opened this issue 1 year ago.
It is not recommended to use rust-websocket (i.e. the `websocket` crate) for new projects; you should stick with `tungstenite`.
If by "half-blocking" you mean writing messages to the websocket from the sync world and reading from it in the async (e.g. Tokio) world (or the same with reading and writing swapped), then you should use `tokio-tungstenite` in async mode and make a channel (e.g. `tokio::sync::mpsc` or `flume`) to bridge the sync and async worlds.
You should specify your use case (how such "half-blocking" websockets integrate into a larger scheme) to get more specific advice.
> You should specify your use case (how such "half-blocking" websockets integrate into a larger scheme) to get more specific advice.
My scheme is the following:
So the only approach is a non-blocking, single-threaded, C-like manner. That is what I call half-blocking mode. So it's:
- eager reading from the socket if anything is there, or flagging if no message was received
- blocking on the socket write, as it's expected to be rather quick
My point is that reading is problematic here. It requires some sort of "cursor" letting other parts of the system know where it left off last time if a message wasn't read entirely (buffering). Unfortunately, I cannot see where this "cursor" could be implemented reliably in my scheme.
Perhaps this tungstenite issue might be useful here: https://github.com/snapview/tungstenite-rs/issues/308
> eager reading from the socket if anything is there, or flagging if no message was received
What thread will wait for socket events?
> flagging if no message was received
What happens after such flagging? How will the application know when to attempt reading from the websocket next time?
> blocking on the socket write, as it's expected to be rather quick
Note that if your Rust server (or the network between the client and the server) is slow, then that socket write gets lengthy: backpressure.
> What thread will wait for socket events?
It's the same single thread; it's the only one available.
> What happens after such flagging? How will the application know when to attempt reading from the websocket next time?
It's done by a timer. So it's poor man's polling.
> blocking on the socket write, as it's expected to be rather quick
> Note that if your Rust server (or the network between the client and the server) is slow, then that socket write gets lengthy: backpressure.
Yes. That's unfortunate, but I cannot see any other viable alternative.
> It's done by a timer. So it's poor man's polling.
OK, so we are in hack land.
In this case I would still use async/nonblocking everywhere (including for sending) by rolling customized low-level async utils (a socket wrapper, timers) and maybe an executor (though `async-executor`'s `try_tick()` seems to do the trick on its own; maybe `futures_executor::LocalPool` is an even better match).
This way the only tradeoff I expect is needing `Runtime::block_on` to be really running somewhere. Other than that, from the outside the plugin should look as if the socket were used directly, without any additional file descriptors, threads and so on.
Here is my demo using `async-tungstenite`: https://gist.github.com/vi/28117c2583ea74d35babfcd6abbef9e6
It should handle backpressure properly.
Maybe there are ready-made crates for such use case, but sometimes it is simpler to write than to find.
That's a real gift (not gist) :) I'll try that asap!
Hi again. Sorry for the late response, but I'd like to ask some more stupid questions :)
I'm not that experienced in async Rust, but it looks like "subtask1" & "subtask2" are doing the async send/receive. Hence a question: how can I "drive" them synchronously?
I mean, should I build an intermediate queue between the sync & async code to pass values when sending/receiving from the host application? Making use of something like the following does not seem right, as the "main loop" is hidden behind `self.exe.try_tick()`.
```rust
let send_fut = c_tx.send(Message::Text(format!("Hello, {}", 1))); //.await;
block_on(send_fut)?;
```
> I'm not that experienced in async Rust, but it looks like "subtask1" & "subtask2" are doing the async send/receive. Hence a question: how can I "drive" them synchronously?
What do you mean "to drive synchronously"?
If you want to interact with sync code, you'll probably need a channel like `flume`. This channel can be sync on one side and async on the other. Maybe it would just work as is.
For driving subtasks simultaneously, `futures::future::select` is only one of the ways. Just adding more tasks to the executor (i.e. multiple `exe.spawn`s) would probably be better.
Here is my second demo that shows some of the ideas above applied: https://gist.github.com/vi/39607d1963b069a5167099f3fbffebf4
`flume` does not require any additional hacks.

> If you want to interact with sync code, you'll probably need a channel like `flume`. This channel can be sync on one side and async on the other. Maybe it would just work as is.
Yes. I'd like to send/receive values to/from the async part from within sync functions.
> Here is my second demo that shows some of the ideas above applied:
Thanks a lot for the details. So, I guess the right way to emulate a "non-blocking receive" is to read the channel after doing the following. Right?
```rust
self.wakers.wake_all();
self.exe.run_until_stalled();
```
> Thanks a lot for the details. So, I guess the right way to emulate a "non-blocking receive" is to read the channel after doing the following. Right?
Yes, using `flume::Receiver::try_recv`.
Note that if you want to do more tricky things (timeouts, retries, reconnects, simultaneous operations) while staying single-threaded & nonblocking, then you may prefer doing them within the async world and only delivering the final result to the sync side when needed.
> Note that if you want to do more tricky things (timeouts, retries, reconnects, simultaneous operations) while staying single-threaded & nonblocking, then you may prefer doing them within the async world and only delivering the final result to the sync side when needed.
Yes, I guess it looks possible now. Thanks a lot for your help, and especially for the examples.
Hi,
I am trying to implement a "half-blocking" mode, that is, blocking write & non-blocking read. Currently I use the following code together with tungstenite. The blocking write is done using "write_inner". The TcpStream is from the "mio" crate.
Unfortunately, it does not work reliably. I am getting various errors at handshake time & later.
I'd like to ask: could rust-websocket be used to implement a working scheme like that?