vi opened this issue 7 years ago
There isn't a way to check how much of the queue is in use, but the default queue size per connection is very low (5). On default settings, the total queue is 500 events (5 * the default 100 max connections). So, if you know your application needs a larger queue, just increase the queue size.
For example, if "all messages" means, say, 1000 messages, then I would change the queue_size setting to 1000 or more. If you know you need to go higher, go higher. If you have an application that can queue a very large number of messages, you can go up to usize::MAX / Settings::max_connections (e.g. with the default max connections on a 64-bit system, that would be 184467440737095516). If you find that you need to schedule that many events at once on a single connection, I think you will find there are other, more important issues to fix in your application.
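As a sanity check, the arithmetic behind that upper bound (with the default of 100 max connections on a 64-bit platform) is just:

```rust
fn main() {
    // usize::MAX is 2^64 - 1 on a 64-bit target; dividing by the
    // default 100 max connections gives the per-connection ceiling.
    println!("{}", usize::MAX / 100); // 184467440737095516
}
```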
So, my advice is: rather than worry about how much queue is left at any given time, now that you know you need more queue, increase the queue size so that it's more than large enough to accommodate your use case. (I don't know how many messages you are sending, but I bet that if you change Settings::queue_size to 10_000 you will be fine.)
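For reference, a minimal sketch of raising the queue size when building a client with the ws crate's Builder and Settings; the URL and the echo handler here are only placeholders:

```rust
use ws::{Builder, Settings};

fn main() -> ws::Result<()> {
    // Raise the per-connection event queue from the default of 5.
    let settings = Settings {
        queue_size: 10_000,
        ..Settings::default()
    };

    let mut ws = Builder::new().with_settings(settings).build(|out: ws::Sender| {
        // Placeholder handler: echo every message back.
        move |msg| out.send(msg)
    })?;

    ws.connect(url::Url::parse("ws://127.0.0.1:3012").unwrap())?;
    ws.run()?;
    Ok(())
}
```

This is a configuration sketch, not a drop-in program; adapt the handler to your own Factory/Handler setup.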
Let me know if you need more help.
I mean messages accumulating in memory instead of being flow controlled.
Using the supplied example:
yes $(perl -E 'say "A"x511') | pv -c -N in | ./client | pv -c -N out > /dev/null
("yes" generates lines of AAAA, "pv" measures bandwidth from stdin to stdout)
Expected: "in" bandwidth and "out" bandwidth equalize. Memory usage of client is constant.
Actual: "in" is much higher than "out", and memory usage rises until the program fails.
To avoid the error, you need to increase queue_size. To deal with the memory usage, you probably need to either block or wait for some indication that the other side is ready to receive another message. Do you have control over the server? Can you give me more details about your code and why you need to send a bunch of messages on a client all at once?
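To make "block instead of accumulate" concrete, here is a std-only model (not the ws-rs API) of what a bounded send queue does: a sync_channel blocks the producer once the queue holds its capacity, so memory stays constant no matter how fast the producer is.

```rust
use std::sync::mpsc;
use std::thread;

// Push `n` messages through a queue of `cap` slots. The producer
// blocks whenever the queue is full, instead of growing memory.
fn send_with_backpressure(n: usize, cap: usize) -> usize {
    let (tx, rx) = mpsc::sync_channel::<Vec<u8>>(cap);

    let producer = thread::spawn(move || {
        for _ in 0..n {
            // Blocks when `cap` messages are already queued.
            tx.send(vec![b'A'; 511]).unwrap();
        }
        // `tx` drops here, which ends the consumer's loop below.
    });

    let mut received = 0;
    while rx.recv().is_ok() {
        received += 1;
    }
    producer.join().unwrap();
    received
}

fn main() {
    // 10_000 messages through a queue of only 5 slots.
    println!("received {}", send_with_backpressure(10_000, 5));
}
```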
I was trying to build a generic TCP-to-WebSocket tunnel like wstunnel. The code was just the "client" example modified to use binary buffers instead of text.
If you are creating a proxy like wstunnel, then you should have control over the server. You can have the server send a confirmation message that it received your message and then use that as a signal to read more from your source. In the future, I am hoping to add support for futures, which will allow you to wait until a message has been sent in order to send a new one.
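The confirmation scheme can be modeled with plain std channels (a sketch of the idea, not ws-rs code): the sender transmits one message and then waits for the peer's ack before sending the next, so at most one message is ever in flight.

```rust
use std::sync::mpsc;
use std::thread;

// Stop-and-wait model: one data channel, one ack channel.
fn stop_and_wait(n: usize) -> usize {
    let (data_tx, data_rx) = mpsc::channel::<Vec<u8>>();
    let (ack_tx, ack_rx) = mpsc::channel::<()>();

    // "Server": count each message and confirm receipt.
    let server = thread::spawn(move || {
        let mut count = 0;
        while let Ok(_msg) = data_rx.recv() {
            count += 1;
            let _ = ack_tx.send(()); // confirmation back to the sender
        }
        count
    });

    // "Client": send, then block until the ack arrives.
    for _ in 0..n {
        data_tx.send(vec![0u8; 511]).unwrap();
        ack_rx.recv().unwrap(); // at most one message in flight
    }
    drop(data_tx); // closing the channel lets the server finish
    server.join().unwrap()
}

fn main() {
    println!("delivered {}", stop_and_wait(100));
}
```

In a real tunnel you would send the next chunk from the client's on_message handler when the ack frame arrives, rather than blocking a thread.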
Broadcasting retains a seemingly infinite buffer, which gradually grows the Rust application's memory:
socket.broadcaster().send("some message"); // This accumulates memory infinitely
When a new client connects, it first receives a dump of all past messages sent, which is generally undesirable.
Opened a new issue instead: https://github.com/housleyjk/ws-rs/issues/244
Currently ws-rs seems to just queue up all messages, accumulating all data in memory. If I feed too much data, it fails with `Unable to send signal on event loop`. Is there a blocking version of `ws::Sender::send`? Can I at least obtain the length of the queue of pending messages (to insert delays dynamically)?