fake-name closed this issue 6 years ago
Ironically, I just noticed https://github.com/boostorg/beast/issues/1070, which seems to indicate that this should be safe?
My understanding is since the writes are explicitly bound to a strand, they should be serialized, and therefore you can issue multiple writes at the same time. Perhaps this is not the correct understanding?
Not correct. Beast does not perform queuing for you; that is the responsibility of the application. You can only have one call to async_write active at a time. The assertion detects that condition.
Ironically, I just noticed #1070, which seems to indicate that this should be safe?
You can have different operations active simultaneously. For example, you can have async_write, async_read, and async_ping active at the same time. But you can't have two of the same operation.
Ah, ok, that clears that issue up. Thanks!
Would it make sense to have Beast do the queuing? I think plain Asio does write queuing; wouldn't Beast doing the same be consistent (or am I misunderstanding Asio)?
I don't think Asio does write queuing. If Beast did the queuing, it would raise all sorts of questions. How is memory allocated to store the queued messages? What policy is used to prevent unbounded growth? What technique is used to limit the amount of bandwidth used? A queue is no simple thing, and the Beast philosophy is to avoid making odd choices on behalf of the user. Since a queue can be built on top of Beast in an almost limitless number of different ways depending on the needs of the application, this behavior is left to users to define.
Nothing stops people from building a higher level library on top of Beast that does offer queueing. And if it satisfies most people's needs, it would become popular.
Huh, after doing some more reading, it turns out strands only serialize handler invocations, so the issue was my misunderstanding.
AFAICT, what'll happen is the async_write_some calls for both writes will occur with some runtime-determined interleaving in the same strand. It's thread "safe", but it'd be completely broken.
Anyways, I rewrote my system with chained callbacks and a queue, and it seems to be working fine, now.
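For illustration, the "chained callbacks and a queue" pattern can be sketched in plain standard C++. This is not Beast's API: do_write below is a hypothetical stand-in for ws.async_write, receiving the message plus a completion callback, so the sketch stays self-contained.

```cpp
#include <functional>
#include <queue>
#include <string>
#include <utility>

// Sketch of the chained-callbacks pattern: at most one write is in flight,
// and the completion handler of each write starts the next queued one.
class write_queue {
public:
    using write_fn =
        std::function<void(const std::string&, std::function<void()>)>;

    // do_write stands in for ws.async_write(): it gets the message and a
    // completion callback to invoke when the (asynchronous) write finishes.
    explicit write_queue(write_fn do_write)
        : do_write_(std::move(do_write)) {}

    void send(std::string msg) {
        queue_.push(std::move(msg));
        if (!writing_)            // no write in flight: start one now
            start_next();
    }

    bool idle() const { return !writing_; }

private:
    void start_next() {
        writing_ = true;
        do_write_(queue_.front(), [this] { on_complete(); });
    }

    void on_complete() {          // plays the role of the async_write handler
        queue_.pop();
        if (!queue_.empty())
            start_next();         // chain: start the next queued write
        else
            writing_ = false;
    }

    write_fn do_write_;
    std::queue<std::string> queue_;
    bool writing_ = false;
};
```

Because send only starts a write when none is in flight, there is never more than one async_write outstanding, which is exactly the invariant the Beast assertion checks.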
@vinniefalco Nothing stops people from building a higher level library on top of Beast that does offer queueing. And if it satisfies most people's needs, it would become popular.
Have you thought about doing it? boost::beast::helpers?
I was thinking about this, and it seems to me most people needing to queue writes are going to have a class with a write(std::string message) method and queue those strings. If the message is composed of two buffers, they will copy them into the string and that's it. Not efficient, but easy: forget scatter/gather I/O and ConstBufferSequence.
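That "copy them into the string" step might look like this minimal sketch, with the buffers represented as plain pointer/length pairs rather than a real ConstBufferSequence:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Flatten a message made of several buffers into one owned std::string,
// which can then be queued by value. Pointer/length pairs stand in for
// a ConstBufferSequence here.
std::string flatten(
    const std::vector<std::pair<const char*, std::size_t>>& bufs) {
    std::size_t total = 0;
    for (const auto& b : bufs)
        total += b.second;

    std::string out;
    out.reserve(total);           // one allocation for the whole message
    for (const auto& b : bufs)
        out.append(b.first, b.second);
    return out;
}
```

The cost is one copy per buffer, but the queued element is a single owning value with no lifetime questions.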
For users who do want to keep using ConstBufferSequence, the ConstBufferSequence passed to async_write is what needs to be queued. And there is basically only one way to implement it, no? Each ConstBufferSequence can be of a different type: you can't store them in a vector&lt;T&gt;, and you can't merge them into a single sequence, so you need to allocate memory for each one independently, and there is only one way to allocate that memory: using the handler's associated allocator (answering "How is memory allocated to store the queued messages?"). The whole thing would work similarly to timers, with a write_queue in place of the timer_queue (https://github.com/chriskohlhoff/asio/blob/master/asio/include/asio/detail/timer_queue.hpp), each element containing the ConstBufferSequence and the CompletionHandler. When looking at the next element in the queue, instead of setting the system timer facility, you start a write. Since you are not deciding the allocation, you don't need to worry about being cache friendly or anything exotic; each write_queue element would link to the next, forming a linked list.
What policy is used to prevent unbounded growth?
Don't have a policy for this; allow unbounded growth. The user of this boost::beast::helpers::stream_with_queue knows how much data they have queued and how much data has already been written to the socket buffer (the CompletionHandler has executed), so they can still define their own policy.
What technique is used to limit the amount of bandwidth used?
Probably something like Linux's SO_MAX_PACING_RATE should be used. But even without this, the user can still limit the rate at which they write into the queue of boost::beast::helpers::stream_with_queue. That's no different from limiting the rate at which you write to a socket (i.e. the socket buffer/queue); you are just appending your queue to the socket's. Most of the time the user doesn't even know the size of the socket buffer, so they won't mind if you make it bigger.
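Application-side pacing along those lines could be as simple as a token bucket the user consults before pushing into the queue. This is a hypothetical sketch; SO_MAX_PACING_RATE itself is a kernel socket option and is not modeled here.

```cpp
#include <algorithm>

// Token-bucket pacing sketch: tokens_ is the byte budget available now,
// refilled at rate_ bytes per second up to a burst cap.
class pacer {
public:
    pacer(double bytes_per_sec, double burst)
        : tokens_(burst), rate_(bytes_per_sec), cap_(burst) {}

    // Advance time; the budget grows with elapsed seconds, capped at burst.
    void tick(double seconds) {
        tokens_ = std::min(cap_, tokens_ + rate_ * seconds);
    }

    // Spend bytes from the budget if available; otherwise refuse,
    // signalling the caller to hold off queueing the message.
    bool try_send(double bytes) {
        if (bytes > tokens_)
            return false;
        tokens_ -= bytes;
        return true;
    }

private:
    double tokens_, rate_, cap_;
};
```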
Now, I may have gotten this completely wrong; it would not be the first time. But if there really is only one way to do the queue "right" (at least for 99% of use cases), yet it's difficult enough to implement that everybody does it "wrong", it looks like offering the "right" solution would make the world a bit better.
it looks like offering the "right" solution would make the world a bit better.
Well, what you describe is entirely implementable as a separate class which works with a websocket::stream. You might consider writing it :)
Version of Beast
(Boost overall version 1.66.0)
Steps necessary to reproduce the problem
Basically, I have a project that involves a websocket interface that has both a message->response interface, and unprompted messages to the client from the server.
My understanding of Asio is that I can just call async_write() bound to the correct strand, and the resulting writes will be enqueued and then sent. However, it seems like trying to queue up multiple writes on the websocket leads to a debug abort:
Assertion failed: ! base_, file c:\boost\boost_1_66_0\boost\beast\websocket\detail\pausation.hpp, line 213
I've built a small test out of the advanced_server.cpp file. The relevant bits:

My understanding is that since the writes are explicitly bound to a strand, they should be serialized, and therefore you can issue multiple writes at the same time. Perhaps this is not the correct understanding?
All relevant compiler information
MSVC++ 2015
Complete source:
index.html