I'm not quite sure how this API would work in practice. You could easily get counter-intuitive results.
Let's say the server starts sending a "batch" of events. websockets starts receiving them. How does it know that it has reached the end of the batch?
If you know how many events are in the batch, just receive that many events with a loop.
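For instance (a sketch, assuming the known batch size is available in a hypothetical variable n):

    messages = [await websocket.recv() for _ in range(n)]  # n messages, in order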
If you don't know, well... I'm quite unsure about what the behavior should be and I don't want to provide an API with ill-defined or potentially confusing semantics.
If you really want to receive "whatever has already arrived", you can receive events in a loop with a very short timeout.
(untested code -- treat it as pseudo-code)
import asyncio

messages = []
try:
    while True:
        # Drain whatever has already arrived; give up after 10 ms of silence.
        messages.append(await asyncio.wait_for(websocket.recv(), timeout=0.01))
except asyncio.TimeoutError:  # before Python 3.11 this is not the built-in TimeoutError
    pass
To be clear: I'm leaning against adding this API because I don't think that the semantics are obvious enough.
Currently there is no way for the client to receive more than one message at a time from the .messages queue (as far as I'm aware). I couldn't find anything in the documentation about the deque used by .messages on WebSocketCommonProtocol, probably because it's used under the hood and isn't meant to be a public attribute. Consider a scenario where the server sends bursts of messages and the client receives them with the usual iteration pattern:
Under the hood this uses __aiter__, which in turn yields await self.recv(). This is all completely fine; however, it would be nice to have the option to receive a batch of messages when a burst has just landed in the client's queue, so the whole burst can be processed at once.

Now, you probably don't want to always dump out all the data in the queue. If your application gets backed up and the queue is full, draining the completely filled queue at once is probably a bad idea, so perhaps a max batch size argument could be provided somewhere to control it (maybe using the max_queue parameter).

I hacked together something primitive that kinda works (no max batch size, it just dumps out the entire queue). Consider it a proof of concept.
Add a recv_batch function, and change __aiter__ to yield await the new recv_batch() instead of self.recv(); a sketch of both follows. Again, this is a temporary, hacked-together solution and might break all sorts of things I didn't think about, or have some big performance drawbacks.
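A rough sketch of what I mean, assuming the legacy WebSocketCommonProtocol internals (the self.messages deque and the ConnectionClosedOK exception). Popping the deque directly skips the bookkeeping recv() normally does, which is part of why this is only a proof of concept:

    from websockets.exceptions import ConnectionClosedOK

    async def recv_batch(self):
        # Wait until at least one message is available, via the normal path.
        batch = [await self.recv()]
        # Then drain whatever else is already sitting in the internal deque,
        # without awaiting again (and without a max batch size, for now).
        while self.messages:
            batch.append(self.messages.popleft())
        return batch

    async def __aiter__(self):
        # Same shape as the original __aiter__, but yielding batches.
        try:
            while True:
                yield await self.recv_batch()
        except ConnectionClosedOK:
            return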
Having this new __aiter__, we can again use the same syntax as before (see below): instead of getting out one message (data) at a time, we now get a list of messages (data).
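For example, with process_burst as a hypothetical handler for a whole batch:

    async for data in websocket:
        # data is now a list of messages rather than a single message
        process_burst(data)  # hypothetical: handle the whole burst at once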