zaphoyd / websocketpp

C++ websocket client/server library
http://www.zaphoyd.com/websocketpp

Library is buffering the data #683

Open peererror opened 6 years ago

peererror commented 6 years ago

Hi, I see that the library is buffering data internally. Is there a way to stop buffering, so that when I call send the data is delivered to the network socket as soon as possible and the TCP socket handles the rest?

inline size_t MediaClient::getBufferedAmount(websocketpp::connection_hdl hdl) {
    websocketpp::client<websocketpp::config::asio_tls_client>::connection_ptr connection = m_client.get_con_from_hdl(hdl);

    // Size of the connection's outgoing write buffer, in payload bytes.
    return connection->get_buffered_amount();
}

This is how I am reading the amount of buffered data on the websocket connection.

zaphoyd commented 6 years ago

WebSocket++'s Asio transport buffers data for two purposes. Neither of these can be disabled, but once you understand them they may not actually be a problem. If you have an exotic need for completely zero buffering, WebSocket++ has a pluggable transport system that would let you write such behavior.

The main buffering is related to the transport being asynchronous. The connection send method writes outgoing data to a buffer so that it can return immediately, allowing you to perform other actions before yielding control back to the async event loop. The event loop will then process the writes that your handlers queued up without any delay. This buffering is necessary because send might be called with read only variables (send needs a buffer to write to for processing masking and headers) or local variables that are not guaranteed to be in scope when the write handler runs.
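
To illustrate, a minimal sketch (the type alias, function name, and error handling are made up for the example): send() copies the payload into the connection's outgoing buffer and returns immediately, which is why it is safe to pass a local variable.

#include <websocketpp/config/asio_no_tls_client.hpp>
#include <websocketpp/client.hpp>
#include <string>

typedef websocketpp::client<websocketpp::config::asio_client> ws_client;

void queue_greeting(ws_client& client, websocketpp::connection_hdl hdl) {
    std::string greeting = "hello";  // local variable, destroyed on return

    websocketpp::lib::error_code ec;
    client.send(hdl, greeting, websocketpp::frame::opcode::text, ec);
    if (ec) {
        // e.g. the connection is not in an open state
        return;
    }

    // send() has already copied the payload into the connection's write
    // buffer (where masking and headers are applied); the actual socket
    // write happens later, when control returns to the Asio event loop.
}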

There are no artificial delays in the system or waiting until buffers are full to send. The buffers are exactly as big as they need to be to hold and process your messages. They are sent as soon as control is returned to the event loop. If you keep your handlers short (or send from another thread) your messages will be written out immediately.

If your trouble is that you are sending too quickly for the network connection to deliver, the get_buffered_amount() method can help you determine when to back off. The only alternative to this is synchronously blocking on writes, which completely destroys the responsiveness of an asynchronous system.
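
A sketch of that back-off pattern (the helper name and threshold are arbitrary, and error handling is omitted): check get_buffered_amount() before queuing more data and retry later while earlier writes are still draining.

#include <websocketpp/config/asio_client.hpp>
#include <websocketpp/client.hpp>
#include <string>

typedef websocketpp::client<websocketpp::config::asio_tls_client> tls_client;

// Returns false if the caller should retry later instead of queuing more data.
bool try_send(tls_client& client, websocketpp::connection_hdl hdl,
              std::string const& payload,
              size_t max_buffered = 512 * 1024 /* arbitrary threshold */) {
    tls_client::connection_ptr con = client.get_con_from_hdl(hdl);

    if (con->get_buffered_amount() > max_buffered) {
        // Too much data already queued; let the event loop drain it first.
        return false;
    }

    con->send(payload, websocketpp::frame::opcode::binary);
    return true;
}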

Additionally, in most cases the buffering significantly improves performance, particularly when sending lots of small messages to the same client. This is because multiple messages sent in the same async callback will be coalesced into a single TCP packet and/or single TLS frame. This reduces overhead significantly compared to synchronously writing out a new packet every time connection::send is called. Note: WebSocket++ doesn't wait for more messages to coalesce, it only bundles the ones that happen to already be there when the write handler runs.
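
For example (server typedef and handler registration assumed), several sends issued inside one handler invocation are flushed together once the handler returns:

#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio> server;

// Registered with set_message_handler (via bind, passing the endpoint pointer).
void on_message(server* s, websocketpp::connection_hdl hdl, server::message_ptr msg) {
    // All three messages are buffered while this handler runs ...
    s->send(hdl, "part 1", websocketpp::frame::opcode::text);
    s->send(hdl, "part 2", websocketpp::frame::opcode::text);
    s->send(hdl, "part 3", websocketpp::frame::opcode::text);
    // ... and written out together (often a single TCP packet / TLS record)
    // as soon as control returns to the event loop.
}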

Finally, if you explicitly need synchronous WebSocket behavior for an edge case (very low memory, single thread, single connection?), the core library supports both synchronous and asynchronous transport behavior. The raw/iostream transport is an example of a synchronous one. You could build one using raw sockets, synchronous Asio methods, or any other network plumbing that makes sense for your use case.
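
A condensed sketch along the lines of the iostream_server example that ships with the library (handlers and error handling omitted): the core config uses the synchronous iostream transport, and outgoing data is written directly to a std::ostream you register.

#include <websocketpp/config/core.hpp>
#include <websocketpp/server.hpp>
#include <iostream>

typedef websocketpp::server<websocketpp::config::core> server;

int main() {
    server s;

    // Outgoing WebSocket data is written synchronously to this stream.
    s.register_ostream(&std::cout);

    server::connection_ptr con = s.get_connection();
    con->start();

    // Feed incoming bytes (here: stdin) into the connection by hand.
    char c;
    while (std::cin.get(c)) {
        con->read_some(&c, 1);
    }
    con->eof();
}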

peererror commented 6 years ago

Is there any tutorial for implementing a synchronous Asio or raw socket transport?

amykhaylyshyn commented 6 years ago

@zaphoyd Hi,

I noticed a performance issue when pushing hundreds of messages per second. The library keeps adding messages to the send queue, and the data is then sent as one huge chunk (30-100 MB). The library passes the huge send queue to async_write in a single call, which causes a bottleneck inside Asio so that it sends only 30-40 KB of data per second. I found a fix for this:

https://imgur.com/a/EBzAn

One more improvement could be using a while loop and adding an additional parameter for how many messages are allowed to be added to m_send_buffer (a standalone sketch of the idea follows after this comment), e.g.

while (next_message && m_current_msgs.size() < MAX_SEND_QUEUE_SIZE)

UPD: I've checked the option with MAX_SEND_QUEUE_SIZE and it does not appear to give any performance boost, but it does increase memory usage. I would stick to the option with "if" instead of "while".
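
As a standalone illustration of the idea (not the actual patch from the imgur link; the names and queue type are made up): drain at most a bounded number of queued messages into each write batch instead of handing the entire backlog to one async_write. A bound of one message corresponds to the "if" variant.

#include <deque>
#include <string>
#include <vector>

// Arbitrary bound; 1 reproduces the "if" variant (one message per write).
static const size_t MAX_SEND_QUEUE_SIZE = 16;

// Pop at most MAX_SEND_QUEUE_SIZE pending messages; the caller hands the
// returned batch to a single async_write and calls this again from the
// write-completion handler until the queue is empty.
std::vector<std::string> pop_write_batch(std::deque<std::string>& send_queue) {
    std::vector<std::string> batch;
    while (!send_queue.empty() && batch.size() < MAX_SEND_QUEUE_SIZE) {
        batch.push_back(send_queue.front());
        send_queue.pop_front();
    }
    return batch;
}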

vinniefalco commented 6 years ago

Are you familiar with Boost.Beast? It has WebSockets, and it doesn't buffer: https://github.com/boostorg/beast/

Comparison to websocketpp: http://www.boost.org/doc/libs/1_66_0/libs/beast/doc/html/beast/design_choices/comparison_to_zaphoyd_studios_we.html

amykhaylyshyn commented 6 years ago

Yes, I know about Boost.Beast. However, it cannot be compiled with VS2010, which is critical for me. I've created a fork with my fixes: https://github.com/amykhajlyshyn/websocketpp/commits/0.7.0.a

qqsea commented 4 years ago

@amykhaylyshyn's idea is very good. We now use websocketpp as the server and send data to the client (no buffering, messages sent immediately). But one thing needs a suggestion: every time we send data (around 5 MB), the interval between messages is 1 second (as seen in Wireshark or the Chrome devtools). Why is the latency between messages so big, and how can I decrease it?

Thank you very much!

amykhaylyshyn commented 4 years ago

@qqsea thank you for the feedback. Actually, I've been using the code with my fixes in production for two years and everything works well. I think in your case low network bandwidth is the reason why the interval between messages is 1 second, as the network needs some time to send all your packets. Try checking under different network conditions.

qqsea commented 4 years ago

@amykhaylyshyn Thanks for your answer. We are on a local network, which should not be a problem. By the way, there was an error in my problem description: each message is around 512 KB, so 512 KB per second is indeed a problem on a local network. We also did another experiment: we enabled the websocketpp log and recorded the time when sending data. The result: every 4 frames arrive as one block from the server. The interval between blocks is around 2 seconds, but the interval between frames within one block is small. So I think there should be some mechanism within websocketpp that can decrease the sending interval.

qqsea commented 4 years ago

@amykhaylyshyn The problem should now be resolved. Actually, I wanted to enable the log to find the bottleneck in the websocket request/response chain. Luckily we found a workaround that resolves the problem, somehow a by-product of the logging mechanism within websocketpp. After adding the following code, the data is received and sent in time, even without changing the code @amykhaylyshyn provided.

m_server.clear_access_channels(websocketpp::log::alevel::all);
m_server.clear_error_channels(websocketpp::log::elevel::all);

As to why this code works, I still have no clue. Maybe in the future I will have some time to investigate.

Thanks @amykhaylyshyn again!
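
For reference, a minimal sketch of where those two calls typically sit during endpoint setup (assuming a plain non-TLS Asio server; the port number is arbitrary):

#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio> server;

int main() {
    server m_server;

    // Disable all access and error logging before starting the endpoint.
    m_server.clear_access_channels(websocketpp::log::alevel::all);
    m_server.clear_error_channels(websocketpp::log::elevel::all);

    m_server.init_asio();
    m_server.listen(9002);
    m_server.start_accept();
    m_server.run();
}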

flyerSon commented 3 years ago

@amykhaylyshyn Good idea, it may resolve the send delay problem.