max-mapper / websocket-stream

websockets with the node stream API
BSD 2-Clause "Simplified" License
668 stars · 114 forks

Flow control #143

Closed: anderspitman closed this issue 5 years ago

anderspitman commented 6 years ago

Does websocket-stream provide any backpressure, or at least is there a way to close down the stream from the receiving side to tell it to stop sending? My sender is making my receiver run out of memory.

mcollina commented 6 years ago

In theory, yes. Can you make an example?

dhirajtech86 commented 5 years ago

Guys, any update on this? I am facing the same issue @anderspitman. My scenario is that my sender socket client is written in C#. There is a receiver client in Node written using ws and websocket-stream. Then there is a middle server which pipes the connection between the sender and the receiver; this server is also written in Node with ws and websocket-stream. When I pipe in this middle server, the data coming from my C# client arrives at a very fast pace and the receiver consumes it slowly, but the middle server's memory consumption keeps growing and it crashes after some time.

I have implemented pause and resume on the receiver client using events, and that works on the receiver side, but the middle server's memory keeps growing.

I also tried to use the raw socket of the WebSocket provided by ws for pause and resume, but it seems my pause and resume have no impact on the incoming data.

Please help me here and let me know if my scenario is not clear to you. I will explain more.

FYI, I am sending a 5 GB file over this connection.
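
For context, the middle server looks roughly like this (simplified sketch, not my exact code; the port, URL, and handler names are placeholders):

```js
// Simplified relay: accept the sender's connection and pipe it to the
// receiver. URLs, ports, and structure are placeholders for illustration.
const http = require('http')
const websocket = require('websocket-stream')

const server = http.createServer()

websocket.createServer({ server }, (senderStream) => {
  // outgoing leg to the receiver client (placeholder address)
  const receiverStream = websocket('ws://receiver.example.com:8080')

  // pipe() should propagate backpressure: when receiverStream's buffer is
  // full, the relay should stop reading from senderStream
  senderStream.pipe(receiverStream)
})

server.listen(9000)
```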

anderspitman commented 5 years ago

@mcollina sorry, I somehow missed that you had responded. I ended up needing a language-independent solution, but thanks for your efforts.

@dhirajtech86 I've spent quite a bit of time working on this. Unfortunately the WebSocket protocol doesn't include any built-in flow control, so you have to implement backpressure at the application level using something like websocket-stream. However, like you, I wanted to be able to use languages other than JS on the backend, and from my research I'm not aware of any existing language-independent solution. Reactive Streams seems to be the closest, but it appears that the non-Java implementations have basically been abandoned.

I've started working on a very simple specification (and reference implementations in JS, Rust, and Go), called omnistreams. You can check it out here. It's still early but the JavaScript implementation has been working well for me and will be going into production for iobio soon. I'd love to start getting external feedback, and I'd be willing to help out with a C# implementation. The protocol is extremely simple and designed to be implemented easily.

dhirajtech86 commented 5 years ago

Thanks for replying this fast @anderspitman. Though I am not an expert on the topic, according to my understanding we can manually pause the underlying TCP socket, which will then trigger TCP's backpressure mechanism.

On the other side, when the sending client detects that its TCP send buffer is full, it will not send more data.

In my scenario the sender is reading data in chunks and sending them, so if a write is not flushed it will stop reading more data and pause until the data is flushed.

Am I missing something here ?
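
Something along these lines is what I had in mind (a rough sketch with ws; `processChunk` is a placeholder for the slow consumer):

```js
const WebSocket = require('ws')

const wss = new WebSocket.Server({ port: 9000 })

wss.on('connection', (ws, req) => {
  const tcpSocket = req.socket   // the underlying net.Socket for this connection

  ws.on('message', async (chunk) => {
    tcpSocket.pause()            // stop pulling more bytes off the wire
    await processChunk(chunk)    // placeholder for the slow consumer
    tcpSocket.resume()           // ready for more data
  })
})
```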

anderspitman commented 5 years ago

@dhirajtech86 I'm not an expert either, but you're correct that TCP has backpressure, and you don't even have to trigger it manually. The problem, from what I can tell, is that since WebSockets is an event-based protocol, it fires message events as soon as they come in and expects your application to take care of them from there. If your application can't keep up, the WebSocket buffer just keeps filling. At that point the TCP stack is no longer aware of the messages; it only knows it handed them off to the WebSocket stack, so TCP doesn't do any backpressuring. In other words, you run into this problem whenever your application can't keep up with your network speed. The only solution I've found is application-level flow control. A more ideal solution might be for WebSockets to automatically detect when your application is behind on processing message events and pause the TCP stream, but that doesn't currently exist.
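
To make that concrete, here's a rough sketch of a credit-based scheme (just an illustration of the concept, not the omnistreams wire format; the message shape, window size, and helper functions are made up):

```js
// --- sender side (its own websocket-stream connection `ws`) ---
let credit = 4                               // allow 4 chunks in flight initially

ws.on('data', (msg) => {                     // receiver sends back small credit messages
  credit += JSON.parse(msg).credit
  pump()
})

function pump () {
  while (credit > 0 && haveMoreChunks()) {   // made-up data source
    ws.write(nextChunk())
    credit--
  }
}

// --- receiver side (its own connection back to the sender) ---
ws.on('data', async (chunk) => {
  await handleChunk(chunk)                   // made-up slow consumer
  ws.write(JSON.stringify({ credit: 1 }))    // hand one unit of credit back
})
```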

mcollina commented 5 years ago

If Node.js is the target receiver, I would recommend using HTTP/2 instead.

This library is just a wrapper on top of https://www.npmjs.com/package/ws, so you might want to open an issue there to ask how to handle flow control on the receiving side. We can handle it on the sending side because of the callback to send().
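
On the sending side the pattern looks roughly like this (a sketch using ws directly; the URL and file path are placeholders):

```js
const fs = require('fs')
const WebSocket = require('ws')

const ws = new WebSocket('ws://receiver.example.com:9000')   // placeholder URL

ws.on('open', () => {
  const file = fs.createReadStream('/path/to/big-file')      // placeholder path

  file.on('data', (chunk) => {
    file.pause()                      // don't read more while this chunk is in flight
    ws.send(chunk, (err) => {         // callback fires once the data has been written out
      if (err) return ws.close()
      file.resume()
    })
  })

  file.on('end', () => ws.close())
})
```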

anderspitman commented 5 years ago

@mcollina unfortunately the browser is a primary target for my case. I see that you're using bufferedAmount with a timeout. When I was looking into this, my research indicated that bufferedAmount behaves quite differently between browsers and can't really be relied on for flow control. Have you found it to work well? It doesn't really matter in my case, because as I said it still doesn't save you if your TCP stack is faster than the receiving application.
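
For anyone landing here, the bufferedAmount approach in the browser looks roughly like this (the threshold and polling interval are arbitrary illustrative values):

```js
// Browser sketch: only send when the socket's internal buffer has drained
// below a threshold, otherwise check again after a short delay.
const ws = new WebSocket('wss://example.com/stream')   // placeholder URL

function sendWhenDrained (chunk, done) {
  if (ws.bufferedAmount < 512 * 1024) {                // under ~512 KB queued
    ws.send(chunk)
    done()
  } else {
    setTimeout(() => sendWhenDrained(chunk, done), 100)
  }
}
```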

Either way we can close this. Thanks again for your help.

anderspitman commented 5 years ago

@dhirajtech86 I just discovered rsocket today. They already have JS and .NET support and should be able to do everything you need.

dhirajtech86 commented 5 years ago

@anderspitman Thanks, will have a look into it.