Closed: jonasfj closed this issue 8 years ago
Yeah, WebSockets have no built-in way to say "slow down" AFAIK -- you'd have to put it in the layer above WebSockets. Anything else would require an addition to the WebSockets spec (RFC 6455), I'm afraid. Websockify strives to just implement the RFC, and not anything else on top of that.
Okay, so I could be wrong, but websockify isn't an implementation of the WebSockets spec... Don't you use libraries that implement the WebSocket RFCs?
I view websockify as a proxy for connections over WebSockets, where messages are intended to be the raw data from the connection being proxied. It would seem completely sane to implement a congestion control scheme as part of proxying a connection, since connections have congestion control.
> Okay, so I could be wrong, but websockify isn't an implementation of the WebSockets spec... Don't you use libraries that implement the WebSocket RFCs?
Websockify implements the WebSockets spec itself -- we don't use a separate library.
> I view websockify as a proxy for connections over WebSockets, where messages are intended to be the raw data from the connection being proxied. It would seem completely sane to implement a congestion control scheme as part of proxying a connection, since connections have congestion control.
Websockify is designed to be protocol agnostic, from both the client and server perspective -- it simply wraps and unwraps TCP packets in WebSocket frames (and responds to PING/PONG messages). This means that it can be substituted for any other pure-WebSocket proxy, and WebSocket clients don't need to have any special Websockify code -- they can simply speak whatever protocol that they normally speak.
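To illustrate the design, here is a rough sketch of that kind of protocol-agnostic relay loop. This is not websockify's actual code; it assumes the third-party Python `websockets` library plus asyncio, and the target host/port and listening port are made up for illustration.

```python
import asyncio
import websockets  # third-party library, assumed for this sketch

# Hypothetical target to proxy to, e.g. a VNC server.
TARGET_HOST, TARGET_PORT = "127.0.0.1", 5900

async def relay(websocket):
    reader, writer = await asyncio.open_connection(TARGET_HOST, TARGET_PORT)

    async def ws_to_tcp():
        # WebSocket messages are written verbatim to the TCP socket.
        async for message in websocket:
            writer.write(message if isinstance(message, bytes) else message.encode())
            await writer.drain()

    async def tcp_to_ws():
        # Raw bytes from the TCP socket become binary WebSocket messages.
        while data := await reader.read(4096):
            await websocket.send(data)

    try:
        await asyncio.gather(ws_to_tcp(), tcp_to_ws())
    finally:
        writer.close()

async def main():
    # PING/PONG keepalive is answered by the library itself.
    async with websockets.serve(relay, "0.0.0.0", 6080):
        await asyncio.Future()  # run forever

asyncio.run(main())
```

Nothing in that loop knows or cares what protocol the proxied bytes belong to.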
Adding a custom protocol on top of WebSockets for congestion control as part of Websockify would break any existing clients of Websockify, and fundamentally change the goals of Websockify. That being said, Websockify is designed to be somewhat extensible, so it should be possible to implement something like what you described for your use case. Alternatively (and I'm not being sarcastic here), it might be a good idea to look at whether or not the WebSockets RFC should have congestion control (cc @kanaka).
Oh, I didn't realize it actually proxied the packets; I assumed it was working at the socket level.
Hmm... Perhaps it's better to implement this as a separate thing. As you say, clients would need to be modified.
Note: I'm looking at this to increase the robustness of noVNC, but I didn't find an easy way to swap out the websockify client in noVNC (last I looked). But perhaps that is a better route to take.
This is probably more of a question, but doesn't this lack congestion control?
If the client or server side is not able to keep up with processing the data, then the only option is to stop reading from the websocket, which implies no more ping/pong, hence the websocket breaks...
Okay, I haven't studied the details of the websocket protocol, but from looking at implementations in both Go and node.js it seems pretty clear that websockets do not have built-in congestion control. As in, there is no way to keep the connection alive without accepting all new data...
With pure TCP, packets are dropped by the OS and the sender slows down if the receiver can't keep up...
In a protocol similar to this (I'm exposing a shell over a websocket) I've successfully used a scheme where the receiver acknowledges the number of bytes it has processed... That way the sender can decide how many bytes to leave outstanding before blocking the incoming stream.
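For what it's worth, the sender side of that scheme can be sketched roughly like this. This is hypothetical Python, not my actual implementation; the `AckWindow` name and the window size are made up for illustration.

```python
import asyncio

# Tunable limit on unacknowledged bytes in flight (made-up number).
MAX_OUTSTANDING = 256 * 1024

class AckWindow:
    """Tracks bytes sent but not yet acknowledged by the receiver."""

    def __init__(self, limit=MAX_OUTSTANDING):
        self.limit = limit
        self.outstanding = 0
        self._has_room = asyncio.Event()
        self._has_room.set()

    async def before_send(self, nbytes):
        # Block the producer (i.e. stop reading from the proxied TCP socket,
        # not from the websocket) until the receiver has caught up enough.
        while self.outstanding and self.outstanding + nbytes > self.limit:
            self._has_room.clear()
            await self._has_room.wait()
        self.outstanding += nbytes

    def on_ack(self, nbytes_processed):
        # Called when the receiver reports how many bytes it has consumed.
        self.outstanding -= nbytes_processed
        if self.outstanding < self.limit:
            self._has_room.set()
```

The websocket read loop would call `on_ack(n)` whenever an acknowledgement message arrives, and the code feeding the websocket would `await before_send(len(chunk))` before each chunk, so backpressure pauses reads from the proxied socket rather than reads from the websocket, and ping/pong keeps flowing.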
I guess this might be overkill for a VNC connection, but if we're trying to expose raw sockets over websockets, congestion control is sort of important. In my shell example where I did congestion control, I have no problems running
cat bigfile.txt
and transmitting hundreds of megabytes over the websocket. Obviously, that wouldn't be fun without congestion control if the receiver was a tiny bit slow...