dgrr / websocket

WebSocket for fasthttp
MIT License

Production-ready? (+ a use case question) #1

Open Tozuko opened 3 years ago

Tozuko commented 3 years ago

Would you say this library is currently suitable for production?

Also, feel free to skip this question since it's pretty broad:

I'm working on a websocket-based web game where I want to minimize latency and maximize synchronicity between all players. To the extent possible, I want the server to broadcast messages to every connected client at the same time, and have it quickly respond to player actions. The plan is to use fasthttp + this library.

When a player client makes an action, it sends a message to the websocket server which verifies the message and broadcasts it to every other client. I understand due to connection latency differences it'll never be 100% real-time, but I want the player experience to feel as real-time, synchronized, and responsive as possible.

My question is if there are any tips or techniques I should keep in mind to optimize this while using this library. I was planning to basically do what's shown in https://github.com/dgrr/websocket/blob/master/examples/broadcast/main.go#L38, plus a read loop, but I'm wondering if there's anything else I should look into as well.

For example, do you think there'd be any benefit to using the lower-level frame APIs? Are there any other special considerations for implementing a read loop with a lot of concurrent connections + broadcasting each received message to all clients?

Thanks.

dgrr commented 3 years ago

Hello. Yes, I'd say it's production-ready. This library is basically a fork of fastws with some changes that make it easier for server developers.

That is quite a challenge. My main concern is the machine constraints. This library is not fully high-performance (no pure-Go websocket library is). Why do I say that? Because they are all bound to the one-connection-one-goroutine model, so most libraries end up with two goroutines per client. The alternative approach is to use epoll (or kqueue) directly, which means using gnet or evio. This library actually uses three goroutines per client: one for reading, one for writing, and one for dispatching the events; that's how it manages to make everything work asynchronously. I don't know about recent versions of Go, but I remember that back in Go 1.6 we were quite tied to the goroutine scheduler, so the best option was epoll.
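For context, the per-connection goroutine model described above is usually paired with a buffered per-client outbox, so that a broadcast never blocks on one slow connection. Here is a minimal pure-Go sketch of that pattern (illustrative only, not this library's internals; `client` and `broadcast` are made-up names):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// client owns a buffered outbox; one writer goroutine per client drains it.
type client struct {
	id     int
	outbox chan string
}

// broadcast enqueues msg for every client without blocking the caller,
// dropping the message for a client whose outbox is full (a slow consumer).
func broadcast(clients []*client, msg string) {
	for _, c := range clients {
		select {
		case c.outbox <- msg:
		default: // slow client: drop rather than stall everyone else
		}
	}
}

func main() {
	var delivered atomic.Int64
	var wg sync.WaitGroup

	clients := make([]*client, 3)
	for i := range clients {
		c := &client{id: i, outbox: make(chan string, 8)}
		clients[i] = c
		wg.Add(1)
		go func() { // writer goroutine: in a real server this would call conn.Write
			defer wg.Done()
			for range c.outbox {
				delivered.Add(1)
			}
		}()
	}

	broadcast(clients, "tick")
	for _, c := range clients {
		close(c.outbox)
	}
	wg.Wait()
	fmt.Println("delivered:", delivered.Load()) // prints "delivered: 3"
}
```

The non-blocking send is the key design choice: the game loop's latency is never held hostage by a single congested socket.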

Now, even if you decide to use epoll (or kqueue) you can still use this library, because you can build the websocket frames very easily and without allocations.
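To give a sense of how cheap frame construction is, here is a hand-rolled sketch of the RFC 6455 wire format for a small, unmasked (server-to-client) text frame. This is illustrative only; `buildTextFrame` is a made-up helper, and in practice you'd use the library's Frame API, which handles masking, larger payloads, and all opcodes:

```go
package main

import "fmt"

// buildTextFrame assembles an unmasked (server-to-client) WebSocket text
// frame per RFC 6455 for payloads shorter than 126 bytes. It appends to dst,
// so a caller can reuse a buffer across frames and avoid allocations.
func buildTextFrame(dst []byte, payload []byte) []byte {
	if len(payload) >= 126 {
		panic("this sketch handles only small payloads")
	}
	dst = append(dst, 0x81)               // FIN=1, opcode=0x1 (text)
	dst = append(dst, byte(len(payload))) // MASK=0, 7-bit payload length
	return append(dst, payload...)
}

func main() {
	frame := buildTextFrame(nil, []byte("hi"))
	fmt.Printf("% x\n", frame) // 81 02 68 69
}
```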

To the matter... I think you should follow the example, yes, but maybe with some changes, like not using the sync.Map but a slice of clients: whenever you want to remove a client, just mark it as disconnected, and once you finish iterating over the slice, remove it. That keeps the data cache-friendly; a map is not very cache-friendly (in my opinion, at least).
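The mark-then-compact removal described above could be sketched like this in plain Go (`Client`, `Send`, and `broadcastAndReap` are hypothetical names, not this library's API):

```go
package main

import "fmt"

// Client lives in a contiguous slice, which is cache-friendly to iterate.
type Client struct {
	ID   int
	dead bool
}

// Send stands in for writing a frame to the connection; it reports whether
// the write succeeded. Here, client 1 simulates a broken connection.
func (c *Client) Send(msg string) bool { return c.ID != 1 }

// broadcastAndReap sends msg to every client, marks failed ones as dead,
// then removes the dead ones in a single in-place compaction pass.
func broadcastAndReap(clients []*Client, msg string) []*Client {
	for _, c := range clients {
		if !c.Send(msg) {
			c.dead = true // mark now, remove after the loop
		}
	}
	live := clients[:0] // reuse the backing array: no allocation
	for _, c := range clients {
		if !c.dead {
			live = append(live, c)
		}
	}
	return live
}

func main() {
	clients := []*Client{{ID: 0}, {ID: 1}, {ID: 2}}
	clients = broadcastAndReap(clients, "tick")
	fmt.Println(len(clients)) // prints "2"
}
```

Deferring removal to a single compaction pass also avoids mutating the slice while you are iterating over it.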

Using the Frame structure will give you some benefits: you can handle all the different frame types yourself, reuse buffers, and avoid one copy. For example, in my experience, if you want to encode JSON and not allocate any new buffer, you can do this:

bf := bytebufferpool.Get()            // grab a reusable buffer from the pool
bf.B = myfastjson.MarshalTo(bf.B[:0]) // marshal the JSON into it, reusing its capacity

fr := websocket.AcquireFrame() // grab a reusable frame from the pool
fr.SetPayload(bf.B)            // set the JSON as the frame's payload

bytebufferpool.Put(bf) // return the buffer to the pool

Or you can do this:

fr := websocket.AcquireFrame()
// marshal straight into the frame's payload buffer, reusing its capacity
fr.SetPayload(myfastjson.MarshalTo(fr.Payload()))

You can see a benchmark here.

If you have any further questions you can reach out to me on Telegram (dgrr91), or you'll find me on gofiber.io's Discord.