Closed RobDavenport closed 4 years ago
This is a really good question! The reason that both `Server::send` and `Server::recv` mutably borrow the same data is that they genuinely need to.
There are a couple of reasons for this. The reason you can't split the sending and receiving halves of the server ultimately comes down to the fact that you can't split an openssl `SslStream`. Since you can't split an SSL stream, you can't read from and write to any one client at the same time. Besides this, even if that were possible, the `Server` still needs to send acknowledgement packets when receiving incoming packets, so receiving data would still involve both halves of the connection.
Tangentially, you can't split per-client for a different reason, one that has to do with how UDP sockets work. Since UDP is connectionless, it is my understanding that it is not possible to have multiple independent UDP sockets listening on the same port; all incoming packets would simply arrive on the first socket. I'm STILL not clear on what happens if a socket is "connected" to a remote, and whether the OS will treat multiple UDP sockets connected to different remotes correctly. It is my understanding that the "connection" is just a simple filter on the incoming packet's source address? If there were a way to make this work at the OS level, with one UDP socket receiving un-connected packets and independent sockets receiving "connected" packets, then this could work, but I'm not sure that's currently possible or even desirable.
Since you probably want to be able to serve multiple clients simultaneously as well as be able to read and write simultaneously, you can't really get away from needing a multiplexing layer on top of `webrtc_unreliable::Server`. In earlier versions of the API I tried to include this multiplexing in the crate itself, but that's really the wrong place for it; it should be handled by the user of the crate, since there isn't a universal best solution.
The simple answer is just to use channels. By having both an incoming and an outgoing channel, your normal state will be `select!`-ing on either `Server::recv` or receiving from the outgoing channel. Once either future is ready, you either forward the incoming message to the incoming channel you created or send the outgoing message via `Server::send`, then go back to `select!`-ing.
It's unfortunate that all of the WebRTC handling effectively has to be in a single Task, but it should be reasonable for a single WebRTC task to handle many connections and forward messages to the appropriate channel.
Thanks for the detailed reply. I was previously looking at the channel method and the `select!` macro, as it seemed to be the best choice, i.e., selecting over a "to-clients" receiver, and then responding to the result of `recv` by sending down the "from-clients" sender. A similar solution is implemented in the WebSockets part of my server. I'll definitely take another look at this solution and see if I can get it to work.
@RobDavenport example of sending a message to a client from another thread can be seen in https://github.com/lineCode/webrtc-datachannels , it uses official google webrtc library, C++
Just FYI, that library appears to be doing effectively the same thing as would be accomplished here by using channels, because sending messages appears to be done through an in-memory queue, and all actual work is done by a different thread. I don't see at a quick glance how receiving is handled; is it through callbacks?
I don't know what Google's WebRTC library is doing internally either, but in any case the single-task limitation is ultimately down to the limitations of a UDP socket, so I don't know how to handle it differently. I think when used in a browser each peer might have their own UDP port, and that sidesteps the problem?
If anybody has any concrete suggestions on how this could work differently please let me know.
Oh, on that note, if you do happen to run into single-task performance limitations (if you do, that's a LOT of players and I'm interested in your project!), you should be able to run multiple `Server` instances on a range of UDP addresses and spread the load. It should be pretty easy to round-robin session requests between them as well.
Thanks for the help. After fighting through it and refactoring a lot of my code, I was able to get it all set up properly. I managed to get it working by using `select!` on both `recv()` and the tokio channel receiver.
Great example of the performance benefits:
Left is WS, right is WebRTC. Tick rate is 30 times per second.
@kyren Actually, the way I have the project set up now, it functions more as a "Master Server" which would spawn more tasks (game rooms, etc.) if necessary. Although I think for now (thanks to everyone's help) I can start focusing on more gameplay-related things.
With that in mind, feel free to close this issue!
Although, do you see any risks with potentially exposing some of the values on the server side? Such as "Clients", or a function to "Broadcast All" by looping through the keys? I think these could be useful from a user perspective. If you're open to a PR for that I can also assist there.
Oh yeah, when I said that it would be a perf problem, I meant if you had a truly enormous number of players, such that JUST WebRTC networking took 100% of one core. Non-networking things can obviously already go on their own Tasks / other threads.
I guess I wouldn't be opposed to a method that gives you a list of established connections, would that cover it?
By the way that's a great demo of TCP vs UDP game networking! I'm glad webrtc-unreliable is working for you now :D
Can you give an example of how the `select!`-ing would work?
Hi, awesome crate! Thanks for your contribution!
But I'm running into an issue: currently, I have a server set up similar to the echo server example, but in my case I'm using it for a game server. A problem I ran into is I can't figure out a way to call the `Server`'s `send()` function, because both `recv` and `send` take `&mut self`. This is only further exacerbated by lifetime requirements in Rust's compiler.
Any ideas or further examples?
Update for more info:
Once this task is spawned and running, it captures ownership of the `RtcServer` object and therefore I can't figure out how to access it again. Any attempt to work around this by adding a `Mutex` of some sort doesn't seem to work, since the `recv().await` call doesn't release the lock.