Open toqueteos opened 9 years ago
See my comments in: https://github.com/PistonDevelopers/hematite/issues/203 talking about my server design. (I also updated last comment talking about the client)
With a threaded/actor model, async I/O would be pretty much built in.
(Meta/reasoning) Map-type requests in Minecraft are usually pretty small, but the result sizes and computation costs can be huge. A channel-based, multi-threaded design is ideal for this: the sender is non-blocking and has an unlimited queue, and the receiver can manage the total data while preventing data races and doing computation at the same time. Designing it this way may be slightly more complicated, but it would allow for multi-threaded map reading/generation, and while it's more complicated than using an async I/O library, it would make the ownership/concurrency/async combination easier to reason about.
The model could be simplified further, to prevent tons of channels being created, by having one master thread which manages slave threads.
This (possible) layout would mean that when a client is added, the only thing that needs to be handed to them is the receiver of the master (there could also be multiple masters).
The masters would then hand off the client's requests for map generation/data/editing to the appropriate slave, which is determined by the region each slave manages.
The master would allocate who is responsible for which region, and decide when to spawn another slave. The slave threads would be ignorant of which part of the map they handle until they receive it. So if, let's say, a player was walking north, the master requests the NW, N, and NE chunks to be generated, one per slave, and gives each of them the TcpStream of the client.
The slaves could then do the computation (e.g. "is this move legal?"). When a slave's task is completed, it would send the result to the client if needed. (It might be unneeded if, for instance, it's generating spawn in advance of any players.)
So it would look linearly like this:
**Client thread** **Master** **Slave**
request sent ---> task delegated ---> request fulfilled and sent to client's TcpStream
(the TcpStream is passed at each step)
The client thread could also just be any thread that sends a map request (such as generating a tree).
The master would be the owner of the map data, and would hand out references to slaves.
The Master would be responsible for
The Slave would be responsible for all requests, including
Slaves perform all general actions. Each slave could manage anywhere from 1/16 of one chunk column (a chunk section) up to the entire map. (The minimum map transfer size is 1/16 of a chunk packet; technically we could get around this with multi-block updates, but at that scale I don't think concurrency will help, unless you had dozens of cores.)
The only logic the master would do is determine which slave to give the request to.
If the server is not under stress, it could hand the total request off to just one slave, so the client would receive a better-compressed packet (such as when loading cached map data).
If it was a heavy request (such as generating new map data), the master could hand the request off to multiple slaves, and the results could be sent to the client in parts (when it's better for the client to receive 1/3 of the total data at x intervals than to receive the whole request at 3x intervals).
This slave model would take care of blocking I/O and allow for concurrency at the same time.
Benefits of this method
EDIT: If any part of this doesn't make sense please ask, even if that means the whole thing. I'm so absorbed into this that I probably can't properly view it from an outside perspective.
Note that each world does need a global game loop which ticks synchronously, so game logic would need to run on the master.
Also can we please use the term worker thread (or connection thread for those communicating with clients) instead of slave?
Most of the Java-based servers use http://netty.io/ (async I/O). Should we aim for that right from the beginning via mio/mio+mioco/mio+rotor, or just use plain threads (with pools) and channels?
/cc @fenhl