cuppajoeman opened this issue 6 months ago (status: Open)
4. Also along the same vein, wouldn't calling enet_host_service with 0ms of wait-time (non-blocking) as fast as possible also give the same results (when it's in the while (true) loop)?
There are a lot of questions, but I'll only chime in on this one. If you call enet_host_service with a 0ms timeout, it will return immediately if there is nothing to process. If you stick that in a while (true) loop, you'll have 100% CPU usage while doing absolutely nothing, which is not desirable.
Hence, a timeout of 0ms is only recommended when something else limits the frequency at which this function is called (for example, calling it only once per frame in a game loop, as suggested by the docs).
All good and valid questions. I will let Lee chime in, and meanwhile try to answer as best as I can. The way enet_host_service works is rooted in the history of BSD sockets, so let's start with how those sockets work:
In the old BSD sockets world, when you want to send data, you push it out using send and your code typically continues as if nothing happened; the kernel will make sure that the data gets sent eventually. But send may also block if there is no memory available in the kernel, for example if you try to send a very large amount of data. If send blocks, your game freezes.
When you want to receive data, you ask the kernel, and then your game blocks until there is data.
Assuming your game uses just one thread, which was common in the past, you can see that it may block and freeze at random. Thus poll and select were invented, allowing programmers to first ask whether it is safe to send or receive without actually blocking. So you can do other useful stuff like painting a mouse moving over the screen :)
Typically select takes a timeout: it waits and then returns whether there is something waiting to be received, or whether there is space so that you can send. If you give it 0ms, it returns immediately; if you give it 10ms, it will block for up to a maximum of 10ms waiting for receive or send to become ready. This timeout is basically what you give to enet_host_service.
Ideally you call enet_host_service when you know that there is data waiting in the kernel to be received, and also at times when you know that you need to send data out.
How do you know the first? You use select.
How do you know the second? Well, when you call enet_host_connect or enet_peer_send, you know that there is data that needs to be shuffled out to the network. In the case of reliable data, all of it may also be retransmitted automatically and periodically. enet also periodically pings all connected peers so that it can disconnect those that do not respond. If you do not call enet_host_service, none of this happens. So you need to make sure that you call enet_host_service at least, say, once a second to send out pings, retransmits, etc.
Now to your questions:
- A) Why does calling enet_host_service with a greater timeout (which means that enet_host_service is being called less often than before) cause less adequate performance? [Based on the remark in the docs which says 'enet_host_service should be called fairly regularly for adequate performance.'] B) What is the definition of 'fairly regularly'?
Specifying a timeout to enet_host_service will basically freeze and wait for up to that timeout until something happens. So more is not automatically better.
- Should the rate at which enet_host_service is called on the server be the same rate at which it is called on the client side? What about if there are n <= 16 clients? Do I have to take this into consideration at all?
It does not matter at all. You need to call enet_host_service when you know that there is data waiting, or when you know that you want to talk to somebody.
- From what I understand, the call to enet_host_service is encased in an outer while loop which runs constantly; this is so that if nothing happens within 100ms the program still continues to poll again instead of exiting immediately. Is this what most people do? (Have while (enet_host_service(...)) encased in some other while loop?)
There will always be that loop where you call the outer enet_host_service and then the inner enet_host_check_events.
- A) It's mentioned that you can put a call to enet_host_service with a timeout of 0 in each iteration of your game loop; is this because a call to enet_host_service with a non-zero timeout in each iteration would cause our game loop to run slower, since it is a blocking call? B) If we use C++ and the thread library to separate the network loop into its own thread, is it then preferred to call enet_host_service with a non-zero timeout? C) Are there any other considerations to be made if we separate the enet code into its own thread?
Sooner or later you will end up with multiple threads. Since enet is single-threaded and does not use any locking mechanism anywhere, you will have to make sure that a single thread invokes all enet calls. Calling with a 0 timeout and just looping will consume 100% CPU, so that is not wise.
It's better to call enet_host_service with a 0 timeout, but only when you know there is data available to be received, and then also periodically (say every 100ms) to make sure you send your own outgoing data frequently enough. Depending on your game, that may need to happen at 120fps or once per second.
Ideally you spawn another thread where you can do select with a larger timeout, say half a second, and just wait; when there is data, you signal the enet thread to perform the service.
- A) Looking at the source code of enet_host_service, it seems that enet_host_service will try to send outgoing packets and receive incoming packets within a do-while loop (which I believe runs until the waiting time is up). If that's the case, then what is the point of the timeout? It seems like if we had a timeout of 10ms, then, as it's encased in a while (true), enet_host_service would be called 10 times more often, but still lead to the same amount of time waiting for events as a 100ms wait time would produce. B) Also along the same vein, wouldn't calling enet_host_service with 0ms of wait-time (non-blocking) as fast as possible also give the same results (when it's in the while (true) loop)?
The code tries to do its best to work in a single-threaded environment where you have several milliseconds per frame and need to receive, send, handle retransmits, ACKs, etc., and the logic wants to make sure that you have predictable deadlines.
So in a 60fps environment you have 16ms per frame; you give enet 5ms, and it will try to make sure that you still have 10ms left for other computations.
But from a performance perspective it's better to put enet aside in another thread and not block the rendering thread at all. Which can be quite challenging.
I appreciate everyone's input on this, it's definitely helping me get this. I thought I'd ask a few more which relate to more concrete code examples.
Suppose we have a client which sends out packets at a rate of 20 times per second (20Hz, i.e. one packet every 0.05 seconds = 50ms).
Then we have the following loop on our server.
while (true) {
    while (enet_host_service(..., x) > 0) {
        ...
    }
}
where x is some constant representing a number of milliseconds for the timeout. Assuming x >= 50, the value of x doesn't determine the rate of this inner loop at all, right? My reasoning is that if the client is sending packets at a rate of 20Hz, and since enet_host_service returns immediately after it has received any event, then this server loop is also running at 20Hz regardless of the value of x?
If the above paragraph holds true, then what is the point of setting the variable x to a specific value? Is its purpose really only to define when we should time out and run some custom behavior?
int iterations_without_data_within_x_milliseconds = 0;
while (true) {
    while (enet_host_service(..., x) > 0) {
        iterations_without_data_within_x_milliseconds = 0;
        ...
    }
    iterations_without_data_within_x_milliseconds += 1;
    if (iterations_without_data_within_x_milliseconds >= 5) {
        ... do something specific because there has been no network activity for a while ...
    }
}
Also based on @bjorn's comment, it seems like a loop of the form
while (true) {
    while (enet_host_service(..., x) > 0) {
    }
}
has the property that as x decreases in value, the CPU usage increases. So is it beneficial to do some experimental tests to see how fast you end up processing network events, and then set x to be no higher than your experimental value to avoid extra cycles?
so is it beneficial to do some experimental tests to see how fast you end up processing network events, and then set x to be no higher than your experimental value to avoid extra cycles?
While experimentation can't hurt, I think it just means you should keep x large enough so as not to cause undesired CPU activity in the idle case, but low enough that whatever else you need to be doing doesn't stall for too long. At the very least, you'll probably want to exit your program at some point, and for a clean exit you'll want that loop to check for the exit condition from time to time, regardless of whether it is running in the main thread or a separate one.
I have a question relating to a client-server setup that involves enet_host_service. Suppose we have the following server-side files:
a.cpp
void handle_incoming_data() {
...
// Event loop
while (true) {
ENetEvent event;
// Check for events with a 100ms timeout
while (enet_host_service(server, &event, 100) > 0) { // LINE X
switch (event.type) {
case ENetEventType::ENET_EVENT_TYPE_CONNECT:
...
case ENetEventType::ENET_EVENT_TYPE_RECEIVE:
...
case ENetEventType::ENET_EVENT_TYPE_DISCONNECT:
...
default:
break;
}
}
}
...
}
b.cpp
void start_outgoing_data_loop() {
    ENetEvent event;
    while (true) {
        std::this_thread::sleep_until(...); // the loop runs at some fixed rate
        unsigned int binary_input_snapshot = this->input_snapshot_to_binary();
        printf("%d\n", binary_input_snapshot);
        ENetPacket *packet =
            enet_packet_create(&binary_input_snapshot, sizeof(binary_input_snapshot), ENET_PACKET_FLAG_RELIABLE);
        enet_peer_send(server_connection, 0, packet);
        enet_host_service(client, &event, 0); // LINE Y
    }
}
Now suppose that these two loops are run in their own separate threads (thread A runs the loop in a.cpp and thread B runs the loop in b.cpp).
The reason the loop in b.cpp is run in its own thread is so that it can run at whatever rate we want (e.g. we can tweak the send rate of the server depending on CPU usage and not have it affect anything else).
Suppose that in thread B the line enet_peer_send(server_connection, 0, packet) is run, but before the next line is executed, LINE X from thread A runs. Does this mean that the packet created will be sent by the call to enet_host_service(..., 100) from thread A rather than by enet_host_service(..., 0) in thread B? Is this a problem?
Also, with this setup the two threads may call enet_host_service "simultaneously"; would this cause any issues with enet?
In general, is this an OK approach to managing an "incoming" and an "outgoing" thread with enet? If not, can anyone share how they do this? Thanks!
Also, with this setup the two threads may call enet_host_service "simultaneously"; would this cause any issues with enet?
Yes, this causes race conditions. See https://github.com/lsalzman/enet/issues/102#issuecomment-458509858 and the first question in the FAQ.
Hey there, I've been trying to get the hang of enet, mainly trying to understand enet_host_service. I've read this about it:
This is also mentioned:
For context I'm working on a multiplayer game with a physics engine following a client-server architecture. The client sends out keyboard and mouse updates at a fixed rate. The server has a thread for the network and a thread for the physics loop.
Before I jump into coding the server I wanted to study some existing code which was said to be enet server code:
server.cpp
Questions
1. Should the rate at which enet_host_service is called on the server be the same rate at which it is called on the client side? What about if there are n <= 16 clients? Do I have to take this into consideration at all?
2. From what I understand, the call to enet_host_service is encased in an outer while loop which runs constantly; this is so that if nothing happens within 100ms the program still continues to poll again instead of exiting immediately. Is this what most people do? (Have while (enet_host_service(...)) encased in some other while loop?)
3. A) It's mentioned that you can put a call to enet_host_service with a timeout of 0 in each iteration of your game loop; is this because a call with a non-zero timeout in each iteration would cause our game loop to run slower, since it is a blocking call? B) If we use C++ and the thread library to separate the network loop into its own thread, is it then preferred to call enet_host_service with a non-zero timeout? C) Are there any other considerations to be made if we separate the enet code into its own thread?
4. A) Looking at the source code of enet_host_service, it seems that it will try to send outgoing packets and receive incoming packets within a do-while loop (which I believe runs until the waiting time is up). If that's the case, then what is the point of the timeout? It seems like if we had a timeout of 10ms, then, as it's encased in a while (true), enet_host_service would be called 10 times more often, but still lead to the same amount of time waiting for events as a 100ms wait time would produce. B) Also along the same vein, wouldn't calling enet_host_service with 0ms of wait-time (non-blocking) as fast as possible also give the same results (when it's in the while (true) loop)?
5. A) Why does calling enet_host_service with a greater timeout (which means enet_host_service is being called less often than before) cause less adequate performance? [Based on the remark in the docs which says 'enet_host_service should be called fairly regularly for adequate performance.'] B) What is the definition of 'fairly regularly'?
ps: if I get some good answers on how this works, I'll make some pull requests to update the docs with more info about the relevant topics.