
`net_socket_read_wait` is catastrophic for `cl_refresh_rate` smoothness #6751

Closed Jupeyy closed 1 year ago

Jupeyy commented 1 year ago

Besides the input problems with cl_refresh_rate, it also makes the drawn frames look like trash:

In maplayers.cpp I applied this patch in ::OnRender(), after vec2 Center = ...:

    // move the camera at a constant rate per frame, so any frame-time
    // irregularity shows up directly as stutter in the map scroll
    static vec2 fakecenter = {};
    fakecenter.x += 1;
    fakecenter.y += 1;
    Center = fakecenter;

with config:

gfx_refresh_rate 0; gfx_asyncrender_old 0; cl_refresh_rate 50

Looks horrible.

net_socket_read_wait also causes a higher refresh rate than the one set, because the snapshots wake up the client... I now patched away net_socket_read_wait in client.cpp and replaced it with a simple sleep:

    std::this_thread::sleep_for(SleepTimeInNanoSeconds);

The game looks smoother at 50 FPS than at 80 FPS (which is what net_socket_read_wait caused for me).
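For reference, the replacement is essentially just a fixed-rate loop; a minimal sketch, assuming SleepTimeInNanoSeconds is derived from cl_refresh_rate (RunAtRefreshRate, PumpNetwork and Render are illustrative names, not the actual client functions):

    // Minimal sketch of the fixed-rate loop, not the actual client code.
    #include <chrono>
    #include <thread>

    void PumpNetwork() {} // stand-in for the client's per-frame network poll
    void Render() {}      // stand-in for rendering one frame

    void RunAtRefreshRate(int ClRefreshRate)
    {
        using namespace std::chrono;
        const nanoseconds FrameTime{1000000000 / ClRefreshRate};
        auto NextFrame = steady_clock::now() + FrameTime;
        while(true)
        {
            PumpNetwork(); // packets are polled once per frame; they no longer wake us up
            Render();
            const auto SleepTimeInNanoSeconds = NextFrame - steady_clock::now();
            if(SleepTimeInNanoSeconds > nanoseconds::zero())
                std::this_thread::sleep_for(SleepTimeInNanoSeconds);
            NextFrame += FrameTime;
        }
    }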

Q: Does the client need to wake up as soon as possible when a snapshot arrives? Is there any benefit?

@trml Does it have an impact on prediction smoothness if the timestamp of the incoming packet is delayed?

Chairn commented 1 year ago

Doesn't the sleep prevent game state update when a keyboard event happens?

Jupeyy commented 1 year ago

> Doesn't the sleep prevent game state update when a keyboard event happens?

That cl_refresh_rate can affect input is a different problem, and a much harder one to solve. In theory, since TW is tick-based, it should be playable with a lower client refresh rate (the server runs at 50 ticks per second, so in the worst case you miss sending an input to the server in time, which causes lags). But since e.g. the relative mouse coordinates are apparently not accurate, it makes the input feel weird.

This issue is mainly about the fact that the frames actually visible to us humans look completely ugly/unsmooth.

trml commented 1 year ago

I can't say for certain. It could introduce jitter in some parts of the system/network flow, but in theory things should still get smoothed out (that's the purpose of CSmoothTime: to keep the prediction time smooth relative to PC time).
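A rough sketch of that idea (not the actual CSmoothTime implementation; all names here are illustrative): the target offset jumps with each network measurement, but the value used for prediction converges towards it gradually:

    // Sketch only: smooth the prediction clock instead of snapping it to
    // every new network time measurement.
    struct CSmoothTimeSketch
    {
        double m_CurrentOffset = 0.0; // offset currently applied to local (PC) time
        double m_TargetOffset = 0.0;  // latest offset estimated from packet timestamps

        void OnNewMeasurement(double Offset) { m_TargetOffset = Offset; }

        // called once per frame; moves a small fraction towards the target so
        // the returned prediction time never jumps
        double PredictionTime(double LocalTime)
        {
            m_CurrentOffset += (m_TargetOffset - m_CurrentOffset) * 0.05;
            return LocalTime + m_CurrentOffset;
        }
    };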

What was the refresh rate of your screen when you tried this, btw, and Xorg or Wayland? At 60Hz things look smoother with cl_refresh_rate 80 than 50 to me, but when configuring the display to 120Hz it's the opposite (this was on Wayland, though, which seems to force vsync). I tried again on Xorg, and there it looks better with 80 FPS in both cases for me (a bit hard to say, since 50 is just consistently unsmooth while 80 is smooth most of the time).

In both Xorg and Wayland the FPS graph indicates some dropped/skipped frames (with some difference between 50 and 80 FPS). That might affect the outcome of the test code slightly if OnRender still gets called.

Jupeyy commented 1 year ago

I never set it to 80; I changed the sleep timer. cl_refresh_rate 50 gives me 80 FPS because of the wake-ups.

With the changed thread sleep I get 50 FPS. But these 50 FPS are hugely smoother than the 80 FPS with the network wake-up.

Jupeyy commented 1 year ago

I have a 240Hz monitor. Wayland etc. doesn't matter.

trml commented 1 year ago

Yes, I somehow missed a couple of lines in the first post.

I tried again now and it definitely is less smooth when using net_socket_read_wait, especially text in entities. Setting cl_refresh_rate lower (even to 1) gives about the same visual result as 50, perhaps because it's mostly the packets that wake up the client and not the wait completing.

It looks like there is either some jitter in how the local time is updated before the render, or in the conversion from local time to predicted time (or in how it is used by the prediction/rendering).

Jupeyy commented 1 year ago

It's also the simple fact that this destroys frame consistency quite a lot. Randomly firing a frame later or earlier relative to the last frame makes the displayed frames much less consistent.

To verify, I also simply added a random sleep offset (uniform in [0, 12.5ms]):

#include "openssl/rand.h"
[...]
            // after sleep time is calculated
            uint64_t RandNumber = 0;
            RAND_bytes((uint8_t*)&RandNumber, sizeof(RandNumber));
            RandNumber = 0 + (RandNumber % ((12500000 - 0) + 1u));
            int64_t Off = -(int64_t)RandNumber;
            SleepTimeInNanoSeconds = SleepTimeInNanoSeconds + std::chrono::nanoseconds(Off);
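The same jitter could also be produced with <random> instead of OpenSSL; an equivalent sketch (SleepTimeInNanoSeconds as above):

    // Illustrative alternative using <random> instead of OpenSSL:
    #include <chrono>
    #include <random>

    static std::mt19937_64 s_Rng{std::random_device{}()};
    // uniform offset in [0, 12500000] ns, i.e. the sleep is shortened by up to 12.5ms
    std::uniform_int_distribution<int64_t> Dist(0, 12500000);
    SleepTimeInNanoSeconds -= std::chrono::nanoseconds(Dist(s_Rng));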

This also visibly destroys the consistency. The biggest problem with network packets is that they arrive at rather random timestamps.

I also tested the network wait with cl_refresh_rate 240. It's obviously less visible, but the frames still jump more than with a sleep; it's still visible on a high-refresh-rate monitor.

So yeah, if we don't need the packet timestamp accuracy for prediction, we should simply replace this call.

heinrich5991 commented 1 year ago

> net_socket_read_wait also causes a higher refresh rate than the one set, because the snapshots wake up the client...

Perhaps the sleep could be continued if the wakeup is too early?
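Something along these lines, perhaps (a sketch; WaitForPacketOrTimeout and ProcessPendingPackets are hypothetical wrappers around the existing socket wait and packet processing):

    // Sketch: re-enter the wait until the frame deadline actually passes, so
    // early packet wake-ups don't push the frame rate above cl_refresh_rate.
    const auto Deadline = std::chrono::steady_clock::now() + SleepTimeInNanoSeconds;
    while(true)
    {
        const auto Remaining = Deadline - std::chrono::steady_clock::now();
        if(Remaining <= std::chrono::nanoseconds::zero())
            break;
        WaitForPacketOrTimeout(Remaining); // may return early when a packet arrives
        ProcessPendingPackets();           // handle it, then keep waiting out the frame
    }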

Jupeyy commented 1 year ago

Then we could also replace it ^^

heinrich5991 commented 1 year ago

It has the difference of making the network able to respond to events, so it's not entirely the same. E.g. for proper network protocols like QUIC, it's important that the implementation gets called as soon as network packets arrive. I don't know whether the same applies to our network protocol.

Jupeyy commented 1 year ago

Mh, then I guess it's good for you to know that cl_refresh_rate_inactive has always used a sleep, that vsync also causes/can cause sleeps, and that we only net_sleep for the main connection (not the dummy or anything else).

Also, the net_wait can only handle microseconds :/