ValveSoftware / halflife

Half-Life 1 engine based games

Can you give us an official statement on the best/most recommended netcode settings? #3109

Open TibyXD opened 3 years ago

TibyXD commented 3 years ago

Hi!

In my opinion, the CS 1.6 netcode has always been a murky topic, since very few people know what each CVAR actually does. I have searched the Internet for about six months, trying to understand and determine what the best netcode settings are.

I have found numerous conflicting claims about what is best and what is not, so I started testing on my own, and I found a few problems with how CS handles netcode.

1) When setting ex_interp 0, the value the game calculates is wrong. For example, with cl_updaterate 102, ex_interp should be 0.009804 (1/102), but CS sets it to 0.009000. Is the way CS calculates it OK, or should we set it to 1/updaterate ourselves? (A quick sketch of this calculation follows the list below.)

2) On servers running sys_ticrate 500/1000, FPS has a major impact on the bottom line in net_graph 1. With fps_max 100 and fps_override 0, red dots start appearing in the graph at cl_updaterate 83 or higher. cl_updaterate 102 produces a very inconsistent line, reduced to just a few blue dots at the top and a purple one at the bottom. Yet if you raise your FPS to about 140 or higher, the line looks continuous again.

3) FPS also has a major impact on latency in net_graph 1. On my server, the latency at 60 FPS is 10-11, and at 250 FPS it is 0.

4) After about an hour of playing, and after minimising CS to the taskbar, red dots start appearing on the graph. A higher cmdrate value solves it.

5) rate has a maximum value of 100000, but it doesn't seem to improve anything beyond 25000. Is there an explanation for this? Is there any scenario in which rate could use that much data? Is it OK to use a higher value all the time?
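
To make point 1 concrete, here is a minimal sketch of the discrepancy (my own illustration, not GoldSrc code), using the cvar values from above:

```cpp
// Sketch of the ex_interp discrepancy from point 1; illustrative only.
#include <cstdio>

int main() {
    const float cl_updaterate  = 102.0f;
    const float derived_interp = 1.0f / cl_updaterate; // ~0.009804 s (1/102)
    const float engine_interp  = 0.009f;               // what CS reportedly sets

    std::printf("1/updaterate: %f s\n", derived_interp);
    std::printf("engine value: %f s\n", engine_interp);
    std::printf("difference:   %f ms\n",
                (derived_interp - engine_interp) * 1000.0f);
    // Prints a difference of roughly 0.804 ms.
    return 0;
}
```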

Can you give us an official answer to this? We should be able to know how to improve our connections and, as server owners, how to improve our players' connections. What are the maximum rates, and what are the correct values for them?

Maxi605 commented 3 years ago
> 3) FPS also has a major impact on latency in net_graph 1. On my server, the latency at 60 FPS is 10-11, and at 250 FPS it is 0.

Are you trying this on an actual dedicated server, or just offline?

TibyXD commented 3 years ago

On a dedicated server, of course. I actually tried lots of them. Offline, these settings don't seem to have any effect.

fox3562 commented 3 years ago
  1. ex_interp is the interpolation time, measured in seconds: a value of 0.009 = 9 ms. The difference between the automatically calculated value (when ex_interp is 0) and 1/updaterate is only 0.804 ms! I don't think it's worth explaining why such a value is negligible. (You can also set 0.00799999992, but there is no point in doing so.)

  2. I don't see the connection you draw between the data-transfer parameters and the server's tick rate. If red dots appear in the net_graph 1 graph, that is an indicator of packet loss or delay along the route (which one depends on where in the graph they appear); the linked description covers this. In short, the graph shows the stability of the client-server Internet connection, so attributing its behaviour to the server's sys_ticrate is mistaken.

  3. By reducing the updaterate value to a minimum (for example, 25), you can reduce the displayed delay to a minimum, which is logical: the client receives only 25 packets per second, so the processing delay for incoming packets is minimal. If the number of FPS affects it in your case, then something else is clearly wrong on your end.

  4. If this happens only after a while, then the problem is most likely in the computer, because cmdrate is just the number of packets sent to the server. If increasing the cmdrate value eliminates the red dots, then most likely either the channel or the memory is saturated (if I'm wrong, someone correct me).

  5. The rate variable is the channel speed (width), measured in bytes per second, with a maximum value of 100000 (about 98 kilobytes per second). The maximum was raised to match current realities; in most cases 25000-30000 is enough, but 100000 is preferable because the amount of data transferred between the client and the server has grown. On the server side there are the minrate and maxrate cvars, which is where such limits belong. (A rough bandwidth calculation follows this list.)
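
To put some numbers on point 5, a back-of-the-envelope calculation, assuming rate really is bytes per second as described above (my own illustration, not engine code):

```cpp
// Rough bandwidth numbers for the rate cvar; illustrative only.
#include <cstdio>

int main() {
    const double rate_max        = 100000.0; // engine maximum, bytes per second
    const double updates_per_sec = 102.0;    // a common cl_updaterate value

    std::printf("rate 100000 = %.1f kilobytes/s\n", rate_max / 1024.0);
    std::printf("per-update budget at 102 updates/s = %.0f bytes\n",
                rate_max / updates_per_sec);
    // ~97.7 KB/s in total, ~980 bytes per update: far more headroom than a
    // typical delta update needs, which would explain why values above
    // 25000-30000 rarely show a visible difference.
    return 0;
}
```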

And if we remember that GoldSrc was released at the end of 1998, and consider what kind of Internet connection was common then (dial-up), we have to understand that the netcode was designed not for today's channel speed (width) but for what dial-up provided. I don't think it's worth describing the speed of such a connection, its stability, losses along the route and the other technical realities of that time.

As a result, we can conclude that there is no point in chasing "magic" values for certain cvars, because you cannot noticeably influence anything with them. For example, the cl_dlmax variable was used to fragment the decal packet in order to unload the data channel a little; it is measured in bytes and has a maximum value of 1024. In those years (with a 14.4 kilobit per second dial-up connection), players were forced to fragment a 1-kilobyte packet just to free up the transmission channel (a worked example of those numbers follows below). And nowadays people are trying to notice the difference between 10 ms and 9 ms...
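
The dial-up arithmetic mentioned above works out roughly like this (my numbers, based on the 14.4 kbit/s figure):

```cpp
// How long a full 1024-byte cl_dlmax packet occupies a 14.4 kbit/s dial-up
// link; a worked example of the paragraph above, not engine code.
#include <cstdio>

int main() {
    const double link_bits_per_sec  = 14400.0;                 // 14.4 kbit/s
    const double link_bytes_per_sec = link_bits_per_sec / 8.0; // 1800 bytes/s
    const double packet_bytes       = 1024.0;                  // cl_dlmax maximum

    std::printf("transfer time: %.2f s\n", packet_bytes / link_bytes_per_sec);
    // ~0.57 s: over half a second of channel time for a single packet, which
    // is why fragmenting it mattered back then.
    return 0;
}
```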

Links: Cvarlist · Line option · Protocol description

P.S. Sorry for my English, I use a translator.

ivan8m8 commented 3 years ago

First, I would like to thank @TibyXD for starting this really valuable topic. I have likewise been wondering why there are still no official clarifications on netcode settings in the 202x years.

In the first question, the topic starter carefully describes an awfully complicated situation. I would only add a couple of things:

As far as I know, the lower the ex_interp value, the earlier you can see the models, but the models may flicker.

ex_interp can be calculated in at least 4 ways:

_(Let's consider cl_updaterate 102, since that is the most appropriate value in most cases.)_

  1. Using pure math: 1 / 102 gives us 0.00980392156.
  2. In C++, 1.0f/102.0f gives us **0.00980392*** (see the float-precision sketch after this list).
  3. Setting ex_interp 0 gives us 0.009000.
  4. The lowest value we are able to set is 0.00799999992.
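
To check option 2, here is a tiny test of what single- vs double-precision arithmetic actually produces (assuming 32-bit floats, which, as noted in the footnote, is a guess):

```cpp
// What 1/102 looks like in float vs double precision; illustrative only.
#include <cstdio>

int main() {
    const float  f = 1.0f / 102.0f;
    const double d = 1.0  / 102.0;

    std::printf("float : %.11f\n", f); // ~0.00980392191 (nearest 32-bit float)
    std::printf("double: %.11f\n", d); // ~0.00980392157
    return 0;
}
```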

Which one to use? I don't know. But I'll give the lowest value a try (I've played with ex_interp 0).

As for your 5th question, you may see no difference because the server's max & min rates are equal. I'm unsure about that, though; the sketch below illustrates the clamping idea.
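
A minimal sketch of that clamping idea, assuming the server limits each client's rate to the [sv_minrate, sv_maxrate] range (those cvars exist on the server side; the function itself is just my illustration):

```cpp
// Why equal server-side min/max rates would hide client-side rate changes;
// a sketch of the clamping idea, not actual engine code.
float clamp_client_rate(float client_rate, float sv_minrate, float sv_maxrate) {
    if (client_rate < sv_minrate) return sv_minrate;
    if (client_rate > sv_maxrate) return sv_maxrate;
    return client_rate;
}

// With sv_minrate == sv_maxrate, every client ends up with the same
// effective rate, so changing rate on the client shows no difference:
//   clamp_client_rate(25000.0f,  100000.0f, 100000.0f) -> 100000.0f
//   clamp_client_rate(100000.0f, 100000.0f, 100000.0f) -> 100000.0f
```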

* could be wrong, since I'm not a C++ dev & I'm unsure if they use floats.