Open gamemann opened 7 years ago
Hello, I don't know if this is still being looked into, but I would like to say that I've been seeing this issue as well.
I am encountering threading and performance issues with the Linux build on CentOS. I even gave the server its own CPU core, reserved so nothing else runs on it, and it made no difference.
Anyone tested csgo linux srcds after today's update? http://blog.counter-strike.net/index.php/2018/10/21286/
It's multiple fixes for threading and integration, not fixes for multithreading.
Hi @kisak-valve,
I just wanted to confirm, was the CS:GO update released on October 3rd not intended to fix this issue?
If so, will this issue be addressed in the future?
I've been wanting to run CS:GO servers on linux but cannot due to this limitation.
Thank you.
Hi @kisak-valve
Did the CS:GO update released on October 3rd fix the issue? We and other Linux server hosts are waiting for a fix for running multi-threaded CS:GO servers. ;)
\ Let's bring the CS:GO server improved performance and fewer issues! / \ Go for multi-threading! /
Hi everyone,
I just wanted to give an update. I'm hosting a CS:GO Surf Timer server @ 85 Tick on an Ubuntu 18.04 machine. The machine's processor is the Intel i7-4790K @ 4.0 GHz.
My server reached 46 players today and it appears the Linux limitation no longer exists in CS:GO (ensure the `sv_parallel_sendsnapshot` ConVar is set to 1). My server was using 140 - 145% CPU (an additional thread was being used) while the occlusion ConVars were disabled. That said, the tests from the original report give different results today (e.g. `threadpool_cycle_reserve` now reserves one thread when occlusion is disabled, instead of 0 as when I initially tested on Linux).
In CS:S and other Source Engine games, having `sv_parallel_sendsnapshot` set to 1 can result in server crashes. However, I've never experienced this issue personally (even in CS:S). My CS:GO server has been running for 5 - 6 days straight with no crashes while this ConVar is enabled. More information on the possible negative side effects of enabling this ConVar can be found here:
https://github.com/ValveSoftware/Source-1-Games/issues/11
I hope this helps and if anyone is still having issues, please feel free to reply. It just appears to be fixed from what I'm seeing.
Thanks.
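For anyone wanting to apply this, here is a minimal sketch of persisting the ConVar in the server config; the `csgo/cfg/server.cfg` path is an assumption about a typical srcds install layout, so adjust it for yours:

```shell
# Sketch: persist the ConVar in server.cfg so it applies on every map load.
# The path is an assumption about a typical srcds install layout.
CFG="csgo/cfg/server.cfg"
mkdir -p "$(dirname "$CFG")"

# Append the setting only if it isn't already present.
grep -q '^sv_parallel_sendsnapshot' "$CFG" 2>/dev/null \
  || echo 'sv_parallel_sendsnapshot 1' >> "$CFG"

cat "$CFG"
```

You can also set it live from the server console, but putting it in the config keeps it across restarts.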
Hi @gamemann
Thanks for the detailed response. I hope this won't come down to architecture differences between Xeons and Core i7s; however, I am going to bring my CS:GO server instances back up, do some testing, and see if it's resolved for me as well.
Hi @gamemann, if I understand correctly, the cvar/command `sv_parallel_sendsnapshot` needs to be 0?
> Thanks for the detailed response. I hope this won't come down to architecture differences between Xeons and Core i7s; however, I am going to bring my CS:GO server instances back up, do some testing, and see if it's resolved for me as well.
I would hope it won't come down to architecture differences. It's definitely worth testing, though. My initial report testing was done with an Intel Xeon CPU.
> Hi @gamemann, if I understand correctly, the cvar/command `sv_parallel_sendsnapshot` needs to be 0?
In order to use the extra thread for networking, you have to set `sv_parallel_sendsnapshot` to 1. I had it disabled at first and was experiencing poor performance when my server reached 32+ players at 85 tick (it was using around 112% CPU max, but the occlusion ConVars may have been enabled at the time). When I enabled the `sv_parallel_sendsnapshot` ConVar, performance increased instantly and the server started using 140 - 145% CPU.
I just want everyone to keep in mind that there have been reports of the `sv_parallel_sendsnapshot` ConVar causing the server to crash when enabled in CS:S/TF2. I've personally never faced this issue, even when running a 64-slot CS:S server. My CS:GO servers have been running with 5 - 6+ days of uptime with this ConVar enabled, so I don't believe it will have any negative effects in CS:GO. However, I'm not certain.
Once I get more reports that the Linux limitation is indeed lifted on Intel Xeon CPUs, I will close this issue. I'm really happy this issue was finally addressed by Valve!
I hope this helps!
Thanks.
@Xergxes7 Any update? Did you retest in your environment?
I just wanted to request another update. Has anyone been able to confirm if the Linux limitation has in fact been lifted besides me?
Hi guys! Valve, we need an update for this problem. :( I have this problem on an Ubuntu 16.04 server.
Thank you.
@kisak-valve please pull this thread up again on your valve to-do list!
Hello @Knot3n, friendly reminder that I'm a moderator for Valve's issue trackers on Github, and not a CS:GO dev myself. We'll need to hear from one of them if / when there is some progress with this issue.
If you have 8 cores / 16 threads, can you use `-threads 16`? Or 4 and 8?
With `-threads 2` and without it, I get the same result (both print "Starting 1 worker threads"), so go with `-threads 3`; it starts 2 worker threads.
There are currently 8 servers running; GOTV's `tv_snapshotrate` is 128 tick, and each server has 12 slots. I have an AMD Ryzen 2700 CPU, and `-threads 16` is set in each server's launch options. With `tv_snapshotrate` at 128 tick it puts a bit of load on `sv`: it sits at 3-4 `sv` with 0.* `var`, a little in the red at times. With `tv_snapshotrate` at 24 tick it's 2-3 `sv` with 0.00 `var`, but the 128 tick `tv_snapshotrate` makes the demo look better. Is it wrong to set `-threads` like this? Should `-threads` be set to 3 for each server?
> With `-threads 2` and without it, I get the same result (both print "Starting 1 worker threads"), so go with `-threads 3`; it starts 2 worker threads.
It shouldn't; you're doing something wrong.
I have `sv_parallel_sendsnapshot "1"`.

I add `-threads 2` to the command line and restart the server:

```
Stopping 0 worker threads
Starting 1 worker threads
1 threads. 808,746 ticks
Note that cmd line -threads 2 is specified
```

I add `-threads 3`:

```
Stopping 0 worker threads
Starting 1 worker threads
2 threads. 1,339,726 ticks
Note that cmd line -threads 3 is specified
```

Command line without `-threads`:

```
Stopping 0 worker threads
Starting 1 worker threads
1 threads. 819,180 ticks
```
Yes
Debian 10
Overall it works, but `-threads 3` = 2 threads, `-threads 4` = 3 threads, and so on.
> Overall it works, but `-threads 3` = 2 threads, `-threads 4` = 3 threads, and so on.
```cpp
int nThreads = ( CommandLine()->ParmValue( "-threads", -1 ) - 1 );
```
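That engine line explains the observed off-by-one: the worker count is the `-threads` value minus one, since the main thread isn't counted as a worker. A quick sketch of that arithmetic, assuming the engine's default of -1 when the flag is absent:

```shell
# Mirrors the quoted engine logic: worker threads = (-threads value) - 1.
# -1 stands in for "flag not specified", matching the engine's default.
workers_for() {
  local param=${1:--1}        # the -threads value, or -1 if not passed
  echo $(( param - 1 ))
}

workers_for 2   # -> 1 worker thread
workers_for 3   # -> 2 worker threads
workers_for 4   # -> 3 worker threads
```

So to get N worker threads, pass `-threads N+1`.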
`occlusion_test_async "1"`, `sv_parallel_sendsnapshot "1"`, `sv_occlude_players 0`
How do I configure these commands in CS2?
Hello, I would like to address an issue with dedicated servers. Currently, any server running on Linux suffers from a limitation. From what I know, CS:GO servers are supposed to use a separate thread for networking. However, I don't believe this is the case for CS:GO Linux servers at the moment.
Testing
I set up two servers (one Linux, one Windows) with the following settings:

- `bot_quota 63`
- `bot_flipout 1` (if nonzero, bots use no CPU for AI; instead, they run around randomly)
- `sv_stressbots 1` (if set to 1, the server calculates data and fills packets to bots; used for perf testing)

The Windows server had better performance than the Linux server. I've linked images of the CPU usage graphs below.
Windows
http://g.gflclan.com/c750da34520d7b0f.png
Linux
http://g.gflclan.com/060564b1a69a5058.png
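The stress-test settings above can be collected into a config sketch; the file name is an assumption, and the comments paraphrase the ConVar descriptions:

```shell
# Sketch of the stress-test config used for the Windows/Linux comparison.
# File name is hypothetical; exec it from the server console (exec stresstest).
cat > stresstest.cfg <<'EOF'
bot_quota 63            // fill the server with bots
bot_flipout 1           // bots use no CPU for AI; they run around randomly
sv_stressbots 1         // server calculates data and fills packets to bots
EOF
cat stresstest.cfg
```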
Both of these servers ran the exact same settings and, as you can see, the Windows server could go above 100% CPU while the Linux server could not. That said, I hosted several live servers a couple of years ago that filled up to 64 players each day. The same issue occurred on our CS:GO Linux server at 64 players, and performance was poor. As soon as we moved over to Windows, we saw a big improvement in performance and the server was using more than 100% CPU (i.e. another thread). This Linux limitation does not exist in any other Source Engine game as far as I am aware. I used to host a CS:S Linux server that reached 64 players each day, and we easily went over 100% CPU at high player counts.
After doing further testing, I've noticed a few things.
Occlusion_test_async Command
"Enable asynchronous occlusion test in another thread; may save some server tick."
On Windows, this command has no effect and doesn't print any output.
On Linux, if set to 0, the server will stop using the (occlusion?) thread and performance is affected negatively.
As far as I am aware, the occlusion thread is used to help prevent wall hacks (please correct me if I'm wrong). I find it strange that this command has no effect on Windows servers but decreases performance on Linux servers when set to 0. The following is printed when the command is changed on Linux:
Threadpool_cycle_reserve Command
This command reserves any extra threads on the server. CS:GO Windows and Linux servers have different results with this command while running the same settings.
Windows
While
occlusion_test_async
is set to 0.While
occlusion_test_async
is set to 1.Linux
While
occlusion_test_async
is set to 0.While
occlusion_test_async
is set to 1.As you can see, there are extra threads running on Windows but not on Linux. The
occlusion_test_async
command has no effect on Windows while runningthreadpool_cycle_reserve
but on Linux, it does have an effect.I've also ran
threadpool_run_tests
on both servers and here are the results:Windows
https://pastebin.com/QiriPVTH
Linux
https://pastebin.com/DPWhHE7r
With that being said, I've tried the following to attempt to unlock the limitation on Linux:

- Setting the `-threads` command line option to 8.
- Setting `host_thread_mode` to 0, 1, and 2.
- Setting `threadpool_affinity` to 0 and 1.
- Setting `net_queued_packet_thread` to 0 and 1.
- Setting `sv_occlude_players` to 0 and 1.

None of these worked.
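For reference, a launch line combining those attempts might look like the following; the map and the combination of values are assumptions for illustration, and none of this unlocked the extra thread at the time:

```shell
# Hypothetical srcds_run invocation combining the options tried above.
# None of these unlocked the extra networking thread on Linux.
CMD="./srcds_run -game csgo -threads 8 \
+host_thread_mode 1 +threadpool_affinity 1 \
+net_queued_packet_thread 1 +sv_occlude_players 0 \
+map de_dust2"
echo "$CMD"
```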
To conclude, I believe I've given enough information. If you need specific details, please let me know! This happens on every Linux distribution I've tried (Ubuntu, Debian, CentOS, and Gentoo). I'm aware this GitHub project is for client-side issues, but I have seen some server-side issues posted as well without being removed. I've tried emailing Valve multiple times about this limitation but I unfortunately haven't received any responses. This is my last attempt at trying to address this issue.
Thank you for reading and I hope Valve can finally look into this issue!