Open arhken666 opened 2 years ago
I tried a new config:
{ ReadTimeout: 200ms, WriteTimeout: 200ms, MaxIdleConnDuration: 200ms }
and everything is normal. It seems like MaxIdleConnDuration being too low causes this problem.
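For reference, a config like the one above can be sketched with fasthttp's `Client` struct (a minimal sketch; the field names are from the fasthttp API, and the timeout values are the ones quoted above):

```go
package main

import (
	"time"

	"github.com/valyala/fasthttp"
)

func main() {
	// Sketch of the config quoted above: the values that made the
	// problem go away (200ms everywhere, instead of a 20ms idle cap).
	client := &fasthttp.Client{
		ReadTimeout:         200 * time.Millisecond,
		WriteTimeout:        200 * time.Millisecond,
		MaxIdleConnDuration: 200 * time.Millisecond,
	}
	_ = client
}
```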
I don't understand why you would want to set MaxIdleConnDuration
so low? Setting up new connections takes CPU (especially if it's HTTPS connections).
My service receives many logs and transfers them to nginx. At first, I set MaxIdleConnDuration to 500ms, but only half of the nginx workers (16 in total) processed logs. To balance the load across the nginx workers, I lowered MaxIdleConnDuration to 20ms. Now about 14 or 15 nginx workers process logs, but CPU usage in my service occasionally spikes to 100% for 3s~10s or even 1 min. The problem only happens when there are no logs to transfer (during this time only some sub-functions such as detection are running, and CPU usage is 1%-5%), which seems weird.
If only half of your nginx workers are doing anything, then that probably means nginx doesn't need more. Why not reduce the worker count there? Setting a low MaxIdleConnDuration feels like a really bad solution to get nginx to spread its connections over more workers.
Thanks for your reply. As I said above, my client service transfers logs to nginx, and I set a low MaxIdleConnDuration (20ms) to balance the load across the 16 nginx workers, but CPU occasionally spiked to 100% (only while my service was running). I solved it by creating many goroutines that transfer logs to nginx, and keeping MaxIdleConnDuration at its default value.
However, I found something interesting in the connsCleaner function in the client.go file. When MaxIdleConnDuration is set to a low value (like 20ms), the sleepFor variable easily becomes < 0, because maxIdleConnDuration - currentTime.Sub(conns[i].lastUseTime) is probably < 0 even after the + 1, so the for loop effectively never sleeps, which takes CPU. I guess this is why the 100% CPU only happens in my service during idle time. Maybe changing + 1 to + 1 * time.Second would fix this problem? What do you think?
Even if sleepFor is less than 0 for one iteration, it won't constantly be less than 0. So I don't see how that could cause 100% CPU usage for longer.
My service is deployed on a server as a log transfer; QPS is about 18,000 and QPM about 1,000,000. To keep each nginx worker's load balanced, I configured MaxIdleConnDuration to 20ms so that each idle connection closes early and a new connection is established with an nginx worker.
Here is my fasthttp config:
{ ReadTimeout: 300ms, WriteTimeout: 300ms, MaxIdleConnDuration: 20ms }
However, the CPU occasionally spikes to 100% for 5~10s and then drops back to normal. When I remove WriteTimeout (making it unlimited), CPU usage no longer rises to 100%. I have no idea why this happens.