bytedance / g3

Enterprise-oriented Generic Proxy Solutions
Apache License 2.0

Memory creep over time #339

Closed zhilingc closed 1 month ago

zhilingc commented 1 month ago

Hi, again, thanks for the great project! I'd appreciate some help on this:

I'm running g3 in production as a straightforward forward proxy with pretty modest load (~2.1kRPS peak), and I've noticed that memory consumption has been trending upwards over time -

[image: memory utilisation graph trending upwards over time]

I considered the possibility of hanging tasks, but the total number of tasks doesn't seem to follow the same trend.

[image: total task count over the same period]

There is also no configuration to cull long-running tasks, or to limit the memory consumption of each thread. I'm not sure if there is a possibility of a memory leak, or some other issue (looking at tokio issues, memory fragmentation seems to be a common problem)?

zh-jq commented 1 month ago

Is the memory utilization the total memory used by g3proxy?

zhilingc commented 1 month ago

Hi @zh-jq, it's the memory utilisation captured by node-exporter running on the instance. I have other things running on the instance, but the main memory consumption (and growth) is from the g3 threads.

[image: per-process memory usage on the instance]
zh-jq commented 1 month ago

18% is far less than 70% in the first graph. Is it possible for you to add a RES memory usage graph for the g3proxy process?
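For reference, a minimal Rust sketch (not part of g3; the PID argument and output format are just illustrative assumptions) of one way to sample the RES figure for the g3proxy process, by reading VmRSS from /proc/&lt;pid&gt;/status:

```rust
// Sample the resident set size (RES / VmRSS) of a process by PID.
// Usage: rss_sample <pid>
use std::env;
use std::fs;

fn main() {
    let pid = env::args().nth(1).expect("usage: rss_sample <pid>");
    let status = fs::read_to_string(format!("/proc/{pid}/status"))
        .expect("failed to read /proc/<pid>/status");

    // VmRSS is the kernel's resident set size for the process, in kB;
    // this is the same number top/htop report as RES.
    if let Some(line) = status.lines().find(|l| l.starts_with("VmRSS:")) {
        println!("{}", line.trim());
    }
}
```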

zh-jq commented 1 month ago

And is the memory utilization in the first graph total minus free, or total minus available?

zhilingc commented 1 month ago

Ah, good catch. It turns out my metric was misleading: the utilisation is 1 - (MemFree / MemTotal), which doesn't reflect the memory actually available to the system. Due to the large number of files opened by the process (I presume), the buffers/cache grow quite large over time:

MemTotal:        7950368 kB
MemFree:         1902296 kB
MemAvailable:    5806396 kB

But it's not actually a cause for concern since it's memory that the system can reclaim anyway.
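For anyone hitting the same confusion, here is a minimal Rust sketch (not part of g3; it just restates the two formulas discussed above) that computes the misleading and the corrected ratio side by side from /proc/meminfo:

```rust
// Compare 1 - MemFree/MemTotal (counts page cache as "used") with
// 1 - MemAvailable/MemTotal (accounts for reclaimable memory).
use std::collections::HashMap;
use std::fs;

fn main() -> std::io::Result<()> {
    let meminfo = fs::read_to_string("/proc/meminfo")?;

    // Parse lines of the form "MemTotal:        7950368 kB" into a kB map.
    let fields: HashMap<&str, u64> = meminfo
        .lines()
        .filter_map(|line| {
            let mut parts = line.split_whitespace();
            let key = parts.next()?.trim_end_matches(':');
            let value: u64 = parts.next()?.parse().ok()?;
            Some((key, value))
        })
        .collect();

    let total = fields["MemTotal"] as f64;
    let free = fields["MemFree"] as f64;
    let available = fields["MemAvailable"] as f64;

    // The first ratio creeps upwards as buffers/cache grow;
    // the second reflects real memory pressure.
    println!("1 - MemFree/MemTotal      = {:.2}", 1.0 - free / total);
    println!("1 - MemAvailable/MemTotal = {:.2}", 1.0 - available / total);
    Ok(())
}
```

With the values quoted above, the first ratio is about 0.76 while the second is only about 0.27, which matches the discrepancy between the two graphs.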

I'll amend my metric, sorry for the bother!