Open dekiesel opened 1 year ago
I've honestly never looked at the memory usage of Tandoor in isolation. From some googling, Postgres alone seems to usually take around 130 MB; nginx and gunicorn add their part, and Tandoor certainly does a bit of caching and such as well, so this might be alright.
Feel free to analyze this further; maybe improvements can be made to reduce the memory usage, but to me this kinda feels alright.
Since gunicorn keeps a copy of the whole app in each worker, RAM usage is tightly correlated with the number of workers.
I noticed that even in my 1 CPU setup the CPU usage never crossed 60%, which seems to indicate that from gunicorn's point of view the workload isn't CPU bound (the heavy lifting is done by the DB, if I understand correctly). So I set the number of workers to 2 and increased the threads to 4.
This decreased the memory usage by 55 MB (16%).
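For reference, a worker/thread combination like the one described can be passed via gunicorn's standard CLI flags. This is just a sketch with the values from my test setup; the WSGI module path and bind address are assumptions, check boot.sh for the exact invocation:

```shell
# Fewer workers = fewer copies of the app in RAM; threads within a
# worker share one copy, so they are much cheaper memory-wise.
gunicorn recipes.wsgi \
    --workers 2 \
    --threads 4 \
    --bind 0.0.0.0:8080
```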
I noticed, though, that over time the usage climbed back to the original value, and the reason is that gunicorn workers don't release memory once they have allocated it.
As a workaround I started gunicorn with max_requests = 100 and max_requests_jitter = 10, and now the memory consumption stays at about 280 MB for the whole stack.
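These map to gunicorn's `--max-requests` and `--max-requests-jitter` flags, which recycle each worker after a bounded number of requests. A sketch of the invocation I used (module path assumed, values are the ones from my test):

```shell
# Restart each worker after ~100 requests (+/- a random jitter of up
# to 10, so the workers don't all recycle at the same moment). A fresh
# worker starts without whatever memory its predecessor accumulated.
gunicorn recipes.wsgi \
    --workers 2 \
    --threads 4 \
    --max-requests 100 \
    --max-requests-jitter 10
```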
I haven't noticed any slowdown or lag during testing, but my DB is tiny (20 recipes).
IMO it'd be a good idea to include max_requests and max_requests_jitter in boot.sh, because recycling workers keeps memory leaks from accumulating.
If you want I can create a PR.
The question for me is whether this is really safe for large-scale installations. Usually there is an upside to setting these kinds of values a bit higher, but sometimes there's a downside as well.
What we could do is add an environment variable so that users can change the setting, but I don't really want to change the default. If you want you can PR that change; if not, I'll leave it as an enhancement for the future.
That's what I meant :) Workers and threads can already be set in the env file; I'd just add an option for requests and jitter.
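A sketch of what the env-driven version in boot.sh could look like. The variable names `GUNICORN_MAX_REQUESTS` and `GUNICORN_MAX_REQUESTS_JITTER` are hypothetical, and defaulting both to 0 preserves gunicorn's current behaviour of never recycling workers:

```shell
# Default of 0 disables worker recycling, i.e. today's behaviour;
# users opt in by setting these in their env file.
GUNICORN_MAX_REQUESTS="${GUNICORN_MAX_REQUESTS:-0}"
GUNICORN_MAX_REQUESTS_JITTER="${GUNICORN_MAX_REQUESTS_JITTER:-0}"

gunicorn recipes.wsgi \
    --max-requests "$GUNICORN_MAX_REQUESTS" \
    --max-requests-jitter "$GUNICORN_MAX_REQUESTS_JITTER"
```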
Issue
I see (imho) excessively high memory usage.
I have three containers, tandoor, nginx and postgres (as recommended), and the whole setup uses ~350 MB.
Gunicorn alone uses 60% of that memory.
Is that to be expected? It seems quite high.
Tandoor Version
1.15
OS Version
Alpine 3.18
Setup
Docker / Docker-Compose
Reverse Proxy
No reverse proxy
Other
No response
Environment file
Docker-Compose file
Relevant logs
No response