unbit / uwsgi

uWSGI application server container
http://projects.unbit.it/uwsgi

[RFC] Adaptive process spawning based on memory #768

Open unbit opened 10 years ago

unbit commented 10 years ago

A very common commercial pattern is allocating/selling an amount of memory to a customer for running their app, and leaving to them the choice of how many processes/threads/coroutines to run.

The idea is to add a cheaper algorithm that constantly monitors the memory usage of an instance and adapts process spawning to the limit set by the admin.

Example:

[uwsgi]
; the workers limit (is another option needed ?)
cheaper-memory-limit = 800
; up to 10 processes
processes = 10
; start with 2 processes
cheaper = 2
; enable memory algo
cheaper-algo = memory

The first 2 workers consume 120 and 80 megs respectively, so the current average usage is 100M per worker. The instance still has 600M free (of the allocated 800), enough for 6 more processes at the 100M average, so a new process is spawned. For some reason this process consumes 200M, so the new average is 133M ((120+80+200)/3). There are still 400M available, which is above the average, so another process is spawned.

In the meantime the first 2 processes have grown to 350M each, so we are over quota, and the oldest process is destroyed.
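To make the proposed policy concrete, here is a minimal, hypothetical sketch of the decision step such a memory algo could run on each cheaper cycle. None of these names are an existing uWSGI API; they only illustrate the logic described above (spawn while another average-sized worker fits, kill the oldest when over quota):

# Hypothetical sketch of the proposed "memory" cheaper algorithm.
# workers: list of per-worker RSS in megabytes, oldest first.
def memory_cheaper_step(workers, memory_limit, max_workers):
    used = sum(workers)
    avg = used / len(workers) if workers else 0

    # Over quota: destroy the oldest worker.
    if used > memory_limit:
        return "kill_oldest"

    # Enough headroom for another average-sized worker: spawn one.
    if len(workers) < max_workers and memory_limit - used >= avg:
        return "spawn"

    return "noop"

# Walking through the example above:
print(memory_cheaper_step([120, 80], 800, 10))              # spawn (600M free >= 100M avg)
print(memory_cheaper_step([120, 80, 200], 800, 10))         # spawn (400M free >= 133M avg)
print(memory_cheaper_step([350, 350, 200, 133], 800, 10))   # kill_oldest (over quota)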

Notes/gray areas

prymitive commented 10 years ago

So the new algo would keep running the maximum number of workers that does not exceed the total allowed memory limit?

#379 was merged some time ago to allow setting memory limits for cheaper.

unbit commented 10 years ago

More or less. The difference is that the optimal number of running processes is computed based only on memory. To be clearer, it is memory that governs how many processes to run (while #379 is a limit checker). Very probably a combo of the spare cheaper algo + #379 would result in the same situation; I just want something simpler and more focused on memory monitoring.
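For comparison, a combo along those lines might look like the config below. The exact option names from #379 are an assumption here (cheaper-rss-limit-soft/hard, values in bytes); treat this as a sketch rather than a verified configuration:

[uwsgi]
; up to 10 processes, start with 2
processes = 10
cheaper = 2
; spare cheaper algo governs scaling
cheaper-algo = spare
; memory limits from #379 (assumed option names, values in bytes)
cheaper-rss-limit-soft = 734003200
cheaper-rss-limit-hard = 838860800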

prymitive commented 10 years ago

The limit is useful when overprovisioning: if you try running as many workers as possible, you need to make sure that you do not assign more memory to workers than the total memory on a given node.

Mikrobit commented 10 years ago

I can't seem to find a good reason to compute the average. Moreover, what would stop uWSGI from endlessly spawning new processes and killing old ones? Maybe a "cheaper-memory-free-limit" option is needed in order to keep a safe margin. To do that automatically the master would need to know the size of the to-be-created worker, and for robustness reasons I would use the size of the biggest process currently running, not the average size of the processes. My 2c
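As a rough illustration of that suggestion, the spawn check from the earlier sketch could size the next worker as the biggest currently running one and keep a configurable free-memory margin. Both the free_margin parameter and the option name above are hypothetical:

# Hypothetical variant of the spawn check: estimate the next worker as the
# biggest currently running worker, and keep a configurable free-memory margin
# (the "cheaper-memory-free-limit" idea above).
def can_spawn(workers, memory_limit, max_workers, free_margin):
    used = sum(workers)
    biggest = max(workers) if workers else 0
    # Spawn only if a biggest-sized worker still fits AND the requested
    # safety margin stays free afterwards.
    return (len(workers) < max_workers
            and memory_limit - used - biggest >= free_margin)

print(can_spawn([120, 80], 800, 10, 100))        # True: 800-200-120 = 480 >= 100
print(can_spawn([350, 350, 200], 800, 10, 100))  # False: already over the limit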