Closed aamikus closed 4 years ago
I guess I'm not really familiar enough with Gunicorn to provide much useful commentary. I use this library for long running asyncio applications (network servers generally) and it works well for that use case.
I assume that Gunicorn spins up a worker process (in your case one built upon asyncio) to handle a request and then stops it. That pattern isn't going to work well at all with a metrics pull model: the worker will be gone before it has been scraped. I guess that is why the proposed solution is to have each worker write its metrics to a file and then run another process to expose those aggregated metrics. It seems kind of complicated, with many caveats about things that don't work in that model.
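To make the file-based pattern concrete, here is a minimal standard-library sketch of the idea: each worker persists its counters to a file keyed by its PID, and the scrape handler sums across every worker's file. All names here (`record_request`, `render_metrics`, the file layout) are illustrative only, not part of this library or the official client:

```python
# Illustrative sketch of the per-process-file pattern: workers write
# counters to per-PID files; the scrape endpoint aggregates them.
# Function and file names are hypothetical, not a real API.
import glob
import json
import os


def record_request(metrics_dir: str) -> None:
    """Called inside a worker: bump this process's counter on disk."""
    path = os.path.join(metrics_dir, f"counters-{os.getpid()}.json")
    counters = {}
    if os.path.exists(path):
        with open(path) as f:
            counters = json.load(f)
    counters["requests_total"] = counters.get("requests_total", 0) + 1
    with open(path, "w") as f:
        json.dump(counters, f)


def render_metrics(metrics_dir: str) -> str:
    """Called at scrape time: sum counters across every worker's file."""
    total = 0
    for path in glob.glob(os.path.join(metrics_dir, "counters-*.json")):
        with open(path) as f:
            total += json.load(f).get("requests_total", 0)
    return f"requests_total {total}\n"
```

The caveats mentioned above come from exactly this split: the scraping process only sees what workers have flushed to disk, and things like per-process gauges or dead workers' stale files need special handling.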
I don't have any plans to personally add support for this use case to this library. However, I'm open to a merge request if you are interested in adding it.
The official python Prometheus client outlines a problem caused by using multiple Gunicorn workers and provides a solution: https://github.com/prometheus/client_python/#multiprocess-mode-gunicorn
I am working on an asynchronous web app using FastAPI with 4 Gunicorn workers, so I cannot use the synchronous Prometheus client. However, this library makes no mention of the multiprocessing issue, and after running some tests, it appears to have no built-in solution for it. My question is: is there any plan to add support for multiprocessing? If not, what solution would you recommend? I would prefer to continue using the pull model. Thank you.