yanngv29 closed this issue 4 years ago.
Here is my solution. I changed 2 files. In nginx.conf:
worker_processes auto;
...
http {
...
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
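For context, nginx hands each request to fcgiwrap over the same unix socket via FastCGI; the relevant location block usually looks something like this (a sketch only — the URL path and script filename are assumptions, and the actual block in the image may differ):

```nginx
location /api/ {
    # Forward the request to fcgiwrap over the shared unix socket.
    fastcgi_pass unix:/nginx/fcgiwrap.socket;
    include fastcgi_params;
    # Hypothetical path to the Overpass CGI binary.
    fastcgi_param SCRIPT_FILENAME /app/cgi-bin/interpreter;
}
```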
The log changes are only there to make the logs visible.
And in supervisord.conf:
[program:fcgiwrap]
command=/bin/bash -c "find /nginx -type s -print0 | xargs -0 --no-run-if-empty rm && fcgiwrap -n 50 -s unix:/nginx/fcgiwrap.socket"
I added "-n 50" to start more fcgiwrap processes, as described in the man page.
After those 2 changes, one image can handle 20 req/s (before it was 4 req/s).
-n 50 is perhaps a little too much for my case.
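The before/after numbers line up with a simple queueing estimate: maximum throughput is roughly the number of fcgiwrap workers divided by the per-request service time. A quick sketch (the ~250 ms service time is an assumption, back-computed from the reported 4 req/s with a single worker):

```python
# Rough throughput model: workers / service_time.
# 250 ms per request is an assumed figure, inferred from the
# reported 4 req/s with a single fcgiwrap worker.
SERVICE_TIME_S = 0.25

def max_throughput(workers: int) -> float:
    """Upper bound on requests/second with `workers` parallel workers."""
    return workers / SERVICE_TIME_S

for n in (1, 5, 50):
    print(f"{n} worker(s): ~{max_throughput(n):.0f} req/s")
```

That the measured rate stopped at 20 req/s even with 50 workers suggests something else (CPU, the Overpass backend, or nginx connection limits) becomes the bottleneck first.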
Thank you for digging into this. The way I would like to approach this is to make it an environment variable, so it is tunable for users, and to start with a more conservative value of around 4 instead of 50.
You're right, making it configurable is a good approach.
Fixed in 1ce3c24112ef6776753924a6157c29e52f6e02d3
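The commit itself is not quoted here, but threading an environment variable through supervisord.conf looks roughly like this (FCGIWRAP_PROCESSES is a hypothetical name — the variable in the actual commit may differ; since the command already runs through /bin/bash -c, the shell handles the default-value expansion):

```ini
[program:fcgiwrap]
; FCGIWRAP_PROCESSES is a hypothetical variable name with a
; conservative default of 4, expanded by the bash -c wrapper.
command=/bin/bash -c "find /nginx -type s -print0 | xargs -0 --no-run-if-empty rm && fcgiwrap -n ${FCGIWRAP_PROCESSES:-4} -s unix:/nginx/fcgiwrap.socket"
```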
I send up to 20 req/s to my Overpass server using your Docker image, and it quickly returns HTTP 502 errors with this message:
[error] 19#19: *717 connect() to unix:/nginx/fcgiwrap.socket failed (11: Resource temporarily unavailable) while connecting to upstream
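Errno 11 (EAGAIN) on a unix-socket connect() means the listener's accept backlog is full: with too few fcgiwrap workers draining the socket, nginx's connection attempts pile up until the kernel refuses new ones. A minimal Python sketch of the same condition on Linux, with a toy listener standing in for an overloaded fcgiwrap:

```python
import errno
import os
import socket
import tempfile

# A tiny listener with a minimal backlog stands in for an
# overloaded fcgiwrap behind /nginx/fcgiwrap.socket.
path = os.path.join(tempfile.mkdtemp(), "demo.socket")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)  # tiny backlog; connections queue but are never accepted

seen = None
clients = []
for _ in range(16):  # flood with non-blocking connects, like nginx under load
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.setblocking(False)
    try:
        client.connect(path)
        clients.append(client)
    except OSError as exc:
        seen = exc.errno  # EAGAIN (11) on Linux once the backlog is full
        break

print(errno.errorcode.get(seen))
```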
I tried a lot of nginx configuration changes, but none of them worked.
To test it, I have a Gatling script that sends 8 requests per second. Sample request: [timeout:180][out:json];way(48.4655000,7.7280000,48.4656000,7.7281990)[highway];(._;>;);out;
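For anyone reproducing this without Gatling: the query is normally sent as the form-encoded data parameter of a POST to the Overpass interpreter endpoint (/api/interpreter is the conventional route; the host below is a placeholder). A minimal sketch of building that request body:

```python
from urllib.parse import urlencode

# The same Overpass QL query used in the Gatling test.
query = ("[timeout:180][out:json];"
         "way(48.4655000,7.7280000,48.4656000,7.7281990)[highway];"
         "(._;>;);out;")

# Form body as Overpass expects it; POST this to
# http://localhost/api/interpreter (hypothetical host).
body = urlencode({"data": query})
print(body[:40])
```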
The Gatling results show that the server only replies 4 times per second, never more.