Open cpbrust opened 4 years ago
Hi, the first thing I noticed is the use of multithreading, as well as the additional proxy. Neither is bad by default, but both introduce additional complexity into the test.
I would rewrite it as:

```ini
[uwsgi]
http-socket = :8080
wsgi-file = app.py
callable = app
processes = 2
lazy-apps = true
strict = true
disable-logging = true
log-4xx = true
log-5xx = true
master = true
```
Thanks for your input! I tested removing both multithreading and the additional proxy. Multithreading does not appear to be a factor: removing it does not help. Removing the proxy, however, does make the issue go away. Based on my reading of the documentation, I believe my use case does require the proxy.
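For anyone reproducing the two configurations: in uWSGI, the `http` option spawns a separate HTTP router (proxy) process in front of the workers, while `http-socket` has the workers speak HTTP directly with no proxy in between. A minimal sketch of the two variants (the option names are real uWSGI options; the surrounding config values are illustrative, not the reporter's exact setup):

```ini
[uwsgi]
# Variant A: workers parse HTTP directly -- no proxy process
http-socket = :8080

# Variant B: comment out http-socket above and use this instead;
# it spawns the uWSGI HTTP router as an extra proxy process
# http = :8080

wsgi-file = app.py
callable = app
processes = 2
master = true
```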
A 43-millisecond conundrum
Hello! I'd like your assistance in getting to the bottom of an issue I've uncovered while working with uWSGI (2.0.18) on Linux. I'll detail the issue here, and below I've included a minimal set of code that reproduces it.
Thank you very much!
What exactly the problem is
When I run Flask code under uWSGI on Linux, some or all requests over a certain size take about 43 milliseconds longer, even as measured by the server, and even when the server does nothing with the data. Below is a 2D histogram showing the number of requests as a function of request size and response time (measured as time-on-server; see the example below).
As you can see, once we get above about 37 kB, the response takes substantially longer for no apparent reason. As a result, in a real-life service that does relatively little work with 20–50 kB of data, we end up with e.g. bimodal latency distributions:
What I have tried
This same issue does not occur if:
I have spent a modest amount of time investigating uWSGI settings, including async workers, socket settings, thunder-lock, the listen and accept sockets, reuse-port, and others. None of these made a substantial difference.
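For concreteness, a sketch of the kind of socket-related settings tried (these are real uWSGI option names; the values are illustrative, not the reporter's):

```ini
[uwsgi]
# size of the kernel listen queue for the socket
listen = 1024
# serialize accept() across workers (thundering-herd mitigation)
thunder-lock = true
# set SO_REUSEPORT on the socket
reuse-port = true
```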
Another thing to note: upon switching to Gunicorn for testing, a memory leak in my service went away as well, so this issue may be related to that leak.
A minimal example
This example uses Docker. You can use `make build` and `make run`, or, if you're on a native Linux system already, you can run the code directly. I benchmarked this code by making repeated API calls in a Python script using code of the form

and then writing `to_write` to a file to be read in by a plotting library. Below is the code to run for the server.

Dockerfile
Makefile
app.py
uwsgi.ini
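The benchmarking snippet itself did not survive above, so here is a hypothetical, stdlib-only sketch of the loop described: POST payloads of varying sizes, collect (size, latency) pairs into `to_write`, and dump them to a file for plotting. The URL, payload sizes, and output filename are assumptions for illustration:

```python
import time
import urllib.request

URL = "http://localhost:8080/"  # assumed endpoint for the minimal example


def time_request(url, payload):
    """POST `payload` and return wall-clock latency in milliseconds."""
    req = urllib.request.Request(
        url, data=payload,
        headers={"Content-Type": "application/octet-stream"})
    t0 = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return (time.perf_counter() - t0) * 1000.0


def format_rows(measurements):
    """Render (size_bytes, latency_ms) pairs as tab-separated lines."""
    return "\n".join(f"{size}\t{latency:.3f}" for size, latency in measurements)


def main():
    to_write = []
    # sweep payload sizes across the ~37 kB threshold seen in the histogram
    for size in range(1_000, 50_000, 1_000):
        payload = b"x" * size
        for _ in range(100):  # repeated calls per size
            to_write.append((size, time_request(URL, payload)))
    with open("latencies.tsv", "w") as f:
        f.write(format_rows(to_write))


if __name__ == "__main__":
    main()
```

The resulting tab-separated file can then be read in by whatever plotting library produced the 2D histogram shown above.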