etianen / aiohttp-wsgi

WSGI adapter for aiohttp.
https://aiohttp-wsgi.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Memory exhaustion for large requests. #7

Closed. defnull closed this issue 8 years ago.

defnull commented 8 years ago

The WSGI adapter reads the complete request body into memory. This results in extreme memory consumption for large file uploads and makes it very easy to run denial-of-service attacks.

curl --data-binary "@/dev/zero" http://example.com/wsgi
etianen commented 8 years ago

We have to buffer the entire request in order to prevent slow network clients from stalling the WSGI worker threads. However, the current situation is not ideal.

I think it needs to be possible to configure a max request size for the WSGI handler, which would reject an incoming request with an appropriate status code (413 Request Entity Too Large) if its body exceeds the allowed size. Setting it to a default value of 100MB or so would mitigate malicious attacks while still allowing most sensible files through.
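
For illustration only, a minimal sketch of that check in a raw aiohttp handler (not the adapter's actual code; the handler and the MAX_REQUEST_BODY_SIZE constant are made up for this example):

from aiohttp import web

MAX_REQUEST_BODY_SIZE = 100 * 1024 * 1024  # hypothetical 100MB default

async def handler(request):
    body = bytearray()
    while True:
        chunk = await request.content.readany()  # b"" once the body is exhausted
        if not chunk:
            break
        body.extend(chunk)
        if len(body) > MAX_REQUEST_BODY_SIZE:
            # Stop reading and reject before the buffer grows any further.
            return web.Response(status=413, text="Request Entity Too Large")
    # ...hand bytes(body) to the WSGI application as wsgi.input...
    return web.Response(text="received %d bytes" % len(body))

app = web.Application()
app.router.add_route("POST", "/wsgi", handler)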

Sound reasonable?

defnull commented 8 years ago

Stalling the worker thread pool is always better than exhausting the server memory. It brings down only the WSGI application, not the whole server.

With a hard per-request limit, you could still get the server into swapping with multiple smaller concurrent requests. Also, uploads larger than the limit would no longer be possible.

Why not buffer the body into a reasonably sized tempfile.SpooledTemporaryFile, chunk by chunk, as sketched below? Then at most memory_limit_per_tempfile * thread_pool_size of memory is used, and bigger requests are still allowed. Bottle does it this way.
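
A rough sketch of that idea using the standard library (the read_chunk callable and SPOOL_SIZE are stand-ins for this example, not aiohttp-wsgi API):

import tempfile

SPOOL_SIZE = 512 * 1024  # hypothetical in-memory threshold per request

def buffer_body(read_chunk, chunk_size=64 * 1024):
    # Chunks stay in memory until SPOOL_SIZE is exceeded, then the
    # buffer transparently rolls over to a temporary file on disk.
    body = tempfile.SpooledTemporaryFile(max_size=SPOOL_SIZE)
    while True:
        chunk = read_chunk(chunk_size)  # stand-in: returns b"" at EOF
        if not chunk:
            break
        body.write(chunk)
    body.seek(0)  # rewind so it can be passed to WSGI as wsgi.input
    return body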

etianen commented 8 years ago

A spooled temporary file seems good too, but I think a max upload size is still sensible. Most webservers have this built in. Nginx limits to 5MB by default, I think, which is tiny!

etianen commented 8 years ago

Here we go: https://github.com/etianen/aiohttp-wsgi/blob/master/CHANGELOG.rst#040
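
For reference, a sketch of how the fix is exposed, assuming serve() accepts the inbuf_overflow and max_request_body_size options documented for that release (check the docs for your installed version):

from aiohttp_wsgi import serve

from myapp.wsgi import application  # hypothetical WSGI application

# Per the linked changelog: bodies larger than inbuf_overflow are spooled
# to a temporary file, and anything over max_request_body_size is rejected.
serve(
    application,
    host="0.0.0.0",
    port=8080,
    inbuf_overflow=512 * 1024,                 # spool to disk past 512KB
    max_request_body_size=100 * 1024 * 1024,   # reject bodies over 100MB
)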