evansd / whitenoise

Radically simplified static file serving for Python web apps
https://whitenoise.readthedocs.io
MIT License

Speeding up generating compressed files #148

Open edmorley opened 7 years ago

edmorley commented 7 years ago

In a project I work on, we use both CompressedStaticFilesMixin and the standalone compressor (python -m whitenoise.compress <DIR>) during Heroku deployments.

At the moment these steps account for a considerable share (30-40%) of our deployment time.

For example, using Python 2.7.13, Django 1.11.5, WhiteNoise master, Brotli 0.6.0, and a Heroku-16 one-off performance-m dyno (2 cores, 2.5GB RAM, Ubuntu 16.04), with the static files directory cleared first (to emulate a deployment, since state intentionally isn't carried over):

~ $ time ./manage.py collectstatic --noinput
...
156 static files copied to '/app/treeherder/static', 202 post-processed.

real    0m29.837s
user    0m29.405s
sys     0m0.359s

As a baseline, using the stock Django ManifestStaticFilesStorage results in:

real    0m1.031s
user    0m0.855s
sys     0m0.167s

For the above, the 202 files output from ManifestStaticFilesStorage have a combined file-size of 15MB.

Moving onto the standalone compressor (which we use on the output of a webpack build, for the SPA part of the project):

~ $ find dist/ -type f | wc -l
35
~ $ du -hs dist/
5.2M    dist/
~ $ time python -m whitenoise.compress dist/
...
real    0m11.929s
user    0m11.841s
sys     0m0.084s

Ideas off the top of my head to speed this up:

1. Use concurrent.futures or similar to take advantage of all cores.
2. See if the scantree() implementation might be faster than compress.py's os.walk() plus later stat calls.
3. Reduce the number of files being compressed (eg WHITENOISE_KEEP_ONLY_HASHED_FILES and #147).
4. Profile both CompressedStaticFilesMixin and the CLI version, to double-check that most of the time is indeed being spent in the compiled gzip/brotli code and not somewhere unexpected.
5. Compare the performance of the gzip stdlib and the compiled brotli Python package with their command-line equivalents.
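Idea (1) might look roughly like this with concurrent.futures. This is only a sketch: compress_one is a hypothetical stand-in for whitenoise's per-file compression (gzip only, for brevity; the real compressor also writes a .br variant via the brotli package).

```python
import gzip
import os
from concurrent.futures import ThreadPoolExecutor


def compress_one(path):
    # Hypothetical per-file worker: write a .gz sibling of the file.
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".gz", "wb") as f:
        f.write(gzip.compress(data, compresslevel=9))
    return path


def compress_tree(root):
    paths = [
        os.path.join(dirpath, name)
        for dirpath, _, names in os.walk(root)
        for name in names
        if not name.endswith((".gz", ".br"))
    ]
    # Threads are enough here: zlib (and brotli) release the GIL while
    # compressing, so all cores get used without pickling overhead.
    with ThreadPoolExecutor() as pool:
        # list() drains the iterator, so any worker exception is
        # re-raised here rather than silently swallowed.
        return list(pool.map(compress_one, paths))
```

Unlike multiprocessing, thread workers share the filesystem and memory directly, and pool.map surfaces worker exceptions at the call site for free.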

edmorley commented 7 years ago

Also worth noting: if I switch from CompressedStaticFilesMixin back to ManifestStaticFilesStorage and instead manually run python -m whitenoise.compress <path to static dir> afterwards, the total time taken is 12% lower, even though the latter approach does more work (it also compresses the extra intermediate files the mixin misses, eg the base.5af66c1b1797.css instance in #147).

edmorley commented 7 years ago

> Moving onto the standalone compressor (which we use on the output of a webpack build, for the SPA part of the project):

Breakdown of python -m whitenoise.compress dist/ times:

So the time is almost all spent in Brotli, not in the filesystem walking/reading or in gzip. (Granted, the standalone compressor example here covered only 35 files, but even for a 10,000-file directory the Brotli compression time would dwarf everything else, even if the filesystem walking happened to be inefficient.)
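The gzip-vs-Brotli split can be spot-checked with a small timing harness. A sketch, assuming a made-up payload; brotli is the third-party package whitenoise's compressor uses, and is treated as optional here:

```python
import gzip
import time


def time_codec(label, compress, data):
    # Time a single compression call over the given payload.
    start = time.perf_counter()
    compress(data)
    return label, time.perf_counter() - start


if __name__ == "__main__":
    payload = b"var x = 1;\n" * 50000  # ~550KB of repetitive JS-like text
    print(time_codec("gzip -9", lambda d: gzip.compress(d, compresslevel=9), payload))
    try:
        import brotli  # optional: only timed if installed
        print(time_codec("brotli q11", lambda d: brotli.compress(d, quality=11), payload))
    except ImportError:
        pass
```

On typical text assets, Brotli at quality 11 is an order of magnitude slower than gzip at level 9, which matches the breakdown above.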

edmorley commented 6 years ago

@evansd I have a multi-threading solution locally that uses multiprocessing.dummy, which is present in the stdlib for both Python 2 and 3; however it's not great (eg it doesn't propagate child-thread exceptions unless I add a lot more boilerplate).

Would you be open to me adding a Python 2-only dependency on the futures package (a backport of Python 3's concurrent.futures)? The wheel is only 13KB, is likely to be installed by projects anyway, and I can use a version-range specifier in setup.py so it won't be installed under Python 3.
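Such a conditional dependency can be expressed with a PEP 508 environment marker. A sketch only; the surrounding setup() arguments are placeholders, not whitenoise's actual setup.py:

```python
from setuptools import setup

setup(
    name="whitenoise",
    # ... other arguments elided ...
    install_requires=[
        # Backport of concurrent.futures; pip skips this on Python 3.
        'futures; python_version < "3"',
    ],
)
```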

evansd commented 6 years ago

@edmorley Thanks a lot for this Ed, and for the other work you've been doing on whitenoise recently. Sorry I haven't responded sooner; things have been a bit busy lately.

Yes, I'd be open to adding a dependency on futures. In general I like the fact that whitenoise is dependency-free, but backports of the Python 3 stdlib are a different case and I don't think it's a problem to add those.

sonthonaxrk commented 3 years ago

Really, the compression level should be configurable.

https://github.com/evansd/whitenoise/blob/master/whitenoise/compress.py#L84

rik commented 1 year ago

I've taken a stab at processing files in parallel in https://github.com/evansd/whitenoise/pull/484.