Open edmorley opened 7 years ago
Also worth noting is that if I switch from `CompressedStaticFilesMixin` back to `ManifestStaticFilesStorage`, and instead manually run `python -m whitenoise.compress <path to static dir>` afterwards, the total time taken is 12% faster -- even though it's now doing more work (due to the latter approach compressing the extra missed intermediate files, eg the `base.5af66c1b1797.css` instance in #147).
Moving on to the standalone compressor (which we use on the output of a webpack build, for the SPA part of the project):
Breakdown of `python -m whitenoise.compress dist/` times:

- `--no-brotli`: 0.35s
- `--no-gzip`: 11.66s
- `--no-gzip --no-brotli`: 0.05s (this walks the filesystem and reads files from disk, but does no compression/writes)

So this is all down to Brotli, and not due to the filesystem walking/reading parts or gzip (albeit the standalone compressor example here was just for 35 files; but even for a 10,000-file directory, Brotli compression times would dwarf anything else even if the filesystem walking happened to be inefficient).
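To confirm locally where the time goes, gzip and Brotli can be timed in isolation with a few lines of Python. This is only a sketch: it uses the stdlib `gzip` module, tries the third-party `brotli` package only if it happens to be installed, and the payload and repeat count are arbitrary.

```python
import gzip
import time

def time_codec(name, compress, payload, repeats=5):
    """Time a compression callable over several runs (illustrative helper)."""
    start = time.perf_counter()
    for _ in range(repeats):
        compressed = compress(payload)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.3f}s for {repeats} runs -> {len(compressed)} bytes")
    return elapsed

payload = b"body { margin: 0; } /* repetitive static asset */ " * 10_000

# Stdlib gzip at maximum compression.
gzip_time = time_codec("gzip -9", lambda d: gzip.compress(d, compresslevel=9), payload)

# Brotli at its slow maximum quality (11) is where time like the 11.66s in the
# breakdown above gets spent; skipped gracefully if the package isn't installed.
try:
    import brotli
    time_codec("brotli q11", lambda d: brotli.compress(d, quality=11), payload)
except ImportError:
    pass
```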
@evansd I have a multi-threading solution locally that uses `multiprocessing.dummy`, which is present in the stdlib for both Python 2 and 3. However it's not great (eg it doesn't raise child thread exceptions unless I add lots more boilerplate).
Would you be open to me adding a dependency, for Python 2 only, on the futures package (a backport of Python 3's `concurrent.futures`)? The wheel is only 13KB, is likely to be pulled in by projects anyway, and I can use a version range specifier in setup.py so it won't be installed under Python 3.
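For reference, one way to express a Python-2-only dependency is a PEP 508 environment marker in `install_requires`. The fragment below is a sketch, not WhiteNoise's actual setup.py, and the version pin is illustrative:

```python
# setup.py (fragment) -- sketch only; name and version pin are illustrative.
from setuptools import setup

setup(
    name="whitenoise",
    install_requires=[
        # Installed only under Python 2: Python 3 ships concurrent.futures.
        'futures>=3.0; python_version < "3"',
    ],
)
```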
@edmorley Thanks a lot for this Ed, and for the other work you've been doing on whitenoise recently. Sorry I haven't responded sooner; things have been a bit busy lately.
Yes, I'd be open to adding a dependency on futures. In general I like the fact that whitenoise is dependency-free, but backports of the Python 3 stdlib are a different case and I don't think it's a problem to add those.
Really, the compression level should be configurable.
https://github.com/evansd/whitenoise/blob/master/whitenoise/compress.py#L84
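As a sketch of what a configurable level could look like (the parameter names below are hypothetical, not WhiteNoise's actual API), the levels would simply be accepted as constructor arguments and threaded through to the underlying codecs:

```python
import gzip
import io

class Compressor:
    """Hypothetical sketch: compression levels passed in rather than hard-coded."""

    def __init__(self, gzip_compresslevel=9, brotli_quality=11):
        self.gzip_compresslevel = gzip_compresslevel
        self.brotli_quality = brotli_quality  # would be passed to brotli.compress()

    def compress_gzip(self, data):
        # mtime=0 keeps the output byte-for-byte reproducible between runs.
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb",
                           compresslevel=self.gzip_compresslevel, mtime=0) as f:
            f.write(data)
        return buf.getvalue()
```

Lowering the levels during development builds would trade a little file size for much faster deploys.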
I've taken a stab at processing files in parallel in https://github.com/evansd/whitenoise/pull/484.
In a project I work on, we use both `CompressedStaticFilesMixin` and the standalone compressor (`python -m whitenoise.compress <DIR>`) during Heroku deployments. At the moment these steps are a considerable percentage (30-40%) of our deployment times.
For example using Python 2.7.13, Django 1.11.5, WhiteNoise master, Brotli 0.6.0, a Heroku-16 one-off performance-m dyno (2 cores, 2.5GB RAM, Ubuntu 16.04) with the static files directory cleared (to emulate deployment, since state intentionally isn't carried over):
As a baseline, using the stock Django `ManifestStaticFilesStorage` results in:

For the above, the 202 files output from `ManifestStaticFilesStorage` have a combined file-size of 15MB.
Ideas off the top of my head to speed this up:

1) Use `concurrent.futures` or similar to take advantage of all cores
2) See if the `scantree()` implementation might be faster than compress.py's `os.walk()` plus later stats
3) Reduce the number of files being compressed (eg `WHITENOISE_KEEP_ONLY_HASHED_FILES` and #147)
4) Profile both `CompressedStaticFilesMixin` and the CLI version, to double-check that most of the time is indeed being spent in the compiled gzip/brotli code and not somewhere unexpected
5) Compare the performance of the gzip stdlib and the compiled brotli Python package with their command-line equivalents.