peterjc / flake8-black

flake8 plugin to run black for checking Python coding style

Performance overhead #26

Open peterjc opened 4 years ago

peterjc commented 4 years ago

Testing with Biopython (which recently finished applying black to the entire code base) on a multi-core Mac gives these numbers:

Black alone (best of three)

$ time black --check Bio/ BioSQL/ Tests/ Scripts/ Doc/
All done! ✨ 🍰 ✨
559 files would be left unchanged.

real    0m0.618s
user    0m0.484s
sys 0m0.129s

Using flake8 with assorted plugins, but without flake8-black (best of three)

$ time flake8 Bio/ BioSQL/ Tests/ Scripts/ Doc/

real    0m56.956s
user    4m1.520s
sys 0m3.470s

Adding those together gives the expected run time for running black via flake8: roughly 57.6s real and about 4m2s user.

So, same setup, but with flake8-black (best of three)

$ time flake8 Bio/ BioSQL/ Tests/ Scripts/ Doc/

real    1m36.095s
user    6m51.635s
sys 0m5.928s

That's an overhead of about 40s real time and almost 3m of user time. Not good!

peterjc commented 4 years ago

I didn't get the expected speedup when trying this on TravisCI; I now suspect black's cache is the reason why:

$ rm -rf /Users/$USER/Library/Caches/black/19.10b0/ && time black --check Bio/ BioSQL/ Tests/ Scripts/ Doc/
All done! ✨ 🍰 ✨
559 files would be left unchanged.

real    0m29.016s
user    3m7.108s
sys 0m3.005s

So, while we may not be able to dramatically speed up flake8-black on a first run, can we tap into the black cache?
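For reference, black keeps that cache as one pickle file per mode per black version, mapping each source path to its (mtime, size) from the last successful run. Here is a minimal sketch of how the plugin might consult it, using black 19.10b0's internal helpers (FileMode, read_cache, filter_cached); these are private, version-specific names, so treat this as an assumption rather than a supported API:

from pathlib import Path

import black

# The mode must match the options the plugin formats with, since black
# keys its cache on (mode, black version).
mode = black.FileMode()
cache = black.read_cache(mode)  # {path: (mtime, size)} from previous runs

# Hypothetical example paths from the Biopython run above.
sources = {Path("Bio/Seq.py"), Path("BioSQL/BioSeq.py")}
todo, done = black.filter_cached(cache, sources)

# Files in "done" are unchanged since black last verified them, so the
# plugin could report them clean immediately; only "todo" needs the full
# parse-and-format check.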

peterjc commented 4 years ago

Or could flake8 include a similar caching mechanism? Black does this with one cache per mode (covering the relatively short list of black options) per version of black. Doing something similar for flake8 would also have to cover the combination of all installed plugins and their versions, which seems too complicated.
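To make the combinatorics concrete, a flake8-wide cache entry would only be valid for the exact plugin set that produced it, so the cache key would have to hash every installed plugin and its version. A rough sketch (pkg_resources is how flake8 discovered plugins at the time; "flake8.extension" is the entry-point group its checkers register under):

import hashlib

import pkg_resources

# Collect (name, version) for every installed flake8 check plugin.
plugins = sorted(
    (ep.dist.project_name, ep.dist.version)
    for ep in pkg_resources.iter_entry_points("flake8.extension")
)
# Installing, upgrading, or removing any plugin changes the key and
# must invalidate every cached result.
cache_key = hashlib.sha256(repr(plugins).encode("utf-8")).hexdigest()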

peterjc commented 4 years ago

We can probably take advantage of the black cache by imitating the black command line and letting black read and parse the file itself (rather than letting flake8 parse the file and passing the data to an internal black function). This will be a performance trade-off: I suspect it will be faster for the local use case (e.g. a git pre-commit hook), but could be slower for continuous integration (with no black cache present).
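A sketch of what that could look like, again leaning on black 19.10b0's private helpers (read_cache, filter_cached, write_cache, format_file_contents); the function below is hypothetical, not flake8-black's actual code:

from pathlib import Path

import black

def file_needs_reformatting(filename: str, mode: black.FileMode) -> bool:
    """Cache-aware check: would black change this file?"""
    path = Path(filename)
    cache = black.read_cache(mode)
    todo, _ = black.filter_cached(cache, [path])
    if not todo:
        return False  # cache hit: black already blessed this exact file
    # Cache miss: pay for the full read/parse/format, as the black CLI would.
    src = path.read_text(encoding="utf-8")
    try:
        black.format_file_contents(src, fast=False, mode=mode)
    except black.NothingChanged:
        black.write_cache(cache, [path], mode)  # remember the clean result
        return False
    # (A real plugin would also catch black.InvalidInput for unparsable files.)
    return True

With no cache present (the CI case), read_cache returns an empty mapping and every file lands in the todo set, so the extra lookups are pure overhead; locally, repeat runs skip the expensive step entirely.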