ajjackson opened this issue 2 weeks ago
I'm not sure it's worth it, but maybe worth checking? Looking at a recent run, setting up conda and installing tox takes ~1 min, which is probably not worth caching. Running tox takes the most time, but from the output the actual tests take ~11 min on Linux, ~14 min on Windows and ~6 min on macOS, implying that setting up the environments takes ~3 min on Linux, ~4 min on Windows and ~2 min on macOS.
Saving and restoring the cache itself takes some time (a few minutes, from memory; the runs I had in other projects seem to have expired so I can't check exactly), so it might still be worthwhile to cache the .tox folder, but you're still writing/reading to/from a network storage location, so the bandwidth use will probably be much the same. I presume GitHub has a local mirror of PyPI, so using that or the cache would be nearly equivalent...
Yeah, the benefit should be tested before committing to it!
Another thing that could improve performance is running tests in parallel; I think the typical GitHub runner has 4 cores now. It would be good if the force constant tests could still have the whole runner to themselves and use OpenMP, but everything else could be smashed through more quickly.
That would probably require marking the OpenMP-friendly tests and running them in a separate tox step. https://github.com/pytest-dev/pytest-xdist/issues/385
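As a rough sketch of how that split might look, assuming a hypothetical `openmp` pytest marker (registered under `markers` in the pytest config so `-m` can select on it) and a tox env that forwards extra arguments to pytest via `{posargs}`:

```yaml
# Hypothetical workflow steps; the "openmp" marker and the "py" tox env are assumptions.
- name: Run most tests in parallel with pytest-xdist
  run: tox -e py -- -n auto -m "not openmp"   # xdist spreads these over all runner cores

- name: Run OpenMP-friendly tests on their own
  run: tox -e py -- -m "openmp"               # serial pytest, so OpenMP can use every core
```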
Also, I think caching .tox doesn't just save the "grab installers" time but also the "run installation" time, which can be significant for packages that want to inspect your environment and compile things.
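For reference, a minimal sketch of that cache step with `actions/cache` (the key is a guess; it would need to hash whatever actually pins the environment contents, so the cache is invalidated when dependencies change):

```yaml
# Sketch only: cache the built tox environments between runs.
- name: Cache .tox environments
  uses: actions/cache@v4
  with:
    path: .tox
    # the hashed file is an assumption; use whatever defines the env contents
    key: tox-${{ runner.os }}-${{ hashFiles('tox.ini') }}
    restore-keys: |
      tox-${{ runner.os }}-
```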
Currently a lot of test time (and bandwidth) is spent installing packages. This can be sped up with caching, and there are a few levels to consider: setting up conda itself, installing the conda packages, and building the tox environments on top of that. I think the first two will be captured by caching the miniconda environment, while the last would require the .tox directory to be cached.
The latter is probably more critical, but it is harder to find timing information for it.
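On the conda side, the commonly documented pattern caches the downloaded package directory rather than the built environment (the environment directory itself can also be cached, but its path is more runner-dependent). Something along these lines, with the path and hashed file both being assumptions about this repo's setup, could be trialled alongside the `.tox` cache sketched above:

```yaml
# Sketch only: cache conda package downloads so repeat runs skip the fetch step.
- name: Cache conda packages
  uses: actions/cache@v4
  with:
    path: ~/conda_pkgs_dir   # assumed package dir; must match however conda is set up
    key: conda-${{ runner.os }}-${{ hashFiles('tox.ini') }}  # hash whichever file pins the conda deps
```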