Open · stephenroller opened this issue 4 years ago
Now that we have some data caching, I think this is actually much more impactful, @dianaglzrico
The nightly GPU tests still (mostly) use only one GPU. We have full support for multiprocessing eval, and we should be taking advantage of it.
My suggestion is to add a `use_multiprocessing` argument to `testing_utils.eval_model` and have it toggle between the two scripts appropriately. We will probably see around a 25% speedup in the long GPU tests. Measure the improvement locally (it should be about 2x), report it here, and then push so the improvement shows up in real CI.
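Roughly, a sketch of what that toggle could look like (the `skip_valid`/`skip_test` keywords and the `EvalModel`/`MultiProcessEval` entry points are assumptions about the current API, so treat this as illustrative rather than a drop-in patch):

```python
# Hypothetical sketch (not the actual ParlAI code): a use_multiprocessing
# flag on testing_utils.eval_model that picks which eval script to run.
# The script class names and .main() entry points below are assumptions
# and may need adjusting to whatever testing_utils actually wraps.
from parlai.scripts.eval_model import EvalModel
from parlai.scripts.multiprocessing_eval import MultiProcessEval


def eval_model(opt, use_multiprocessing=False, skip_valid=False, skip_test=False):
    """
    Run eval for a test, optionally via the multiprocessing eval script.

    Returns (valid_report, test_report); skipped splits come back as None.
    """
    # Toggle between the single-GPU and multi-GPU eval entry points.
    script = MultiProcessEval if use_multiprocessing else EvalModel

    valid_report = None
    if not skip_valid:
        valid_report = script.main(**{**opt, 'datatype': 'valid'})

    test_report = None
    if not skip_test:
        test_report = script.main(**{**opt, 'datatype': 'test'})

    return valid_report, test_report
```

A nightly GPU test could then opt in with something like `testing_utils.eval_model(opt, use_multiprocessing=torch.cuda.device_count() > 1)`, so the same test still runs unchanged on single-GPU machines.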
Note this is pretty much only useful for generative models, but there are a large number of those: at least the unlikelihood and dodeca models. Perhaps Eric's style tests. Possibly the controllable models too.