facebookresearch / ParlAI

A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
https://parl.ai
MIT License

Turn on multiprocessing eval for nightly gpu tests #2906

Open stephenroller opened 4 years ago

stephenroller commented 4 years ago

The nightly GPU tests still (mostly) use only one GPU. We have full support for multiprocessing eval, and we should be taking advantage of it.

My suggestion is to add a `use_multiprocessing` argument to `testing_utils.eval_model` and have it toggle between the two eval scripts appropriately.
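A minimal sketch of what that toggle might look like, assuming the script classes can be invoked via their `main` entry points; the wrapper name and signature here are illustrative, not the actual `testing_utils` API:

```python
# Hypothetical sketch of the proposed toggle; the wrapper signature is an
# assumption, not the real testing_utils.eval_model interface.
from parlai.scripts.eval_model import EvalModel
from parlai.scripts.multiprocessing_eval import MultiProcessEval


def eval_model(opt, use_multiprocessing=False):
    # Dispatch to the multiprocessing eval script when requested; otherwise
    # fall back to the single-process eval script.
    script = MultiProcessEval if use_multiprocessing else EvalModel
    return script.main(**opt)
```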

We will probably see around a 25% speedup in the long GPU tests overall. Measure the improvement locally (it should be roughly 2x per test), report it here, and then push so we can confirm the improvement in real CI.

Note this is pretty much only useful for generative models, but there are a large number of those: at least the unlikelihood and dodeca models, perhaps Eric's style tests, and possibly the controllable models too.

github-actions[bot] commented 4 years ago

This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.

stephenroller commented 4 years ago

Now that we have some data caching, I think this is actually much more impactful. @dianaglzrico