mlcommons / training_policies

Issues related to MLPerf™ training policies, including rules and suggested changes
https://mlcommons.org/en/groups/training
Apache License 2.0

Results with matching seeds should be pruned #479

Closed nvaprodromou closed 2 years ago

nvaprodromou commented 2 years ago

This PR defines what happens when two or more models use the same (randomly chosen) seed. The problem is most likely to occur in weak-scaling results, due to the large number of simultaneously trained models. This was a point of discussion during the MLPerf HPC v0.7 review round.
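The PR itself carries the rule text, but the intent can be illustrated with a small sketch. Assuming each model in a weak-scaling submission reports a `(seed, score)` pair (hypothetical names, not from the PR), one reading of "results with matching seeds should be pruned" is that every entry whose seed collides with another is dropped:

```python
from collections import Counter

def prune_matching_seeds(results):
    """Drop every result whose seed also appears in another result.

    `results` is a list of (seed, score) tuples -- a hypothetical
    stand-in for per-model entries in a weak-scaling submission.
    This is one possible interpretation of the rule; the PR text
    is authoritative on which entries survive.
    """
    counts = Counter(seed for seed, _ in results)
    return [(seed, score) for seed, score in results if counts[seed] == 1]

entries = [(42, 0.91), (7, 0.88), (42, 0.93), (13, 0.90)]
print(prune_matching_seeds(entries))  # → [(7, 0.88), (13, 0.90)]
```

With many simultaneously trained models drawing seeds at random, birthday-style collisions become likely, which is why the rule targets weak-scaling results in particular.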

github-actions[bot] commented 2 years ago

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

sparticlesteve commented 2 years ago

This rules change was approved in the HPC WG meeting on Apr 18, 2022. The PR can now be merged by those with the authority to do so (not me), e.g. @johntran-nv