mlcommons / training_policies

Issues related to MLPerf™ training policies, including rules and suggested changes
https://mlcommons.org/en/groups/training
Apache License 2.0

[DLRMv2] Remark on epoch_num for the new recommender benchmark #518

Closed · janekl closed this issue 1 year ago

janekl commented 1 year ago

Author: Jan Lasek, Nvidia (jlasek_at_nvidia_dot_com)

Benchmarks should employ one-based numbering for the epoch number in general. The DLRMv2 benchmark, however, is trained for at most one epoch. To make this slightly less confusing for users, I'm adding a remark that epoch numbering is zero-based in this case (which is in fact the convention currently used in the reference implementation).
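For illustration, here is a minimal sketch of what the zero-based convention means in practice, assuming a logging setup along the lines of the public mlperf_logging mllog API; the exact keys and metadata emitted by the DLRMv2 reference implementation are not shown in this issue and may differ.

```python
# Illustrative sketch (not taken from the reference implementation):
# zero-based epoch numbering for a benchmark such as DLRMv2, which
# trains for at most one epoch.
from mlperf_logging import mllog

mllogger = mllog.get_mllogger()

NUM_EPOCHS = 1  # DLRMv2 trains for at most one epoch

for epoch in range(NUM_EPOCHS):
    # Zero-based: the first (and only) epoch is logged as epoch_num=0,
    # whereas most benchmarks would log their first epoch as epoch_num=1.
    mllogger.start(key=mllog.constants.EPOCH_START, metadata={"epoch_num": epoch})
    # ... single training pass over the data ...
    mllogger.end(key=mllog.constants.EPOCH_STOP, metadata={"epoch_num": epoch})
```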

github-actions[bot] commented 1 year ago

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅