mlcommons / ck

Collective Knowledge (CK) and Collective Mind (CM): educational community projects to learn how to run AI, ML and other emerging workloads in a more efficient and cost-effective way across diverse models, datasets, software and hardware using MLPerf and CM automations
https://docs.mlcommons.org/ck
Apache License 2.0

[Suggestion] MLPerf reproducibility/repeatability methodology from ACM/IEEE/NeurIPS? #1080

Open gfursin opened 10 months ago

gfursin commented 10 months ago

Following many recent discussions at MLCommons about improving the repeatability and reproducibility of MLPerf inference benchmarks, we suggest looking at similar initiatives at computer systems conferences (artifact evaluation and reproducibility initiatives) and possibly adopting their methodology and badges:

Our repeatability study for MLPerf inference v3.1 highlights repeatability issues similar to those we have already seen at compiler, systems and ML conferences:

A potential solution is to improve the repeatability of MLPerf submissions (full reproducibility is probably too costly, if not impossible, at this stage) by introducing MLPerf reproducibility badges similar to the ACM reproducibility badges:

We can evaluate results after the submission deadline and before the publication deadline, and assign badges to all results in the officially published final table. This may motivate everyone to improve the quality of their submissions and earn all such badges in the future, instead of the community discovering such issues only after the official publication of MLPerf results.

gfursin commented 9 months ago

We have developed a prototype infrastructure to track MLPerf configurations and assign ACM badges:
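The badge-assignment logic described above could be sketched roughly as follows. This is a minimal illustrative sketch, not the actual CK/CM prototype: all function and field names (`assign_badges`, `code_url`, `config`, `result`) are hypothetical, and the 5% tolerance threshold is an assumed example value.

```python
# Hypothetical sketch of badge assignment for an MLPerf submission.
# Field names and the tolerance value are illustrative assumptions,
# not the actual CK/CM prototype API.

def assign_badges(submission, rerun_result=None, tolerance=0.05):
    """Return the list of reproducibility badges earned by one submission.

    submission   : dict with metadata about the submitted result
    rerun_result : metric from an independent re-run, if one was performed
    tolerance    : maximum allowed relative deviation between original
                   and re-run results (assumed 5% here)
    """
    badges = []

    # "Artifacts Available"-style badge: code and configuration are shared.
    if submission.get("code_url") and submission.get("config"):
        badges.append("artifacts-available")

    # "Results Reproduced"-style badge: an independent re-run of the
    # tracked configuration lands within the tolerance of the original.
    original = submission.get("result")
    if original and rerun_result is not None:
        rel_diff = abs(rerun_result - original) / original
        if rel_diff <= tolerance:
            badges.append("results-reproduced")

    return badges


# Example: a submission with shared artifacts whose re-run is within 5%.
sub = {
    "code_url": "https://example.org/submission-code",  # illustrative URL
    "config": {"model": "resnet50", "scenario": "Offline"},
    "result": 100.0,  # e.g. samples/sec reported by the submitter
}
print(assign_badges(sub, rerun_result=96.0))
```

Running the evaluation after the submission deadline, a checker like this could annotate every row of the final results table with the badges it earned.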