mlcommons / algorithmic-efficiency

MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
https://mlcommons.org/en/groups/research-algorithms/
Apache License 2.0

Skip eval on train and test for self-reporting results #725

Open Niccolo-Ajroldi opened 4 months ago

Niccolo-Ajroldi commented 4 months ago

Feature request: allow users to skip eval on train and test

Evaluating on the training and test sets is time-consuming and not necessary for self-reporting results. We should add a flag that allows the user to skip eval on these splits, to make self-scoring faster.

Accordingly, in this scenario we should modify:

```python
goals_reached = (
    train_state['validation_goal_reached'] and
    train_state['test_goal_reached'])
```

into:

```python
goals_reached = train_state['validation_goal_reached']
```

This would speed up self-evaluation even further by stopping training as soon as the validation target is reached, avoiding unnecessary use of computational resources.
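A minimal sketch of how such a flag could be wired in, assuming a hypothetical `skip_train_and_test_eval` flag defined with absl (the actual flag name and the surrounding code in the submission runner may differ):

```python
from absl import flags

# Hypothetical flag name; the real name and wiring would be decided in the PR.
flags.DEFINE_boolean(
    'skip_train_and_test_eval', False,
    'If true, only evaluate on the validation split and stop training once '
    'the validation target is reached (intended for self-reported results).')
FLAGS = flags.FLAGS


def check_goals_reached(train_state):
  """Returns True when training can stop early."""
  if FLAGS.skip_train_and_test_eval:
    # For self-reported results, only the validation target matters.
    return train_state['validation_goal_reached']
  # Official scoring still requires hitting the targets on all splits.
  return (train_state['validation_goal_reached'] and
          train_state['test_goal_reached'])
```

The same flag would also gate the eval loop itself, so that the train and test splits are never evaluated when it is set.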

priyakasimbeg commented 4 months ago

You're right, this is a good suggestion; we should allow skipping eval on the train and test splits.