mlcommons / algorithmic-efficiency

MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
https://mlcommons.org/en/groups/research-algorithms/
Apache License 2.0

Incorrect Imagenet evals with pytorch_eval_num_workers > 0 #732

Open priyakasimbeg opened 4 months ago

priyakasimbeg commented 4 months ago

An AlgoPerf submitter team reports that they are no longer able to reproduce the NAdam baseline results in PyTorch using the current repo on the ImageNet workloads (both ResNet and ViT). The plot below shows the differences in training/validation loss and accuracy between the reference NAdam JAX results and the current run's results on ImageNet ViT.

They did not observe a discrepancy on OGBG or FastMRI.

The merged commits in question range from 389fe3f823a5016289b55b48aa8061a37b18b401 to 79ccc5e860d7928cf896ffe12ec686c72fd840d4.

[Plot: training/validation loss and accuracy, NAdam JAX reference vs. current PyTorch run on ImageNet ViT]

Steps to Reproduce

Run the submission runner with pytorch_eval_num_workers=4 (the default was recently changed to speed up evals).

Source or Possible Fix

Setting pytorch_eval_num_workers to 0 resolves the eval discrepancy. We are still investigating why.

priyakasimbeg commented 4 months ago

Changed the default number of workers for PyTorch data loaders to 0. Important update: for the speech workloads, the pytorch_eval_num_workers flag to submission_runner.py has to be set to a value > 0 to prevent a data loader crash in the JAX code.
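As a sketch, the workaround plus the speech-workload caveat might look like the invocations below. Only the --pytorch_eval_num_workers flag is taken from this thread; the workload names and all other flags are assumptions and are elided.

```shell
# ImageNet workloads: keep eval workers at 0 until the discrepancy is understood.
python submission_runner.py \
  --workload=imagenet_vit \
  --pytorch_eval_num_workers=0 \
  ...

# Speech workloads: a value > 0 is required to avoid the data loader crash.
python submission_runner.py \
  --workload=librispeech_conformer \
  --pytorch_eval_num_workers=4 \
  ...
```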

runame commented 4 months ago

I tried reproducing the issue by running the target-setting run on the current dev branch with pytorch_eval_num_workers=4, but I don't see the drop in eval metrics compared to an older reference run (this one).

If someone can share the exact command and commit they used to produce the run in the plot, I will try to rerun it.