EleutherAI / gpt-neox

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
https://www.eleuther.ai/
Apache License 2.0

The results of running eval show only 1 digit after decimal point for acc on all tested tasks #1227

Closed lernerjenny closed 4 months ago

lernerjenny commented 6 months ago

Describe the bug
The results of running eval.py show only one digit after the decimal point for acc on all tested tasks. If there is a configuration argument to control this, I found no mention of it.

Example:

{
  "results": {
    "hellaswag": {
      "acc,none": 0.3,
      "acc_stderr,none": 0.15275252316519466,
      "acc_norm,none": 0.4,
      "acc_norm_stderr,none": 0.16329931618554522
    },
    "arc_easy": {
      "acc,none": 0.3,
      "acc_stderr,none": 0.15275252316519466,
      "acc_norm,none": 0.3,
      "acc_norm_stderr,none": 0.15275252316519466
    },
    "piqa": {
      "acc,none": 0.8,
      "acc_stderr,none": 0.13333333333333333,
      "acc_norm,none": 0.8,
      "acc_norm_stderr,none": 0.13333333333333333
    },
    "sciq": {
      "acc,none": 0.9,
      "acc_stderr,none": 0.09999999999999999,
      "acc_norm,none": 0.9,
      "acc_norm_stderr,none": 0.09999999999999999
    },
    "arc_challenge": {
      "acc,none": 0.2,
      "acc_stderr,none": 0.13333333333333333,
      "acc_norm,none": 0.2,
      "acc_norm_stderr,none": 0.13333333333333333
    }
  }
}

To Reproduce
Steps to reproduce the behavior:

  1. Run python deepy.py eval.py --conf_dir pythia 1B.yml --eval_tasks lambada_openai hellaswag piqa arc_easy arc_challenge winogrande sciq
  2. Observe the generated results JSON

Expected behavior
Provide a configuration argument to set the number of digits after the decimal point, and show at least 4 digits after the decimal point by default.
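For illustration only, a minimal sketch of what such an option could look like, assuming the results are held in a nested Python dict before being written out; round_floats and ndigits are hypothetical names and not part of the gpt-neox configuration:

import json

def round_floats(obj, ndigits=4):
    # Recursively round every float in a nested results structure to `ndigits`
    # decimal places; non-float values are returned unchanged.
    if isinstance(obj, dict):
        return {k: round_floats(v, ndigits) for k, v in obj.items()}
    if isinstance(obj, list):
        return [round_floats(v, ndigits) for v in obj]
    if isinstance(obj, float):
        return round(obj, ndigits)
    return obj

print(json.dumps(round_floats({"acc,none": 0.15275252316519466}), indent=2))
# prints the dict with "acc,none" rounded to 0.1528

(As the discussion below shows, the coarse values above actually came from the evaluation sample limit rather than from output formatting.)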


lernerjenny commented 5 months ago

I found the problem: https://github.com/EleutherAI/gpt-neox/blob/dfc6722f2ab0e3efb65ce5b49449a2a8b14a26b7/eval_tasks/eval_adapter.py#L493

limit=10 causes this issue and, much worse, incorrect eval results. The following warning can be found in lm-evaluation-harness: "--limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT."
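For context, a minimal sketch of how the limit parameter behaves in lm-evaluation-harness v0.4.x (judging by the "acc,none" keys in the output above); the model and task arguments here are illustrative and this is not the actual call made at the linked line in eval_adapter.py:

from lm_eval import simple_evaluate

# limit=10 scores only 10 examples per task, so acc can only be a multiple of 0.1;
# limit=None scores every example and yields real metrics.
results = simple_evaluate(
    model="hf",                                    # illustrative model type
    model_args="pretrained=EleutherAI/pythia-1b",  # illustrative checkpoint
    tasks=["hellaswag", "piqa", "arc_easy", "arc_challenge", "sciq"],
    limit=None,
)
print(results["results"])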

StellaAthena commented 5 months ago

> I found the problem:
>
> https://github.com/EleutherAI/gpt-neox/blob/dfc6722f2ab0e3efb65ce5b49449a2a8b14a26b7/eval_tasks/eval_adapter.py#L493
>
> limit=10 causes this issue and, much worse, incorrect eval results. The following warning can be found in lm-evaluation-harness: "--limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT."

Yes, and specifically using limit=10 means only 10 items are run, so accuracy can only be a multiple of 1/10 and it's mathematically impossible for the other digits to be non-zero :)
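A quick arithmetic check of that point, assuming the reported stderr is the usual sample standard error of a Bernoulli mean, sqrt(p(1-p)/(n-1)); with n = 10 this reproduces the values in the report above:

import math

n = 10  # limit=10 -> only 10 scored examples per task

# Accuracy over 10 examples can only be k/10, so at most one meaningful
# digit after the decimal point.
print([k / n for k in range(n + 1)])  # [0.0, 0.1, ..., 1.0]

# The reported stderr values are consistent with n = 10 as well:
for p in (0.3, 0.8, 0.9):
    print(p, math.sqrt(p * (1 - p) / (n - 1)))
# approximately 0.1528, 0.1333, and 0.1000, matching the stderr values
# in the issue (up to floating-point representation).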