logikon-ai / cot-eval

A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
https://huggingface.co/spaces/logikon/open_cot_leaderboard
MIT License

harness: --log_samples #57

Status: Open · opened by ggbetz 2 months ago

ggbetz commented 2 months ago

Use `--log_samples` when calling the harness, and upload the logged samples to a separate repo for later diagnostics:

See: https://github.com/EleutherAI/lm-evaluation-harness/issues/1842
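A minimal sketch of what the harness call could look like; the model, task, output path, and the `logikon/cot-eval-samples` dataset repo are hypothetical placeholders, not names confirmed by this issue:

```shell
# Run the harness with per-sample logging enabled.
# --log_samples writes each example's prompt/response alongside the
# aggregate metrics under --output_path.
lm_eval \
  --model hf \
  --model_args pretrained=some-org/some-model \
  --tasks gsm8k \
  --log_samples \
  --output_path ./eval-results

# Then push the logged samples to a separate Hugging Face dataset repo
# (repo name is a placeholder) so they stay available for later diagnostics.
huggingface-cli upload logikon/cot-eval-samples ./eval-results --repo-type dataset
```

This keeps the heavyweight per-sample logs out of the leaderboard repo itself while preserving them for error analysis.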