logikon-ai/cot-eval
A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
https://huggingface.co/spaces/logikon/open_cot_leaderboard
MIT License
internlm/internlm2_5-1_8b-chat lm-eval bug #64
Open
ggbetz opened this issue 1 month ago
ggbetz commented 1 month ago
See https://github.com/EleutherAI/lm-evaluation-harness/issues/2370
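For anyone trying to narrow this down, below is a minimal sketch of running the model through lm-evaluation-harness directly, to check whether the failure reproduces outside cot-eval. The task name and arguments are placeholders chosen for illustration, not the configuration cot-eval actually uses; the exact failure mode is described in the upstream issue linked above.

```python
# Hypothetical reproduction sketch (not taken from the issue): evaluate
# internlm/internlm2_5-1_8b-chat with lm-evaluation-harness directly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # InternLM checkpoints require trust_remote_code to load their custom modeling code.
    model_args="pretrained=internlm/internlm2_5-1_8b-chat,trust_remote_code=True",
    tasks=["gsm8k"],  # placeholder task, not necessarily the one cot-eval runs
    batch_size=1,
)
print(results["results"])
```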