logikon-ai / cot-eval

A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
https://huggingface.co/spaces/logikon/open_cot_leaderboard
MIT License

Evaluate: meta-llama/Meta-Llama-3-XXX #49

Closed: ggbetz closed this issue 6 months ago

ggbetz commented 7 months ago

Check upon issue creation:

Parameters (with XXX substituted by the model variant):

NEXT_MODEL_PATH=meta-llama/Meta-Llama-3-XXX
NEXT_MODEL_REVISION=main
NEXT_MODEL_PRECISION=bfloat16
MAX_LENGTH=2048
GPU_MEMORY_UTILIZATION=0.8
VLLM_SWAP_SPACE=12
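As a rough sketch of how these environment variables might be consumed, the snippet below collects them into keyword arguments matching vLLM's `LLM` constructor (`model`, `revision`, `dtype`, `max_model_len`, `gpu_memory_utilization`, `swap_space`). The helper name `load_eval_config` and the hand-off to vLLM are assumptions for illustration, not the repo's actual code.

```python
import os

def load_eval_config(env=os.environ):
    """Read the issue's NEXT_MODEL_* / vLLM settings from the environment.

    Defaults mirror the parameter block above; the mapping onto
    vllm.LLM keyword arguments is an assumption for illustration.
    """
    return {
        "model": env.get("NEXT_MODEL_PATH", "meta-llama/Meta-Llama-3-XXX"),
        "revision": env.get("NEXT_MODEL_REVISION", "main"),
        "dtype": env.get("NEXT_MODEL_PRECISION", "bfloat16"),
        "max_model_len": int(env.get("MAX_LENGTH", "2048")),
        "gpu_memory_utilization": float(env.get("GPU_MEMORY_UTILIZATION", "0.8")),
        "swap_space": int(env.get("VLLM_SWAP_SPACE", "12")),
    }
```

With the variables set as above, the resulting dict could then be unpacked into the model loader, e.g. `vllm.LLM(**load_eval_config())`.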

ToDos: