logikon-ai / cot-eval

A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
https://huggingface.co/spaces/logikon/open_cot_leaderboard
MIT License

Evaluate: NousResearch/Nous-Hermes-Llama2-13b #22

Closed by ggbetz 6 months ago

ggbetz commented 6 months ago

Check:

Parameters:

```
NEXT_MODEL_PATH=meta-llama/Llama-2-13b-hf
NEXT_MODEL_REVISION=main
NEXT_MODEL_PRECISION=float16
MAX_LENGTH=2048
GPU_MEMORY_UTILIZATION=0.8
VLLM_SWAP_SPACE=6
```
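For context, a minimal sketch of how environment variables like these could be collected into engine arguments for a vLLM-style backend. The function name `load_eval_config` and the mapping to keyword names (`model`, `revision`, `dtype`, `max_model_len`, `gpu_memory_utilization`, `swap_space`) are illustrative assumptions, not taken from the cot-eval source:

```python
import os

def load_eval_config(env=None):
    """Map the NEXT_MODEL_* / vLLM environment variables to a plain dict
    of engine arguments. The keyword names below mirror vLLM's LLM(...)
    parameters, but this mapping is an assumption for illustration."""
    if env is None:
        env = os.environ
    return {
        "model": env.get("NEXT_MODEL_PATH", "meta-llama/Llama-2-13b-hf"),
        "revision": env.get("NEXT_MODEL_REVISION", "main"),
        "dtype": env.get("NEXT_MODEL_PRECISION", "float16"),
        # Numeric settings arrive as strings from the environment,
        # so convert them explicitly.
        "max_model_len": int(env.get("MAX_LENGTH", "2048")),
        "gpu_memory_utilization": float(env.get("GPU_MEMORY_UTILIZATION", "0.8")),
        "swap_space": int(env.get("VLLM_SWAP_SPACE", "6")),
    }
```

With no overrides set, the defaults reproduce the parameter block above.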
ggbetz commented 6 months ago

Might not work either, see #17

yakazimir commented 6 months ago

Is this the right model name?

ggbetz commented 6 months ago

I think it is (https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) -- but I'll close this for now, as vLLM doesn't work well with these axolotl models.