aidatatools / ollama-benchmark

LLM Benchmark for Throughput via Ollama (Local LLMs)
https://llm.aidatatools.com/
MIT License

Why are the prepared prompts for Mistral different from those for other models? #4

Closed: nicholaslck closed this issue 3 months ago

nicholaslck commented 3 months ago

Hello!

I am new to benchmarking LLM throughput, so when I ran the benchmark I was quite confused about why the prompts fed to the Mistral model are different from those fed to the other models.

In particular, in the file https://github.com/aidatatools/ollama-benchmark/blob/main/llm_benchmark/data/benchmark1.yml, the prompt batches are separated into instruct and question-answer types.
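
For readers who have not opened the file, here is a hypothetical sketch of how prompts might be grouped by type; the actual keys and prompts in benchmark1.yml may differ:

```yaml
# Hypothetical sketch of the structure; the real benchmark1.yml may use different keys.
prompts:
  instruct:          # instruction-style prompts (e.g. fed to Mistral-7B-Instruct)
    - "Summarize the following paragraph in two sentences: ..."
    - "Translate 'Hello, world' into French."
  question-answer:   # plain Q&A prompts (fed to the other models)
    - "What is the capital of Japan?"
    - "Why is the sky blue?"
```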

May I know if there is a specific reason for this design?

Good work btw👍

chuangtc commented 3 months ago

Hi, ever since the announcement of Mistral-7B-Instruct-v0.1 (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) and Mistral-7B-Instruct-v0.2 (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), the model has been well known for its instruction mode, which is why I feed instruction-type prompts to the Mistral model. I hope this clarifies your doubts.
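
For context, the Mistral Instruct models were fine-tuned on prompts wrapped in [INST] ... [/INST] tags, as described in the model cards linked above. The sketch below contrasts the two prompt styles; it assumes the chat template from the model's Modelfile is applied by Ollama itself, so the benchmark only needs to supply the raw text. The example prompts are illustrative, not taken from benchmark1.yml:

```yaml
# Illustrative prompts; not copied from benchmark1.yml.
instruct_example: "Write a haiku about GPUs."   # phrased as a command/instruction
question_answer_example: "What is a haiku?"     # phrased as a plain question
# For a Mistral Instruct model, the serving layer wraps the prompt roughly as:
#   <s>[INST] Write a haiku about GPUs. [/INST]
```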