EQ-bench / EQ-Bench

A benchmark for emotional intelligence in large language models

Add llama server inference #17

Closed dnhkng closed 8 months ago

dnhkng commented 8 months ago

Inference via the llama.cpp server currently goes through the /chat/completion endpoint, which prevents selecting an appropriate prompt-formatting template. This can lead to lower-than-expected scores because the standard ChatGPT template is applied instead of the model's own format. This pull request adds support for selecting both the llama.cpp server API and a specific prompt template in the config file.
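
For context, here is a minimal sketch of the difference between the two approaches. The server address, template string, and request fields are illustrative assumptions based on recent llama.cpp server builds, not the actual EQ-Bench code:

```python
# Sketch only: contrasts the chat endpoint (server-side formatting) with the raw
# /completion endpoint (client applies its own prompt template).
import requests

SERVER = "http://localhost:8080"  # assumed llama.cpp server address


def query_chat_endpoint(user_message: str) -> str:
    """Chat-style endpoint: the server decides how the prompt is formatted."""
    resp = requests.post(
        f"{SERVER}/v1/chat/completions",
        json={"messages": [{"role": "user", "content": user_message}]},
        timeout=300,
    )
    return resp.json()["choices"][0]["message"]["content"]


def query_completion_endpoint(user_message: str, template: str) -> str:
    """Raw completion endpoint: the client applies the chosen prompt template itself."""
    # `template` is a hypothetical format string, e.g. an Alpaca- or ChatML-style template
    prompt = template.format(prompt=user_message)
    resp = requests.post(
        f"{SERVER}/completion",
        json={"prompt": prompt, "n_predict": 512, "temperature": 0.01},
        timeout=300,
    )
    return resp.json()["content"]
```

With the raw endpoint, the benchmark can apply whichever template the config file specifies, rather than being locked to the server's default chat formatting.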

dnhkng commented 8 months ago

This does not yet include starting and stopping the llama.cpp server! It does seem to work though :)
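
For reference, one possible way to automate the start/stop step from the harness. The binary name, flags, and the /health check below are assumptions about common llama.cpp server builds, not part of this PR:

```python
# Rough sketch of managing the llama.cpp server lifecycle from the benchmark script.
import subprocess
import time
import requests


def start_llama_server(binary: str, model_path: str, port: int = 8080) -> subprocess.Popen:
    """Launch the llama.cpp server and block until it responds to HTTP requests."""
    proc = subprocess.Popen([binary, "-m", model_path, "--port", str(port)])
    for _ in range(120):  # wait up to ~2 minutes for the model to load
        try:
            requests.get(f"http://localhost:{port}/health", timeout=2)
            return proc
        except requests.exceptions.RequestException:
            time.sleep(1)
    proc.terminate()
    raise RuntimeError("llama.cpp server did not become ready in time")


def stop_llama_server(proc: subprocess.Popen) -> None:
    """Shut the server down cleanly after the benchmark run."""
    proc.terminate()
    proc.wait(timeout=30)
```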