EQ-bench / EQ-Bench

A benchmark for emotional intelligence in large language models
MIT License

Start llama.cpp server #18

Open · dnhkng opened 6 months ago

dnhkng commented 6 months ago

This PR adds support for starting the llama.cpp server, with its parameters taken from the config file.

Since these parameters occupy the same position in the config as those for the ooba process, the field has been renamed from 'ooba_params' to 'server_params'.

The server is started and stopped via subprocess, following a similar style to the existing ooba code.

Note: this requires that you pull and compile llama.cpp first!
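As a rough illustration of the start/stop approach described above, here is a minimal subprocess-based sketch. The function names, the `server_params` dict shape, and the server binary path are assumptions for illustration, not the PR's actual code:

```python
import subprocess

def build_server_command(server_params, binary_path="./llama.cpp/server"):
    """Flatten a dict of CLI flags from the config into an argv list.

    A value of None marks a bare flag with no argument.
    """
    cmd = [binary_path]
    for flag, value in server_params.items():
        cmd.append(str(flag))
        if value is not None:
            cmd.append(str(value))
    return cmd

def start_llama_server(server_params, binary_path="./llama.cpp/server"):
    """Launch the llama.cpp server as a child process."""
    return subprocess.Popen(build_server_command(server_params, binary_path))

def stop_llama_server(proc, timeout=10.0):
    """Ask the server to exit, escalating to a hard kill if it hangs."""
    proc.terminate()
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()
```

The escalation in `stop_llama_server` mirrors the usual terminate-then-kill pattern so the benchmark run never blocks on a wedged server process.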

sam-paech commented 6 months ago

Thanks for this! I want to make a few changes before merging:

- set `--ctx-size` to 1024+, throwing an error if the user sets this arg below 1024 in the config
- resolve relative paths (`~`)
- add the same functionality as the ooba class (download model, etc.)

I'll let you know when I have a chance to start working on this, or feel free to take it on yourself.

dnhkng commented 6 months ago

> set --ctx-size 1024+, throw error if user sets this arg less than 1024 in config

Seems easy, I can add this.

> resolve relative paths (~)

To the model? To the executable?

> add same functionality as the ooba class (download model etc)

Probably best if you take care of this, as I'm not sure of the details.
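For the first two items (the ctx-size floor and `~` expansion), a small validation pass over the config could look like the sketch below. The function name, the 1024 floor constant, and the flag keys are assumptions based on this thread, not code from the PR:

```python
import os

# Minimum context size the benchmark prompts are assumed to require
# (per the reviewer's request in this thread).
MIN_CTX_SIZE = 1024

def normalise_server_params(server_params):
    """Validate and normalise llama.cpp server params from the config.

    - enforces --ctx-size >= MIN_CTX_SIZE, raising on smaller values
    - expands a leading '~' in any string value (model path, binary path)
    """
    params = dict(server_params)
    ctx = int(params.get("--ctx-size", MIN_CTX_SIZE))
    if ctx < MIN_CTX_SIZE:
        raise ValueError(
            f"--ctx-size must be at least {MIN_CTX_SIZE}, got {ctx}"
        )
    params["--ctx-size"] = ctx
    for key, value in params.items():
        if isinstance(value, str) and value.startswith("~"):
            params[key] = os.path.expanduser(value)
    return params
```

Running this once at config-load time keeps the error close to its cause, rather than letting the server fail later with a confusing message.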