Closed bartmch closed 1 month ago
Hey, is it also possible to use an open-source LLM? If yes, which models would you recommend, and how would you serve them, e.g. Ollama or llama.cpp? I looked at the LOTUS "providers" but could only see OpenAI. Great work!
The OpenAI model class just means that the model is served via an OpenAI-compatible server. In the paper we use Llama 3.1 70B and serve the model through vLLM's OpenAI-compatible server (https://docs.vllm.ai/en/latest/getting_started/quickstart.html#openai-compatible-server). I believe llama.cpp also supports the OpenAI API, so you could use that as well.
Hey @sidjha1, thanks for your reply. My bad, let me read the paper first before trying things out. You are right about llama-cpp as explained here. Thanks!
Great!
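To illustrate what "OpenAI-compatible" means in practice, here is a minimal sketch of the request shape such a server (vLLM, llama.cpp, etc.) accepts. The base URL and the model id `meta-llama/Llama-3.1-70B-Instruct` are assumptions for illustration; adjust them to your deployment.

```python
import json

# Assumed default endpoint of a locally running vLLM OpenAI-compatible
# server; llama.cpp's server exposes the same /v1 route layout.
VLLM_BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a POST to {base_url}/chat/completions,
    following the OpenAI chat-completions request schema."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")

# Hypothetical model id; use whatever name your server registered.
body = build_chat_request("meta-llama/Llama-3.1-70B-Instruct", "Hello!")
```

Any OpenAI-compatible client can send this body to `VLLM_BASE_URL + "/chat/completions"` with a `Content-Type: application/json` header, which is why a single "OpenAI" provider class can cover vLLM, llama.cpp, and hosted OpenAI alike.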