TAG-Research / TAG-Bench

TAG-Bench: A benchmark for table-augmented generation (TAG)
https://arxiv.org/pdf/2408.14717
MIT License

OS LLM #1

Closed · bartmch closed this 1 month ago

bartmch commented 1 month ago

Hey, is it also possible to use an open-source LLM? If so, which models would you recommend, and how would you serve them, e.g. with Ollama or llama.cpp? I looked at the LOTUS "providers" but could only see OpenAI. Great work!

sidjha1 commented 1 month ago

The OpenAI model class just means that the model is served behind an OpenAI-compatible server. In the paper we use Llama 3.1 70B and serve it through vLLM's OpenAI-compatible server (https://docs.vllm.ai/en/latest/getting_started/quickstart.html#openai-compatible-server). I believe llama.cpp also supports the OpenAI API, so you could use that as well.
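For reference, here is a minimal sketch of that setup: launch vLLM's OpenAI-compatible server and point any OpenAI-style client at the local endpoint. The model name, port, and API key below are placeholders, not values taken from this repo, so adjust them for your own setup.

```python
# Minimal sketch: serve an open-weight model with vLLM's OpenAI-compatible
# server, then query it with the standard OpenAI client.
#
# 1) Start the server in a shell (model name and port are placeholders):
#      vllm serve meta-llama/Llama-3.1-70B-Instruct --port 8000
#    (older vLLM versions expose the same server via
#      python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.1-70B-Instruct)
#
# 2) Point a client at the local endpoint:
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-70B-Instruct",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

The LOTUS OpenAI model class should work the same way if you point it at that local `base_url` instead of the OpenAI API; check the LOTUS docs for the exact parameter names, since they may differ from the plain `openai` client shown here.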

bartmch commented 1 month ago

Hey @sidjha1, thanks for your reply. My bad, let me read the paper first before trying things out. You are right about llama.cpp, as explained here. Thanks!

sidjha1 commented 1 month ago

Great!