Closed ngeraks closed 4 months ago
We’ve recently moved to a LiteLLM wrapper around completions, which allows support for various LLM APIs/systems.
It seems like Llama through Ollama is supported.
I’ll test it out this weekend and post instructions 👍 Maybe --llm=ollama/llama2 is sufficient.
In my limited experimentation, both Llama2 and CodeLlama are less capable than GPT-4 with CWhy.
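For reference, LiteLLM's ollama/ model prefix routes the completion to Ollama's local REST API. A minimal stdlib sketch of the request body such a call produces (the /api/chat payload shape here is an assumption based on Ollama's documented REST API, not CWhy's actual code):

```python
import json

def ollama_chat_payload(prompt: str, model: str = "llama2") -> str:
    """Build a JSON body for Ollama's /api/chat endpoint.

    This mirrors what a LiteLLM "ollama/llama2" completion call sends
    under the hood; field names follow Ollama's REST API docs.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request a single response rather than a stream
    })
```

POSTing this body to http://localhost:11434/api/chat (Ollama's default address) should return the model's reply, assuming ollama serve is running.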
Finally got around to properly testing this today. Running in an Ubuntu Docker container on Windows, it appears to work pretty well out of the box. Issues would probably come up if ollama is running on a port other than the default.
Function calling is not supported by Llama2 and seems to fail through LiteLLM; I will investigate that separately.
% curl -fsSL https://ollama.com/install.sh | sh
% screen -dmS ollama ollama serve
% ollama pull llama2
% cwhy --llm=ollama/llama2 --- g++ -c ./tests/c++/missing-hash.cpp
[works as expected.]
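If you do need a non-default port, Ollama reads its bind address from the OLLAMA_HOST environment variable; a sketch (the port 11435 here is just an example):

```shell
# Run the Ollama server on a non-default port (assumption: 11435 is free).
# OLLAMA_HOST is Ollama's documented env var for the bind address.
export OLLAMA_HOST=127.0.0.1:11435
# screen -dmS ollama ollama serve   # the server now listens on 11435
echo "$OLLAMA_HOST"
```

Clients would then need to be pointed at the same address rather than the default 11434.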
The title says it all. Can the configuration be modified so that we can use local LLMs?