plasma-umass / cwhy

"See why!" Explains and suggests fixes for compile-time errors for C, C++, C#, Go, Java, LaTeX, PHP, Python, Ruby, Rust, and TypeScript
Apache License 2.0

Can this work with local LLMs like Meta's Llama? #62

Closed: ngeraks closed this issue 4 months ago

ngeraks commented 5 months ago

The title says it all. Can the configuration be modified so that we can use local LLMs?

nicovank commented 5 months ago

We’ve recently moved to a LiteLLM wrapper around completions, which allows support for various LLM APIs/systems.

It seems like Llama through Ollama is supported.
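For reference, routing through LiteLLM looks roughly like this (a minimal sketch with a placeholder model name and prompt, not CWhy's actual code):

import litellm

# Minimal sketch: ask LiteLLM to route a completion to a local Ollama model.
# The "ollama/" prefix selects the Ollama backend; the model name and prompt
# here are placeholders, not what CWhy actually sends.
response = litellm.completion(
    model="ollama/llama2",
    messages=[{"role": "user", "content": "Explain this compiler error: ..."}],
)
print(response.choices[0].message.content)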

I’ll test it out this weekend and post instructions 👍 Maybe using --llm=ollama/llama2 is sufficient.

In my limited experimentation so far, both Llama2 and CodeLlama are less capable than GPT-4 with CWhy.

nicovank commented 4 months ago

Finally got around to properly testing this today. Running in an Ubuntu Docker container on Windows, it appears to work pretty well out of the box. Issues would probably come up if Ollama is serving on a port other than the default (11434).
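If Ollama is listening somewhere other than the default http://localhost:11434, LiteLLM can be pointed at it explicitly; a rough sketch (the port and prompt below are just example values, not CWhy settings):

import litellm

# Sketch: override the Ollama endpoint when the server is not on the
# default port. The api_base URL here is an example value.
response = litellm.completion(
    model="ollama/llama2",
    api_base="http://localhost:11435",
    messages=[{"role": "user", "content": "hello"}],
)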

Function calling is not supported by Llama2 and seems to fail through LiteLLM; I'll investigate that separately.

% curl -fsSL https://ollama.com/install.sh | sh   # install Ollama
% screen -dmS ollama ollama serve                 # run the Ollama server in a detached screen session
% ollama pull llama2                              # download the Llama2 model
% cwhy --llm=ollama/llama2 --- g++ -c ./tests/c++/missing-hash.cpp
[works as expected.]