## Notes
I am new to coding with LLMs and just wanted to play around with using Helix and a locally hosted Ollama.
As I don't have access to other providers for comparison, and not much experience with prompt engineering, this is just something that seems to work. Please help test it out and let me know what is missing.
## Discussion
- For users without strong hardware, some actions on larger files may take too long and trigger Helix's `Async job failed: request 8 timed out` error.
- The prompts and parameters may need some more fine-tuning.
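A possible workaround for the timeout (untested on my side, so treat this as a suggestion): Helix lets you raise the per-server request timeout in `languages.toml`, which defaults to 20 seconds.

```toml
# Raise the LSP request timeout for the gpt server (value is in seconds;
# Helix's default is 20, which slower hardware can easily exceed)
[language-server.gpt]
timeout = 120
```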
## Summary
This PR adds basic support for Ollama.
Prompts are copied from the OpenAI provider.
## Testing
```shell
ollama pull codellama
```
`languages.toml`:
```toml
[language-server.gpt]
command = "bun"
args = [
  "--inspect=0.0.0.0:6499",
  "run",
  "helix-gpt/src/app.ts",
  "--handler", "ollama",
  "--logFile", "helix-gpt.log",
]
```
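For completeness, the server also has to be attached to a language before Helix will start it. A minimal sketch (the language and the extra server here are just examples, pick whatever fits your setup):

```toml
# Example: enable the gpt server for Python alongside the regular LSP
[[language]]
name = "python"
language-servers = ["pylsp", "gpt"]
```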