Closed: Fastidious closed this issue 2 weeks ago
Yes! You will be able to use it with a locally running Ollama model very soon - it's in the works and should be pushed shortly. We'll update here when it's ready!
Ollama support is officially in beta with this commit, and is available in v0.1.9! Here's a demo video:
https://github.com/squaredtechnologies/thread/assets/26368245/e324ce26-195a-4231-832d-98a59f5bb7cf
The new model selector lets you choose which model to use - either OpenAI or a local Ollama model. Point Thread at the Ollama URL, enter the name of the running model, and Thread's AI features run fully locally!
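As a quick sanity check before wiring up Thread, you can confirm your local Ollama server and model are reachable. This is a minimal sketch against Ollama's standard `/api/generate` endpoint on its default port 11434; the model name `llama3` is only a placeholder - use whatever model you have running locally.

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust host/port if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/generate"

# "llama3" is a placeholder - substitute the model you pulled with `ollama run`.
payload = {
    "model": "llama3",
    "prompt": "Reply with the single word: pong",
    "stream": False,  # request one complete JSON response instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read())["response"])
except OSError as exc:
    # URLError is a subclass of OSError, so connection failures land here.
    print(f"Ollama not reachable at {OLLAMA_URL}: {exc}")
```

If this prints a model reply, the same URL and model name should work in Thread's model selector.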
Side note: local models work well for chat, but I ran into some trouble getting them to respect function calls for the code generation / edit tasks. We'll be hard at work getting those up and running. 🤝
Closing this issue out since local LLMs are working - please feel free to give it a try and raise any additional issues!
Will it be possible to add the ability to use a locally running solution, like Ollama?