Closed: alejandro-alzate closed this issue 2 months ago
Hey @alejandro-alzate, we currently have support for Ollama in the assistant panel:
https://zed.dev/docs/assistant-panel#using-ollama-on-macos
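For anyone setting this up, a quick sanity check that a local Ollama instance is reachable on its default port (11434, which is where the integration looks unless you've pointed it elsewhere) could look like this; codellama is just an example model:

$ ollama pull codellama                  # download a code-oriented model
$ curl http://localhost:11434/api/tags   # list local models to confirm the API is up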
We also have an issue open for potentially adding Ollama support for inline completions (Copilot/Supermaven-style completions):
Feel free to upvote that one with a 👍; I'm going to close this one out.
Sorry for bothering, but thanks!
@JosephTLyons - it looks like you linked the same issue here.
I personally think local LLM support for inline completions would be excellent. I work on a few private codebases and don't trust Supermaven / Copilot with them.
Should we open a new issue or did you mean to refer to an existing issue?
Check for existing issues
Describe the feature
Cut and dried: forget Copilot or ChatGPT, with their subscriptions and privacy nightmares.
We have generative transformers at home:
$ ollama run codellama
$ ollama run llama2
etc.

Interfacing with a locally running model comes with more latency, but at least it only costs the electricity bill and a bit of heat.
So having in-house code completion hints would be nice.
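For context, here is a rough sketch of the kind of request an inline-completion integration would make against Ollama's local HTTP API. The /api/generate endpoint is real; the model and prompt are purely illustrative:

$ curl http://localhost:11434/api/generate -d '{
    "model": "codellama",
    "prompt": "fn add(a: i32, b: i32) -> i32 {",
    "stream": false
  }'
# Returns a JSON object whose "response" field holds the generated completion.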
If applicable, add mockups / screenshots to help present your vision of the feature
No response