Closed. lukeab closed this 2 months ago.
Updated the description to correct the `cody.autocomplete.advanced.model` setting name.
Thanks @lukeab for reporting this. We will follow up ASAP.
I am trying to configure Cody with local Ollama. The doc says to select the `unstable-ollama` provider, but it does not appear in the list.
VSCode v1.85.2
Cody AI v1.4.4
@lukeab `cody.autocomplete.advanced.serverEndpoint` is actually not used when the Ollama provider is configured. Sorry for that (I know it's super confusing and we have to make this easier!). You want to use the `cody.autocomplete.experimental.ollamaOptions` option like this:

```json
"cody.autocomplete.advanced.provider": "experimental-ollama",
"cody.autocomplete.experimental.ollamaOptions": {
  "url": "http://localhost:11434",
  "model": "deepseek-coder:6.7b-base-q4_K_M"
},
```
@lakarpusky Which doc do you refer to? We have renamed the option from `unstable-ollama` to `experimental-ollama`, so that doc will need updating, I'm afraid.
Thanks @philipp-spiess, it worked with that configuration. It was my mistake; I meant the changelog, but it seems I checked the wrong file.
Does Cody allow changing the chat model? I already have the `codellama:7b-instruct` model downloaded, and being able to use chat offline would be great.
@lakarpusky Unfortunately we don't have support for Ollama in Chat yet. :/
Is there a plan to enable Ollama for chat?
What's local Ollama used for, then?
@mejoe

> Is there a plan to enable Ollama for chat?

Yep, but I can't promise a timeline yet. I have created a separate issue to track demand for this though: https://github.com/sourcegraph/cody/issues/3252
@klimchuk

> What's local Ollama used for, then?

Currently Ollama support is experimental and only available for the autocomplete feature (so for inline code suggestions). A quick guide on how to enable it can be found here: https://github.com/sourcegraph/cody/blob/main/vscode/doc/ollama.md
One of the main reasons I was drawn to Cody was that one could use Ollama with it. It's disappointing to see that local Ollama models are used only for autocomplete (and this needs to be documented better) and not for any of the other features.
@ByerRA coming soon =)
It is not clear to me how I can add the following to my VS Code settings file:

```json
"cody.autocomplete.experimental.ollamaOptions": {
  "url": "http://localhost:11434",
  "model": "deepseek-coder:6.7b-base-q4_K_M"
},
```
It's great, the `cody.autocomplete.experimental.ollamaOptions` url and model settings work now.

It would be ideal to have a sense of when experimental features may be merged into advanced, or what the plan is to include the experimental features in the settings dialogs in VS Code. Happy this works, and now with PR #3282 answering the need in ticket #3252 things are looking better; I would just love them all incorporated into the settings form fields.
@lukeab glad you are enjoying it! We don't have any plans on next steps. Right now we are gathering feedback (like this) and creating content to see what's next.
@jdorfman @philipp-spiess For me these options in VS Code 1.90.0 with Cody 1.20.3 no longer seem to work:

```json
"cody.autocomplete.advanced.provider": "experimental-ollama",
"cody.autocomplete.experimental.ollamaOptions": {
  "url": "http://localhost:11434",
  "model": "granite-code:34b"
},
"cody.autocomplete.languages": {
  "*": true
},
"cody.experimental.ollamaChat": true,
```
Namely, the `cody.autocomplete.experimental.ollamaOptions` and `cody.experimental.ollamaChat` keys show the "Unknown Configuration Setting" tooltip.
If I click the "Cody: Generate Code" option from the lightbulb icon, I get "No chat model found in server-provided config", and the output window shows:

```
ConfigFeaturesSingleton: refreshConfigFeatures accessing Sourcegraph GraphQL API: Error: HTTP status code 401: Invalid access token.
```
Note that I am not logged into sourcegraph account, but I am also not getting any auto-completion offers.
Another error I am seeing earlier in the output is:

```
LocalEmbeddingsController: load 5f1385928dff462ab5e748800ac37144 {"code":-32099}
```
Could you please point me to some updated docs that say exactly what is supported in the current version and what the correct configuration keys are now?
+1
The document is old.
Is there any solution right now? It does not work when signed out, on VS Code 1.92.0 with Cody 1.28.1.
> Is there any solution right now? It does not work when signed out, on VS Code 1.92.0 with Cody 1.28.1.

Yes, the solution is not to use Cody.

Switch to https://www.continue.dev/ -- it's much better and fully open-source anyway.
> Yes, the solution is not to use Cody. Switch to https://www.continue.dev/ -- it's much better and fully open-source anyway.

Thanks for mentioning! Will give it a try.
> Is there any solution right now? It does not work when signed out, on VS Code 1.92.0 with Cody 1.28.1.

It seems one has to log in to have autocomplete working, according to the link. When signed in with the right settings, the Sourcegraph output gives:

```
CompletionLogger:onComplete: {"type":"code-completion","endpoint":"xxxx","status":"success","duration":188}
```

which confirms my Ollama is working. But signing in with a local service is hhhh.
> Yes, the solution is not to use Cody. Switch to https://www.continue.dev/ -- it's much better and fully open-source anyway.

DO NOT DO THIS! Continue is very poor when interacting with your code. It only suggests code in the chat tab and does not update your code automatically as Cursor does. You have to copy/paste code from the chat tab, and this is very dangerous because it may lead you to make mistakes.

Cody is the closest you can get to the Cursor IDE. Great code interaction.
Version
v1.2.3
Describe the bug
When selecting to use a self-hosted Ollama instance, there is no way to do 2 things:

1. `cody.autocomplete.advanced.serverEndpoint`: Cody will always attempt to use http://localhost:11434, so I cannot specify the IP of my desktop machine hosting Ollama.
2. `cody.autocomplete.advanced.model`: for example, when llama-code-13b is selected, the VS Code output tab for Cody always says: `█ CodyCompletionProvider:initialized: unstable-ollama/codellama:7b-code`
Expected behavior
When a server endpoint is specified, the Ollama configuration should use that value instead of the default http://localhost:11434. When a model is selected, it should use that model in the config and request parameters, instead of always using `codellama:7b-code`.
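The expected override logic can be sketched like this (the function and constant names are illustrative, not Cody's actual internals):

```python
# Sketch of the behavior the reporter expects; DEFAULT_* values and the
# function name are illustrative, not Cody's real implementation.
DEFAULT_URL = "http://localhost:11434"
DEFAULT_MODEL = "codellama:7b-code"

def resolve_ollama_config(server_endpoint=None, model=None):
    """Fall back to the defaults only when the user has not set a value."""
    return {
        "url": server_endpoint or DEFAULT_URL,
        "model": model or DEFAULT_MODEL,
    }

# A user-specified endpoint and model should win over the defaults:
print(resolve_ollama_config("http://192.168.0.10:11434", "codellama:13b-code"))
# → {'url': 'http://192.168.0.10:11434', 'model': 'codellama:13b-code'}
```

The bug reported here is that, in effect, the defaults are always used regardless of what the user configured.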
Additional context
(Attached image shows the output for unstable-ollama/codellama:7b-code being used despite 13b being selected.)