continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains
https://docs.continue.dev/
Apache License 2.0

config.json file not working properly #3046

Open · hiofficial4477 opened this issue 6 days ago

hiofficial4477 commented 6 days ago

Relevant environment info

Error streaming response: Cannot read properties of undefined (reading 'model')

Description

I accidentally deleted my config.json file, and now even after reinstalling the plugin the config.json file comes up blank/empty. Can I get a copy of this file?

To reproduce

No response

Log output

No response

hiofficial4477 commented 6 days ago

It got fixed. You need to delete the config.json file from its install location (typically ~/.continue/config.json) and then connect with your preferred API key.

The default file is:

{ "models": [ { "model": "claude-3-5-sonnet-latest", "provider": "anthropic", "apiKey": "", "title": "Claude 3.5 Sonnet" }, { "model": "gpt-3.5-turbo", "title": "GPT-3.5-Turbo", "apiKey": Your API key", "provider": "openai" } ], "tabAutocompleteModel": { "title": "Codestral", "provider": "mistral", "model": "codestral-latest", "apiKey": "" }, "customCommands": [ { "name": "test", "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.", "description": "Write unit tests for highlighted code" } ], "contextProviders": [ { "name": "code", "params": {} }, { "name": "docs", "params": {} }, { "name": "diff", "params": {} }, { "name": "terminal", "params": {} }, { "name": "problems", "params": {} }, { "name": "folder", "params": {} }, { "name": "codebase", "params": {} } ], "slashCommands": [ { "name": "share", "description": "Export the current chat session to markdown" }, { "name": "cmd", "description": "Generate a shell command" }, { "name": "commit", "description": "Generate a git commit message" } ] }

hiofficial4477 commented 6 days ago

Now a new error is showing:

HTTP 401 Unauthorized from https://api.openai.com/v1/chat/completions

hiofficial4477 commented 6 days ago

That's also fixed, just by changing the OpenAI key and setting up Ollama locally with a local model.
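
For anyone hitting the same 401: a minimal sketch of what that change looks like in config.json (model names here are just examples; any Ollama model you have pulled locally will work):

{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    },
    {
      "title": "GPT-3.5-Turbo",
      "provider": "openai",
      "model": "gpt-3.5-turbo",
      "apiKey": "<a valid OpenAI key>"
    }
  ]
}

The 401 from api.openai.com means the key itself was rejected, so replacing it (or relying on the local Ollama entry, which needs no key) clears the error.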

micuentadecasa commented 6 days ago

I'm having the same issue. I was adding LM Studio and it started giving errors.

ducphamle2 commented 6 days ago

In my case, Continue can't detect my custom self-hosted models.

(screenshot: the model selector shows no models)

How can I fix this?

ducphamle2 commented 6 days ago

It just doesn't show anything, even after I click "Add Chat model".

micuentadecasa commented 5 days ago

I have tried a lot of things: adding a chat model as @ducphamle2 says, rewriting the config file (I had saved a working config for LM Studio), deleting the file, using the content of the file shared in this thread, etc., and it doesn't work. In my case I updated VS Code, but I'm not totally sure whether the error started there.

hiofficial4477 commented 5 days ago

I got it fixed. First, remove everything from the config.json file:

Just leave it as { } >> save the config.json file >> uninstall the extension >> restart your VS Code editor >> reinstall the Continue extension >> then add your OpenAI (or other LLM) API key first, before opening your config file >> it will regenerate config.json automatically from the input you provide.
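
For reference, the regenerated file should then contain at least a "models" entry matching the key you added. Roughly like this, as a sketch assuming an OpenAI key was entered (your provider and model will match what you picked in the UI):

{
  "models": [
    {
      "title": "GPT-3.5-Turbo",
      "provider": "openai",
      "model": "gpt-3.5-turbo",
      "apiKey": "<the key you entered>"
    }
  ]
}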

hiofficial4477 commented 5 days ago

After everything works, it will give you an error if you have added an OpenAI API key but not the other keys mentioned in config.json; just remove the other models from there and it will work fine. Also, for the autocomplete section of the config file, you have to install an Ollama model and change the name accordingly. The autocomplete section would look like this:

"tabAutocompleteModel": { "title": "qwen2.5-coder", "provider": "ollama", "model": "qwen2.5-coder:32b", },

I am using the Qwen2.5-Coder 32-billion-parameter model, which I find better than GPT-4 for code autocomplete.
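
One note: the model has to exist locally in Ollama first, so pull it before enabling it (ollama pull qwen2.5-coder:32b). The 32B variant needs a lot of RAM/VRAM; the smaller qwen2.5-coder tags are an option on lighter machines.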

Hope it helps :)

micuentadecasa commented 5 days ago

> I got it fixed. First, remove everything from the config.json file: just leave it as { } >> save >> uninstall the extension >> restart VS Code >> reinstall >> add your API key before opening the config file. […]

This solution didn't work for me.

ducphamle2 commented 5 days ago

> I have tried a lot of things: adding a chat model, rewriting the config file, deleting the file, using the content of the file shared in this thread, etc., and it doesn't work. […]

Same here. I have tried everything as well. Sometimes it works, sometimes it just doesn't. It's pure luck, AFAIK.

Also, it only works for one VS Code project, not multiple ones.

Another thing is that the @codebase indexing feature is so slow.

hiofficial4477 commented 5 days ago

Can you share some screenshots?

micuentadecasa commented 4 days ago

The screenshot that @ducphamle2 shared is what I see: Continue doesn't load the config file properly, so "Select model" stays visible in the chat window, and even adding a new model doesn't work.

tomasz-stefaniak commented 3 days ago

I'm wondering if what @micuentadecasa is saying is correct - it could be an unrelated bug that's breaking Continue and preventing the config from loading correctly. Normally, if the config is misconfigured you should get an error like this in the UI:

(screenshot: config error banner in the Continue UI)

If your problem is that the config.json file is empty and you don't know how to regenerate it, here is an example of a functional config.json:

{
  "models": [
    {
      "title": "Codegemma",
      "provider": "ollama",
      "model": "codegemma:2b"
    },
    {
      "model": "claude-3-5-sonnet-20240620",
      "provider": "anthropic",
      "apiKey": "",
      "title": "Claude 3.5 Sonnet"
    },
    {
      "title": "Starcoder",
      "provider": "ollama",
      "model": "starcoder2:3b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Starcoder",
    "provider": "ollama",
    "model": "starcoder2:3b"
  },
  "tabAutocompleteOptions": {
    "useCache": false
    // "transform": false
  },
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "contextProviders": [
    {
      "name": "code",
      "params": {}
    },
    {
      "name": "docs",
      "params": {}
    },
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    },
    {
      "name": "problems",
      "params": {}
    },
    {
      "name": "folder",
      "params": {}
    },
    {
      "name": "codebase",
      "params": {}
    }
  ],
  "slashCommands": [
    {
      "name": "edit",
      "description": "Edit selected code"
    },
    {
      "name": "comment",
      "description": "Write comments for the selected code"
    },
    {
      "name": "share",
      "description": "Export the current chat session to markdown"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command"
    },
    {
      "name": "commit",
      "description": "Generate a git commit message"
    }
  ],
  "docs": []
}

hiofficial4477 commented 1 day ago

@micuentadecasa - I looked into the issue, and to be honest, if nothing else works out for you, I suggest fully uninstalling VS Code, including its extension files under the C:\Users directory.

And if you keep backups on GitHub or Microsoft, I would suggest taking a backup and also removing the Continue resources from there.

But to be honest, I personally tried all of these and they didn't work for me. So this is what did work for me:

Try the default config.json file given by @tomasz-stefaniak. Then add one local model from Ollama and auto-detect it. But remember to set the correct Ollama model location in your environment variables; otherwise it won't work properly. It should look like the image attached below if you want to store Ollama models at any location on your PC/Mac. And once the Continue extension works with your local LLM, any other online LLM should also work, whether OpenAI or any other model listed on the extension website.

(screenshot: Ollama environment variable configuration)
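
For anyone unsure which variable that is: Ollama reads the OLLAMA_MODELS environment variable to decide where models are stored, so point it at your preferred directory and restart Ollama (worth double-checking against the Ollama docs for your version).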

And in your config.json file, add this entry inside the "models" array, near the top:

{ "model": "AUTODETECT", "title": "Autodetect", "provider": "ollama" }

cheers...

ducphamle2 commented 1 day ago

> Try the default config.json file given by @tomasz-stefaniak. Then add one local model from Ollama and auto-detect it. But remember to set the correct Ollama model location in your environment variables; otherwise it won't work properly. […]

I tried this on Mac and it worked at first, but once I played with it by changing models or adding new workspaces, it went back to the same problem. It's the only bug preventing me from using Continue.

tomasz-stefaniak commented 1 day ago

@ducphamle2 could you try starting from scratch again and adding a single new model, then checking if it works? If the error occurs again, could you share your config.json with that one additional model added?

If changing the model breaks Continue somehow, could you attach a short video reproduction?

Thanks!

micuentadecasa commented 12 hours ago

I tried everything, including removing the plugin (manually deleting the plugin's folder, etc.), and nothing works. As @ducphamle2 commented, this is stopping us from using it.

ducphamle2 commented 8 hours ago

> @ducphamle2 could you try starting from scratch again and adding a single new model, then checking if it works? If the error occurs again, could you share your config.json with that one additional model added?
>
> If changing the model breaks Continue somehow, could you attach a short video reproduction?

Sure, let me prepare a short video to demonstrate.