### Relevant environment info
- OS: Windows 11 Pro 23H2 22631.3880
- Continue: preliminary version
- IDE: Visual Studio Code 1.91.1
- Model: Ollama (autodetect)
- config.json:
{
  "models": [
    {
      "title": "Claude 3 Sonnet (Free Trial)",
      "provider": "free-trial",
      "model": "claude-3-sonnet-20240229"
    },
    {
      "title": "GPT-4 Turbo (Free Trial)",
      "provider": "free-trial",
      "model": "gpt-4-turbo"
    },
    {
      "title": "GPT-3.5-Turbo (Free Trial)",
      "provider": "free-trial",
      "model": "gpt-3.5-turbo"
    },
    {
      "title": "Gemini Pro (Free Trial)",
      "provider": "free-trial",
      "model": "gemini-pro"
    },
    {
      "title": "Mixtral (Free Trial)",
      "provider": "free-trial",
      "model": "mistral-8x7b"
    },
    {
      "model": "AUTODETECT",
      "title": "Ollama",
      "provider": "ollama",
      "apiKey": ""
    }
  ],
  "slashCommands": [
    {
      "name": "edit",
      "description": "Edit selected code"
    },
    {
      "name": "comment",
      "description": "Write comments for the selected code"
    },
    {
      "name": "share",
      "description": "Export this session as markdown"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command"
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "contextProviders": [
    {
      "name": "code",
      "params": {}
    },
    {
      "name": "docs",
      "params": {}
    },
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "open",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    },
    {
      "name": "problems",
      "params": {}
    },
    {
      "name": "folder",
      "params": {}
    },
    {
      "name": "codebase",
      "params": {}
    }
  ],
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete Model",
    "model": "starcoder2:7b",
    "provider": "ollama"
  },
  "allowAnonymousTelemetry": false
}
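The last entry under `models` (`"model": "AUTODETECT"`) is the one affected. Since I currently have to add the models by hand instead, I use explicit entries of the following form (the title is just an example I made up; the model name is whatever `ollama list` reports, here the same StarCoder2 model I use for autocomplete):

```json
{
  "title": "StarCoder2 7B (Ollama)",
  "provider": "ollama",
  "model": "starcoder2:7b"
}
```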
### Description
Hello,
I have noticed that in the preliminary version, model autodetection for Ollama is not working: Continue fails to recognize the models I have downloaded with Ollama, so I have to enter them manually. For now I am sticking with the older version, where this issue does not occur.
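If it helps with triage: I assume autodetect lists local models via Ollama's `GET /api/tags` endpoint (that is only my assumption, I have not checked the extension code). For reference, that endpoint returns a payload of roughly this shape, which is what I would expect the extension to parse; the values below are illustrative placeholders, not my actual output:

```json
{
  "models": [
    {
      "name": "starcoder2:7b",
      "modified_at": "2024-07-20T00:00:00Z",
      "size": 4000000000
    }
  ]
}
```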
Thank you. I hope this can be resolved soon.
### To reproduce
_No response_
### Log output
_No response_