magnooj opened this issue 4 months ago (status: Open)
Same fundamental issue as #1705. This was introduced in 0.8.43; if you revert to 0.8.42 for the time being, you should be good until this gets fixed.
Thanks for the +1 on this. We are aware and working on solving it this morning! I'll hopefully have an update for you very soon
Update:
In 0.8.43, no config change could solve the problem. I assumed that setting "disableIndexing": true would reduce the bandwidth usage, but neither true nor false solved the problem.
By switching to 0.8.42, the bandwidth usage got better by about 30%, but there were still a lot of disconnections. I then set "disableIndexing": true again to force Continue to stop indexing. Finally, everything worked normally.
@magnooj thank you for this extra information, that's very good to know. One other question that comes to mind: is your VS Code workspace the root of a git repository, or is it a subdirectory of a git repository?
I ask because there's some chance that we wouldn't be looking for .gitignore files in the directory above, which could cause too many file reads
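Roughly the kind of lookup I have in mind (just a sketch in Node-style TypeScript to illustrate the idea, not the actual indexer code):

// Sketch: collect .gitignore files from the workspace folder *and* its parent
// directories up to the repository root, so ignored folders can be excluded
// from indexing even when the workspace is a subdirectory of the repo.
import * as fs from "fs";
import * as path from "path";

function collectGitignores(workspaceDir: string): string[] {
  const found: string[] = [];
  let dir = workspaceDir;
  while (true) {
    const candidate = path.join(dir, ".gitignore");
    if (fs.existsSync(candidate)) {
      found.push(candidate);
    }
    // Stop once we reach the repo root (a folder containing .git)
    // or the filesystem root.
    if (fs.existsSync(path.join(dir, ".git")) || path.dirname(dir) === dir) {
      break;
    }
    dir = path.dirname(dir);
  }
  return found;
}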
It is a subdirectory on a remote server, without any .gitignore file. But the number of files is not that large! Is it possible to limit indexing to a certain depth, for example only 2 levels?
@magnooj So even if you include all of the files that are ignored by git, like a .git folder, any build folders, or anything else, this is still a very small number of total files? Also, a couple of additional questions come to mind that might help:
Mine is a Perforce (not git) client - some examples with tons of files, one example with a small-medium amount.
The large client:
find . -type f | wc -l
1240363
The smaller client:
find . -type f | wc -l
1758
It mostly happened in the larger client.
It would be nice if Continue used the same ignore list as the VS Code File Watcher excludes (set per remote server in settings.json). I don't want to have to specify the same list per Perforce client, since they are ephemeral.
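Something like the existing files.watcherExclude block in the remote's settings.json is what I have in mind (example patterns only; whether Continue could reuse it is the open question):

"files.watcherExclude": {
  "**/.git/objects/**": true,
  "**/node_modules/**": true,
  "**/build/**": true
}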
Good to know. @magnooj you aren't by chance using Perforce too, are you?
To be clear - in my examples I'm not actively using Continue at all - I'm just browsing / editing code. So it is a persistent issue that I, and a number of others at work, are hitting even when not interacting with Continue at all.
@sestinj I think I found what is causing the problem: hidden folders! It is a common practice in my company to create local environments for each project by calling conda create -p .env. Also, we don't use git much, so there is no .gitignore most of the time.
I tried adding a .gitignore with the hidden folders listed in it, together with "disableIndexing": false.
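The .gitignore was roughly along these lines (the exact patterns may have differed; the point is to exclude the conda env and other hidden folders):

# exclude the per-project conda env and other hidden folders
.env/
.*/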
Removing the "folder" context provider from config.json didn't help either. I am not using Perforce. I work on our dedicated Linux (Rocky 8.10) servers, and my job is to go through each project, activate the local env, and check the code.
I also updated the config.json file; here is the latest version, which works better:
{
"models": [
{
"model": "codestral",
"title": "codestral",
"apiBase": "http://localhost:11434",
"provider": "ollama",
"completionOptions": {
"temperature": 0.5
}
}
],
"customCommands": [
{
"name": "test",
"prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
"description": "Write unit tests for highlighted code"
}
],
"tabAutocompleteModel": {
"title": "Tab Autocomplete Model",
"provider": "ollama",
"apiBase": "http://localhost:11434",
"model": "codestral"
},
"tabAutocompleteOptions": {
"useCopyBuffer": false,
"maxPromptTokens": 400,
"prefixPercentage": 0.5,
"multilineCompletions": "always",
"contextLength": 8192,
"useOtherFiles": true,
"debounceDelay": 100
},
"allowAnonymousTelemetry": false,
"embeddingsProvider": {
"provider": "ollama",
"apiBase": "http://localhost:11434",
"model": "nomic-embed-text"
},
"contextProviders": [
{
"name": "codebase",
"params": {
"nRetrieve": 25,
"nFinal": 5,
"useReranking": true
}
}
],
"disableIndexing": false
}
@sestinj A new problem arose with 0.8.42. It kills my kernel when I am using a notebook in VS Code. Removing the nomic embeddings provider, removing the context provider, and disabling indexing do not solve it.
Here is the log:
[Continue.continue]Parsing failed
Error: Parsing failed
at _Parser.parse (~\.vscode\extensions\continue.continue-0.8.42-win32-x64\out\extension.js:94928:81)
at _ImportDefinitionsService._getFileInfo (~\.vscode\extensions\continue.continue-0.8.42-win32-x64\out\extension.js:95825:28)
at async PrecalculatedLruCache.initKey (~\.vscode\extensions\continue.continue-0.8.42-win32-x64\out\extension.js:93784:25)
[Extension Host] warn 14:30:29.938: Jupyter Extension: Cancel all remaining cells due to dead kernel
is this still an issue?
Description
I've encountered a critical issue when using the Continue.dev extension on a remote SSH server. Problem: after enabling Continue.dev and configuring it in the JSON file, the extension consumes excessive bandwidth, leading to VS Code disconnecting from the remote server.