tribbloid opened 2 weeks ago
BTW, how do I enable plugin telemetry & remote error reporting?
Once enabled, we won't have the plaintext-token-leaking problem.
I am also experiencing this issue, though when attempting to use the LLM-based autocompletion. I would not be surprised if this has tie-ins to some @codebase
functionality. I can share my config, since I'm only using local Ollama models. The autocomplete model is deepseek-coder-v2:16b-lite-base-q4_0:
{
  "models": [
    {
      "title": "Ollama Phind Codellama 34b-2",
      "provider": "ollama",
      "model": "phind-codellama:34b-v2",
      "apiBase": "http://localhost:11434",
      "completionOptions": {}
    },
    {
      "title": "Deepseek-Coder v2 Lite (instruct)",
      "provider": "ollama",
      "model": "deepseek-coder-v2:16b",
      "apiBase": "http://localhost:11434",
      "completionOptions": {}
    }
  ],
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text:latest",
    "apiBase": "http://localhost:11434"
  },
  "tabAutocompleteModel": {
    "title": "Deepseek Coder v2 Lite (base)",
    "provider": "ollama",
    "model": "deepseek-coder-v2:16b-lite-base-q4_0",
    "apiBase": "http://localhost:11434"
  },
  "slashCommands": [
    { "name": "edit", "description": "Edit selected code" },
    { "name": "comment", "description": "Write comments for the selected code" },
    { "name": "share", "description": "Export this session as markdown" }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "contextProviders": [
    { "name": "open", "params": {} },
    { "name": "code", "params": {} },
    { "name": "diff", "params": {} },
    {
      "name": "codebase",
      "params": { "nRetrieve": 25, "nFinal": 5, "useReranking": true }
    },
    {
      "name": "folder",
      "params": { "nRetrieve": 25, "nFinal": 5, "useReranking": true }
    },
    { "name": "url" }
  ],
  "allowAnonymousTelemetry": false
}
After configuring a tabAutocompleteModel, typing some code and waiting for a possible auto-completion, I see some query activity on my Ollama server and then this error occurs in my IDE.
I observed this issue while using CLion. I'm running the same version of Continue in PyCharm, where I also get no auto-completions, but there the above error is not raised under what I believe are the same conditions.
com.intellij.openapi.diagnostic.RuntimeExceptionWithAttachments: Read access is allowed from inside read-action only (see Application.runReadAction()); see https://jb.gg/ij-platform-threading for details
Current thread: Thread[#193,DefaultDispatcher-worker-3,5,main] 29119011 (EventQueue.isDispatchThread()=false)
SystemEventQueueThread: Thread[#72,AWT-EventQueue-0,6,main] 183590824
at com.intellij.util.concurrency.ThreadingAssertions.createThreadAccessException(ThreadingAssertions.java:177)
at com.intellij.util.concurrency.ThreadingAssertions.softAssertReadAccess(ThreadingAssertions.java:129)
at com.intellij.openapi.application.impl.ApplicationImpl.assertReadAccessAllowed(ApplicationImpl.java:915)
at com.intellij.openapi.editor.impl.CaretImpl.getOffset(CaretImpl.java:661)
at com.intellij.openapi.editor.CaretModel.getOffset(CaretModel.java:129)
at com.github.continuedev.continueintellijextension.autocomplete.AutocompleteService.shouldRenderCompletion(AutocompleteService.kt:115)
at com.github.continuedev.continueintellijextension.autocomplete.AutocompleteService.access$shouldRenderCompletion(AutocompleteService.kt:39)
at com.github.continuedev.continueintellijextension.autocomplete.AutocompleteService$triggerCompletion$1.invoke(AutocompleteService.kt:97)
at com.github.continuedev.continueintellijextension.autocomplete.AutocompleteService$triggerCompletion$1.invoke(AutocompleteService.kt:90)
at com.github.continuedev.continueintellijextension.continue.CoreMessenger.handleMessage(CoreMessenger.kt:104)
at com.github.continuedev.continueintellijextension.continue.CoreMessenger.access$handleMessage(CoreMessenger.kt:18)
at com.github.continuedev.continueintellijextension.continue.CoreMessenger$4.invokeSuspend(CoreMessenger.kt:239)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher$Worker.run(LimitedDispatcher.kt:115)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:100)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
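For anyone skimming the trace: the IntelliJ Platform only allows editor state like the caret offset to be read while a read action is held, and `CaretImpl.getOffset()` asserts this. Here the plugin reads the caret from a coroutine dispatcher thread, outside any read action. The following is a self-contained toy model of that rule (not the IntelliJ API and not Continue's code); the real fix would presumably wrap the caret read in `ApplicationManager.getApplication().runReadAction { ... }` inside `shouldRenderCompletion`:

```kotlin
// Toy model of the IntelliJ read-action rule (hypothetical classes, not the SDK):
// shared editor state may only be read while a per-thread read "token" is held.
class ToyApplication {
    private val inReadAction = ThreadLocal.withInitial { false }

    // Analogue of Application.runReadAction: marks this thread as allowed to read.
    fun <T> runReadAction(action: () -> T): T {
        inReadAction.set(true)
        try {
            return action()
        } finally {
            inReadAction.set(false)
        }
    }

    // Analogue of ThreadingAssertions.softAssertReadAccess.
    fun assertReadAccess() =
        check(inReadAction.get()) { "Read access is allowed from inside read-action only" }
}

class ToyCaret(private val app: ToyApplication, private val offset: Int) {
    // Analogue of CaretImpl.getOffset(), which asserts before returning.
    fun getOffset(): Int {
        app.assertReadAccess() // throws on a plain worker thread, as in the report
        return offset
    }
}

fun main() {
    val app = ToyApplication()
    val caret = ToyCaret(app, 42)

    // Wrong: reading outside a read action fails, mirroring the stack trace above.
    val failed = runCatching { caret.getOffset() }.isFailure

    // Right: wrap the read, as runReadAction does in the real SDK.
    val offset = app.runReadAction { caret.getOffset() }
    println("failed=$failed offset=$offset")
}
```

Running this prints `failed=true offset=42`: the bare read throws exactly the kind of assertion seen in the report, while the wrapped read succeeds.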
I'm having trouble getting @codebase to even attempt to run in IntelliJ with v0.0.70. I'll be digging into this shortly.
@tribbloid, re: telemetry, is this what you're looking for? https://docs.continue.dev/telemetry#how-to-opt-out
{
"allowAnonymousTelemetry": true
}
I too am having issues with Autocomplete. The details are different, but the outcome is the same. My ticket was opened here: #2343
Yours seems like a different issue, where the LLM web service you are using is rate-limiting you; the frequency of requests triggered by autocomplete may be too high a volume for that service. My interpretation is that this thread is more a JetBrains-plugin issue, where a background thread that would otherwise mediate the autocompletion functionality is crashing. :shrug:
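For the rate-limit variant, the usual mitigation is to throttle how often autocomplete fires. As a generic illustration only (a hypothetical `Debouncer` class, not Continue's actual implementation), the idea looks like this:

```kotlin
// Hypothetical throttle (not Continue's code): only forwards a completion
// request if at least minIntervalMs has passed since the last one sent.
class Debouncer(private val minIntervalMs: Long) {
    private var lastRequestAt = 0L

    @Synchronized
    fun shouldSend(nowMs: Long = System.currentTimeMillis()): Boolean {
        if (nowMs - lastRequestAt < minIntervalMs) return false
        lastRequestAt = nowMs
        return true
    }
}

fun main() {
    val debouncer = Debouncer(minIntervalMs = 500)
    println(debouncer.shouldSend(nowMs = 1000)) // true: first request goes out
    println(debouncer.shouldSend(nowMs = 1200)) // false: too soon, dropped
    println(debouncer.shouldSend(nowMs = 1600)) // true: interval has elapsed
}
```

With a keystroke-driven trigger, dropping requests that arrive within the interval keeps the request volume within whatever the hosted service tolerates.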
After updating to JetBrains plugin version 0.0.75, I no longer see this issue.
^ If other folks are able to resolve the issue by upgrading to 0.0.75 give this message a 👍
Relevant environment info
Description
Including @codebase as context triggers the error shown above.
To reproduce
Include @codebase.
Select part of the code.
Log output
No response