continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

Error getting context items from folder #983

Closed. Greatz08 closed this issue 3 months ago.

Greatz08 commented 6 months ago

Relevant environment info

- OS: Linux
- IDE: VS Code

Description

I'm trying to feed a folder as context and get answers about some of its files, but I'm getting this error:

To reproduce

No response

Log output

Error getting context items from folder: Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "/config/extensions/continue.continue-0.8.16-linux-x64/models/all-MiniLM-L6-v2/tokenizer.json".
alexsoyes commented 6 months ago

Same issue here when trying to index some documentation and then query it from chat:

Error getting context items from docs: Error: lance error: LanceError(IO): Schema error: No field named baseurl. Valid fields are title, "baseUrl", content, path, "startLine", "endLine", vector., /Users/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/lance-0.10.4/src/io/exec/planner.rs:659:20
sestinj commented 6 months ago

@alexsoyes I just solved this here. Either it's in the current pre-release or it will be in the next one.

thebetauser commented 4 months ago

@sestinj will this fix be pushed to the JetBrains plugin? I'm on 0.0.46 and experiencing this issue as well.

womd commented 4 months ago

When trying to use "Select Code (CTRL + J) /edit ...":

Error running handler for "context/getContextItems": Error: local_files_only=true or env.allowRemoteModels=false and file was not found locally at "C:\snapshot\continue-deploy\binary\models/all-MiniLM-L6-v2/tokenizer.json". Error: local_files_only=true or env.allowRemoteModels=false and file was not found locally at "C:\snapshot\continue-deploy\binary\models/all-MiniLM-L6-v2/tokenizer.json".

bradleymize commented 3 months ago

Similar issue on Windows with the JetBrains plugin; this appears in the logs on startup:

[info] Starting Continue core...
[2024-06-07T22:35:39] [info] Starting Continue core... 
[2024-06-07T22:35:40] Error updating the vectordb::all-MiniLM-L6-v2 index: Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "C:\snapshot\continue-deploy\binary\models/all-MiniLM-L6-v2/tokenizer.json". 

The tokenizer.json file does not exist locally (nor does C:\snapshot for that matter).

Cheizr commented 3 months ago

Fresh installation, when using the @folder context:

Error getting context items from folder: Error: Schema Error. No vector column found to create index Source: Continue - Codestral, GPT-4o, and more

Devil-Mix commented 3 months ago

I'm getting this error message.

Error getting context items from folder: SyntaxError: Unexpected end of JSON input

I'm using Linux with a remote VS Code server.

sestinj commented 3 months ago

@womd @bradleymize we don't currently support the transformers.js embeddings provider on JetBrains, which is why you're seeing this error. Newer versions will show a clearer warning, but for now the solution is to use "ollama" or another embeddings provider: https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers
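
For anyone looking for the concrete change: a minimal config.json snippet for the Ollama route might look like the following (the model name is an example; any embedding model pulled into Ollama should work):

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```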

sestinj commented 3 months ago

@Devil-Mix it looks like your problem is entirely different. Can you share what your config.json looks like?

bradleymize commented 3 months ago

> @womd @bradleymize we don't currently support the transformers.js embeddings provider on JetBrains, which is why you're seeing this error. Newer versions will show a clearer warning, but for now the solution is to use "ollama" or another embeddings provider: https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers

@sestinj Thanks for the clarification. I confirmed that I could take the code from "Writing a custom EmbeddingsProvider", update the URL to LM Studio's http://localhost:1234/v1/embeddings, and indexing was successful after restarting IntelliJ.

For anyone else with this issue: I updated config.ts as documented (see the sketch below) and removed the embeddingsProvider configuration from config.json.
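
A rough sketch of that config.ts modification, adapted for LM Studio's OpenAI-compatible endpoint. The embeddingsProvider shape follows the docs' custom-provider example and may differ between Continue versions; the model name is a placeholder for whatever is loaded in LM Studio:

```typescript
// config.ts is loaded by Continue; the Config type comes from its bundled definitions.
export function modifyConfig(config: Config): Config {
  config.embeddingsProvider = {
    // Called by Continue with the text chunks to embed during indexing.
    embed: (chunks: string[]) => {
      return Promise.all(
        chunks.map(async (chunk) => {
          // LM Studio serves an OpenAI-compatible embeddings endpoint.
          const response = await fetch("http://localhost:1234/v1/embeddings", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({
              input: chunk,
              model: "nomic-embed-text-v1.5", // placeholder: whichever model is loaded in LM Studio
            }),
          });
          const data = await response.json();
          // OpenAI-style response shape: { data: [{ embedding: number[] }] }
          return data.data[0].embedding;
        }),
      );
    },
  };
  return config;
}
```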

sestinj commented 3 months ago

Very nice! One other way to approach this would be to use the "openai" embeddingsProvider with an apiBase pointing to LM Studio, as it is OpenAI-compatible.
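
A sketch of that alternative, assuming the standard embeddingsProvider fields in config.json (the model name is a placeholder for whatever is loaded in LM Studio; some versions may also want an apiKey field, for which any dummy value should satisfy LM Studio):

```json
{
  "embeddingsProvider": {
    "provider": "openai",
    "model": "nomic-embed-text-v1.5",
    "apiBase": "http://localhost:1234/v1"
  }
}
```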

Devil-Mix commented 3 months ago

@sestinj

> @Devil-Mix it looks like your problem is entirely different. Can you share what your config.json looks like?

Thanks. It's working fine now. I think it had something to do with how the workspace was added.

Cheizr commented 3 months ago

It's working for me now!