hs7558 opened this issue 1 year ago
Hi! We upload your files to compute embeddings for them. This enables you to ask codebase-wide questions, such as "where do we configure logging?", and get a correct answer back from anywhere in your folder.
We do not store your actual files on our server. We do store the embeddings, which are a lossy compression of the original code.
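For anyone curious what "embeddings as a lossy compression of the code" means in practice, here is a minimal sketch of the general technique: code chunks are mapped to vectors, and a codebase-wide question is answered by finding the chunks whose vectors are closest. This is only an illustration of the idea, not Cursor's actual pipeline, and the model name is a stand-in.

```python
# Illustrative sketch only -- not Cursor's actual pipeline. Code chunks are
# turned into embedding vectors (a lossy numeric summary), and a question is
# answered by finding the chunks whose vectors are closest.
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in embedding model

model = SentenceTransformer("all-MiniLM-L6-v2")

# Pretend these are chunks pulled from files across the repository.
chunks = [
    "def configure_logging(level): logging.basicConfig(level=level)",
    "def connect_db(url): return create_engine(url)",
    "class UserService: ...",
]
chunk_vectors = model.encode(chunks, normalize_embeddings=True)

# A codebase-wide question is embedded the same way...
query = model.encode(["where do we configure logging?"], normalize_embeddings=True)

# ...and cosine similarity picks the most relevant chunk. The original source
# text is only needed here for display; the search itself runs on vectors.
scores = chunk_vectors @ query.T
print(chunks[int(np.argmax(scores))])
```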
You can always opt out of codebase-wide understanding by clicking "Delete" under the "Codebase Indexing" section in the settings, as shown below. Let me know if you're having trouble finding this option.
Separately: we want to make sure we communicate this more clearly to users in the future. We definitely do not want this to come as a surprise. To help with that, could you answer a few questions so we can understand how to convey this more clearly?
Will there be a Cursor VS Code plugin? I'd actually be more keen to use that.
Thanks!
Why can't embeddings also be stored locally?
My dream is to use a local-only open source model, like Llama, along with full embeddings stored locally. That way I could feel much safer working on enterprise/production codebases with AI.
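To make the request concrete, here is a minimal sketch of that local-only setup: embeddings computed by a model running entirely on your own machine and persisted to local files, so neither code nor vectors leave the box. The model choice, file names, and helper functions below are assumptions for illustration, not anything Cursor ships today.

```python
# Sketch of a fully local index: local embedding model + vectors stored on disk.
# Model name, paths, and helpers are illustrative assumptions.
import json
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer  # runs offline once downloaded

INDEX_DIR = Path(".local_index")
INDEX_DIR.mkdir(exist_ok=True)

model = SentenceTransformer("all-MiniLM-L6-v2")

def index_repo(root: str) -> None:
    """Embed every Python file under `root` and store vectors + paths locally."""
    paths = list(Path(root).rglob("*.py"))
    texts = [p.read_text(errors="ignore") for p in paths]
    vectors = model.encode(texts, normalize_embeddings=True)
    np.save(INDEX_DIR / "vectors.npy", vectors)
    (INDEX_DIR / "paths.json").write_text(json.dumps([str(p) for p in paths]))

def search(question: str, k: int = 3) -> list[str]:
    """Return the k files most relevant to the question, using only local data."""
    vectors = np.load(INDEX_DIR / "vectors.npy")
    paths = json.loads((INDEX_DIR / "paths.json").read_text())
    q = model.encode([question], normalize_embeddings=True)
    ranked = np.argsort(-(vectors @ q.T).ravel())[:k]
    return [paths[i] for i in ranked]
```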
Why does Cursor upload my files????? ALL THE FILES