ThalesAugusto0 opened 1 month ago
Considering the size and effectiveness of local models and the commercialization of the Cursor product, the likelihood of this proposal coming to fruition is quite small. 😂
The company doesn't need to do this. Since the code is open, why can't we, the development community, do it ourselves?
Cursor is not open source. This is an issues-only repo.
Upping this anyway. The company can still monetize with the thousands of devs that do not have a powerful GPU.
Check this: #1380 (comment)
The devs broke that as well (likely on purpose); they're in it for the money and don't care about you and me.
For me it's working perfectly fine using ollama + ngrok. I'm on the latest version of Cursor.
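A minimal way to sanity-check that setup before pointing Cursor at it, assuming Ollama's OpenAI-compatible `/v1` endpoint is what the tunnel exposes (the ngrok URL and model name below are placeholders):

```typescript
// Sketch: verify that Ollama's OpenAI-compatible endpoint is reachable
// through the ngrok tunnel. Assumes `ollama serve` is running locally
// and `ngrok http 11434` produced the placeholder URL below.
const BASE_URL = "https://example.ngrok-free.app/v1"; // placeholder tunnel URL

async function ping(): Promise<void> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // any model already pulled with `ollama pull`
      messages: [{ role: "user", content: "Say hello in one word." }],
    }),
  });
  console.log(res.status, await res.text()); // expect 200 and a completion
}

ping().catch(console.error);
```

If this returns 200 from outside your machine, the same base URL should work wherever Cursor lets you override the OpenAI base URL.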
I doubt the devs would do something as easily visible as breaking support specifically for Ollama (we're using an OpenAI-compatible endpoint here anyway, so it's pretty generic). In any case, the loss of quality from using 8B models this way isn't worth saving 20 bucks per month. They are not in danger.
I applied your workaround properly, but like many other people I keep getting a 403 error from ngrok. Do I need to forward some port, or something else?
Description: We need to enhance Cursor IDE by implementing support for local AI models using Ollama, similar to the Continue extension for VS Code. This will enable developers to use AI-powered code assistance offline, ensuring privacy and reducing dependency on external APIs.
1. Ollama Integration:
Integrate Ollama into the Cursor IDE to run AI models locally. This should include the ability to configure the path to Ollama and select specific models for different coding tasks (a minimal sketch of such a request follows at the end of this section).
Ensure that the local AI models are available for features like code completion, refactoring, and contextual code understanding.
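For concreteness, here is a minimal sketch of the kind of request such an integration would make against Ollama's native `/api/generate` endpoint; the model name is a placeholder for whatever the user has pulled:

```typescript
// Sketch: a one-shot completion request of the kind a Cursor/Ollama
// integration would issue. Assumes Ollama is running on its default
// port 11434.
interface GenerateResponse {
  response: string;
  done: boolean;
}

async function complete(prompt: string, model = "llama3"): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = (await res.json()) as GenerateResponse;
  return data.response;
}

// Example: ask the local model to finish a function body.
complete("Complete this TypeScript function:\nfunction add(a: number, b: number) {")
  .then(console.log)
  .catch(console.error);
```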
2. Model Support:
Provide compatibility with a range of models available through Ollama, such as Llama 3 and StarCoder 2, which offer support for fill-in-the-middle (FIM) predictions and embeddings.
Allow users to customize which models are used for specific tasks (e.g., code completion, embedding generation); a sketch of both request types follows below.
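To make the FIM and embedding requirements concrete, here is a sketch against a local Ollama instance. The model names are assumptions, and the `suffix` field is only honored by infill-capable models:

```typescript
// Sketch: fill-in-the-middle (FIM) completion and embedding generation
// against a local Ollama instance. The model names are assumptions;
// substitute whatever FIM-capable / embedding models you have pulled.

// FIM: the generate endpoint accepts a `suffix` field for models that
// support infill, so the model completes the code between the two parts.
async function fillInMiddle(prefix: string, suffix: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b-code", // assumed FIM-capable model
      prompt: prefix,
      suffix,
      stream: false,
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Embeddings: the basis for contextual/codebase-aware features.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const data = (await res.json()) as { embedding: number[] };
  return data.embedding;
}

// Example: infill a function body, then embed a snippet for retrieval.
fillInMiddle("function area(r: number) {\n  return ", ";\n}")
  .then(console.log)
  .catch(console.error);
```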
3. Configuration Options:
Add options in Cursor’s settings to configure and manage local AI models. This should include the ability to switch between different AI providers, like Ollama and any cloud-based alternatives. Implement a configuration UI that allows users to easily select and manage their local AI setups.
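As one purely hypothetical illustration of what that settings surface could look like (none of these keys exist in Cursor today):

```typescript
// Hypothetical settings schema for per-task local model selection.
// None of these keys exist in Cursor today; this only illustrates the
// kind of configuration the request describes.
interface LocalAiSettings {
  provider: "ollama" | "cloud";  // switch between local and cloud backends
  ollamaBaseUrl: string;         // e.g. "http://localhost:11434"
  models: {
    completion: string;          // model used for inline code completion
    chat: string;                // model used for chat and refactoring
    embedding: string;           // model used for embeddings/indexing
  };
}

const exampleSettings: LocalAiSettings = {
  provider: "ollama",
  ollamaBaseUrl: "http://localhost:11434",
  models: {
    completion: "starcoder2:3b",
    chat: "llama3",
    embedding: "nomic-embed-text",
  },
};
```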
4. Performance and Usability:
Optimize the interaction between Cursor and the local AI models to minimize latency and resource usage. Ensure that the local AI features are as seamless and user-friendly as their cloud-based counterparts, with clear feedback on model performance and any potential issues.
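On the latency point, a standard technique is streaming tokens as they arrive instead of blocking on the full response. A sketch using Ollama's default streaming mode, which emits newline-delimited JSON chunks (the model name is again a placeholder):

```typescript
// Sketch: stream tokens from Ollama as they are generated, so the editor
// can render partial completions instead of waiting for the full response.
// Ollama streams newline-delimited JSON objects, each carrying a chunk
// in its `response` field and `done: true` on the final object.
async function streamCompletion(prompt: string, model = "llama3"): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt }), // stream defaults to true
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let newline: number;
    while ((newline = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (!line) continue;
      const chunk = JSON.parse(line) as { response: string; done: boolean };
      process.stdout.write(chunk.response); // render token-by-token
    }
  }
}

streamCompletion("Write a haiku about code.").catch(console.error);
```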