getcursor / cursor

The AI Code Editor
https://cursor.com

[FEAT]: ➕Add Support Groq LPU⚡ + Gemini 🧠 #1257

Open jjfantini opened 4 months ago

jjfantini commented 4 months ago

Is your feature request related to a problem? Please describe.

Cursor IDE is revolutionary in its integrated AI support and functionality; all other extensions and add-ons pale in comparison. The next step for Cursor AI is speed ⚡. If you're using AI heavily, you can be left waiting for some time for all of the completions to finish and apply. The new cursor-fast model is amazing; 60-70% of the time it does well enough to warrant not using GPT-4. Here are the two improvements that could be made.

1. Speed ⚡: Using Groq LPU inference, the ability for parallel generation/linting/conversation of code really would be amazing. Instead of waiting 1-2 min for some scenarios to finish, it would be done in seconds.
2. Context Improvement 🧠:

Describe the solution you'd like

1. Context Improvement 🧠:

MarArMar commented 4 months ago

For:

"""
1. Speed ⚡:
   Using Groq LPU inference, the ability for parallel generation/linting/conversation of code really would be amazing. Instead of waiting 1-2 min for some scenarios to finish, it would be done in seconds.
"""

I believe it is already supported

Just create a local server that exposes your API in an OpenAI-compatible way, set the local URL in the OpenAI URL override, and you are done.

So just buy your hardware and launch your server.
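To make the suggestion concrete, here is a minimal sketch of what an "OpenAI-compatible" endpoint means: a server answering POST `/v1/chat/completions` with the response shape OpenAI clients expect. It uses only the Python stdlib; `call_backend` is a hypothetical stand-in for whatever fast inference backend (e.g. a Groq-hosted model) you actually forward to, and the port is arbitrary.

```python
# Sketch of an OpenAI-compatible chat-completions shim, stdlib only.
# Point Cursor's "OpenAI URL override" at a server like this.
import json
import time
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer


def call_backend(messages):
    # Hypothetical: replace with a real call to your fast inference backend.
    return "echo: " + messages[-1]["content"]


def openai_response(model, text):
    # The response shape an OpenAI-style client expects back.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    }


class Shim(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        reply = openai_response(body.get("model", "unknown"),
                                call_backend(body["messages"]))
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


# To run: HTTPServer(("127.0.0.1", 8000), Shim).serve_forever()
```

With something like this running locally, the editor only ever sees a standard chat-completions API; which hardware or model sits behind `call_backend` is your choice.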

Also, for """1. Context Improvement 🧠:""":

I am a pro user and I see this as super pointless: GPT-4 is just better and so much more battle-tested.

Maybe wait for the day when Google has an edge over GPT-4; today it is totally unclear whether Gemini brings anything more to the table.

jjfantini commented 4 months ago

Hey @MarArMar, yeah, you can definitely set up your own server to serve faster inference. I still think it would be a great offering for Cursor and their devs if this could be done in-house, with options to support an even higher paid tier.

For Gemini vs GPT-4: I think letting the user choose the model they use, easily and from the UI, would improve the Cursor experience and dev friendliness. While it is true that GPT-4 is battle-tested and used extensively, there is room to use other models that have been shown to perform better than GPT-4 on multiple LLM benchmarks. The only way to battle-test them is to implement them. Gemini does offer improvements over GPT-4 in multiple areas, and its context-length improvement might make the GPT-4 + RAG pipeline on the backend of Cursor look redundant; we won't know until it is tried.
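One way to see why per-model choice interacts with the RAG question: whether a retrieval/chunking pipeline is needed at all depends on the selected model's context window. The sketch below is hypothetical; the endpoint URLs and window sizes are illustrative assumptions, not official figures.

```python
# Hypothetical model registry: whether RAG-style chunking is still needed
# depends on the chosen model's context window (sizes below are assumptions).
from dataclasses import dataclass


@dataclass
class ModelInfo:
    base_url: str          # OpenAI-compatible endpoint (placeholder values)
    context_tokens: int    # advertised context window (assumed values)


MODELS = {
    "gpt-4": ModelInfo("https://api.openai.com/v1", 8_192),
    "gemini-long-context": ModelInfo("https://example.invalid/v1", 1_000_000),
}


def needs_rag(model: str, prompt_tokens: int, reserve: int = 1_024) -> bool:
    """True if the prompt plus a reply reserve will not fit in the model's
    window, meaning a retrieval/chunking pipeline is still required."""
    info = MODELS[model]
    return prompt_tokens + reserve > info.context_tokens
```

Under these assumed numbers, a 20k-token codebase prompt forces chunking on the 8k-window model but fits whole into the long-context one, which is the scenario where the existing RAG pipeline could become redundant.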

You're right that these are not urgent features to add, but they would lend themselves to a beautiful experience using Cursor AI and align with Cursor's long-term goals.