Open · codebycarlos opened this issue 1 year ago
bump
Thanks for writing in about this.
We may push more control over the context defaults for advanced users soon (I like the idea of being able to toggle no context as default in settings).
That said, I do want to be up front that our focus is on getting the AI to do more work for you rather than on cost cutting. This is in large part because we believe that LLM prices will continue to drop drastically.
(If you'd like, we do offer a Pro plan for $20/month, which gives generous usage thanks to bulk pricing.)
That would be great. Having that flexibility will be key, at least for those using their own API keys.
Yes, those LLM prices are bound to come down (fingers crossed). In the meantime, I'm definitely signing up for pro, thanks!
Is your feature request related to a problem? Please describe.
I'm using my own API key and Cursor is amazing! However, I've noticed that token usage is quite high, even when just using the chat.
You can clearly see here when I started using Cursor (as opposed to another ChatGPT UI solution).
Describe the solution you'd like
I figure it's because each chat includes the current file as context by default, and if that file is large, the tokens can quickly add up. Users should be able to choose between:

- including the current file as context by default (the current behavior), or
- starting each chat with no context by default.
It might also be handy if these defaults could be set per model. When using gpt-3.5, for instance, I'd be willing to use more context in general.
This may be a larger problem with this type of solution as a whole. It's great that we can easily include loads of context, but that needs to be balanced against high token costs.
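For a rough sense of how it adds up, here's a back-of-the-envelope sketch using tiktoken. The file path, message count, and per-token price are illustrative assumptions on my part, not Cursor's actual behavior or pricing:

```python
# Rough estimate of the cost of re-sending a file as context on every chat message.
# tiktoken is OpenAI's real tokenizer library; everything else here is assumed.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

# Hypothetical file standing in for "the current file".
with open("src/main.py") as f:
    file_tokens = len(enc.encode(f.read()))

price_per_1k = 0.03  # USD per 1K input tokens, assumed gpt-4 rate
messages = 50        # assumed number of messages in a chat session

# If the file is included as context with every message, cost scales linearly.
cost = file_tokens * messages * price_per_1k / 1000
print(f"{file_tokens} tokens/file -> ~${cost:.2f} over {messages} messages")
```

Even a modest ~2,000-token file re-sent 50 times works out to around $3 of input tokens for a single session under these assumed numbers, which matches the spike I'm seeing in my usage graph.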