warpdotdev / Warp

Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.
https://warp.dev

Using LLMs other than OpenAI for AI command search #3779

Open iryzhkov opened 11 months ago

iryzhkov commented 11 months ago

Discord username (optional)

No response

Describe the solution you'd like?

Would it be possible to use other LLM service providers for powering the AI command search feature?

A particularly interesting example of an LLM provider is [Ollama](https://ollama.ai/), which runs LLMs locally on a Mac. Privacy-wise, this would be preferable to OpenAI.
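For reference, Ollama exposes a simple local HTTP API, so an integration would not need much plumbing. A minimal sketch, assuming the default `http://localhost:11434` endpoint and a model that has already been pulled (e.g. `ollama pull llama3`); the function and prompt here are illustrative, not anything Warp ships:

```python
import requests

# Hypothetical sketch: ask a locally running Ollama server to suggest a shell
# command. Assumes the default Ollama endpoint and a locally pulled model.
OLLAMA_URL = "http://localhost:11434/api/generate"

def suggest_command(task: str, model: str = "llama3") -> str:
    payload = {
        "model": model,
        "prompt": f"Suggest a single shell command to: {task}. Reply with the command only.",
        "stream": False,  # return one JSON object instead of a stream of chunks
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    print(suggest_command("find all files larger than 100MB in the current directory"))
```

Nothing leaves the machine in this setup, which is the privacy benefit being asked for.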

Is your feature request related to a problem? Please describe.

No response

Additional context

No response

How important is this feature to you?

2


HarmonyTechLabs commented 11 months ago

If this feature is going to be considered, please include APIs for kobold, text-generation-webui, text-generation-inference, and aphrodite. They are the major backends used in the open-source AI world.

dannyneira commented 11 months ago

Thanks for this feature request! To anyone else interested in this feature, please add a 👍 to the original post at the top to signal that you want this feature, and subscribe if you'd like to be notified. (Please avoid adding spam comments like 👍 or +1)

Also, there is a related request to set your own OpenAI API key: #2788

keo commented 8 months ago

👍

dannyneira commented 7 months ago

Hey folks, the "bring your own LLM" feature is now part of the Enterprise tier. Please see our pricing page for the most up-to-date info: https://www.warp.dev/pricing

nmcbride commented 6 months ago

It's disappointing that this is the solution for those of us who use it personally but care about privacy.

I was hoping Warp called OpenAI directly so I could just redirect it to a local model, but it seems that is not the case.
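For context, that redirect idea only works when a client talks to an OpenAI-compatible endpoint with a configurable base URL. A hypothetical sketch of the hoped-for setup (not how Warp is actually wired today), using the fact that Ollama serves an OpenAI-compatible API under `/v1`:

```python
from openai import OpenAI

# Hypothetical sketch of the "redirect to a local model" idea described above;
# this is NOT how Warp works today. Any OpenAI-style client that honors a
# configurable base URL can be pointed at a local Ollama server instead.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="ollama",  # required by the client library, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3",  # assumes the model has been pulled locally
    messages=[{"role": "user", "content": "Suggest a command to list open ports"}],
)
print(response.choices[0].message.content)
```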

It seems to just call app.warp.dev/graphql, passing your input and some minor system information to a notebook running on their server. By default it is configured for OpenAI, but if you pay, they can point it somewhere else. Still not a local model, though.

While it is cool and there are features I like, I am not really keen on having my data go through two companies.

Their model prompt is pretty simple: it just tells the LLM what it can answer questions about and what format to answer in, and passes that minor machine information along. Pretty standard stuff.

I think the only solution for someone like me who wants a smart terminal but cares about privacy is to use a regular terminal with something like ask.sh or Open Interpreter.

I'm actually surprised they didn't integrate Open Interpreter into Warp. Seems like a missed opportunity.

keyvez commented 5 months ago

Why not include support for LiteLLM for a fixed one-time price?
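For what it's worth, LiteLLM already normalizes many provider APIs behind one call, so hosted and local backends look the same to the caller. A minimal sketch under that assumption; the model names and local endpoint below are illustrative, not Warp configuration:

```python
from litellm import completion

# Minimal sketch of LiteLLM-based routing: the same request can go to a hosted
# provider or to a local Ollama model, depending only on the model string.
messages = [{"role": "user", "content": "Suggest a command to tail a log file"}]

# Hosted provider (requires OPENAI_API_KEY in the environment).
hosted = completion(model="gpt-4o-mini", messages=messages)

# Local model served by Ollama on its default port.
local = completion(
    model="ollama/llama3",
    messages=messages,
    api_base="http://localhost:11434",
)

print(hosted.choices[0].message.content)
print(local.choices[0].message.content)
```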

kbaegis commented 2 months ago

localllama or bust for me. Sorry.

Love the product otherwise.

utegental commented 1 month ago

> Hey folks, the "bring your own LLM" feature is now part of the Enterprise tier. Please see our pricing page for the most up-to-date info: https://www.warp.dev/pricing

There are some countries in which you are not able to subscribe, because Visa and Mastercard have banned those countries. Willing or not, you just can't subscribe or make a one-time payment (if that ever becomes possible).

And there are some companies in which it's strongly advised not to leak any info to online services, so the only safe way to use any AI tooling is to host it locally or inside the company environment.

xhlr8 commented 1 month ago

Why is this not a standard feature anyway? And locking it behind an Enterprise paywall: can you think of another way to say "f*ck you" to the user?

It's seriously unfortunate that this otherwise amazing product is severely gimped in functionality by this short-sightedness.