OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Can we consider adding a customized API #296

Closed: shenmadouyaowen closed this issue 10 months ago

shenmadouyaowen commented 1 year ago

Is your feature request related to a problem? Please describe.

Some models have different input formats. Could custom API functionality be added?

Describe the solution you'd like

I use other API endpoints, but I cannot get them to run code.

Describe alternatives you've considered

No response

Additional context

No response

Joackk commented 1 year ago

Yes, I usually use third-party APIs, but it seems I cannot customize the endpoint and can only use the official one. Could custom endpoint calls be added?

Niche-Apps commented 1 year ago

I would like to use other APIs as well, especially Poe since I already paid for it.

ishaan-jaff commented 1 year ago

I'm the maintainer of LiteLLM; o-i uses LiteLLM for its LLM API calls. You can start calling your custom API endpoints like this:

OpenAI ChatCompletion proxy:

import os
import litellm
from litellm import completion

os.environ["OPENAI_API_KEY"] = ""

# point litellm at your proxy instead of api.openai.com:
# either set OPENAI_API_BASE in the environment / .env
# or set litellm.api_base directly
litellm.api_base = "https://openai-proxy.berriai.repl.co"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# standard openai call, now routed through the proxy
response = completion("gpt-3.5-turbo", messages)

Docs: https://docs.litellm.ai/docs/providers/openai

Custom LLM API endpoint:

from litellm import completion

response = completion(
    model="custom/meta-llama/Llama-2-13b-hf",
    messages=[{"content": "what is custom llama?", "role": "user"}],
    temperature=0.2,
    max_tokens=10,
    api_base="https://api.autoai.dev/inference",
    request_timeout=300,
)
print("got response\n", response)

Docs: https://docs.litellm.ai/docs/providers/custom
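
For reference, a minimal usage sketch for pulling the reply text out of either response above. The field access is an assumption based on the OpenAI chat-completion shape that LiteLLM normalizes responses to, not something shown in the original comment:

# hedged sketch: extract the assistant's text, assuming the
# OpenAI-style response shape LiteLLM returns
reply = response["choices"][0]["message"]["content"]
print(reply)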

ishaan-jaff commented 1 year ago

@shenmadouyaowen @Joackk @Niche-Apps please let me know if anything is missing from our support for this

RedwindA commented 1 year ago

> @shenmadouyaowen @Joackk @Niche-Apps please let me know if anything is missing from our support for this

Thanks for your support, but I'm a rookie. Could you show me directly how to set up a custom api_base and key on the Windows command line to use o-i?

ishaan-jaff commented 1 year ago

I'm not sure if o-i allows you to set a custom api_base; you might need to make the change in interpreter.py and then run it.

Happy to help on a discord chat if you'd like!
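
For reference, a hedged sketch of the two approaches mentioned above. Whether the installed LiteLLM version reads OPENAI_API_BASE from the environment, and where exactly a patch belongs in interpreter.py, are assumptions rather than details confirmed in this thread. On the Windows command line, `set OPENAI_API_KEY=sk-...` before running `interpreter` covers the key; done in Python (e.g. near the top of interpreter.py), it might look like:

# hypothetical sketch, not the actual o-i source: set the key and an
# assumed custom base before o-i / litellm makes any calls
import os
import litellm

os.environ["OPENAI_API_KEY"] = "sk-..."              # your real key
litellm.api_base = "https://your-proxy.example.com"  # assumed proxy URL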

Sagners commented 1 year ago

It seems we need to do something in the source code; it looks simple.

Sagners commented 1 year ago
[screenshot]

It seems that the groundwork was done

RedwindA commented 1 year ago

But it seems that it still cannot work with the API endpoint I created myself, possibly because my endpoint only provides chat completions.

> [screenshot]
> It seems that the groundwork was done

Joackk commented 1 year ago

> I'm the maintainer of LiteLLM; o-i uses LiteLLM for its LLM API calls. You can start calling your custom API endpoints like this: […]

Do you want to modify the source code, or create a new .py script with this content and call it yourself? Isn't the interpreter normally started directly from the command line?

Joackk commented 1 year ago
> [screenshot]
> It seems that the groundwork was done

Are you ready to use the custom API?

shenmadouyaowen commented 1 year ago

@ishaan-jaff @Sagners Thank you. If the data returned by the custom API does not satisfy this check, the code will not be interpreted:

[screenshot]
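
For context, a hedged illustration of the kind of shape check being described; the actual condition is in the screenshot above, so the field names here are assumptions based on the OpenAI chat-completion format that LiteLLM normalizes responses to.

# hypothetical sketch: a custom API's response must resemble the
# OpenAI chat-completion shape for the reply (and any code in it)
# to be parsed downstream; field names are assumptions
def looks_like_openai_response(response) -> bool:
    try:
        choice = response["choices"][0]
        # "message" appears in full responses, "delta" in streamed chunks
        return "message" in choice or "delta" in choice
    except (KeyError, IndexError, TypeError):
        return False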

ericrallen commented 10 months ago

I’m going to close this one as it seems like it is covered by the LiteLLM integration.

Please reopen this issue if there’s still a problem.