Closed: shenmadouyaowen closed this issue 10 months ago
Yes, I usually use third-party APIs, but it seems I can't point the tool at a custom endpoint and can only use the official API. Could support for custom API calls be added?
I would like to use other APIs as well, especially Poe since I already paid for it.
I'm the maintainer of LiteLLM; o-i uses litellm for its LLM API calls. You can start calling your custom API endpoints like this:
OpenAI ChatCompletion proxy:
import os
import litellm
from litellm import completion
os.environ["OPENAI_API_KEY"] = ""
# set custom api base to your proxy
# either set .env or litellm.api_base
# os.environ["OPENAI_API_BASE"] = ""
litellm.api_base = "https://openai-proxy.berriai.repl.co"
messages = [{"content": "Hello, how are you?", "role": "user"}]
# openai call
response = completion("gpt-3.5-turbo", messages)
Docs: https://docs.litellm.ai/docs/providers/openai
Custom LLM API endpoint:
from litellm import completion
response = completion(
    model="custom/meta-llama/Llama-2-13b-hf",
    messages=[{"content": "what is custom llama?", "role": "user"}],
    temperature=0.2,
    max_tokens=10,
    api_base="https://api.autoai.dev/inference",
    request_timeout=300,
)
print("got response\n", response)
@shenmadouyaowen @Joackk @Niche-Apps please let me know if anything is missing from our support for this
Thanks for your support, but I'm a rookie. Could you show me directly how to set the custom api_base and key in the Windows command line to use o-i?
I'm not sure if o-i allows you to set a custom api_base; you might need to make the change in interpreter.py and then run it.
Happy to help on a discord chat if you'd like!
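Since litellm itself reads OPENAI_API_KEY and OPENAI_API_BASE from the environment (as in the commented lines above), setting them before launching o-i may work. A minimal, untested sketch for the Windows command prompt; the key and proxy URL are placeholders:
set OPENAI_API_KEY=sk-your-key-here
set OPENAI_API_BASE=https://your-proxy.example.com/v1
interpreter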
It seems we need to do something in the source code; it looks simple.
It seems the groundwork has already been done.
But it still doesn't seem to work with the API endpoint I created myself, possibly because my endpoint only provides chat completions.
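For an endpoint that only exposes an OpenAI-compatible chat-completions route, one thing worth trying is litellm's openai/ model prefix with an explicit api_base. A minimal sketch based on the litellm docs; the URL and model name below are placeholders, not a verified fix for this specific endpoint:
from litellm import completion

# The "openai/" prefix tells litellm to send an OpenAI-format
# chat-completions request to the given api_base.
response = completion(
    model="openai/my-model",
    api_base="https://my-endpoint.example.com/v1",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response["choices"][0]["message"]["content"])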
Do you mean we should modify the source code, or create a new .py script with this content and call it ourselves? Isn't the interpreter normally started directly from the command line?
Have you managed to get the custom API working?
@ishaan-jaff @Sagners Thank you. If the data returned by the custom API doesn't pass this check, the code won't be interpreted.
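For reference, litellm (and therefore o-i) expects the endpoint to return an OpenAI-style chat-completion payload. A minimal sketch of that shape, with illustrative placeholder values:
# Minimal OpenAI-style chat-completion response shape (placeholder values).
expected_response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1694268190,
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 2, "total_tokens": 11},
}
If the endpoint returns a different shape, that check presumably fails and no code gets run.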
I’m going to close this one as it seems like it is covered by the LiteLLM integration.
Please reopen this issue if there’s still a problem.
Is your feature request related to a problem? Please describe.
Some models have different input formats; can we add custom API functionality?
Describe the solution you'd like
I use other API interfaces, but I cannot get them to run the code.
Describe alternatives you've considered
No response
Additional context
No response