OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Support for Azure GPT-4 #43

Closed Ftrybe closed 1 year ago

Ftrybe commented 1 year ago

I am looking to work with Azure GPT-4. Do you have any plans to support it in the near future?

KillianLucas commented 1 year ago

Hi @Ftrybe! Yes, have you used it in Python before? I'm seeing this in the documentation:

import os
import openai
openai.api_type = "azure"
openai.api_version = "2023-05-15" 
openai.api_base = os.getenv("OPENAI_API_BASE")  # Your Azure OpenAI resource's endpoint value.
openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo", # The deployment name you chose when you deployed the GPT-35-Turbo or GPT-4 model.
    messages=[
        {"role": "system", "content": "Assistant is a large language model trained by OpenAI."},
        {"role": "user", "content": "Who were the founders of Microsoft?"}
    ]
)

print(response)

print(response['choices'][0]['message']['content'])

Is this the most minimal possible use of it? Seems like a lot. Let me know if you can use it by just switching the api_type, api_base and engine. I wouldn't want to lock us into that specific api version or something, if it isn't necessary.


For Open Interpreter, I could see this being implemented in this way:

import os

interpreter.model = "azure-gpt-35-turbo"  # "azure-" followed by your deployment name
interpreter.api_base = os.getenv("OPENAI_API_BASE")  # Azure OpenAI resource's endpoint value
interpreter.api_key = os.getenv("OPENAI_API_KEY")  # Same as usual

Then on the CLI:

interpreter --model azure-gpt-35-turbo --api_base ... --api_key ...

Or you could edit interpreter's config file with interpreter config, which would let you define a YAML with these defaults so it runs with your Azure deployment every time.
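That YAML might look something like this (hypothetical keys, since none of this is implemented yet):

model: azure-gpt-35-turbo  # "azure-" followed by your deployment name
api_base: https://<your-resource>.openai.azure.com/
api_key: <your-azure-openai-key>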

None of the above is implemented, I just wanted to run it by you before building it. Does that seem intuitive/easy to use?

Thanks again! K

Ftrybe commented 1 year ago

Thanks.

Vybo commented 1 year ago

I was looking for Azure support as well and the suggested functionality looks awesome, thank you as well!

nick917 commented 1 year ago

I think it should work using the Azure API. I have tried:

openai.api_key = os.getenv("AZURE_OPENAI_KEY")
openai.api_version = "2023-08-01-preview"
openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")

response = openai.ChatCompletion.create(
    engine='gpt-35-turbo',  # can be replaced by gpt-4
    messages=messages,
    functions=[function_schema],
    function_call="auto",
    stream=True,
    temperature=self.temperature,
)

print(response['choices'][0]['message']['content']) does not work if stream=True, which causes response to be a generator.
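For reference, a minimal sketch of the delta-accumulation pattern for consuming the pre-1.0 SDK's streaming generator (with functions enabled, a chunk's delta may instead carry a partial function_call whose arguments string needs to be accumulated the same way):

# Each streamed chunk carries an incremental "delta" rather than a full message.
full_reply = ""
for chunk in response:
    delta = chunk["choices"][0]["delta"]
    full_reply += delta.get("content") or ""  # "content" is absent on some chunks
print(full_reply)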

There is more work to do after line 340 in `interpreter.py`, since the code is not compatible.

Great work. Keep it up.

ifsheldon commented 1 year ago

+1. I am also looking into the code, hopefully I can implement this and make a PR when I have some time.

Then on the CLI: interpreter --model azure-gpt-35-turbo --api_base ... --api_key ...

@KillianLucas Probably we just need a flag for the interpreter? Say, interpreter --use-azure; then on first launch, instead of asking whether to use GPT-4 or Code-Llama, we can just ask for the API key, API base, and deployment name (engine name).
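A rough sketch of how that first-launch prompt could look (hypothetical names, nothing like this exists in the codebase yet):

# Hypothetical first-launch flow behind a --use-azure flag.
def prompt_azure_credentials():
    api_key = input("Azure OpenAI API key: ")
    api_base = input("Azure OpenAI endpoint (api_base): ")
    deployment = input("Deployment (engine) name: ")
    return api_key, api_base, deployment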

SimplyJuanjo commented 1 year ago

Yeah, this one would be amazing, since some of us can get access to the 32k API more easily via Azure than through OpenAI.

Is support planned for 32k GPT-4 or LLaMA models?

Ftrybe commented 1 year ago

In the latest code, I noticed that you have added configurations for Azure GPT, including a configurable azure_api_base. Would you consider renaming it to a more generic term? I'd like the flexibility to access it through third-party API endpoints, such as those hosted by the one-api service. Thanks.

KillianLucas commented 1 year ago

@Ftrybe absolutely! On the roadmap. We're trying to figure out a unified --model command that should connect to an LLM across one-api, HuggingFace, localhost, etc.

KillianLucas commented 1 year ago

@Vybo @Ftrybe @nick917 and @SimplyJuanjo:

Happy to report that Open Interpreter now supports Azure deployments, thanks to the incredible @ifsheldon (many thanks, great work, feng!)

Read the Azure integration docs here.

To use it, simply upgrade Open Interpreter, then run it with --use-azure:

pip install --upgrade open-interpreter
interpreter --use-azure

KillianLucas commented 9 months ago

Azure Update

To anyone searching for this, we have a new way of connecting to Azure! ↓

https://docs.openinterpreter.com/language-model-setup/hosted-models/azure
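For reference, the linked setup follows LiteLLM's azure/ model-prefix convention, along these lines (check the docs above for current specifics):

interpreter --model azure/<your-deployment-name>

with credentials supplied via environment variables, e.g.:

export AZURE_API_KEY=<your-azure-openai-key>
export AZURE_API_BASE=https://<your-resource>.openai.azure.com/
export AZURE_API_VERSION=2023-08-01-preview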