Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co
MIT License
166.52k stars 44.06k forks

Azure support broken? #2186

Closed cnkang closed 1 year ago

cnkang commented 1 year ago

Steps to reproduce 🕹

azure.yaml:
azure_api_type: azure
azure_api_base: https://test.openai.azure.com/
azure_api_version: 2023-03-15-preview
azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt-4"
    embedding_model_deployment_id: "emb-ada"  

Current behavior 😯

When I run "python -m autogpt", it just breaks:

    Welcome back! Would you like me to return to being Entrepreneur-GPT?
    Continue with the last settings?
    Name: Entrepreneur-GPT
    Role: an AI designed to autonomously develop and run businesses with the
    Goals: ['Increase net worth', 'Grow Twitter Account', 'Develop and manage multiple businesses autonomously']
    Continue (y/n): y
    Using memory of type: LocalCache
    Using Browser: chrome

Traceback (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File "/data/Auto-GPT/autogpt/main.py", line 50, in <module>
    main()
  File "/data/Auto-GPT/autogpt/main.py", line 46, in main
    agent.start_interaction_loop()
  File "/data/Auto-GPT/autogpt/agent/agent.py", line 75, in start_interaction_loop
    assistant_reply = chat_with_ai(
                      ^^^^^^^^^^^^^
  File "/data/Auto-GPT/autogpt/chat.py", line 159, in chat_with_ai
    assistant_reply = create_chat_completion(
                      ^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/Auto-GPT/autogpt/llm_utils.py", line 84, in create_chat_completion
    deployment_id=CFG.get_azure_deployment_id_for_model(model),
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/Auto-GPT/autogpt/config/config.py", line 120, in get_azure_deployment_id_for_model
    return self.azure_model_to_deployment_id_map[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: list indices must be integers or slices, not str
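
The TypeError at the bottom of the traceback means azure_model_to_deployment_id_map was parsed as a list rather than a mapping (for example, the YAML entries were read as "- key: value" list items, or a default list was substituted when the file wasn't found). A minimal sketch of the difference, with hypothetical data standing in for the parsed YAML:

```python
# Hypothetical illustration of the TypeError in the traceback: if
# azure_model_map ends up parsed as a YAML list instead of a mapping,
# indexing it with a model-name string fails exactly as shown above.

# A correctly indented mapping parses to a dict:
good_map = {
    "fast_llm_model_deployment_id": "gpt-35-turbo",
    "smart_llm_model_deployment_id": "gpt-4",
    "embedding_model_deployment_id": "emb-ada",
}

# A list-style block parses to a list of one-entry dicts:
bad_map = [
    {"fast_llm_model_deployment_id": "gpt-35-turbo"},
    {"smart_llm_model_deployment_id": "gpt-4"},
]

print(good_map["fast_llm_model_deployment_id"])  # gpt-35-turbo

try:
    bad_map["fast_llm_model_deployment_id"]
except TypeError as exc:
    # TypeError: list indices must be integers or slices, not str
    print(exc)
```

This is why the lookup fails even though the azure.yaml shown above looks plausible.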

Expected behavior 🤔

It should work.

cnkang commented 1 year ago

    git rev-parse HEAD
    10cd0f3362ad6c86eefe7fc2a1f276ca49af98fe

k-boikov commented 1 year ago

Seems like you are missing the value for "azure_model_map" in your .env.

rasmusaslak commented 1 year ago

> Seems like you are missing the value for "azure_model_map" in your .env.

But in .env.template it says these should be in azure.yaml. Quoting: "AZURE - cleanup azure env as already moved to azure.yaml.template".

erikcvisser commented 1 year ago

Line 133 in /autogpt/config/config.py should read:

AZURE_CONFIG_FILE = os.path.join(os.path.dirname(__file__), "..", "..", "azure.yaml")

The azure.yaml file is two folders up (instead of one).
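
The two-levels-up claim can be sanity-checked with a quick path calculation (directory taken from the traceback above; posixpath keeps the result platform-independent):

```python
import posixpath

# config.py lives in autogpt/config/, so azure.yaml at the repo root is two
# directories up from it, not one.
config_dir = "/data/Auto-GPT/autogpt/config"

one_up = posixpath.normpath(posixpath.join(config_dir, "..", "azure.yaml"))
two_up = posixpath.normpath(posixpath.join(config_dir, "..", "..", "azure.yaml"))

print(one_up)  # /data/Auto-GPT/autogpt/azure.yaml  (one level short)
print(two_up)  # /data/Auto-GPT/azure.yaml          (repo root)
```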

k-boikov commented 1 year ago

Fixed in https://github.com/Significant-Gravitas/Auto-GPT/pull/2214

amit-dingare commented 1 year ago

Still facing this error. I have made the above changes; it looks like the above merge request is not complete.

longxinzhang commented 1 year ago

Right now, you still need to add the extra ".." on line 119 of config.py.

Ben-Pattinson commented 1 year ago

Erroring in the same way here. To be clear, I tried both the current "stable" 0.2.1 release and the one from commit 10cd0f3362ad6c86eefe7fc2a1f276ca49af98fe as detailed above. I then made the path fix as described above; same error, same line. Would be great to be able to use this with our Azure capacity.

Ben-Pattinson commented 1 year ago

Update: it's because the base URL isn't getting through. If you edit api_requestor.py and add a print on line 227, you see:

    Connecting to OPENAI: /openai/deployments/ChatGPT/chat/completions?api-version=api-version=2022-12-01

which is wrong. Where is the base URL? Trying to debug how the URL is supposed to get through now.
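
How the Azure URL gets assembled can be sketched roughly like this (a simplified illustration, not the actual openai-python code; the function name is hypothetical): the deployment path and api-version are appended to api_base, so an empty api_base produces exactly the bare path seen in the debug output above.

```python
# Simplified sketch (not openai-python source) of how the Azure endpoint URL
# is assembled from api_base, the deployment id, and the api-version.
def azure_chat_url(api_base: str, deployment_id: str, api_version: str) -> str:
    base = api_base.rstrip("/")
    return (f"{base}/openai/deployments/{deployment_id}"
            f"/chat/completions?api-version={api_version}")

# An empty api_base yields a bare path, much like the debug print above:
print(azure_chat_url("", "ChatGPT", "2022-12-01"))
# /openai/deployments/ChatGPT/chat/completions?api-version=2022-12-01

print(azure_chat_url("https://test.openai.azure.com/", "gpt-35-turbo",
                     "2023-03-15-preview"))
```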

Ben-Pattinson commented 1 year ago

Problems so far:

The format of the conversation is completely different vs GPT-3; the JSON isn't compatible at all. As far as I can see, there is no chance this would work on GPT-3. So unless someone dramatically updates this, don't bother trying with Azure and GPT-3.5 only. I was hoping to get this going while waiting for GPT-4 access to turn up, but that looks less and less likely.

xboxeer commented 1 year ago

Changed the ../../ in config.py and the config issue is gone. Now a new error: openai.error.InvalidRequestError: Resource not found. Using GPT-3.5; the model map looks like this:

azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt4-deployment-id-for-azure"
    embedding_model_deployment_id: "text-embedding-ada-002"

Christoph-ModelMe commented 1 year ago

@xboxeer are these the deployment IDs you have actually created? They look like the generic defaults. You have to add model deployments in Azure and name them there, then put those names into the azure_model_map above.

xboxeer commented 1 year ago

> @xboxeer are these the deployment IDs you have given? They look more generic. You have to add model deployments in azure and name them there, then put these names to the azure_model_map above

Figured out the problem: the api version was incorrect; it seems like it has to be 2023-03-15-preview. Now I face another error:

    openai.error.APIError: Invalid response object from API: '{ "statusCode": 401, "message": "Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired." }' (HTTP response code was 401)

azure.yaml:

azure_api_type: azure_ad
azure_api_base: "https://xxxx.openai.azure.com/"
azure_api_version: "2023-03-15-preview"
azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt-35-turbo"  # I don't have GPT-4 access so I changed it to gpt-35-turbo; it won't be called anyway, I assume
    embedding_model_deployment_id: "text-embedding-ada-002"

I have set up my openai key in .env.

The key should work, as I tested it in another project (semantic-kernel); I don't know why it is not working in the context of AutoGPT.

The issues I've met could fill a book of Azure OpenAI FAQs for AutoGPT, I guess :)

ssugar commented 1 year ago

> Figured out the problem is api version incorrect, seems like it has to be 2023-03-15-preview. Now I face another error: openai.error.APIError: Invalid response object from API: '{ "statusCode": 401, "message": "Unauthorized. […]" }' (HTTP response code was 401) […]

@xboxeer Change your azure_api_type to "azure" (including quotes) instead of azure_ad. Also remove the trailing / on the azure_api_base.
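
Putting those two fixes together, a key-based azure.yaml would look roughly like this (the endpoint hostname and deployment names are placeholders, to be replaced with your own):

```yaml
azure_api_type: "azure"                 # not azure_ad when authenticating with an API key
azure_api_base: "https://xxxx.openai.azure.com"   # no trailing slash
azure_api_version: "2023-03-15-preview"
azure_model_map:
    fast_llm_model_deployment_id: "gpt-35-turbo"
    smart_llm_model_deployment_id: "gpt-35-turbo"
    embedding_model_deployment_id: "text-embedding-ada-002"
```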

ssugar commented 1 year ago

> Problems so far:
>
> * The base URL not being passed in
> * The URL being wrong (chat/completions should be completions), due to OBJECT_NAME being set to "chat.completions" in chat_completion.py
> * The content type not being set to "application/json" on the header in api_requestor (line 438)
> * The api-key not being set on the headers
>
> The format of the conversation is completely different vs GPT-3. […]

@Ben-Pattinson I was running into the same issues as you. I then saw @xboxeer's comment above noting the need to change the API version to "2023-03-15-preview" and to set smart_llm_model_deployment_id to the same value as fast_llm_model_deployment_id in the model map (knowing it won't be used anyway), and it started working for me.

primaryobjects commented 1 year ago

Related https://github.com/Significant-Gravitas/Auto-GPT/pull/2214 Related https://github.com/Significant-Gravitas/Auto-GPT/pull/2437

Allows both /autogpt/azure.yaml and /azure.yaml.

xboxeer commented 1 year ago

> @xboxeer Change your azure_api_type to "azure" (including quotes) instead of azure_ad. Also remove the trailing / on the azure_api_base.

Awesome! Changing azure_api_type to "azure" works for me.

Ben-Pattinson commented 1 year ago

Will be interested to hear if you get it going, as the code doesn't seem to support the Azure implementation

ssugar commented 1 year ago

@Ben-Pattinson with the changes listed above it works for me on Azure

taohongxiu commented 1 year ago

azure.yaml

azure_credential_config:
    azure_object_id: *********
    azure_tenant_id: *********
    azure_client_id: *********
    azure_password: *********
    azure_scopes:
        - https://cognitiveservices.azure.com/.default

llm_utils.py

    def load_azure_config(self, config_file: str = AZURE_CONFIG_FILE) -> None:
        """
        Loads the configuration parameters for Azure hosting from the specified file
          path as a yaml file.

        Parameters:
            config_file(str): The path to the config yaml file. DEFAULT: "../azure.yaml"

        Returns:
            None
        """
        with open(config_file) as file:
            config_params = yaml.load(file, Loader=yaml.FullLoader)
        self.openai_api_type = config_params.get("azure_api_type") or "azure"
        self.openai_api_base = config_params.get("azure_api_base") or ""
        self.openai_api_version = (
            config_params.get("azure_api_version") or "2023-03-15-preview"
        )
        self.azure_model_to_deployment_id_map = config_params.get("azure_model_map", {})
        if self.openai_api_type == "azure_ad":
            azure_credential_config = config_params.get("azure_credential_config")
            self.openai_api_key = self.get_azure_token(azure_credential_config)

    def get_azure_token(self, azure_credential_config):
        from azure.identity import ClientSecretCredential
        sp_credential = ClientSecretCredential(
            client_id=azure_credential_config.get("azure_client_id"),
            client_secret=azure_credential_config.get("azure_password"),
            tenant_id=azure_credential_config.get("azure_tenant_id"))
        token = sp_credential.get_token(*azure_credential_config.get("azure_scopes"))
        return token.token

ntindle commented 1 year ago

I would love to be put in contact with your Microsoft rep at Azure. We currently can't run the Azure pathways in our automation for a wide variety of reasons, and most of the team also doesn't have keys for testing. Any help you can get us there would be amazing.

Pwuts commented 1 year ago

The original issue should be resolved in #2351