microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
https://microsoft.github.io/promptflow/

[BUG] Custom connection doesn't work with endpoints from AML model catalog #1672

Closed by ZhiliangWu 2 months ago

ZhiliangWu commented 4 months ago

Describe the bug I developed a flow with Promptflow locally, and it works well with Azure OpenAI connections. However, a custom connection failed. I created a gpt2 endpoint from one of the models in the AML model catalog. The endpoint deployed successfully, and I filled the example yaml from the docs with the REST endpoint and Primary key from the Consume page.

$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: custom_connection
type: custom
configs:
  endpoint: "<your-endpoint>"
secrets: 
  my_key: "<your-api-key>"

I added it to Promptflow, and the run failed with

Create run failed with ResolveToolError: Tool load failed in 'llm_node_with_template': (APINotFound) The API 'None.None' is not found.

How To Reproduce the bug Steps to reproduce the behavior:

  1. Create a Real-time endpoint from the AML workspace
  2. Fill out the custom_connection.yaml with the values from the deployed endpoint
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: custom_connection
type: custom
configs:
  endpoint: "<your-endpoint>"
secrets: 
  my_key: "<your-api-key>"
  3. Configure the custom connection in Promptflow: pf connection create -f custom_connection.yaml
  4. Add a variant using this custom connection, similar to
      variant_2:
        node:
          type: llm
          source:
            type: code
            path: llm_node_with_template.jinja2
          inputs:
            instruction: ${inputs.instruction}
            input: ${inputs.input}
          connection: custom_connection
  5. Create the run with this variant (a minimal SDK sketch follows below).
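For reference, a run with a specific variant can be submitted with the local Python SDK along these lines (a minimal sketch; the flow path, data file, and node name are placeholders, not from the original report):

from promptflow import PFClient

pf = PFClient()

# Submit a batch run, overriding the default variant of the node
# with the variant that uses the custom connection.
run = pf.run(
    flow=".",                                       # path to the flow folder
    data="./data.jsonl",                            # input data for the run
    variant="${llm_node_with_template.variant_2}",  # "${node_name.variant_name}"
)
print(run.status)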

Expected behavior It should work like the Azure OpenAI connections do. This blocks all users from using the models available in the model catalog with Promptflow.


gjwoods commented 4 months ago

Hi @ZhiliangWu - thanks for reaching out. Can you confirm which tool you are using to access the AzureML model catalog endpoint? The "LLM" tool currently supports only OpenAI and Azure OpenAI connections. To use AzureML model catalog endpoints, you can use the Open Model LLM tool (found under "+ More tools").

To set expectations, the Custom Connection type is generic across different tools. For the Open Model LLM tool, the connection needs the following values:

endpoint_url = ""
model_family = "GPT2"  # for GPT2
endpoint_api_key = ""  # (secret)

Please give it a try and let me know.
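For reference, the same connection can presumably also be created with the local Python SDK instead of a YAML file (a minimal sketch; the endpoint URL and key are placeholders):

from promptflow import PFClient
from promptflow.entities import CustomConnection

pf = PFClient()

# Mirrors the YAML-based connection, using the keys the
# Open Model LLM tool expects per the comment above.
connection = CustomConnection(
    name="custom_connection",
    configs={
        "endpoint_url": "<your-endpoint-url>",  # REST endpoint from the Consume page
        "model_family": "GPT2",                 # for a GPT2 endpoint
    },
    secrets={
        "endpoint_api_key": "<your-api-key>",   # stored as a secret
    },
)
pf.connections.create_or_update(connection)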

brynn-code commented 4 months ago

The error The API 'None.None' is not found is raised because no relevant API can be resolved for a custom connection. The message is vague, and we will improve it. As for the llm node, as Gerard said, we don't support custom connections for now.

ZhiliangWu commented 4 months ago

@gjwoods thanks for your quick reply. I also checked the related docs at https://microsoft.github.io/promptflow/reference/tools-reference/open_model_llm_tool.html#open-model-llm. The following yaml works to create the custom connection with pf connection create -f custom_connection.yaml

$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: custom_connection
type: custom
configs:
  endpoint_url: <url>
  model_family: GPT2
secrets:
  endpoint_api_key: <key>
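As a side note, the stored connection can be inspected afterwards with pf connection show --name custom_connection, or via the SDK (a quick sketch):

from promptflow import PFClient

# Fetch the stored connection by name; secret values are masked in the output.
print(PFClient().connections.get(name="custom_connection"))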

Meanwhile, I ran into some issues running it with a node of type Open_Model_LLM configured as

- name: Open_Model_LLM
  type: custom_llm
  source:
    type: package_with_prompt
    tool: promptflow.tools.open_model_llm.OpenModelLLM.call
    path: Open_Model_LLM.jinja2
  inputs:
    api: completion
    instruction: ${inputs.instruction}
    input: ${inputs.input}
    endpoint_name: custom_connection   # not sure whether this is correct

The error says

2024-01-17 23:09:20 +0100   20192 execution.flow     INFO     [Open_Model_LLM in line 0 (index starts from 0)] stdout> Executing Open Model LLM Tool for endpoint: 'custom_connection', deployment: 'None'
2024-01-17 23:09:20 +0100   20192 execution          ERROR    Node Open_Model_LLM in line 0 failed. Exception: Execution failure in 'Open_Model_LLM': (IndexError) list index out of range.
...
in parse_endpoint_connection_type
    return (endpoint_connection_details[0].lower(), endpoint_connection_details[1])
IndexError: list index out of range

Do you know if I need to do any further steps to configure this tool?
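Judging from the traceback, the tool seems to expect endpoint_name in a "<type>/<name>" form and splits it before indexing, so a plain name like custom_connection would trigger exactly this error. A minimal reconstruction (an assumption based on the traceback, not the actual promptflow source):

# Reconstructed from the traceback above; the real implementation may differ.
def parse_endpoint_connection_type(endpoint_connection_name: str):
    endpoint_connection_details = endpoint_connection_name.split("/")
    # With no "/" in the name, the split yields a single-element list and
    # indexing [1] raises IndexError: list index out of range.
    return (endpoint_connection_details[0].lower(), endpoint_connection_details[1])

print(parse_endpoint_connection_type("onlineEndpoint/my-endpoint"))  # hypothetical "<type>/<name>" value
try:
    parse_endpoint_connection_type("custom_connection")
except IndexError as e:
    print(f"IndexError: {e}")  # list index out of range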

ZhiliangWu commented 4 months ago

@brynn-code Comments on the feedback above would be appreciated!

ZhiliangWu commented 3 months ago

@gjwoods

github-actions[bot] commented 2 months ago

Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 7 days of this comment, the issue will be automatically closed. Thank you!

ZhiliangWu commented 2 months ago

@gjwoods

jessegoraya commented 1 month ago

I would like to keep this issue open if possible. I am also seeing the index out of range error from my Open Model LLM node when it tries to connect to my Llama 2 pay-as-you-go deployed instance. Does anyone have any insights?

[Screenshot: Open Model LLM Node Error]

jessegoraya commented 1 month ago

As a follow-up, I also stopped trying to use the connection in my Azure Workspace and recreated my Llama connection locally, and I still get the same error.

[Screenshot: Open Model LLM Node Error - Local Connection]