microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
https://microsoft.github.io/promptflow/
MIT License

[BUG] LLM tool reports error code: 422 with Mistral-large #3343

Closed wliuoc closed 4 months ago

wliuoc commented 5 months ago

Describe the bug

LLM tool reports error code: 422 when using a Mistral-large-based serverless connection.

How to reproduce the bug

  1. deploy Mistral-large from model catalog as serverless endpoint in azureml studio
  2. create serverless connection using the endpoint
  3. create a flow by cloning 'web classification' example
  4. use the newly created connection from step 2 in 'summarize_text_content' node
  5. run 'fetch_text_content_from_url' node and then 'summarize_text_content' node

Expected behavior

'summarize_text_content' node runs successfully.

Additional context

The same error can be reproduced by posting to the serverless endpoint with curl:

curl -X POST -L https://your-Mistral-large-serverless.eastus2.inference.ai.azure.com/v1/chat/completions -H 'Content-Type: application/json' -H 'Authorization: apikey' -d '{"messages":[{"content":"You are a helpful assistant.","role":"system"},{"content":"What is good about Paris?","role":"user"}], "temperature": 1, "top_p": 1, "stream":false, "user":"", "max_tokens": 50}'

If "user":"" (which appears right before "max_tokens": 50 in the command above) is removed, the curl command gets a proper response instead of erroring out:

curl -X POST -L https://your-Mistral-large-serverless.eastus2.inference.ai.azure.com/v1/chat/completions -H 'Content-Type: application/json' -H 'Authorization: aaaa' -d '{"messages":[{"content":"You are a helpful assistant.","role":"system"},{"content":"What is good about Paris?","role":"user"}], "temperature": 1, "top_p": 1, "stream":false, "max_tokens": 50}'

It looks like when the LLM tool generates a request to a model, it always includes "user":"". I don't think this can be modified or turned off when authoring the flow. I tested other open models; so far llama3 and Cohere's command-r-plus worked fine, just not Mistral-large.
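A workaround on the client side (outside the LLM tool) would be to strip the empty "user" field before posting the payload. This is only a sketch of that idea, built from the curl example above; `strip_empty_user` is a hypothetical helper, not part of promptflow:

```python
# Sketch only: drop a blank "user" field from the chat-completions payload
# before sending, since the Mistral-large serverless endpoint rejects it
# with a 422 while other models tolerate it.
import json

def strip_empty_user(payload: dict) -> dict:
    """Return a copy of the payload without an empty "user" field."""
    cleaned = dict(payload)
    if cleaned.get("user") == "":
        del cleaned["user"]
    return cleaned

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is good about Paris?"},
    ],
    "temperature": 1,
    "top_p": 1,
    "stream": False,
    "user": "",        # this empty field triggers the 422 on Mistral-large
    "max_tokens": 50,
}

body = json.dumps(strip_empty_user(payload))
# `body` can then be POSTed to /v1/chat/completions, e.g. with
# requests.post(url, headers={"Authorization": api_key}, data=body)
```

This only helps when you control the request yourself (e.g. a Python tool node); the built-in LLM tool builds its own payload, which is why the runtime fix below is needed.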

DaweiCai commented 5 months ago

We have a change to fix this that will be rolled out in next week's release. To unblock yourself first, you can specify this image (mcr.microsoft.com/azureml/promptflow/promptflow-runtime:20240520.v6) in flow.dag.yaml.

wliuoc commented 5 months ago

The image worked, thanks. One issue remains: if I add "response_format": {"type":"json_object"} or "response_format": {"type":"text"}, the node works with the llama3 and mistral models, but fails with the command-r-plus model: Run failed: OpenAI API hits BadRequestError: Error code: 400 - {'message': 'invalid type: parameter response_format is of type object but should be of type string.'} [Error reference: https://platform.openai.com/docs/guides/error-codes/api-errors] Is this fixable too?
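Judging only by the error message ("is of type object but should be of type string"), the command-r-plus endpoint appears to want "response_format" as a plain string rather than the OpenAI-style object. A per-model normalization could look like the sketch below; the model-name check and the string form it produces are assumptions inferred from the error, not documented behavior:

```python
# Sketch only: flatten the OpenAI-style response_format object into a string
# for models whose endpoint rejects the object form (per the 400 error above).
# The "command-r" prefix check is an assumption for illustration.
def normalize_response_format(payload: dict, model: str) -> dict:
    """Return a copy of the payload with response_format adapted per model."""
    cleaned = dict(payload)
    fmt = cleaned.get("response_format")
    if isinstance(fmt, dict) and model.startswith("command-r"):
        # e.g. {"type": "json_object"} -> "json_object"
        cleaned["response_format"] = fmt.get("type")
    return cleaned

p = {"messages": [], "response_format": {"type": "json_object"}}
adapted = normalize_response_format(p, "command-r-plus")
```

As with the "user" field, this only works where you build the request yourself; the LLM tool would need the equivalent change in the runtime.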

github-actions[bot] commented 4 months ago

Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 7 days of this comment, the issue will be automatically closed. Thank you!