microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
https://microsoft.github.io/promptflow/
MIT License

[BUG]: Basic chat flow raises an error: Failed to serialize inputs or output for flow run because of cannot pickle 'generator' object. #2704

Closed jhakulin closed 2 weeks ago

jhakulin commented 1 month ago

Describe the bug

The following error occurs when testing a basic chat flow with the PromptFlow SDK:

2024-04-08 15:00:20 -0700 7692 execution.flow WARNING Failed to serialize inputs or output for flow run because of cannot pickle 'generator' object. The inputs and output field in api_calls will be None. Flow outputs: {'answer': <generator object generate_from_proxy at 0x0000027CB11D7580>}

**How To Reproduce the bug**
Steps to reproduce the behavior:

1. Create a basic chat flow using `pf flow init --flow firstchat --type chat`
2. Create a sample that runs the flow:

```python
from promptflow.client import PFClient
from promptflow.entities import AzureOpenAIConnection

# PFClient can help manage your runs and connections.
pf = PFClient()

conn_name = "aoai_assistant_connection"
try:
    conn = pf.connections.get(name=conn_name)
    print("using existing connection")
except Exception:
    connection = AzureOpenAIConnection(
        name=conn_name,
        api_key="",
        api_base="",
        api_type="azure",
        api_version="",
    )
    conn = pf.connections.create_or_update(connection)
    print("successfully created connection")

flow_path = "."  # Chat flow is in the same folder as this sample

flow_inputs = {"chat_history": [], "question": "tell me a joke"}

flow_result = pf.test(flow=flow_path, inputs=flow_inputs)
print(f"Flow outputs: {flow_result}")
```


3. Run the sample, for example in VS Code with the Python debugger.

**Expected behavior**
The chat flow succeeds and an answer to the user's question is returned.
 
**Screenshots**
2024-04-08 15:00:20 -0700    7692 execution.flow     WARNING  Failed to serialize inputs or output for flow run because of cannot pickle 'generator' object. The inputs and output field in api_calls will be None.
Flow outputs: {'answer': <generator object generate_from_proxy at 0x0000027CB11D7580>}

**Running Information(please complete the following information):**
 - Promptflow Package Version using `pf -v`: promptflow 1.7.0
 - Operating System: Windows
 - Python Version using `python --version`: Python 3.11.7 (tags/v3.11.7:fa7a6f2, Dec 4 2023, 19:24:49) [MSC v.1937 64 bit (AMD64)]

**Additional context**
The problem seems to be related to the fact that stream is enabled by default in chat completion.
If stream is set to false the error does not occur.
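The warning itself can be reproduced outside promptflow: Python's `pickle` module cannot serialize generator objects at all, which is consistent with the serializer falling back to `None` for a streamed output. A minimal sketch (the generator here is a stand-in, not promptflow code):

```python
import pickle

def stream_chunks():
    # Any generator, like a streamed chat completion, is unpicklable.
    yield "hello"

try:
    pickle.dumps(stream_chunks())
except TypeError as exc:
    print(exc)  # cannot pickle 'generator' object
```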

lalala123123 commented 1 month ago

Hi @jhakulin, it is by design that chat flows return a generator type. You can get the string output like this example:

```python
flow_path = "."  # Chat flow is in the same folder as this sample

flow_inputs = {
    "chat_history": [],
    "question": "tell me a joke"
}

flow_result = pf.test(flow=flow_path, inputs=flow_inputs)
answer_result = ""
for item in flow_result["answer"]:
    answer_result += item
print(f"Flow outputs: {answer_result}")
```

This section of the docs shows how to handle generator-type outputs: https://microsoft.github.io/promptflow/how-to-guides/init-and-test-a-flow.html#test-with-interactive-mode
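An equivalent way to drain the generator is `str.join`; the sketch below uses a hypothetical stand-in generator in place of the real `flow_result["answer"]`:

```python
# Hypothetical stand-in for flow_result["answer"]: a generator of text chunks.
def fake_answer():
    for chunk in ["Why did ", "the chicken ", "cross the road?"]:
        yield chunk

# "".join drains the generator in one pass, equivalent to the += loop.
answer_result = "".join(fake_answer())
print(f"Flow outputs: {answer_result}")
```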

github-actions[bot] commented 3 weeks ago

Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 7 days of this comment, the issue will be automatically closed. Thank you!