[Open] bwilliams2 opened this issue 3 months ago
Hi @bwilliams2, it seems you forgot to set `stream=True` in the completion API call, and the way the result is iterated over is incorrect. Please try changing your code to the following:
```python
response = client.chat.completions.create(
    model="gpt-35-turbo",
    messages=[
        {"role": "system", "content": "Create a story about the topic provided by the user"},
        {"role": "user", "content": f"Tell me a story about {topic}"},
    ],
    max_tokens=150,
    stream=True,
)
for chunk in response:
    print(f"chunk: {chunk}")
    if len(chunk.choices) > 0 and (message := chunk.choices[0].delta.content):
        yield str(message)
```
@guming-learning
I updated my code with these changes and get the exact same error. I don't believe the function is ever executed, because of the pickling error described in the original issue. It seems that the promptflow batch run cannot be used with streaming flows.
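The failure can be reproduced without promptflow at all: `multiprocessing` uses `pickle` to move run results between processes, and any object that carries a `threading.Lock` (as the tracing wrapper apparently does) is rejected by `pickle`. A minimal sketch — the class below is a hypothetical stand-in for `TracedAsyncIterator`, not promptflow's actual implementation:

```python
import pickle
import threading

class TracedIteratorLike:
    """Hypothetical stand-in for promptflow's TracedAsyncIterator:
    it wraps an iterator and holds a thread lock for synchronization."""
    def __init__(self, inner):
        self._inner = inner
        self._lock = threading.Lock()  # unpicklable attribute

try:
    pickle.dumps(TracedIteratorLike(iter([1, 2, 3])))
except TypeError as err:
    error_message = str(err)

print(error_message)  # cannot pickle '_thread.lock' object
```

Any run result that transitively holds such an object will hit the same `TypeError` the moment it is put on the multiprocessing queue.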
**Describe the bug** Batch runs cannot complete successfully if the flow call produces an AsyncIterator that ends up wrapped by `promptflow.tracing.TracedAsyncIterator`. Each run from the data files fails with `TypeError: cannot pickle '_thread.lock' object`.
**How To Reproduce the bug** The code below produces the error consistently.
The bug can be traced to the `run_info` submitted to the queue at https://github.com/microsoft/promptflow/blob/745704a5b7f868c61c71f7a12eb13ef695ab4333/src/promptflow-core/promptflow/storage/_queue_run_storage.py#L24
For the example code, the result property within `run_info` holds an instance of `promptflow.tracing.TracedAsyncIterator`, which cannot be pickled when submitted to the multiprocessing queue and raises the error mentioned above. The error file from the batch run is attached: error.json
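Until the wrapper itself is made picklable, one workaround (an assumption on my part, not an official fix) is to fully consume the stream inside the flow and return a plain string; a `str` crosses the multiprocessing queue without issue:

```python
import pickle

def consume_stream(chunks):
    """Drain a streaming result into a plain string so that the run
    output handed to the multiprocessing queue is picklable."""
    return "".join(str(c) for c in chunks)

story = consume_stream(iter(["Once ", "upon ", "a ", "time."]))
pickle.dumps(story)  # succeeds: plain strings pickle cleanly
print(story)  # Once upon a time.
```

This of course sacrifices incremental streaming for batch runs, which is consistent with the observation above that batch runs and streaming flows currently do not mix.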
**Expected behavior** Successful execution of the batch run
**Running Information** (please complete the following information):
`pf -v`: 1.12.0
`python --version`: 3.12.2