lucashofer closed this 1 month ago
Ah @lucashofer, sorry those docs don't make it clear: you have to use `AsyncOpenAI()` for async requests.

For example:
```python
import asyncio

import openai


async def test_streaming():
    client = openai.AsyncOpenAI()
    async with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        async for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break


# Run the streaming test
asyncio.run(test_streaming())
```
Confirm this is an issue with the Python library and not an underlying OpenAI API
Describe the bug
The docs here say that the following should be possible
However, this gives
TypeError: 'ChatCompletionStreamManager' object does not support the asynchronous context manager protocol
When I run without async it works fine, i.e.
To Reproduce
Run the above code snippet, which uses the beta async chat completion API (and should handle the new Pydantic parsing).
Code snippets
OS
macOS
Python version
Python 3.11-3.12
Library version
1.40.4