openai / openai-python

The official Python library for the OpenAI API
https://pypi.org/project/openai/
Apache License 2.0

ChatCompletionStreamManager object does not support the asynchronous context manager protocol #1639

Closed lucashofer closed 1 month ago

lucashofer commented 1 month ago

Confirm this is an issue with the Python library and not an underlying OpenAI API

Describe the bug

The docs here say that the following should be possible:

import openai
import asyncio

async def test_streaming():
    client = openai.OpenAI()

    async with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        async for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break

# Run the streaming test
asyncio.run(test_streaming())

However, this raises:

TypeError: 'ChatCompletionStreamManager' object does not support the asynchronous context manager protocol

When I run it without async it works fine, i.e.

import openai

def test_streaming():
    client = openai.OpenAI()

    with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break

# Run the streaming test
test_streaming()

To Reproduce

Run the async snippet above, which uses the beta chat completions streaming API (and should handle the new Pydantic parsing).

OS

macOS

Python version

Python 3.11-3.12

Library version

1.40.4

RobertCraigie commented 1 month ago

Ah @lucashofer, sorry, those docs don't make it clear: you have to use AsyncOpenAI() for async requests.

for example

import openai
import asyncio

async def test_streaming():
    client = openai.AsyncOpenAI()

    async with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        async for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break

# Run the streaming test
asyncio.run(test_streaming())
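The async client works here because the manager it returns implements the asynchronous context manager protocol. A minimal sketch of that pattern (illustrative names only, not the library's actual classes):

```python
import asyncio

class AsyncStreamManager:
    """Illustrative async context manager; the class and method names
    are hypothetical stand-ins, not the openai library's internals."""

    async def __aenter__(self):
        # A real client would open the HTTP stream here.
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # ...and close it here.
        return False

    async def events(self):
        # Stand-in for the stream of events.
        for i in range(3):
            yield f"event-{i}"

async def main():
    # Because __aenter__/__aexit__ exist, `async with` works.
    async with AsyncStreamManager() as stream:
        async for event in stream.events():
            print(event)

asyncio.run(main())
```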