getsentry / sentry-python

The official Python SDK for Sentry.io
https://sentry.io/for/python/
MIT License

LLM Monitoring not working for Async OpenAI requests #3494

Open AltafHussain4748 opened 2 months ago

AltafHussain4748 commented 2 months ago

Problem Statement

I just experimented with LLM monitoring and could not make it work with AsyncOpenAI.

Below is my code

My sentry init is

sentry_sdk.init(
    dsn=os.environ.get("SENTRY_DSN"),
    integrations=[sentry_logging],
    environment=os.environ.get("ENVIRONMENT", "prod"),
    send_default_pii=True,
)

Function I am using:

@ai_track("Tracking Name")
@async_retry(retries=4)
async def func():
    client = AsyncOpenAI()
    with sentry_sdk.start_transaction(op="ai-inference", name="Structured Data Prompt"):
        response = await client.chat.completions.create(
            model=model,
            messages=messages,
            functions=functions,
            temperature=0.0,
            timeout=120,
        )
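The async_retry decorator in the snippet above is not defined in the report; a minimal sketch of what such a helper might look like (the name, signature, and delay behavior are assumptions, not the reporter's actual implementation):

```python
import asyncio
import functools

def async_retry(retries=4, delay=0.0):
    """Hypothetical retry decorator: re-runs the coroutine up to
    `retries` additional times if it raises, then re-raises."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return await func(*args, **kwargs)
                except Exception:
                    if attempt == retries:
                        raise
                    await asyncio.sleep(delay)
        return wrapper
    return decorator

@async_retry(retries=2)
async def flaky(state):
    # Fails twice, then succeeds on the third call.
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient")
    return state["calls"]

print(asyncio.run(flaky({"calls": 0})))  # prints 3 after two retries
```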

Versions

sentry-sdk==2.13.0
openai==1.37.1

Solution Brainstorm

No response

Product Area

Insights

getsantry[bot] commented 2 months ago

Assigning to @getsentry/support for routing ⏲️

getsantry[bot] commented 2 months ago

Routing to @getsentry/product-owners-insights for triage ⏲️

bcoe commented 2 months ago

@AltafHussain4748 thank you for the bug report; I've added it to our backlog to triage next week. We'll update soon.

antonpirker commented 2 months ago

Hey @AltafHussain4748 !

Your code looks good. I think the only thing you are missing is that you need to enable tracing (set traces_sample_rate=1.0 in your init() call).

You can check if data is sent to Sentry by setting debug=True in your init() call; then you will see something like this message in your console:

Sending envelope [envelope with 1 items (transaction)] project:5461230 host:o447951.ingest.sentry.io

(The LLM data is in the transaction envelope)

Hope this helps!
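Putting the suggestion above together with the reporter's original init, a minimal sketch (the DSN still comes from the environment; the option values are the ones suggested in this thread):

```python
import os
import sentry_sdk

sentry_sdk.init(
    dsn=os.environ.get("SENTRY_DSN"),
    send_default_pii=True,
    traces_sample_rate=1.0,  # enable tracing so transactions (and LLM data) are sent
    debug=True,              # log envelope sends to the console while debugging
)
```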

vetyy commented 2 months ago

Hey, sorry I didn't notice this issue before and created a new one, but it doesn't work because of this: https://github.com/getsentry/sentry-python/issues/3496

antonpirker commented 2 months ago

Cool @vetyy , thanks for linking!

vetyy commented 2 months ago

@AltafHussain4748

This is the configuration I am using

    sentry_sdk.init(
        dsn=dsn,
        release=release,
        environment=environment,
        send_default_pii=True,
        enable_tracing=True,
        integrations=[
            OpenAIIntegration(
                include_prompts=False,  # Exclude prompts from being sent to Sentry, despite send_default_pii=True
                tiktoken_encoding_name="cl100k_base",
            )
        ],
    )

Don't forget to specify tiktoken_encoding_name, otherwise it will calculate 0 tokens:

    def count_tokens(self, s):
        # type: (OpenAIIntegration, str) -> int
        if self.tiktoken_encoding is not None:
            return len(self.tiktoken_encoding.encode_ordinary(s))
        return 0

and also make sure you have tiktoken installed (pip install tiktoken).
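The fallback in count_tokens above can be illustrated with a small self-contained sketch (the encoding class here is a simplified stand-in for tiktoken, not the real library):

```python
class FakeEncoding:
    """Stand-in for a tiktoken encoding: one 'token' per word."""
    def encode_ordinary(self, s):
        return s.split()

class TokenCounter:
    """Mimics the SDK's fallback: no encoding configured -> 0 tokens."""
    def __init__(self, encoding=None):
        self.tiktoken_encoding = encoding

    def count_tokens(self, s):
        if self.tiktoken_encoding is not None:
            return len(self.tiktoken_encoding.encode_ordinary(s))
        return 0

print(TokenCounter().count_tokens("hello world"))              # 0 without an encoding
print(TokenCounter(FakeEncoding()).count_tokens("hello world"))  # 2 with one
```

This is why token counts silently show as 0 when tiktoken_encoding_name is not set: the integration has no encoding to count with and falls back to 0.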

AltafHussain4748 commented 1 month ago

> Hey @AltafHussain4748 !
>
> Your code looks good. I think the only thing you are missing is that you need to enable tracing (set traces_sample_rate=1.0 in your init() call).
>
> You can check if data is sent to Sentry by setting debug=True in your init() call; then you will see something like this message in your console:
>
> Sending envelope [envelope with 1 items (transaction)] project:5461230 host:o447951.ingest.sentry.io
>
> (The LLM data is in the transaction envelope)
>
> Hope this helps!

Thanks for your reply. Even with this parameter I was not able to make it work.