traceloop / openllmetry-js

Sister project to OpenLLMetry, but in TypeScript. Open-source observability for your LLM application, based on OpenTelemetry.
https://www.traceloop.com/openllmetry
Apache License 2.0

Bug: token count not being saved #352

Open kashifali11 opened 5 days ago

kashifali11 commented 5 days ago

Token count is not being saved when a request is made to OpenAI.

nirga commented 5 days ago

Thanks for reporting @kashifali11! Can you provide some details, as this works in most cases? What API are you using?

kashifali11 commented 5 days ago

I am using the latest version of the SDK and openai version 4.11.1. This is the span object that I get:

```ts
{
  'gen_ai.system': 'OpenAI',
  'llm.request.type': 'chat',
  'gen_ai.request.model': 'gpt-3.5-turbo-1106',
  'gen_ai.request.max_tokens': 4096,
  'gen_ai.request.temperature': 0.7,
  'gen_ai.request.top_p': 1,
  'gen_ai.prompt.0.role': 'system',
  'gen_ai.prompt.0.content': '',
  'gen_ai.prompt.1.role': 'user',
  'gen_ai.prompt.1.content': 'Your name is naruto',
  'gen_ai.response.model': 'gpt-3.5-turbo-1106',
  'gen_ai.completion.0.finish_reason': 'stop',
  'gen_ai.completion.0.role': 'assistant',
  'gen_ai.completion.0.content': "I'm not Naruto, but I'm familiar with the character! Naruto is a popular anime and manga series about a young ninja with big dreams. If you have any questions about Naruto or anything else, feel free to ask!"
}
```

[screenshot attached]

Note: the screenshot and the span object above are from different iterations.
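For reference, when token enrichment works, the span above would also carry usage attributes roughly like the following. This is a sketch only: the attribute names are assumed from the `gen_ai.*` convention already used in the span dump, and the values are purely illustrative.

```ts
// Assumed attribute names, following the gen_ai.* convention of the span above;
// values are illustrative. These are the attributes missing from the reported span.
const expectedUsageAttributes = {
  'gen_ai.usage.prompt_tokens': 12,
  'gen_ai.usage.completion_tokens': 48,
  'llm.usage.total_tokens': 60,
};
```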

nirga commented 5 days ago

Thanks @kashifali11! Any chance you're using the streaming API? Can you provide a small example of how you're calling OpenAI?

kashifali11 commented 5 days ago

I am not using the streaming API. Here is the code snippet:

```ts
const response = await this.openai.chat.completions.create(
  completionParams as ChatCompletionCreateParamsNonStreaming
);
```

I think the issue is that enrichTokens is not being set, due to the initialization of the instrumentation here.
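For context, here is a minimal sketch of manually registering the OpenAI instrumentation with token enrichment enabled. The `enrichTokens` flag is taken from the comment above; its exact option name and default should be treated as assumptions about the instrumentation's config.

```ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { OpenAIInstrumentation } from '@traceloop/instrumentation-openai';

// Sketch: set up tracing and register the OpenAI instrumentation *before*
// the `openai` module is imported, so that its methods can be patched.
const provider = new NodeTracerProvider();
provider.register();

registerInstrumentations({
  instrumentations: [
    // `enrichTokens` is the flag referenced above; treat its exact shape
    // as an assumption about the instrumentation's configuration.
    new OpenAIInstrumentation({ enrichTokens: true }),
  ],
});
```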