open-telemetry / semantic-conventions

Defines standards for generating consistent, accessible telemetry across a variety of domains
Apache License 2.0

Pick a set of LLM systems to support/prototype #839

Open lmolkova opened 6 months ago

lmolkova commented 6 months ago

In #825 we only mention OpenAI, but we should pick a set of vendors/systems we want to support, and prototype/validate whether the attributes/events are applicable to them.

iag commented 6 months ago

When talking about "LLM Systems" we may want to consider:

mxiamxia commented 5 months ago

AWS Bedrock supports multiple models. I can help create a prototype of Bedrock LLM interactions with the OTel Java SDK auto-instrumentation (and Python later), adhering to the span semantic convention definitions.
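As a point of reference for such a prototype, here is a minimal, stdlib-only Python sketch of the span attributes a Bedrock instrumentation might record for a chat call. The attribute names follow the draft GenAI conventions discussed in #825; the helper function and the model ID are hypothetical, made up for illustration:

```python
def bedrock_request_attributes(model_id: str, max_tokens: int, temperature: float) -> dict:
    """Build gen_ai.* span attributes for a hypothetical Bedrock chat call.

    "aws.bedrock" is used as the gen_ai.system value for Bedrock-hosted models;
    the other names follow the draft GenAI semantic conventions.
    """
    return {
        "gen_ai.system": "aws.bedrock",
        "gen_ai.request.model": model_id,
        "gen_ai.request.max_tokens": max_tokens,
        "gen_ai.request.temperature": temperature,
    }

# Example: attributes an auto-instrumentation would attach to the client span.
attrs = bedrock_request_attributes("anthropic.claude-v2", 256, 0.7)
print(attrs["gen_ai.system"])  # -> aws.bedrock
```

A real instrumentation would set these via `span.set_attribute(...)` on the OTel SDK rather than returning a plain dict; the dict form just keeps the sketch dependency-free.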

bhanupisupati commented 5 months ago

NVIDIA supports AI Foundation endpoints (https://www.nvidia.com/en-us/ai-data-science/foundation-models/) for which we would like to generate OpenTelemetry-based traces using these semantic conventions.

Note that the NVIDIA generative AI examples repo showcases adding OpenTelemetry-based observability to Python-based generative AI applications using LangChain and LlamaIndex. Please see: https://github.com/NVIDIA/GenerativeAIExamples/blob/main/docs/observability.md

We will explore modifying these traces to adhere to the proposed semantics.

On a side note, we would like to present our current work on OpenTelemetry-based testing for LLMs, RAG, vector databases, etc., detailed in the documentation above. Is there a periodic sync-up of this community where we can present?

gyliu513 commented 5 months ago

@bhanupisupati please check https://docs.google.com/document/d/1EKIeDgBGXQPGehUigIRLwAUpRGa7-1kXB736EaYuJ2M/edit#heading=h.ylazl6464n0c for meeting details.

bhanupisupati commented 4 months ago

Thank you!


karthikscale3 commented 4 months ago

I have reviewed the API docs of Anthropic, Cohere, and Google, as well as the new model spec introduced by OpenAI, which includes some changes. Below is a summary of my findings along with proposed recommendations. After discussing this on the WG call, I am happy to make a PR for them.

  1. OpenAI introduced a new model spec in which the ‘system’ role is renamed to the ‘developer’ role.

Proposal:

Rename gen_ai.system.message -> gen_ai.developer.message

  2. OpenAI introduced a new role called ‘tool’, whose content is generated by a tool such as a program.

Proposal: Introduce gen_ai.tool.message

  3. Clarification required

What is the difference between gen_ai.assistant.message and gen_ai.choice?

Should we just stick to gen_ai.assistant.message?

Proposal: stick to gen_ai.assistant.message
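To make the three message-event proposals above concrete, here is a hypothetical, stdlib-only sketch of how an instrumentation might pick an event name per chat role. The ‘developer’ and ‘tool’ names are the proposals in this comment, not yet part of the spec:

```python
# Map chat-message roles to (partly proposed) gen_ai.* event names.
ROLE_TO_EVENT = {
    "system": "gen_ai.system.message",        # proposal: rename to gen_ai.developer.message
    "developer": "gen_ai.developer.message",  # proposed here, per the new OpenAI model spec
    "user": "gen_ai.user.message",
    "assistant": "gen_ai.assistant.message",  # proposal: use instead of gen_ai.choice
    "tool": "gen_ai.tool.message",            # proposed here
}

def event_name_for(role: str) -> str:
    """Return the gen_ai event name for a chat role, raising for unknown roles."""
    try:
        return ROLE_TO_EVENT[role]
    except KeyError:
        raise ValueError(f"unknown chat role: {role!r}")

print(event_name_for("tool"))  # -> gen_ai.tool.message
```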

  4. Stop sequences

We need to instrument stop, which is used to provide up to 4 sequences at which the API will stop generating further tokens. This parameter is also offered by other LLM vendors such as Anthropic and Cohere.

Proposal: Introduce gen_ai.request.stop

  5. Top K

Anthropic also provides an option to specify top_k. This option lets the developer sample only from the top K options for each subsequent token, and is used to remove "long tail" low-probability responses.

Proposal: Introduce gen_ai.request.top_k
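A small sketch of how the two proposed request attributes could be populated from vendor parameters. The vendor parameter names (`stop` for OpenAI, `stop_sequences` and `top_k` for Anthropic) are from the public APIs; the `gen_ai.request.*` names are the proposals above, and the helper itself is hypothetical:

```python
def sampling_attributes(stop_sequences=None, top_k=None) -> dict:
    """Build the proposed optional sampling attributes, omitting unset ones."""
    attrs = {}
    if stop_sequences:  # OpenAI `stop`, Anthropic/Cohere `stop_sequences`
        attrs["gen_ai.request.stop"] = list(stop_sequences)
    if top_k is not None:  # Anthropic `top_k`: sample only from the top K tokens
        attrs["gen_ai.request.top_k"] = top_k
    return attrs

print(sampling_attributes(stop_sequences=["END"], top_k=40))
```

Omitting unset parameters matters here: both attributes are optional, so the instrumentation should not emit them with null/default values.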

  6. Anthropic

Equivalent attributes (mapped from OpenAI -> Anthropic):

These attributes need to be mapped accordingly by the instrumentation library.
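The mapping table itself did not survive in this thread, but as a hypothetical illustration of what "mapped accordingly by the instrumentation library" means, a sketch over the request parameters publicly documented by both vendors might look like this (the table and helper below are my own, not the commenter's):

```python
# Illustrative OpenAI -> Anthropic request-parameter renames. Both sides of
# each pair would be normalized onto the same gen_ai.request.* attribute.
OPENAI_TO_ANTHROPIC = {
    "max_tokens": "max_tokens",
    "temperature": "temperature",
    "top_p": "top_p",
    "stop": "stop_sequences",  # both would feed the proposed gen_ai.request.stop
}

def translate_params(openai_params: dict) -> dict:
    """Rename OpenAI-style keys to their Anthropic equivalents, dropping unknowns."""
    return {
        OPENAI_TO_ANTHROPIC[k]: v
        for k, v in openai_params.items()
        if k in OPENAI_TO_ANTHROPIC
    }

print(translate_params({"max_tokens": 100, "stop": ["END"]}))
# -> {'max_tokens': 100, 'stop_sequences': ['END']}
```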

  7. Cohere

Equivalent attributes (mapped from OpenAI -> Cohere):

There is an additional "preamble" field in addition to the system role. The preamble adds content to the top of the messages fed to the LLM and adjusts the model's behavior for the entire conversation, while the system message is part of the message history.

Proposal: introduce a new field gen_ai.request.preamble

Proposal: should be transformed and mapped to gen_ai.user.message as a list of JSON objects

Proposal: Introduce gen_ai.request.connectors

Proposal: Introduce gen_ai.request.documents
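Pulling the Cohere proposals together, here is a hypothetical, stdlib-only sketch of mapping a Cohere Chat request onto the proposed attributes. `preamble`, `connectors`, and `documents` are real Cohere Chat request fields; the `gen_ai.request.*` names are the proposals above, and the choice to record only connector IDs and a document count is my own simplification:

```python
def cohere_request_attributes(req: dict) -> dict:
    """Map Cohere Chat request fields onto the proposed gen_ai.request.* attributes."""
    attrs = {}
    if "preamble" in req:
        attrs["gen_ai.request.preamble"] = req["preamble"]
    if "connectors" in req:  # e.g. [{"id": "web-search"}]
        attrs["gen_ai.request.connectors"] = [c["id"] for c in req["connectors"]]
    if "documents" in req:
        # Recording only the count here is an illustrative choice; the proposal
        # does not yet specify the attribute's value shape.
        attrs["gen_ai.request.documents"] = len(req["documents"])
    return attrs

print(cohere_request_attributes(
    {"preamble": "You are terse.", "connectors": [{"id": "web-search"}]}
))
```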

  8. Google