lmolkova opened 6 months ago
When talking about "LLM Systems" we may want to consider:
AWS Bedrock supports multiple models. I can help create a prototype instrumenting Bedrock LLM interactions with the OTel Java SDK auto-instrumentation (and Python later), adhering to the span semantic convention definitions.
NVIDIA supports AI Foundation endpoints (https://www.nvidia.com/en-us/ai-data-science/foundation-models/) that we would like to support for generating OpenTelemetry-based traces using these semantic conventions.
Note that the NVIDIA Generative AI Examples repo showcases adding OpenTelemetry-based observability to Python generative AI applications using LangChain and LlamaIndex. Please see: https://github.com/NVIDIA/GenerativeAIExamples/blob/main/docs/observability.md
We will explore modifying these traces to adhere to the proposed semantics.
As a side note, we would like to present the current work on OpenTelemetry-based testing for LLMs, RAG, vector databases, etc., detailed in the documentation above. Is there a periodic sync-up of this community where we can present?
@bhanupisupati please check https://docs.google.com/document/d/1EKIeDgBGXQPGehUigIRLwAUpRGa7-1kXB736EaYuJ2M/edit#heading=h.ylazl6464n0c for meeting details.
Thank you!
I have reviewed the API docs of Anthropic, Cohere, and Google, as well as the new model spec introduced by OpenAI, which has some changes. Below is a summary of my findings along with proposed recommendations. After discussing this on the WG call, I am happy to open a PR for the same.
Proposal: Rename `gen_ai.system.message` -> `gen_ai.developer.message`

Proposal: Introduce `gen_ai.tool.message`

Difference between `gen_ai.assistant.message` vs `gen_ai.choice`: should we just stick to `gen_ai.assistant.message`?

Proposal: stick to `gen_ai.assistant.message`
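To make the `gen_ai.assistant.message` vs `gen_ai.choice` distinction concrete, here is an illustrative sketch of the two event shapes. The event names come from the proposal above; the exact body layout (index, finish reason, nested message) is my assumption, not a finalized convention.

```python
# Illustrative sketch: an assistant message recorded as part of the
# conversation history vs. a completion candidate ("choice") returned
# by the model. Field names are assumed for illustration.
assistant_message_event = {
    "name": "gen_ai.assistant.message",
    "body": {"role": "assistant", "content": "The capital of France is Paris."},
}

choice_event = {
    "name": "gen_ai.choice",
    "body": {
        "index": 0,              # candidate index within the response
        "finish_reason": "stop", # why generation ended for this candidate
        "message": {"role": "assistant", "content": "The capital of France is Paris."},
    },
}
```

The overlap is visible: a `gen_ai.choice` body would wrap essentially the same message, which is why sticking to `gen_ai.assistant.message` alone is on the table.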
We need to instrument `stop`, which lets the caller provide up to 4 sequences at which the API will stop generating further tokens. Other LLM vendors such as Anthropic and Cohere provide this as well.

Proposal: Introduce `gen_ai.request.stop`
Anthropic also provides an option to specify `top_k`, which lets the developer sample only from the top K options for each subsequent token; it is used to remove "long tail" low-probability responses.

Proposal: Introduce `gen_ai.request.top_k`
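A minimal sketch of how an instrumentation library might derive the two proposed attributes from a request. The attribute names follow the proposals above; the request-key names (OpenAI-style `stop`, Anthropic-style `top_k`) and the helper itself are illustrative assumptions.

```python
def genai_request_attributes(request: dict) -> dict:
    """Map request parameters to the proposed gen_ai.request.* attributes.

    Illustrative only: key names and normalization are assumptions,
    not a finalized convention.
    """
    attrs = {}
    if "stop" in request:
        # OpenAI accepts a string or a list of up to 4 stop sequences;
        # normalize to a list so the attribute type stays stable.
        stop = request["stop"]
        attrs["gen_ai.request.stop"] = [stop] if isinstance(stop, str) else list(stop)
    if "top_k" in request:
        attrs["gen_ai.request.top_k"] = request["top_k"]
    return attrs
```

Normalizing `stop` to a string array keeps the attribute type consistent regardless of how the caller passed it.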
Equivalent attributes (mapping from OpenAI -> Anthropic):
These attributes need to be mapped accordingly by the instrumentation library.
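As one way to express such a mapping in an instrumentation library, here is a partial sketch. The Anthropic names shown (e.g. `stop_sequences` for OpenAI's `stop`) reflect Anthropic's Messages API as I understand it, but the table is deliberately incomplete and should be verified against current vendor docs.

```python
# Partial OpenAI -> Anthropic parameter mapping (illustrative; verify
# against current vendor documentation before relying on it).
OPENAI_TO_ANTHROPIC = {
    "max_tokens": "max_tokens",
    "temperature": "temperature",
    "top_p": "top_p",
    "stop": "stop_sequences",  # OpenAI "stop" -> Anthropic "stop_sequences"
}

def map_request(openai_request: dict) -> dict:
    """Rename OpenAI-style keys so one instrumentation can record the same
    gen_ai.request.* attributes for both vendors; unknown keys pass through."""
    return {OPENAI_TO_ANTHROPIC.get(k, k): v for k, v in openai_request.items()}
```

With such a table, the instrumentation records a single vendor-neutral attribute (e.g. `gen_ai.request.stop`) regardless of which vendor-specific key the application used.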
Equivalent attributes (mapping from OpenAI -> Cohere):
Role: Assistant -> CHATBOT
Role: System -> Preamble
There is an additional "preamble" field in addition to the system role. The preamble adds content to the top of the messages fed to the LLM and adjusts the model's behavior for the entire conversation, while the system message is part of the message history.

Proposal: introduce a new field `gen_ai.request.preamble`
Proposal: should be transformed and mapped to `gen_ai.user.message` as a list of JSON objects
Proposal: Introduce `gen_ai.request.connectors`

Proposal: Introduce `gen_ai.request.documents`
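The Cohere-specific proposals above could be recorded roughly as follows. The request-key names (`preamble`, `connectors`, `documents`) follow Cohere's Chat API; the serialization choice is my assumption: OTel attribute values only accept primitives and arrays of primitives, so lists of objects are JSON-encoded here pending a decision on how complex values should be represented.

```python
import json

def cohere_request_attributes(request: dict) -> dict:
    """Sketch of recording the proposed Cohere-specific attributes.

    Illustrative only: attribute names follow the proposals above;
    JSON-encoding lists of objects is an assumption, since OTel
    attributes cannot hold nested structures directly.
    """
    attrs = {}
    if "preamble" in request:
        attrs["gen_ai.request.preamble"] = request["preamble"]
    for key in ("connectors", "documents"):
        if key in request:
            # Lists of objects -> JSON string, to fit OTel attribute types.
            attrs[f"gen_ai.request.{key}"] = json.dumps(request[key])
    return attrs
```

Whether JSON strings are acceptable for these attributes, or whether the convention should flatten them instead, seems like a point worth settling on the WG call.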
`gen_ai.request.frequency_penalty` and `gen_ai.request.presence_penalty`.

`gen_ai.request.safety_settings`, which is a list of objects.
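Since `gen_ai.request.safety_settings` is a list of objects, it runs into the same OTel attribute-type limitation noted for the Cohere fields. A minimal sketch of one option, JSON-encoding the value (the category/threshold shape shown is Google's Gemini API as I understand it, used here only for illustration):

```python
import json

# Example value shaped like Google's safety settings (assumed for
# illustration; check the vendor docs for the authoritative schema).
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
]

# OTel attributes accept only primitives and arrays of primitives,
# so the list of objects is serialized to a single JSON string.
attr_value = json.dumps(safety_settings)
```

The trade-off is that a JSON string is opaque to backends that would otherwise index individual fields, so the WG may prefer flattened attributes instead.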
In #825 we only mention `openai`, but we should pick a set of vendors/systems we want to support and prototype/validate whether the attributes/events are applicable to them.