Description of changes: Updated the local OTEL repo to support the following GenAI attributes for BedrockRuntime InvokeModel calls:
gen_ai.request.max_tokens
gen_ai.request.temperature
gen_ai.request.top_p
gen_ai.usage.input_tokens
gen_ai.usage.output_tokens
gen_ai.response.finish_reasons
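As a rough sketch of how the request-side attributes above might be populated, the snippet below parses an Amazon Titan-style InvokeModel request body and maps its generation config to the `gen_ai.request.*` attributes. The `textGenerationConfig` field names follow the Titan text model request format and are illustrative only; other Bedrock model families use different request shapes, and the actual implementation in this PR is in C#.

```python
import json

def extract_request_attributes(request_body: bytes) -> dict:
    """Sketch: map a Titan-style InvokeModel request body to
    gen_ai.request.* span attributes. Field names (textGenerationConfig,
    maxTokenCount, temperature, topP) are assumptions based on the
    Titan text request format."""
    body = json.loads(request_body)
    config = body.get("textGenerationConfig", {})
    attrs = {}
    if "maxTokenCount" in config:
        attrs["gen_ai.request.max_tokens"] = config["maxTokenCount"]
    if "temperature" in config:
        attrs["gen_ai.request.temperature"] = config["temperature"]
    if "topP" in config:
        attrs["gen_ai.request.top_p"] = config["topP"]
    return attrs

# Hypothetical Titan-style request body for illustration
sample_request = json.dumps({
    "inputText": "Hello",
    "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.7, "topP": 0.9},
}).encode()
print(extract_request_attributes(sample_request))
```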
E2E testing below shows that these additional GenAI attributes are emitted for InvokeModel calls across each of the 6 Bedrock model families supported. GenAI attributes are incomplete for several models because the Bedrock API for .NET does not expose the response headers in the API response, so only the fields available in the response body can be extracted. See this doc for more details.
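To illustrate the response-body-only extraction described above, here is a minimal sketch that pulls the usage and finish-reason attributes from a Titan-style InvokeModel response body. The field names (`inputTextTokenCount`, `results[].tokenCount`, `completionReason`) follow the Titan text model response format and are assumptions for the purpose of the example; each model family below returns a different body shape, which is why attribute coverage varies per model.

```python
import json

def extract_response_attributes(response_body: bytes) -> dict:
    """Sketch: map a Titan-style InvokeModel response body to
    gen_ai.usage.* and gen_ai.response.* span attributes. Other
    Bedrock model families use different response shapes."""
    body = json.loads(response_body)
    attrs = {}
    if "inputTextTokenCount" in body:
        attrs["gen_ai.usage.input_tokens"] = body["inputTextTokenCount"]
    results = body.get("results", [])
    if results:
        attrs["gen_ai.usage.output_tokens"] = results[0].get("tokenCount")
        attrs["gen_ai.response.finish_reasons"] = [
            r["completionReason"] for r in results if "completionReason" in r
        ]
    return attrs

# Hypothetical Titan-style response body for illustration
sample_response = json.dumps({
    "inputTextTokenCount": 12,
    "results": [{"tokenCount": 34, "completionReason": "FINISH"}],
}).encode()
print(extract_response_attributes(sample_response))
```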
Amazon Titan:
Anthropic Claude:
Meta Llama:
Cohere Command:
AI21 Labs Jamba:
Mistral AI:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.