Fix an issue that occurred when the same OpenAI client was instrumented more than once (mostly useful for tests)
```ts
const openai_ = new OpenAI();
// The initial client needs to be passed when instrumenting
const openai = client.instrumentation.openai({
  client: openai_
});

const literalaiStepId = uuidv4();

await openai.chat.completions.create(
  {
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is the capital of Canada?' }
    ]
  },
  {
    literalaiTags: ['tag3', 'tag4'],
    literalaiMetadata: { otherKey: 'otherValue' },
    // You can also specify a Step ID at the call level.
    // When this generation is logged, it will be created with the given Step ID.
    literalaiStepId
  }
);
```
LangChain
Add an option to specify a Step ID in a call's metadata
Forward tags and metadata from LangChain to Literal AI generations
```ts
const literalaiStepId = uuidv4();

// `cb` is the LangChain callback handler obtained from the Literal AI instrumentation
await model.invoke('Hello, how are you?', {
  callbacks: [cb],
  metadata: {
    key: 'value',
    // Use literalaiStepId in the metadata to specify a Step ID
    literalaiStepId
  },
  tags: ['tag1', 'tag2']
});
```
Vercel AI SDK: add an option to set tags, metadata and a Step ID at the call level
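A sketch of what this could look like, assuming the Vercel AI SDK's `generateText` is wrapped through the client's Vercel instrumentation (the exact entry point and option names are assumptions based on the OpenAI and LangChain examples above):

```ts
import { generateText as vercelGenerateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Assumption: the Literal AI client exposes a Vercel AI SDK instrumentation
// that wraps the SDK's functions; the exact method name may differ.
const generateText = client.instrumentation.vercel.instrument(vercelGenerateText);

const { text } = await generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: 'What is the capital of Canada?',
  // Literal AI options passed at the call level
  literalaiTags: ['tag1', 'tag2'],
  literalaiMetadata: { key: 'value' },
  literalaiStepId: uuidv4()
});
```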
Decorate wrapper: this wrapper allows you to store tags, metadata and a Step ID inside the context
Threads, steps and generations logged inside the context will inherit the tags & metadata
The first generation/step created inside the context will inherit the step ID
```ts
await client.decorate({ metadata, tags, stepId }).wrap(async () => {
  // This generation will be logged with the provided stepId, metadata and tags
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say hello!' }]
  });

  // Because Step IDs are unique, this second call will revert to the default
  // behaviour of assigning a random UUID to the generation when it is logged
  const secondCompletion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say hello!' }]
  });
});
```
To test this PR:
Changes