Chainlit / literalai-typescript

https://docs.literalai.com
Apache License 2.0

feat(instrumentation): make the openai instrumentation context aware #42

Closed Dam-Buty closed 2 months ago

Dam-Buty commented 2 months ago

The idea is to simplify the use of the OpenAI instrumentation, making it work like the Python one: you instrument the library once, and every call is then automatically captured and logged without having to instrument each result.

This is made possible by the AsyncLocalStorage-based context added in the previous version.
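A minimal sketch of how such a context can propagate through async calls, using Node's built-in `AsyncLocalStorage` (the store shape and helper names here are illustrative, not the SDK's actual internals):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical store: the step currently in scope, if any.
interface StoreEntry {
  currentStep: string | null;
}

const storage = new AsyncLocalStorage<StoreEntry>();

// A wrap()-style helper: run the callback with a step bound to the context.
function wrapWithStep<T>(stepName: string, cb: () => T): T {
  return storage.run({ currentStep: stepName }, cb);
}

// Anywhere inside the callback (even across awaits), the step is visible.
function currentStep(): string | null {
  return storage.getStore()?.currentStep ?? null;
}
```

Because the store follows the async call chain, instrumented code can read the current step without it being passed explicitly.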

Old syntax

    const response = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: []
    });

    await client.instrumentation.openai(response);

    const run = thread.run({ name: "Bla" }).send()

    const response2 = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: []
    });

    await client.instrumentation.openai(response2, run);

New syntax

    client.instrumentation.openai();

    // Will be logged as a simple generation outside of threads/steps
    const response = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: []
    });

    client.thread({ name: "Bla" }).wrap(async () => {
      // Will be logged as a step inside the "Bla" thread
      return openai.chat.completions.create({
        model: 'gpt-3.5-turbo',
        messages: []
      });
    })
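The one-time `instrumentation.openai()` call can be understood as patching the client's methods so every call is intercepted. A hedged sketch of that pattern, with a stand-in class instead of the real OpenAI SDK (all names here are illustrative):

```typescript
// Stand-in for the real OpenAI client; only the shape matters here.
class FakeOpenAI {
  async createCompletion(opts: { model: string }): Promise<string> {
    return `completion from ${opts.model}`;
  }
}

// Captured results; in the real SDK this would become logged generations.
const logged: string[] = [];

// One-time instrumentation: wrap the method so every call is captured.
function instrument(client: FakeOpenAI): void {
  const original = client.createCompletion.bind(client);
  client.createCompletion = async (opts: { model: string }) => {
    const result = await original(opts);
    logged.push(result); // capture transparently, then return as usual
    return result;
  };
}
```

After `instrument(client)` runs once, callers keep using the client exactly as before, which is why no per-response instrumentation call is needed in the new syntax.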
linear[bot] commented 2 months ago
ENG-1634 Change the OpenAI Instrumentation

It should now be possible, with the context, to instrument OpenAI calls the same way we do in the [Python SDK](https://docs.getliteral.ai/python-client/api-reference/client#instrument-openai). This means we instrument the openai methods once, outside of any context; then, when we intercept a call, we determine where to put the generation:

* if we have a step in the context, we push the generation to that step
* otherwise we push the generation on its own, without a step
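The dispatch rule above can be sketched by combining the context store with the interception point. This is a simplified model under assumed names (`Generation`, `logGeneration`, the store shape), not the SDK's real types:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical generation record: which step (if any) it belongs to.
type Generation = { model: string; parentStep: string | null };

const ctx = new AsyncLocalStorage<{ step: string }>();
const sink: Generation[] = [];

// Called from the intercepted openai method: attach the generation to the
// step found in the context, or push it standalone when there is none.
function logGeneration(model: string): void {
  const step = ctx.getStore()?.step ?? null;
  sink.push({ model, parentStep: step });
}
```

The same `logGeneration` call thus produces either a threaded step generation or a bare one, purely depending on the surrounding context.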