brainlid / langchain

Elixir implementation of an AI focused LangChain style framework.
https://hexdocs.pm/langchain/

Add telemetry support? #153

Open brainlid opened 2 months ago

brainlid commented 2 months ago

@brainlid Do you think it would be useful to add telemetry at this point? I imagine emitting telemetry events for the duration of the response cycle, token usage, and errors.

If you think it is a good idea, I could work on a PR.

Thanks for the work!

Originally posted by @tubedude in https://github.com/brainlid/langchain/discussions/103#discussioncomment-9930460
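For context, a minimal sketch of what emitting such events could look like with `:telemetry.span/3` (the `[:langchain, :chain, :call]` event name and the token fields are hypothetical, not part of the library today):

```elixir
defmodule TelemetrySketch do
  # Hypothetical wrapper: measures the duration of one chain/LLM call and
  # emits [:langchain, :chain, :call, :start | :stop | :exception] events.
  # :telemetry.span/3 records duration and monotonic time automatically.
  def run_with_telemetry(chain, fun) do
    :telemetry.span([:langchain, :chain, :call], %{chain: chain}, fn ->
      result = fun.()
      # Token usage would come from the provider response; placeholder here.
      {result, %{input_tokens: 0, output_tokens: 0}}
    end)
  end
end
```

A caller could then wrap the existing run call, e.g. `TelemetrySketch.run_with_telemetry(chain, fn -> LLMChain.run(chain) end)`, and any attached handler would see the start/stop events.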

brainlid commented 2 months ago

I'm open to telemetry support. I'm not very experienced in what makes a good telemetry event and would love feedback, PRs, etc.

Also: @derekkraan https://github.com/brainlid/langchain/discussions/103#discussioncomment-9933231

dkuku commented 2 months ago

I was just wondering why there is no support for it yet :D I imagine it would be good to emit the request and response for debugging purposes.

I added this on top of the getting-started Livebook and it's enough to see what is sent, but it would be better to have dedicated langchain events:

frame = Kino.Frame.new()

defmodule LiveTelemetryHandler do
  # The module body is evaluated in the surrounding lexical scope, so this
  # attribute captures the `frame` binding from the Livebook cell above.
  @frame frame

  def handle_event([:finch, :request, :start] = event, measurements, metadata, _config) do
    content = [
      Kino.Markdown.new("### Event: `#{inspect(event)}`"),
      Kino.Markdown.new("#### Measurements:"),
      Kino.Text.new(pretty_print(measurements)),
      Kino.Markdown.new("#### Metadata:"),
      # The raw request body shows exactly what was sent to the LLM provider.
      Kino.Text.new(pretty_print(IO.iodata_to_binary(metadata.request.body))),
      Kino.Markdown.new("---")
    ]

    Enum.each(content, &Kino.Frame.append(@frame, &1))
  end

  def handle_event([:finch, :request, :stop] = event, measurements, metadata, _config) do
    content = [
      Kino.Markdown.new("### Event: `#{inspect(event)}`"),
      Kino.Markdown.new("#### Measurements:"),
      Kino.Text.new(pretty_print(measurements)),
      Kino.Markdown.new("#### Metadata:"),
      # Kino.Text.new(elem(metadata.result, 1).body),
      Kino.Markdown.new("---")
    ]

    Enum.each(content, &Kino.Frame.append(@frame, &1))
  end

  # Indent inspect/2 output so it renders readably inside the frame.
  defp pretty_print(data) do
    data
    |> inspect(pretty: true, width: 60)
    |> String.split("\n")
    |> Enum.map_join("\n", &("  " <> &1))
  end
end
:telemetry.attach_many("live-telemetry", [
  [:finch, :request, :start],
  [:finch, :request, :stop]
  # Add more event names as needed
], &LiveTelemetryHandler.handle_event/4, nil)

frame
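Outside Livebook, a consumer of hypothetical dedicated events could attach a plain logging handler; a sketch, assuming event names like `[:langchain, :llm, :call, :start | :stop]` (these are not defined by the library yet):

```elixir
defmodule LangchainTelemetryLogger do
  require Logger

  # Attach one handler function to several hypothetical langchain events.
  def attach do
    :telemetry.attach_many(
      "langchain-logger",
      [
        [:langchain, :llm, :call, :start],
        [:langchain, :llm, :call, :stop]
      ],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event([:langchain, :llm, :call, :stop], measurements, metadata, _config) do
    # :telemetry.span reports duration in native time units; convert for display.
    ms = System.convert_time_unit(measurements.duration, :native, :millisecond)
    Logger.info("LLM call finished in #{ms}ms (metadata: #{inspect(metadata)})")
  end

  def handle_event(_event, _measurements, _metadata, _config), do: :ok
end
```

The same handler shape would let tools forward duration and metadata to an external collector instead of Logger.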

georgeguimaraes commented 1 month ago

I guess the callbacks don't currently return timestamps for the calls to OpenAI or other LLMs? I'm looking for that so I can send traces to Langfuse, Langsmith, or other observability tools.

brainlid commented 1 month ago

Hi @georgeguimaraes! The APIs themselves don't return a server-created timestamp, so there is nothing to return. The callbacks fire as events happen, so if you want or need a timestamp, just generate it at that time.
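One way to do that is to wrap each callback so the exporter receives a client-side timestamp captured the moment the callback fires; a sketch (the callback key below is illustrative — check the library docs for the exact handler names your version supports):

```elixir
defmodule TimestampedCallback do
  # Wraps any 2-arity callback so it receives a map with a client-side
  # timestamp generated at the moment the callback fires.
  def wrap(fun) do
    fn chain, data ->
      fun.(%{at: DateTime.utc_now(), chain: chain, data: data})
    end
  end
end
```

Usage could look like `%{on_llm_new_message: TimestampedCallback.wrap(&MyExporter.send/1)}`, where `MyExporter.send/1` is a hypothetical function that forwards the timestamped event to Langfuse or Langsmith.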