brainlid opened 2 months ago
I'm open to telemetry support. I'm not very experienced in what makes a good telemetry event and would love feedback, PRs, etc.
Also: @derekkraan https://github.com/brainlid/langchain/discussions/103#discussioncomment-9933231
I was just wondering why there is no support for it yet :D I imagine it would be good to emit the request and response for debugging purposes.
I added this on top of the getting started Livebook and it's enough to see what is sent, but it would be better to have dedicated LangChain events:
```elixir
frame = Kino.Frame.new()

defmodule LiveTelemetryHandler do
  @frame frame

  def handle_event([:finch, :request, :start] = event, measurements, metadata, _config) do
    content = [
      Kino.Markdown.new("### Event: `#{inspect(event)}`"),
      Kino.Markdown.new("#### Measurements:"),
      Kino.Text.new(pretty_print(measurements)),
      Kino.Markdown.new("#### Metadata:"),
      Kino.Text.new(pretty_print(IO.iodata_to_binary(metadata.request.body))),
      Kino.Markdown.new("---")
    ]

    Enum.each(content, &Kino.Frame.append(@frame, &1))
  end

  def handle_event([:finch, :request, :stop] = event, measurements, metadata, _config) do
    content = [
      Kino.Markdown.new("### Event: `#{inspect(event)}`"),
      Kino.Markdown.new("#### Measurements:"),
      Kino.Text.new(pretty_print(measurements)),
      Kino.Markdown.new("#### Metadata:"),
      # Kino.Text.new(elem(metadata.result, 1).body),
      Kino.Markdown.new("---")
    ]

    Enum.each(content, &Kino.Frame.append(@frame, &1))
  end

  defp pretty_print(data) do
    data
    |> inspect(pretty: true, width: 60)
    |> String.split("\n")
    |> Enum.map_join("\n", &(" " <> &1))
  end
end

:telemetry.attach_many(
  "live-telemetry",
  [
    [:finch, :request, :start],
    [:finch, :request, :stop]
    # Add more event names as needed
  ],
  &LiveTelemetryHandler.handle_event/4,
  nil
)

frame
```
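For comparison, here is a minimal sketch of what dedicated LangChain events could look like, wrapping an LLM call in `:telemetry.span/3` (which emits matching `:start`/`:stop`/`:exception` events with a `:duration` measurement). The `[:langchain, :llm, :call]` event prefix and metadata keys are my assumptions for illustration, not the library's actual API; it assumes the `:telemetry` package (already a dependency of Finch) is available:

```elixir
defmodule LangChainTelemetrySketch do
  # Hypothetical helper: wrap an LLM call so consumers can subscribe to
  # [:langchain, :llm, :call, :start | :stop | :exception] events.
  def call_llm(model, prompt, fun) do
    :telemetry.span([:langchain, :llm, :call], %{model: model, prompt: prompt}, fn ->
      result = fun.()
      # :telemetry.span adds :duration to the stop measurements itself;
      # the map here becomes the stop event's metadata.
      {result, %{model: model}}
    end)
  end
end
```

A handler attached to `[:langchain, :llm, :call, :stop]` would then receive the duration without caring which HTTP client made the request.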
I guess the callbacks don't currently return timestamps for the calls to OpenAI or other LLMs? I'm looking for that so I can send it to Langfuse, Langsmith, or other observability tools.
Hi @georgeguimaraes! The APIs themselves don't return a server-created timestamp, so there is nothing to return. The callbacks fire as they happen, so if you want or need a timestamp, just generate it at that time.
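For example, one sketch of client-side timestamping: accumulate `{name, timestamp, data}` tuples in an `Agent` from whatever callback the chain exposes, then ship them to Langfuse/Langsmith later. The `:llm_response` event name is made up for illustration:

```elixir
# Stamp events client-side as callbacks fire, since the LLM APIs don't
# return a server-created timestamp.
{:ok, events} = Agent.start_link(fn -> [] end)

record = fn name, data ->
  Agent.update(events, fn acc ->
    [{name, DateTime.utc_now(), data} | acc]
  end)
end

# e.g. called from inside a response callback:
record.(:llm_response, %{content: "Hello!"})
```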
@brainlid Do you think it would be useful to add telemetry at this point? I imagine emitting telemetry events with the duration of the response cycle, token usage, and errors.
If you think it is a good idea, I could work on a PR.
Thanks for the work!
Originally posted by @tubedude in https://github.com/brainlid/langchain/discussions/103#discussioncomment-9930460