simonw / llm

Access large language models from the command-line
https://llm.datasette.io
Apache License 2.0

Mechanism for models to influence Markdown display of their logs #285

Open simonw opened 1 year ago

simonw commented 1 year ago

It's a bit annoying that the only way to see the log probs is to dig around in the SQLite database for them.
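A minimal sketch of that workaround, pulling response JSON straight out of the log database with `sqlite3`. The table name (`responses`), column name (`response_json`), and the database path are assumptions for illustration, not the actual schema:

```python
import json
import sqlite3


def fetch_logged_json(db_path, limit=5):
    """Return the most recent logged response JSON blobs as Python dicts.

    Assumes a hypothetical `responses` table with a `response_json`
    text column -- adjust to the real schema in logs.db.
    """
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT response_json FROM responses ORDER BY id DESC LIMIT ?",
        (limit,),
    ).fetchall()
    conn.close()
    return [json.loads(row[0]) for row in rows]
```

From there you can inspect whatever the model stored, logprobs included, but it is manual spelunking rather than something `llm logs` surfaces for you.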

[...]

One option would be to teach the `llm logs` Markdown output how to display them. That's nicer than messing around in SQLite directly.

It's a bit weird to have code in `llm logs` that's specific to the OpenAI models, though. Maybe I should add a model plugin mechanism that allows models to influence the display of logs?

Originally posted by @simonw in https://github.com/simonw/llm/issues/284#issuecomment-1724782862

simonw commented 1 year ago

The code that generates Markdown currently lives in the `llm logs list` command in `cli.py`:

https://github.com/simonw/llm/blob/4d18da4e1149b69b44a0480729b4e2ef24bc756a/llm/cli.py#L740-L763

I could:

  1. Move that logic into a method on `Model` so that other models can subclass and customize it.
  2. Introduce a method on `Model` that appends extra Markdown to that log, defaulting to returning nothing, so that other models can override it to add output.

I slightly prefer 2.
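Option 2 might look something like the following sketch. The method name `build_logs_markdown_extra` and the shape of the response JSON are hypothetical, invented here for illustration; only the pattern (a no-op default hook that model subclasses override) reflects the proposal:

```python
class Model:
    def build_logs_markdown_extra(self, response_json):
        """Extra Markdown to append to this response's log entry.

        Hypothetical hook name. Default is no extra output, so the
        core Markdown logic stays model-agnostic.
        """
        return ""


class OpenAIChat(Model):
    def build_logs_markdown_extra(self, response_json):
        # Assumed JSON shape: a top-level list of {token, logprob} dicts.
        logprobs = response_json.get("logprobs")
        if not logprobs:
            return ""
        lines = ["## Logprobs", ""]
        for item in logprobs:
            lines.append("- `{}`: {}".format(item["token"], item["logprob"]))
        return "\n".join(lines)
```

The `llm logs list` renderer would then call the hook after emitting its standard Markdown for each entry, appending whatever non-empty string comes back.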

simonw commented 1 year ago

When I use this to implement logprobs, I need to remember that streaming and non-streaming completion models end up storing them in different shapes in the JSON log in the database. https://github.com/simonw/llm/issues/284#issuecomment-1724834791
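So any display hook would need to normalize the two shapes before rendering. The two shapes below are illustrative assumptions, not the actual stored layouts (see the linked comment for those); the point is only that both need to collapse into one structure:

```python
def normalize_logprobs(response_json):
    """Flatten logprobs into one list, whichever assumed shape we got.

    Assumed non-streaming shape: a single top-level "logprobs" list.
    Assumed streaming shape: per-chunk "logprobs" lists under "chunks".
    Both assumptions are for illustration only.
    """
    if "logprobs" in response_json:
        return response_json["logprobs"]
    out = []
    for chunk in response_json.get("chunks", []):
        out.extend(chunk.get("logprobs", []))
    return out
```

With a normalizer like this, the Markdown-building code only has to handle one shape regardless of whether the completion was streamed.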