karthink / gptel

A simple LLM client for Emacs

[FR] Optionally log all interactions with models #165

Closed NightMachinery closed 5 months ago

NightMachinery commented 6 months ago

With these kinds of extensions, one cannot be sure what context, system prompt, temperature, etc. are being used in any given interaction. Logging all the interactions can be helpful for diagnostics and making sure the behavior one is expecting is actually happening.

E.g., it seems that setting GPTEL_MODEL changes the model for the current subheading, and GPTEL_TOPIC starts a new conversation from that point onwards. But this is just a guess on the user's part, and it's hard to verify.

```org
* _
:PROPERTIES:
:GPTEL_MODEL: gpt-4-turbo
:GPTEL_TOPIC: f2
:END:
*** Which model are you?

I am the GPT-4 Turbo model.
```

karthink commented 6 months ago

@NightMachinery What would you like to see in the log for it to be useful?

Anything else? System messages can be quite long (> 1000 words, example), so I'm not sure it's practical to log these.

NightMachinery commented 6 months ago

@karthink Thanks. All of the above are useful, but the response matters less to me. I can easily notice a failed response anyway.

The cost of each interaction would be a welcome addition.

I'd like to see the complete prompt (including the system message) for each request, even if it's very long. In fact, I would most probably change the system prompt if it's too long, since that would be very expensive. I don't think a cluttered log file is too big of a problem; I would only look at it for diagnostics anyway.

NightMachinery commented 6 months ago

One way to reduce the clutter in the log file is to keep a directory of log files and create a new file for each interaction, using the datetime plus a random number for the file name. A command like gptel-open-last-log could then open the most recent interaction's log file.
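
A rough Emacs Lisp sketch of what that could look like; all of the names here (`my/gptel-log-directory`, `my/gptel-new-log-file`, `gptel-open-last-log`) are hypothetical and not part of gptel:

```elisp
;; Hypothetical sketch of the suggested per-interaction log scheme.
(defvar my/gptel-log-directory
  (expand-file-name "gptel-logs/" user-emacs-directory)
  "Directory holding one log file per gptel interaction.")

(defun my/gptel-new-log-file ()
  "Return a fresh log file name built from the datetime plus a random number."
  (make-directory my/gptel-log-directory t)
  (expand-file-name (format "%s-%04d.json"
                            (format-time-string "%Y%m%dT%H%M%S")
                            (random 10000))
                    my/gptel-log-directory))

(defun gptel-open-last-log ()
  "Open the most recent interaction's log file."
  (interactive)
  (let ((logs (directory-files my/gptel-log-directory t "\\.json\\'")))
    (if logs
        ;; File names start with the datetime, so a descending string sort
        ;; puts the newest log first.
        (find-file (car (sort logs #'string>)))
      (message "No gptel logs found in %s" my/gptel-log-directory))))
```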

karthink commented 5 months ago

Logging support added:

Log entries are in JSON, with one exception: streaming responses are typically not delivered as valid JSON and are logged as-is. If you want to parse the log buffer programmatically, your JSON parser will need to work around this.
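
To turn this on, something like the following should work; the variable name `gptel-log-level` and the `*gptel-log*` buffer are assumptions about the current gptel version, so check your install if they differ:

```elisp
;; Assumed names: `gptel-log-level' and the "*gptel-log*" buffer.
;; See `M-x customize-group RET gptel' if your version differs.
(setq gptel-log-level 'info)   ; log request/response data; try 'debug for more detail
;; View the log with: M-x switch-to-buffer RET *gptel-log* RET
```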

@NightMachinery: feedback welcome!