ParetoOptimalDev closed this issue 3 months ago.
Looks interesting. I don't think this breaks gptel's abstraction, or llm's for that matter. But adding support might be tricky. If there's a REST API it will be easy enough, but I couldn't find one even though there's a Discord bot. If you find a REST API and want to experiment, duplicating and modifying one of the individual backend files (gptel-ollama, gptel-openai, etc.) is all you'll need to add support to gptel.
If there's no REST API and communication is over a process pipe with Python, the infrastructure for it isn't present in gptel, and I don't intend to add it (purely for lack of time). In this case it might be worth adding process support to the llm library, which will make it available to all LLM packages in the future, including gptel at some point if I can manage the transition without loss of performance/feature parity.
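For the REST API route, a minimal sketch of what registering the backend might look like, assuming the service turns out to expose an OpenAI-compatible endpoint (the backend name, host, port, and model name below are placeholders, not anything this project actually provides):

```elisp
;; Sketch only: assumes a hypothetical OpenAI-compatible server
;; listening locally. Adjust :host and :models to whatever the
;; service actually exposes.
(gptel-make-openai "my-local-backend"   ;arbitrary display name
  :protocol "http"                      ;local server, no TLS
  :host "localhost:8000"                ;placeholder host:port
  :stream t                             ;enable streaming responses
  :models '("local-model"))             ;placeholder model name
```

If the API isn't OpenAI-compatible, copying and adapting one of the backend files as described above would be needed instead.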
Are you still interested in adding support for this feature?
Please reopen if you're still interested in adding this.
I'm not sure this makes sense, given that I don't think it's OpenAI-like or that it can be made OpenAI-like. Perhaps a sample session will show you what I mean:
It also greys out the thought bubbles representing the LLM's "thoughts".

I don't know if integrating something like this makes sense in gptel, given that it would then require tagging different sorts of messages:
Then again, maybe this doesn't break the gptel abstraction, or that of the llm.el library I heard is being worked on for core Emacs?
I haven't dived in to try to support this yet, but I might later today.

I'd appreciate any thoughts or pointers beforehand, as well as potential challenges, or... letting me know it won't really work well, if that's the case.