karthink / gptel

A simple LLM client for Emacs
GNU General Public License v3.0

Support memgpt (llm+memory) #161

Closed ParetoOptimalDev closed 3 months ago

ParetoOptimalDev commented 6 months ago

I'm not sure if this makes sense, given that I don't think MemGPT's interface is OpenAI-like, or that it can be made OpenAI-like. Perhaps a sample session will show you what I mean:

~/code/MemGPT $ memgpt run

? Would you like to select an existing agent? Yes
? Select agent: agent_60

🔁 Using existing agent agent_60

Hit enter to begin (will request first MemGPT message)

💭 Responding to user's request.
🤖 Memory updated. If you need anything else, feel free to ask!
> Enter your message: Hi
💭 Greeting the user.
🤖 Hello Chad! How can I assist you today?
> Enter your message: My name is Brad
💭 Correcting the user's name.
🤖 Hi Brad! Thanks for pointing that out. How can I assist you today?
> Enter your message: What do you think of interacting with llm's in emacs?
💭 Sharing thoughts on using llm's in emacs.
🤖 Interacting with llms in Emacs can be quite useful. It allows for seamless integration of powerful 
AI models directly within your workflow, enabling you to quickly access and utilize their capabilities.
This can greatly enhance productivity and efficiency when working on complex tasks or projects.
> Enter your message: 

It also greys out the thought bubbles representing the llm's "thoughts".

I don't know if integrating something like this makes sense with gptel, given that it will require tagging the different sorts of messages.

Then again, maybe this doesn't break the gptel abstraction, or that of the llm.el library I heard was being worked on for core Emacs?
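To make the tagging idea concrete: a client would need to distinguish at least the "thought" messages (greyed out in MemGPT's CLI) from the visible assistant replies. A minimal sketch in Python, with tag names and the `Message`/`render` helpers entirely my own invention, not anything from gptel or MemGPT:

```python
from dataclasses import dataclass

# Hypothetical message tags; MemGPT's CLI shows at least these kinds of
# output. The names here are illustrative, not from gptel or MemGPT.
THOUGHT = "thought"      # 💭 internal reasoning, greyed out in the UI
FUNCTION = "function"    # memory updates and other tool results
ASSISTANT = "assistant"  # 🤖 the visible reply to the user

@dataclass
class Message:
    kind: str
    text: str

def render(messages):
    """Return only the text a chat buffer would display prominently;
    a real UI might grey out or fold the thoughts instead of dropping them."""
    return [m.text for m in messages if m.kind == ASSISTANT]

session = [
    Message(THOUGHT, "Greeting the user."),
    Message(ASSISTANT, "Hello Chad! How can I assist you today?"),
    Message(FUNCTION, "Memory updated."),
]
print(render(session))  # → ['Hello Chad! How can I assist you today?']
```

The point is just that the response stream is no longer a flat list of assistant text: each chunk carries a kind, and the client has to decide how to present each one.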

I haven't dug in to try and support this yet, but I might later today.

I'd appreciate any thoughts or pointers beforehand, as well as potential challenges, or just letting me know it won't really work well, if that's the case.

karthink commented 6 months ago

Looks interesting. I don't think this breaks gptel's abstraction, or llm's for that matter. But adding support might be tricky. If there's a REST API it will be easy enough, but I couldn't find one even though there's a Discord bot. If you find a REST API and want to experiment, duplicating and modifying one of the individual backend files (gptel-ollama, gptel-openai etc) is all you'll need to add support to gptel.
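If a REST API did exist, the backend's job would mostly be building the right request payload. A sketch of what that might look like, in Python for brevity; the endpoint shape, the reuse of the agent name as the `model` field, and the `build_request` helper are all assumptions, since MemGPT had no documented REST API at the time:

```python
import json

def build_request(agent_id, user_message):
    """Build a hypothetical OpenAI-style chat request body for a MemGPT
    agent. Every field here is an assumption about an API that may not
    exist; a real backend would match whatever schema MemGPT exposes."""
    payload = {
        "model": agent_id,  # e.g. "agent_60" from the session above
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }
    return json.dumps(payload)

print(build_request("agent_60", "Hi"))
```

If the API turned out to be OpenAI-compatible, duplicating gptel-openai and pointing it at the new host would likely be most of the work.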

If there's no REST API and communication is over a process pipe with Python, the infrastructure for it isn't present in gptel, and I don't intend to add it (purely for lack of time). In this case it might be worth adding process support to the llm library, which will make it available to all LLM packages in the future. Including gptel at some point if I can manage the transition without loss of performance/feature parity.
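For concreteness, "communication over a process pipe" could be as simple as one JSON object per line over the subprocess's stdin/stdout. A minimal self-contained sketch, where the worker is a stand-in echo loop rather than MemGPT itself, and the protocol is invented for illustration:

```python
import json
import subprocess
import sys

# Stand-in worker process: reads one JSON message per line from stdin,
# writes one JSON reply per line to stdout. A real integration would
# launch MemGPT's Python process here instead.
WORKER = r"""
import json, sys
for line in sys.stdin:
    msg = json.loads(line)
    reply = {"role": "assistant", "content": "echo: " + msg["content"]}
    print(json.dumps(reply), flush=True)
"""

def ask(proc, text):
    """Send a user message down the pipe and read back one JSON reply."""
    proc.stdin.write(json.dumps({"role": "user", "content": text}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

proc = subprocess.Popen([sys.executable, "-c", WORKER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)
print(ask(proc, "Hi"))  # → {'role': 'assistant', 'content': 'echo: Hi'}
proc.stdin.close()
proc.wait()
```

This is the kind of plumbing (process lifecycle, line-buffered framing, async reads) that would have to live somewhere, whether in gptel or in the llm library.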

karthink commented 5 months ago

Are you still interested in adding support for this feature?

karthink commented 3 months ago

Please reopen if you're still interested in adding this.