david-strejc opened 8 months ago
Thanks for the request; it's hard to gauge the effort until we see the API, but it's a good idea. Depending on how good the API is, we can choose whether to adopt it or whether to mimic the same functionality. Do you know who's going to be in charge of the API? Historically, those projects don't do as good a job with APIs as they do with their core code.
I only know that it is quite a new project - probably two months old, from people at Berkeley - but they promised an API in version 1.0. It already works through the command line, but there is probably a lot of work still to be done. I am watching this project because I need to connect our own data with LLMs, and this is probably the only "right way" for now.
Yes, it's very interesting indeed. I wonder if it's "the" way, or just a stepping stone, like embeddings, RAG, and large context windows.
There is a project which will probably add "memory" to LLM models in the future.
They don't have an API yet, but it should be there soon.
Maybe implement this connection?
https://github.com/cpacker/MemGPT
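Since the thread notes the project currently works only through the command line, one generic way to bridge it from another application would be to shell out to the CLI and capture its output. A minimal sketch of that bridge (the `memgpt` binary name and single-argument invocation are assumptions for illustration, not the project's documented interface):

```python
import subprocess

def query_memgpt_cli(prompt: str, binary: str = "memgpt") -> str:
    """Send a prompt to an external CLI tool and return its stdout.

    NOTE: the default binary name and argument shape are hypothetical;
    adjust them to whatever interface the project actually exposes.
    """
    result = subprocess.run(
        [binary, prompt],      # invoke the external tool with the prompt
        capture_output=True,   # collect stdout/stderr instead of printing
        text=True,             # decode output as str, not bytes
        check=True,            # raise CalledProcessError on non-zero exit
    )
    return result.stdout.strip()
```

Once the promised v1.0 API lands, the subprocess call could be swapped for an HTTP client without changing callers, which keeps the integration surface small while the project is still moving fast.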