MariasStory closed this issue 1 year ago.
Nope, I'll only focus on OpenAI's models for now, and I want to keep this program as lightweight as possible. With a local model like LLaMA, lots of people can't run it, and it would increase the setup difficulty of this program. So for now I will only use the remote API.
Thanks for the answer. You may want to consider a local API such as the one offered by LocalAI.
Maybe you could add long-term memory support to llama.cpp, or use it as a "local back-end".
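For what it's worth, both LocalAI and llama.cpp's built-in server expose an OpenAI-compatible HTTP API, so in principle the existing remote-API code could be reused by pointing it at a local endpoint. Here is a minimal sketch using the official `openai` Python client; the port, base URL, and model name are illustrative assumptions, not values from this project:

```python
from openai import OpenAI

# LocalAI and llama.cpp's server both speak the OpenAI wire format,
# so the same client code can target either backend by swapping the
# base URL. Port 8080 and the model name below are assumptions;
# adjust them to however the local server is configured.
client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed",  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # mapped to a local model by the server's config
    messages=[{"role": "user", "content": "Hello from a local backend"}],
)
print(response.choices[0].message.content)
```

Switching back to OpenAI's hosted API would then just mean dropping `base_url` and supplying a real key, so supporting a local back-end wouldn't necessarily add much setup complexity for users who don't want it.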