I would like to be able to connect from FlowiseAI to this locally running AI, getumbrel/llama-gpt (started via Docker and running at http://ip:port).
I would rather not use the LocalAI solution unless there is no other choice. If this is possible, please point me in the right direction, or provide some instructions or guidelines on how to do it.
It looks like getumbrel/llama-gpt is a full chat application built on llama models, rather than a model server Flowise can talk to directly. You can use Ollama to run your llama2 models locally, and then use it within Flowise via the Ollama chat node.
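To verify that a local Ollama server is reachable before wiring it into Flowise, you can hit its HTTP API directly. This is a minimal sketch assuming Ollama's default port (11434) and a pulled `llama2` model; adjust the host, port, and model name to match your setup:

```python
import json
import urllib.request

# Assumed default Ollama endpoint; change host/port if yours differs.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama2") -> bytes:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def ask(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

If this returns text, Flowise can use the same base URL (`http://localhost:11434`) in its Ollama node configuration.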