Lanhild closed this issue 3 months ago.
Ollama is local. I don't think that really makes sense for a tool like Typebot, which is 100% online.
What would be the use case?
Ollama is also deployed on servers, and making this node compatible with other OpenAI-based APIs would allow them to be used in Typebot.
All it would take is an option to specify the endpoint in the block credentials.
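For illustration, here is a minimal sketch of what that could look like with the openai Node SDK pointed at an Ollama server's OpenAI-compatible endpoint. The host, port, and model name are assumptions for the example, not anything in Typebot:

```ts
import OpenAI from "openai";

// Hypothetical endpoint: an Ollama server exposing its OpenAI-compatible API,
// which Ollama serves under /v1 (port 11434 by default).
const client = new OpenAI({
  baseURL: "http://my-ollama-server:11434/v1", // the custom endpoint the block credentials would store
  apiKey: "ollama", // Ollama ignores the key, but the SDK requires a value
});

const completion = await client.chat.completions.create({
  model: "llama3", // any model already pulled on that Ollama server
  messages: [{ role: "user", content: "Hello from Typebot!" }],
});

console.log(completion.choices[0].message.content);
```

The same client code works against api.openai.com, which is why a single optional endpoint field would cover both cases.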
@baptisteArno Hi
Local LLMs are really important and a growing trend.
Please also add Xinference, which supports more models than Ollama and has more comprehensive features.
Official documentation: https://inference.readthedocs.io/en/latest/index.html
I don't think it's a good idea to offer an Ollama block since most people would want to consume it locally. Typebot is 100% online by default, so it won't work with local Ollama instances. Closing this for now.
Hi, has this been deployed, or is it still not planned?
A use case: an organisation can use a local LLM to let (sales) staff ask queries about services and staff documentation that is complex and updated frequently, while keeping the data strictly internal.
I can't find another way to do this without using extensive cloud resources (and yes, I am aware of Botpress, langflow, localAI, lobechat, anythingLLM, LMStudio, ActivePieces/ n8n, AsktheDoc, Docuchat, ChatDoc, Documind and PrivateGPT etc...)
@Lanhild I did think of one way to do this. It's not the best, but:
Change the hosts file on the machine running Typebot so that api.openai.com (or whatever endpoint the block calls) resolves to the IP of your Ollama instance instead.
Assuming all users are local, they could chat and Typebot would (should) communicate with Ollama over the local IP.
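Something like this in /etc/hosts on the Typebot host, with the IP as a placeholder for wherever Ollama is actually listening:

```
# /etc/hosts on the machine running Typebot (example IP, adjust to your setup)
192.168.1.50    api.openai.com
```

Whether it works as-is depends on the block reaching the endpoint over plain HTTP or tolerating the certificate mismatch, since api.openai.com is normally called over HTTPS.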
Typebot must support Ollama. It is the need of the hour.
APIs like Ollama are fully compatible with the OpenAI spec, so we should have the capability of adding a custom endpoint in the OpenAI block credentials.
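Purely as an illustration of how small that change could be (these field names are made up, not Typebot's actual schema), the credentials would just need an optional base URL that falls back to OpenAI's default:

```ts
import OpenAI from "openai";

// Hypothetical shape of the OpenAI block credentials with an optional endpoint override.
interface OpenAiBlockCredentials {
  apiKey: string;
  baseUrl?: string; // e.g. "http://my-ollama-server:11434/v1"; omit to keep api.openai.com
}

const createClient = (creds: OpenAiBlockCredentials) =>
  new OpenAI({
    apiKey: creds.apiKey,
    // Only override the endpoint when the user provided one.
    ...(creds.baseUrl ? { baseURL: creds.baseUrl } : {}),
  });
```

Everything else in the block could stay as it is, since the request format is the same.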