If no model is entered, the OpenAI API is used instead.
I now recall that the OpenAI API also allows choosing between different models, so this automatic selection has to be replaced with a toggle, or some other precise way to distinguish whether the user wants Ollama or OpenAI.
To run an Ollama model, all we need is to pull it first:
    ollama pull llama3.1:8b
and then enter its name (llama3.1:8b) as the model.
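The toggle above could be sketched as a small resolver that maps an explicit provider choice to a backend configuration. This is only a sketch under assumptions: the function name `resolve_backend` and the default model names are illustrative, and it relies on the fact that Ollama serves an OpenAI-compatible endpoint on localhost port 11434.

```python
def resolve_backend(provider: str, model: str = "") -> dict:
    """Map an explicit provider toggle to a backend config (illustrative sketch)."""
    if provider == "ollama":
        # Ollama exposes an OpenAI-compatible API at this local endpoint
        return {"base_url": "http://localhost:11434/v1",
                "model": model or "llama3.1:8b"}
    if provider == "openai":
        # model name here is a placeholder default, not a fixed choice
        return {"base_url": "https://api.openai.com/v1",
                "model": model or "gpt-4o-mini"}
    # refuse to guess instead of silently falling back to one provider
    raise ValueError(f"unknown provider: {provider!r}")
```

Making the provider an explicit argument (rather than inferring it from whether a model name was typed) removes the ambiguity noted above: an empty model field then just means "use that provider's default".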