Closed: moritzrfs closed this issue 5 months ago.
Running locally using ollama with the following settings:
```toml
ollama_base_url = "http://localhost:11434/v1/"
ollama_model_name = "llama3:8b"
openai_api_key = "123456"
```
returns
```
## generating video script
2024-05-26 20:40:06.423 | INFO | app.services.llm:generate_script:223 - subject: beach
2024-05-26 20:40:06.423 | INFO | app.services.llm:_generate_response:18 - llm provider: ollama
2024-05-26 20:40:08.952 | ERROR | app.services.llm:generate_script:259 - failed to generate script: Connection error.
2024-05-26 20:40:08.952 | WARNING | app.services.llm:generate_script:262 - failed to generate video script, trying again... 1
```
and docker logs
```
: _generate_response - llm provider: ollama
2024-05-26 22:40:08 webui | 2024-05-26 20:40:08 | ERROR | "./app/services/llm.py:259": generate_script - failed to generate script: Connection error.
2024-05-26 22:40:08 webui | 2024-05-26 20:40:08 | WARNING | "./app/services/llm.py:262": generate_script - failed to generate video script, trying again... 1
2024-05-26 22:40:08 webui | 2024-05-26 20:40:08 | INFO | "./app/services/llm.py:18": _generate_response - llm provider: ollama
```
Is there anything wrong with the settings? Ollama is installed locally on the machine and not running in a container.
Never mind. Running Ollama locally (outside the Docker network) means the container cannot reach it at `localhost`; you have to point the app at the host's Ollama server via `host.docker.internal`.
Problem solved.
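For anyone hitting the same error: a minimal sketch of the corrected settings, assuming the app reads them from the same config file as above and that Ollama is listening on its default port 11434 with the OpenAI-compatible `/v1/` path.

```toml
# Point the base URL at the Docker host instead of the container's own localhost.
# host.docker.internal resolves to the host machine from inside the container.
ollama_base_url = "http://host.docker.internal:11434/v1/"
ollama_model_name = "llama3:8b"
# Placeholder kept from the original settings; Ollama's OpenAI-compatible endpoint does not validate the key.
openai_api_key = "123456"
```

Note that `host.docker.internal` is built in on Docker Desktop (Windows/macOS); on Linux you typically need to map it yourself, e.g. `--add-host=host.docker.internal:host-gateway` on `docker run` or an equivalent `extra_hosts` entry in docker-compose.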