docker / genai-stack

Langchain + Docker + Neo4j + Ollama

error running docker compose on linux #175

Closed · yeeeuw closed 3 months ago

yeeeuw commented 3 months ago

On Ubuntu 24.04, when I run docker compose --profile linux up, I get the following error:

```
pull-model-1  | pulling ollama model llama2 using http://host.docker.internal:11434
pull-model-1  | panic: $HOME is not defined
pull-model-1  |
pull-model-1  | goroutine 1 [running]:
pull-model-1  | github.com/ollama/ollama/envconfig.Models()
pull-model-1  |         github.com/ollama/ollama/envconfig/config.go:93 +0xa9
pull-model-1  | github.com/ollama/ollama/envconfig.AsMap()
pull-model-1  |         github.com/ollama/ollama/envconfig/config.go:253 +0x699
pull-model-1  | github.com/ollama/ollama/cmd.NewCLI()
pull-model-1  |         github.com/ollama/ollama/cmd/cmd.go:1329 +0xb68
pull-model-1  | main.main()
pull-model-1  |         github.com/ollama/ollama/main.go:11 +0x13
pull-model-1  | panic: $HOME is not defined
pull-model-1  |
pull-model-1  | goroutine 1 [running]:
pull-model-1  | github.com/ollama/ollama/envconfig.Models()
pull-model-1  |         github.com/ollama/ollama/envconfig/config.go:93 +0xa9
pull-model-1  | github.com/ollama/ollama/envconfig.AsMap()
pull-model-1  |         github.com/ollama/ollama/envconfig/config.go:253 +0x699
pull-model-1  | github.com/ollama/ollama/cmd.NewCLI()
pull-model-1  |         github.com/ollama/ollama/cmd/cmd.go:1329 +0xb68
pull-model-1  | main.main()
pull-model-1  |         github.com/ollama/ollama/main.go:11 +0x13
database-1    | Installing Plugin 'apoc' from /var/lib/neo4j/labs/apoc-*-core.jar to /var/lib/neo4j/plugins/apoc.jar
database-1    | Applying default values for plugin apoc to neo4j.conf
pull-model-1 exited with code 1
```

*** further down ***

```
database-1    | 2024-08-05 10:42:14.659+0000 INFO  Started.
Gracefully stopping... (press Ctrl+C again to force)
service "pull-model" didn't complete successfully: exit 1
```

It seems that the default models path, $HOME/.ollama/models, can't be resolved because $HOME is not defined in the container.
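One possible workaround (an untested sketch; the service name comes from the log above and the HOME value is an arbitrary writable path) is to hand the pull-model container an explicit HOME via a compose override:

```yaml
# docker-compose.override.yml — untested sketch: give the pull-model
# container a HOME so the Ollama client can resolve $HOME/.ollama/models
services:
  pull-model:
    environment:
      - HOME=/tmp
```

Alternatively, since the panic comes from resolving the default models path, setting Ollama's OLLAMA_MODELS variable to an explicit directory should sidestep the same lookup.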

samchenghowing commented 3 months ago

Which model are you using? Could you provide your .env file?

yeeeuw commented 3 months ago

Below is my .env:

```
#**************************************************
# LLM and Embedding Model
#**************************************************
LLM=llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2
EMBEDDING_MODEL=sentence_transformer #or google-genai-embedding-001 openai, ollama, or aws

#**************************************************
# Neo4j
#**************************************************
NEO4J_URI=neo4j://database:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password

#**************************************************
# Langchain
#**************************************************
# Optional for enabling Langchain Smith API
LANGCHAIN_TRACING_V2=true # false
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_PROJECT=#your-project-name
LANGCHAIN_APIKEY=#your-api-key ls...

#**************************************************
# Ollama
#**************************************************
OLLAMA_BASE_URL=http://host.docker.internal:11434

#**************************************************
# OpenAI
#**************************************************
# Only required when using OpenAI LLM or embedding model
OPENAI_API_KEY=sk-...

#**************************************************
# AWS
#**************************************************
# Only required when using AWS Bedrock LLM or embedding model
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1

#**************************************************
# GOOGLE
#**************************************************
# Only required when using GoogleGenai LLM or embedding model
GOOGLE_API_KEY=
```

samchenghowing commented 3 months ago

That seems weird; llama2 should work. What's your runtime environment and docker-compose.yml, then?

*edited:* Have you installed Ollama on your host machine? OLLAMA_BASE_URL=http://host.docker.internal:11434 <= this is what the container uses to call the Ollama instance running on your host machine.


P.S. I have encountered the same problem while using gemma2:2b. I fixed it by following this guide. You might also install Ollama on your host machine and connect to it from the container.
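One thing worth checking on native Linux: host.docker.internal only exists out of the box on Docker Desktop. A sketch of how it can be mapped manually, using Docker's documented host-gateway alias (the service name is illustrative):

```yaml
# sketch: make host.docker.internal resolve to the host on native Linux
services:
  pull-model:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```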

yeeeuw commented 3 months ago

OK, I did two things. First, I confirmed that OLLAMA_BASE_URL is set to http://llm:11434; same error as before.

Second, when running Ollama locally and following the guide you cited, I get an error when chatting with the bot on http://localhost:8501/:

```
ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x73f8ba4db190>: Failed to establish a new connection: [Errno 111] Connection refused'))
```

This is with OLLAMA_BASE_URL set to http://host.docker.internal:11434.
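One way to isolate whether this is a container-to-host networking problem (a sketch using a throwaway curl image, not part of the stack):

```bash
# sketch: from inside a container, check that host.docker.internal
# resolves and the host's Ollama answers on port 11434
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:11434
# prints "Ollama is running" if the host is reachable
```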

samchenghowing commented 3 months ago

> ConnectionError

The ConnectionError shows that it can't connect to your local Ollama. Is your local Ollama running? Please check in your terminal with:

curl 127.0.0.1:11434

You should get an "Ollama is running" message.

If it is not running, start it with:

ollama serve
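Note also that by default ollama serve listens only on 127.0.0.1, so even a correct host mapping won't reach it from a container. A sketch of binding it to all interfaces:

```bash
# sketch: expose Ollama on all interfaces so containers can reach the host
# (OLLAMA_HOST is Ollama's documented listen-address variable)
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```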

yeeeuw commented 3 months ago

Re previous comment: Ollama is running, but I believe there may be issues back-referencing the host using host.docker.internal on Linux. I've subsequently tried this on Windows, which gives the exact same error as the original issue around $HOME. Also on Windows, if I follow the guide you referenced, docker compose up doesn't work because the api service fails its health check. Disabling the health check seems to generate further issues.

yeeeuw commented 3 months ago

Tracked it down to a computer issue; this is working.

samchenghowing commented 3 months ago

> Tracked it down to a computer issue; this is working.

Hi, I find that might be because the install scripts' configuration doesn't comply with the latest Ollama Docker image. Try using an Ollama image version from before June, together with llama2.
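A sketch of what that pin could look like in docker-compose.yml (the tag is illustrative, so check Docker Hub for an exact pre-June release, and the service name may differ in your file):

```yaml
# sketch: pin the Ollama service to a pre-June 2024 image instead of :latest
services:
  llm:
    image: ollama/ollama:0.1.38
```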

yeeeuw commented 3 months ago

FYI: I am currently still running it as per your guide; I haven't gotten Ollama to work in Docker.

yeeeuw commented 3 months ago

Also, which operating system do you use? I don't believe host.docker.internal works on Linux.

samchenghowing commented 3 months ago

> Also, which operating system do you use? I don't believe host.docker.internal works on Linux.

I am using Ubuntu 24.04 in WSL. Haven't tried running it on Linux natively.

samchenghowing commented 3 months ago

Hi, I think I've found the fix, and it's actually quite simple. Just add "HOME" (System/getProperty "user.home") to the environment map in the Clojure script, and the "$HOME is not defined" panic is fixed. You might check out this commit. I guess the problem was caused by the Ollama update in June or some newer version.
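For illustration, a rough sketch of the shape of that fix (approximate, not the exact genai-stack script; the model-pull step shells out to ollama pull from a babashka Clojure script):

```clojure
;; sketch, not the exact genai-stack script: pass HOME through to the
;; ollama subprocess so envconfig can resolve $HOME/.ollama/models
;; (:extra-env merges into the inherited environment)
(require '[babashka.process :refer [shell]])

(shell {:extra-env {"OLLAMA_HOST" "http://host.docker.internal:11434"
                    "HOME"        (System/getProperty "user.home")}}
       "ollama" "pull" "llama2")
```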