langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

Can't connect Dify with OLLAMA #2540

Closed: ntvuongg closed this issue 6 months ago

ntvuongg commented 6 months ago

Self Checks

Provide a description of the help you need

Dify version: 0.5.6

Cloud or Self Hosted: Self Hosted (Docker)

Steps to reproduce

Add OLLAMA as new model provider in settings but got error

✔️ Expected Behavior

It should connect to OLLAMA.

❌ Actual Behavior: (screenshot "Screenshot 2024-02-23 at 12 47 03" attached)

👀 Actions: Here are my settings (screenshot "Screenshot 2024-02-23 at 13 16 25" attached).

Note: I tried sending a request to the OLLAMA URL and got results.

(screenshot "Screenshot 2024-02-23 at 13 18 21" attached)
dosubot[bot] commented 6 months ago

🤖

Hey @ntvuongg! 🚀 It's fantastic to see you diving into another adventure with us. How's the digital world treating you today?

Based on the information you've provided and similar issues I found in the repository, it seems like the problem might be related to the network configuration of your Docker setup. When Dify is deployed using Docker, the localhost inside the Docker container is not the same as the localhost of the host machine. Therefore, if OLLAMA is running on the host machine and Dify is running inside a Docker container, they won't be able to communicate using http://localhost:11434.

Instead, you should use the local network IP address or the Docker host machine IP address. For example, you can use http://192.168.1.100:11434 or http://172.17.0.1:11434 as the Base URL in the OLLAMA settings in Dify.

Here is the relevant part from the Dify documentation:

- Base URL: `http://<your-ollama-endpoint-domain>:11434`

  Enter the base URL where the Ollama service is accessible.

  If Dify is deployed using docker, consider using the local network IP address, e.g., `http://192.168.1.100:11434` or the docker host machine IP address, e.g., `http://172.17.0.1:11434`.

  For local source code deployment, use `http://localhost:11434`.

Please replace `<your-ollama-endpoint-domain>` with your local network IP address or Docker host machine IP address and try again.
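
A quick way to confirm which address actually works before changing the Base URL is to curl Ollama both from the host and from inside the Dify API container (a minimal sketch; the container name `docker-api-1`, the example IPs, and the presence of curl in the image are assumptions, so substitute whatever `docker ps` shows for your setup):

    # from the host: should print something like "Ollama is running"
    curl http://localhost:11434

    # from inside the Dify API container, localhost is the container itself,
    # so test the LAN / Docker host address instead
    docker exec -it docker-api-1 curl http://192.168.1.100:11434
    docker exec -it docker-api-1 curl http://172.17.0.1:11434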

As for the exact error message you're seeing, I'm unable to view the images you've posted as I can't process images. Could you please provide the exact error message in text form? This will help me provide a more accurate solution.

Sources

#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
ntvuongg commented 6 months ago

> (quoting dosubot's reply above)

But when I tested with POSTMAN, it worked?

dosubot[bot] commented 6 months ago

🤖

Based on the similar issues found in the repository, it seems that the problem might be related to the IP address and URL you're using to connect to OLLAMA. One user was able to resolve a similar issue by discovering that the correct IP address needed to connect to was different from the one they were using, and also that the URL was not complete. Although the exact IP address and the correct URL format were not provided in the comments, the user's resolution indicates that ensuring the correct IP address and complete URL are used can solve the problem. You can refer to this issue for more details.

Another solution suggested in a different issue was to use LiteLLM with Ollama. This solution received positive reactions, indicating that it might be a viable solution for your problem.

As for the steps to connect Dify with OLLAMA, you can follow the instructions provided in the Dify documentation. Make sure that OLLAMA is running and accessible at the correct endpoint, and that you've filled in the correct details in the Dify settings.

If you're still encountering an error after following these steps, it would be helpful if you could provide the exact error message you're seeing. This could give us more information to help diagnose and solve the problem.

Sources

#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
crazywoola commented 6 months ago

Seems Ollama upgraded its client. https://github.com/langgenius/dify/issues/2516 We will take a look at this.

ntvuongg commented 6 months ago

Which version of Ollama did you use that would be compatible with Dify?

crazywoola commented 6 months ago

Which version of Ollama did you use that would be compatible with Dify?

I think we might need to adjust this on our side. I haven't added that (older) version in my case, but there are other Ollama-related issues where people installed it successfully. I tested with the latest version, so we should fix this soon.

xiangpingjiang commented 6 months ago

@ntvuongg I can add Ollama as an LLM for my local Dify; this may be useful: https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
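
For reference, the usual pattern from that thread is the `host.docker.internal` alias, which on Linux has to be mapped in explicitly. A minimal compose sketch of what that looks like on the api service (shown only as an illustration; the full compose file later in this issue already contains this mapping):

    api:
      extra_hosts:
        - "host.docker.internal:host-gateway"

With that in place, the Ollama Base URL in Dify can be set to http://host.docker.internal:11434.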

ntvuongg commented 6 months ago

I have checked the Ollama provider and maybe the code below causes the error (screenshot "Screenshot 2024-02-23 at 17 01 03" attached):

But when I tested with POSTMAN it was still working 🥲

(screenshot "Screenshot 2024-02-23 at 17 03 11" attached)

Updated: I connected to the api container and tried to make a request to the Ollama URL, but it timed out. It seems the container can't access the Ollama URL. Here is my context: the Dify image is running on another PC B (server) on the same network as PC A (my PC). When I use POSTMAN to test from PC A, it works, but when I access the api container (deployed on PC B) and make the same request, it times out.

xiangpingjiang commented 6 months ago

@ntvuongg did you try http://host.docker.internal:11434? This works for me.

ntvuongg commented 6 months ago

@ntvuongg did you try http://host.docker.internal:11434? This works for me.

Tried, but it's still not working. Do I need to change anything before changing the URL?

Yeuoly commented 6 months ago

Have you tried executing curl inside the container instead of on your own PC?

crazywoola commented 6 months ago

Have you tried this one https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server

I set up a new environment on my other laptop

ollama serve

curl 127.0.0.1:11434 # success
curl 192.x.x.x:11434 # failed

And I tried this one

export OLLAMA_HOST="0.0.0.0" && ollama serve

curl 127.0.0.1:11434 # success
curl 192.x.x.x:11434 # success
# in Dify: success
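
To make that setting persistent instead of exporting it in every shell, the Ollama FAQ linked above describes per-platform ways to set OLLAMA_HOST; roughly (a sketch; check the FAQ for the exact steps on your platform):

    # macOS (Ollama.app): set the variable, then quit and restart the app
    launchctl setenv OLLAMA_HOST "0.0.0.0"

    # Linux (systemd): add Environment="OLLAMA_HOST=0.0.0.0" via an override, then restart
    sudo systemctl edit ollama.service
    sudo systemctl daemon-reload
    sudo systemctl restart ollama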

ntvuongg commented 6 months ago

Have you tried executing curl inside the container instead of on your own PC?

It works on my PC but times out in the container.

ntvuongg commented 6 months ago

Have you tried this one https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server

I set up a new environment on my other laptop

ollama serve

curl 127.0.0.1:11434 # success
curl 192.x.x.x:11434 # failed

And I tried this one

export OLLAMA_HOST="0.0.0.0" && ollama serve

curl 127.0.0.1:11434 # success
curl 192.x.x.x:11434 # success
# in Dify: success

I tried:

curl 192.168.34.63:11434 # failed
curl 127.0.0.1:11434 # success

when I use:

ollama serve

I got "Error: listen tcp 0.0.0.0:11434: bind: address already in use" Updated: I also try to curl on server PC to 192.168.34.63:11434 -> failed but 127.17.0.2:11434 (IP of ollama container) -> worked

crazywoola commented 6 months ago

You need to close the previous Ollama instance, run launchctl setenv OLLAMA_HOST "0.0.0.0", and then restart it.
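
To find what is currently holding the port before restarting (a sketch; tool availability depends on your OS, and in a Docker setup it may simply be the published port of the ollama container):

    sudo lsof -i :11434
    # or, on Linux
    ss -ltnp | grep 11434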

ntvuongg commented 6 months ago

You need to close the previous Ollama instance, run launchctl setenv OLLAMA_HOST "0.0.0.0", and then restart it.

I have already set OLLAMA_HOST=0.0.0.0 as an environment variable of the ollama service in the docker-compose file, and it seems it's still not working. I think I got something wrong with the network configuration.

Here is my docker-compose.yaml:

version: '3.1'
services:
  # API service
  api:
    image: langgenius/dify-api:0.5.6
    restart: always
    environment:
      # Startup mode, 'api' starts the API server.
      MODE: api
      # The log level for the application. Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
      LOG_LEVEL: INFO
      # A secret key that is used for securely signing the session cookie and encrypting sensitive information on the database. You can generate a strong key using `openssl rand -base64 42`.
      SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
      # The base URL of console application web frontend, refers to the Console base URL of WEB service if console domain is
      # different from api or web app domain.
      # example: http://cloud.dify.ai
      CONSOLE_WEB_URL: ''
      # Password for admin user initialization.
      # If left unset, admin user will not be prompted for a password when creating the initial admin account.
      INIT_PASSWORD: ''
      # The base URL of console application api server, refers to the Console base URL of WEB service if console domain is
      # different from api or web app domain.
      # example: http://cloud.dify.ai
      CONSOLE_API_URL: ''
      # The URL prefix for Service API endpoints, refers to the base URL of the current API service if api domain is
      # different from console domain.
      # example: http://api.dify.ai
      SERVICE_API_URL: ''
      # The URL prefix for Web APP frontend, refers to the Web App base URL of WEB service if web app domain is different from
      # console or api domain.
      # example: http://udify.app
      APP_WEB_URL: ''
      # File preview or download Url prefix.
      # used to display File preview or download Url to the front-end or as Multi-model inputs;
      # Url is signed and has expiration time.
      FILES_URL: ''
      # When enabled, migrations will be executed prior to application startup and the application will start after the migrations have completed.
      MIGRATION_ENABLED: 'true'
      # The configurations of postgres database connection.
      # It is consistent with the configuration in the 'db' service below.
      DB_USERNAME: postgres
      DB_PASSWORD: difyai123456
      DB_HOST: db
      DB_PORT: 5432
      DB_DATABASE: dify
      # The configurations of redis connection.
      # It is consistent with the configuration in the 'redis' service below.
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_USERNAME: ''
      REDIS_PASSWORD: difyai123456
      REDIS_USE_SSL: 'false'
      # use redis db 0 for redis cache
      REDIS_DB: 0
      # The configurations of celery broker.
      # Use redis as the broker, and redis db 1 for celery broker.
      CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
      # Specifies the allowed origins for cross-origin requests to the Web API, e.g. https://dify.app or * for all origins.
      WEB_API_CORS_ALLOW_ORIGINS: '*'
      # Specifies the allowed origins for cross-origin requests to the console API, e.g. https://cloud.dify.ai or * for all origins.
      CONSOLE_CORS_ALLOW_ORIGINS: '*'
      # CSRF Cookie settings
      # Controls whether a cookie is sent with cross-site requests,
      # providing some protection against cross-site request forgery attacks
      #
      # Default: `SameSite=Lax, Secure=false, HttpOnly=true`
      # This default configuration supports same-origin requests using either HTTP or HTTPS,
      # but does not support cross-origin requests. It is suitable for local debugging purposes.
      #
      # If you want to enable cross-origin support,
      # you must use the HTTPS protocol and set the configuration to `SameSite=None, Secure=true, HttpOnly=true`.
      #
      # The type of storage to use for storing user files. Supported values are `local` and `s3`, Default: `local`
      STORAGE_TYPE: local
      # The path to the local storage directory, the directory relative to the root path of API service codes or an absolute path. Default: `storage` or `/home/john/storage`.
      # only available when STORAGE_TYPE is `local`.
      STORAGE_LOCAL_PATH: storage
      # The S3 storage configurations, only available when STORAGE_TYPE is `s3`.
      S3_ENDPOINT: 'https://xxx.r2.cloudflarestorage.com'
      S3_BUCKET_NAME: 'difyai'
      S3_ACCESS_KEY: 'ak-difyai'
      S3_SECRET_KEY: 'sk-difyai'
      S3_REGION: 'us-east-1'
      # The type of vector store to use. Supported values are `weaviate`, `qdrant`, `milvus`.
      VECTOR_STORE: weaviate
      # The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
      WEAVIATE_ENDPOINT: http://weaviate:8080
      # The Weaviate API key.
      WEAVIATE_API_KEY: WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
      # The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
      QDRANT_URL: http://qdrant:6333
      # The Qdrant API key.
      QDRANT_API_KEY: difyai123456
      # The Qdrant client timeout setting.
      QDRANT_CLIENT_TIMEOUT: 20
      # Milvus configuration Only available when VECTOR_STORE is `milvus`.
      # The milvus host.
      MILVUS_HOST: 192.168.34.63
      # The milvus host.
      MILVUS_PORT: 19530
      # The milvus username.
      MILVUS_USER: root
      # The milvus password.
      MILVUS_PASSWORD: Milvus
      # The milvus tls switch.
      MILVUS_SECURE: 'false'
      # Mail configuration, support: resend, smtp
      MAIL_TYPE: ''
      # default send from email address, if not specified
      MAIL_DEFAULT_SEND_FROM: 'YOUR EMAIL FROM (eg: no-reply <no-reply@dify.ai>)'
      SMTP_SERVER: ''
      SMTP_PORT: 587
      SMTP_USERNAME: ''
      SMTP_PASSWORD: ''
      SMTP_USE_TLS: 'true'
      # the api-key for resend (https://resend.com)
      RESEND_API_KEY: ''
      RESEND_API_URL: https://api.resend.com
      # The DSN for Sentry error reporting. If not set, Sentry error reporting will be disabled.
      SENTRY_DSN: ''
      # The sample rate for Sentry events. Default: `1.0`
      SENTRY_TRACES_SAMPLE_RATE: 1.0
      # The sample rate for Sentry profiles. Default: `1.0`
      SENTRY_PROFILES_SAMPLE_RATE: 1.0
    depends_on:
      - db
      - redis
    volumes:
      # Mount the storage directory to the container, for storing user files.
      - ./volumes/app/storage:/app/api/storage
      - ../api:/app/api
    extra_hosts:
      - "host.docker.internal:host-gateway"
    # uncomment to expose dify-api port to host
    # ports:
    #   - "5001:5001"

    # worker service
    # The Celery worker for processing the queue.
  worker:
    image: langgenius/dify-api:0.5.6
    restart: always
    environment:
      # Startup mode, 'worker' starts the Celery worker for processing the queue.
      MODE: worker

      # --- All the configurations below are the same as those in the 'api' service. ---

      # The log level for the application. Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
      LOG_LEVEL: INFO
      # A secret key that is used for securely signing the session cookie and encrypting sensitive information on the database. You can generate a strong key using `openssl rand -base64 42`.
      # same as the API service
      SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
      # The configurations of postgres database connection.
      # It is consistent with the configuration in the 'db' service below.
      DB_USERNAME: postgres
      DB_PASSWORD: difyai123456
      DB_HOST: db
      DB_PORT: 5432
      DB_DATABASE: dify
      # The configurations of redis cache connection.
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_USERNAME: ''
      REDIS_PASSWORD: difyai123456
      REDIS_DB: 0
      REDIS_USE_SSL: 'false'
      # The configurations of celery broker.
      CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
      # The type of storage to use for storing user files. Supported values are `local` and `s3`, Default: `local`
      STORAGE_TYPE: local
      STORAGE_LOCAL_PATH: storage
      # The type of vector store to use. Supported values are `weaviate`, `qdrant`, `milvus`.
      VECTOR_STORE: weaviate
      # The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
      WEAVIATE_ENDPOINT: http://weaviate:8080
      # The Weaviate API key.
      WEAVIATE_API_KEY: WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
      # The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
      QDRANT_URL: http://qdrant:6333
      # The Qdrant API key.
      QDRANT_API_KEY: difyai123456
      # The Qdrant client timeout setting.
      QDRANT_CLIENT_TIMEOUT: 20
      # Milvus configuration Only available when VECTOR_STORE is `milvus`.
      # The milvus host.
      MILVUS_HOST: 192.168.34.63
      # The milvus host.
      MILVUS_PORT: 19530
      # The milvus username.
      MILVUS_USER: root
      # The milvus password.
      MILVUS_PASSWORD: Milvus
      # The milvus tls switch.
      MILVUS_SECURE: 'false'
      # Mail configuration, support: resend
      MAIL_TYPE: ''
      # default send from email address, if not specified
      MAIL_DEFAULT_SEND_FROM: 'YOUR EMAIL FROM (eg: no-reply <no-reply@dify.ai>)'
      # the api-key for resend (https://resend.com)
      RESEND_API_KEY: ''
      RESEND_API_URL: https://api.resend.com
    depends_on:
      - db
      - redis
    volumes:
      # Mount the storage directory to the container, for storing user files.
      - ./volumes/app/storage:/app/api/storage

  # Frontend web application.
  web:
    image: langgenius/dify-web:0.5.6
    restart: always
    environment:
      EDITION: SELF_HOSTED
      # The base URL of console application api server, refers to the Console base URL of WEB service if console domain is
      # different from api or web app domain.
      # example: http://cloud.dify.ai
      CONSOLE_API_URL: ''
      # The URL for Web APP api server, refers to the Web App base URL of WEB service if web app domain is different from
      # console or api domain.
      # example: http://udify.app
      APP_API_URL: ''
      # The DSN for Sentry error reporting. If not set, Sentry error reporting will be disabled.
      SENTRY_DSN: ''
    # uncomment to expose dify-web port to host
    ports:
      - "3030:3000"

  # The postgres database.
  db:
    image: postgres:15-alpine
    restart: always
    environment:
      PGUSER: postgres
      # The password for the default postgres user.
      POSTGRES_PASSWORD: difyai123456
      # The name of the default postgres database.
      POSTGRES_DB: dify
      # postgres data directory
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./volumes/db/data:/var/lib/postgresql/data
    # uncomment to expose db(postgresql) port to host
    # ports:
    #   - "5432:5432"
    healthcheck:
      test: [ "CMD", "pg_isready" ]
      interval: 1s
      timeout: 3s
      retries: 30

  # The redis cache.
  redis:
    image: redis:6-alpine
    restart: always
    volumes:
      # Mount the redis data directory to the container.
      - ./volumes/redis/data:/data
    # Set the redis password when startup redis server.
    command: redis-server --requirepass difyai123456
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
    # uncomment to expose redis port to host
    # ports:
    #   - "6379:6379"

    # The Weaviate vector store.
  weaviate:
    image: semitechnologies/weaviate:1.19.0
    restart: always
    volumes:
      # Mount the Weaviate data directory to the container.
      - ./volumes/weaviate:/var/lib/weaviate
    environment:
      # The Weaviate configurations
      # You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information.
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'none'
      CLUSTER_HOSTNAME: 'node1'
      AUTHENTICATION_APIKEY_ENABLED: 'true'
      AUTHENTICATION_APIKEY_ALLOWED_KEYS: 'WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih'
      AUTHENTICATION_APIKEY_USERS: 'hello@dify.ai'
      AUTHORIZATION_ADMINLIST_ENABLED: 'true'
      AUTHORIZATION_ADMINLIST_USERS: 'hello@dify.ai'
    # uncomment to expose weaviate port to host
    # ports:
    #  - "8080:8080"

    # Qdrant vector store.
    # uncomment to use qdrant as vector store.
    # (if uncommented, you need to comment out the weaviate service above,
    # and set VECTOR_STORE to qdrant in the api & worker service.)
    # qdrant:
    #   image: langgenius/qdrant:v1.7.3
    #   restart: always
    #   volumes:
    #     - ./volumes/qdrant:/qdrant/storage
    #   environment:
    #     QDRANT__API_KEY: 'difyai123456'
    #   # uncomment to expose qdrant port to host
    #   # ports:
    #   #  - "6333:6333"

    # The nginx reverse proxy.
    # used for reverse proxying the API service and Web service.
  nginx:
    image: nginx:latest
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/proxy.conf:/etc/nginx/proxy.conf
      - ./nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - api
      - web
    ports:
      - "80:80"

  ollama:
    volumes:
      - ../ollama:/ollama

    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest
    environment:
      - OLLAMA_HOST=0.0.0.0
    ports:
      - 11434:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 2
              capabilities: [gpu]

I expected that I could send a request to 192.168.34.63 (the server PC's IP) from the server PC itself, so that the container could also make requests to that URL.
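
Given that compose file, api and ollama share the project's default network, so one way to see what the api container can actually reach is to curl from inside it (a sketch; it assumes curl exists in the image and that the commands are run from the directory containing the compose file; use docker-compose exec on older Docker versions):

    docker compose exec api curl http://ollama:11434
    docker compose exec api curl http://host.docker.internal:11434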

ntvuongg commented 6 months ago

Solved by calling the container name.
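
For anyone hitting the same thing: since the ollama service above runs in the same compose project as api, the Base URL in Dify's Ollama settings can presumably just use the service name instead of a host IP, e.g.:

    http://ollama:11434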

yumcc-com commented 4 months ago

@ntvuongg did you try http://host.docker.internal:11434? This works for me.

+1

ICHarmony commented 4 months ago

Solved by calling the container name.

Do you mean using "docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:latest", or some other URL? I'm having this issue and don't understand your method.

🙏I appreciate you

unijad commented 3 months ago

This worked for me: in the docker-compose.yaml, just add extra_hosts to the nginx proxy configuration to include host.docker.internal:host-gateway; it seems it's not being exposed by default. Here's my updated nginx configuration:

  # The nginx reverse proxy.
  # used for reverse proxying the API service and Web service.
  nginx:
    image: nginx:latest
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/proxy.conf:/etc/nginx/proxy.conf
      - ./nginx/conf.d:/etc/nginx/conf.d
      #- ./nginx/ssl:/etc/ssl
    depends_on:
      - api
      - web
    ports:
      - "80:80"
      #- "443:443"
    extra_hosts:
      - "host.docker.internal:host-gateway"
      - "127.0.0.1:host-gateway"
(screenshot "Screenshot 2024-05-17 at 4 36 53 AM" attached)