Closed · CXwudi closed this issue 3 months ago
Hi @CXwudi, I'm here to help you with any bugs, questions, or contributions you have for the Dify project. Let me know how I can assist you while waiting for the human maintainer.
To resolve the race condition between `dify-api` and `dify-worker` when launching a fresh instance of Dify version 0.6.15, you can ensure that the database migrations complete before `dify-worker` starts. This can be achieved by setting the `MIGRATION_ENABLED` environment variable to `true` and ensuring that the `dify-api` service finishes its startup before the `dify-worker` service begins.

Here is a modified version of your `docker-compose.yaml` file that adds a dependency on the `api` service for the `worker` service, along with the necessary configuration for the database and Redis services:
```yaml
version: '3'
services:
  # The postgres database.
  db:
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: difyai123456
      POSTGRES_DB: dify
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./volumes/db/data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  # The redis cache.
  redis:
    image: redis:6-alpine
    restart: always
    volumes:
      - ./volumes/redis/data:/data
    command: redis-server --requirepass difyai123456
    ports:
      - "6379:6379"
  # API service
  api:
    image: langgenius/dify-api:0.6.15
    restart: always
    environment:
      MODE: api
      LOG_LEVEL: INFO
      SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
      CONSOLE_WEB_URL: ''
      INIT_PASSWORD: ''
      CONSOLE_API_URL: ''
      SERVICE_API_URL: ''
      APP_WEB_URL: ''
      FILES_URL: ''
      FILES_ACCESS_TIMEOUT: 300
      MIGRATION_ENABLED: 'true'
      DB_USERNAME: postgres
      DB_PASSWORD: difyai123456
      DB_HOST: db
      DB_PORT: 5432
      DB_DATABASE: dify
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: difyai123456
      REDIS_USE_SSL: 'false'
      REDIS_DB: 0
      CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
      WEB_API_CORS_ALLOW_ORIGINS: '*'
      CONSOLE_CORS_ALLOW_ORIGINS: '*'
  # Worker service
  worker:
    image: langgenius/dify-api:0.6.15
    restart: always
    depends_on:
      - api
    environment:
      MODE: worker
      CONSOLE_WEB_URL: ''
      DB_USERNAME: postgres
      DB_PASSWORD: difyai123456
      DB_HOST: db
      DB_PORT: 5432
      DB_DATABASE: dify
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: difyai123456
      REDIS_USE_SSL: 'false'
      REDIS_DB: 0
      CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
      WEB_API_CORS_ALLOW_ORIGINS: '*'
      CONSOLE_CORS_ALLOW_ORIGINS: '*'
```
By adding `depends_on: - api` to the `worker` service, Docker Compose will start the `api` service before the `worker` service, reducing the likelihood of a race condition causing a `sqlalchemy.exc.IntegrityError`. The configuration for the database and Redis services is also included to ensure they are properly set up. The `MIGRATION_ENABLED` environment variable ensures that migrations are executed before the application starts, and that the application starts only after the migrations have completed [1].
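One caveat worth noting: `depends_on` in list form only waits for the `api` *container* to start, not for its migrations to finish, so a narrow race window can remain. A stricter ordering can be sketched with a healthcheck plus Compose's conditional `depends_on`. The probe command, port, and path below are assumptions, not Dify's documented healthcheck; adjust them to whatever endpoint your deployment actually serves:

```yaml
  api:
    image: langgenius/dify-api:0.6.15
    # Hypothetical probe: assumes the API answers HTTP on port 5001 once
    # startup (including migrations) is done, and that curl is in the image.
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
      interval: 10s
      timeout: 5s
      retries: 12
  worker:
    image: langgenius/dify-api:0.6.15
    depends_on:
      api:
        condition: service_healthy  # wait for the healthcheck to pass, not just container start
```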
Actually, I just realized I am using an old version of Dify. I thought I had already pulled the latest image. Closing now.
Or you can also do:

```yaml
  api:
    image: langgenius/dify-api:0.6.15
    restart: always
    depends_on:
      - worker # let api and worker start one-by-one
```

in case you see this issue again.
Self Checks
Dify version
0.6.15
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
When launching a fresh new instance of Dify, if you specify `restart: none` instead of `restart: always`, then either `dify-api` or `dify-worker` crashes with an error.

✔️ Expected Behavior
The Dify instance should launch successfully.
❌ Actual Behavior
Either `dify-api` or `dify-worker` will crash upon starting. Unfold to check the log:
```
2024-07-27 14:15:48 Running migrations
2024-07-27 14:15:50 None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2024-07-27 14:15:57 INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
2024-07-27 14:15:57 INFO  [alembic.runtime.migration] Will assume transactional DDL.
2024-07-27 14:15:58 Traceback (most recent call last):
2024-07-27 14:15:58   File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
2024-07-27 14:15:58     self.dialect.do_execute(
2024-07-27 14:15:58   File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
2024-07-27 14:15:58     cursor.execute(statement, parameters)
2024-07-27 14:15:58 psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
2024-07-27 14:15:58 DETAIL:  Key (typname, typnamespace)=(alembic_version, 2200) already exists.
2024-07-27 14:15:58
2024-07-27 14:15:58 The above exception was the direct cause of the following exception:
2024-07-27 14:15:58
2024-07-27 14:15:58 Traceback (most recent call last):
2024-07-27 14:15:58   File "/usr/local/bin/flask", line 8, in
```

The workaround is simply to set `restart: always` or `restart: unless-stopped`.

I noticed this issue is not a big deal, but I'd like to keep it here for anyone who tried `restart: none`.
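Applied to the compose file above, the workaround is just the restart policy on the two services (a sketch; service names and image tag match the earlier file). With this policy, the container that loses the migration race crashes once on the `UniqueViolation`, is restarted by Docker, and comes up cleanly because the `alembic_version` table now already exists:

```yaml
  api:
    image: langgenius/dify-api:0.6.15
    restart: unless-stopped  # restarted after the transient crash, succeeds on retry
  worker:
    image: langgenius/dify-api:0.6.15
    restart: unless-stopped
```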