AppFlowy-IO / AppFlowy-Cloud

AppFlowy is an open-source alternative to Notion. You are in charge of your data and customizations. Built with Flutter and Rust.

[Bug] appflowy_cloud unable to start because of HTTP 502 with content #227

Closed · Ecss11 closed this issue 11 months ago

Ecss11 commented 11 months ago

Describe the bug: AppFlowy Cloud is unable to start when deploying it. The error message is confusing: "Error: Failed to initialize application state: Got HTTP 502 with content ''." It does not provide additional detail, such as which part caused the issue.

To Reproduce: Steps to reproduce the behavior:

  1. Follow the deployment steps in the deployment guide.
  2. Run docker compose up.
  3. The error occurs.

Expected behavior: AppFlowy should initialize without any problems.

Screenshots: (screenshot of the error attached)

Additional Information

speed2exe commented 11 months ago

Hi @Ecss11, it seems like MinIO is not running properly, or you have configured appflowy_cloud not to use it. A file storage system is required for an AppFlowy Cloud deployment.

Can I ask how you configured this section?

# File Storage
USE_MINIO=true
# MINIO_URL=http://localhost:9000 # change this if you are using a different address for minio
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_S3_BUCKET=appflowy
AWS_REGION=us-east-1

If you have left it as is, the output of docker logs appflowy-cloud-minio-1 will be helpful for tracing the actual problem.

Ecss11 commented 11 months ago

Thank you @speed2exe for spending your time on this issue,

We did not touch MinIO's configuration and kept the default settings, as shown below.

# File Storage
USE_MINIO=true
# MINIO_URL=http://localhost:9000 # change this if you are using a different address for minio
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_S3_BUCKET=appflowy
AWS_REGION=us-east-1

Here is the result of running docker logs appflowy-cloud-minio-1:

WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-12-14T18-51-57Z (go1.21.5 linux/amd64)

Status:         1 Online, 0 Offline. 
S3-API: http://172.22.0.2:9000  http://127.0.0.1:9000 
Console: http://localhost/minio 

Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.

 You are running an older version of MinIO released 5 days before the latest release 
 Update: Run `mc admin update`

Additional Information

To make sure it's not caused by any confounding factor, we deployed AppFlowy Cloud on LAN server A and used FRP to forward the exposed nginx HTTP port to another server B to allow access from the public.

Some changes in the docker-compose configuration include removing several optional services and modifying nginx and AppFlowy exposed ports.

docker-compose config file:

version: '3'
services:
  nginx:
    restart: on-failure
    image: nginx
    ports:
      - 3100:80
      - 3101:443
    depends_on: # If you did not deploy any of the services below, comment those out
      - minio
      - appflowy_cloud
      - gotrue
      - admin_frontend
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/ssl/certificate.crt:/etc/nginx/ssl/certificate.crt
      - ./nginx/ssl/private_key.key:/etc/nginx/ssl/private_key.key

  # You do not need this if you have configured to use your own s3 file storage
  minio:
    restart: on-failure
    image: minio/minio
    ports:
      - 9000:9000
      - 9001:9001
    environment:
      - MINIO_BROWSER_REDIRECT_URL=http://localhost/minio
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data

  postgres:
    restart: on-failure
    image: postgres
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_DB=${POSTGRES_DB:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - POSTGRES_HOST=${POSTGRES_HOST:-postgres}
    ports:
      - 5433:5432
    volumes:
      - ./migrations/before:/docker-entrypoint-initdb.d
      - postgres_data:/var/lib/postgresql/data

  redis:
    restart: on-failure
    image: redis
    ports:
      - 6380:6379

  gotrue:
    restart: on-failure
    build:
      context: .
      dockerfile: docker/gotrue.Dockerfile
    depends_on:
      - postgres
    environment:
      # Gotrue config: https://github.com/supabase/gotrue/blob/master/example.env
      - GOTRUE_SITE_URL=appflowy-flutter://                           # redirected to AppFlowy application
      - URI_ALLOW_LIST=*                                              # adjust restrict if necessary
      - GOTRUE_JWT_SECRET=${GOTRUE_JWT_SECRET}                        # authentication secret
      - GOTRUE_DB_DRIVER=postgres
      - API_EXTERNAL_URL=${API_EXTERNAL_URL}
      - DATABASE_URL=postgres://supabase_auth_admin:root@postgres:5432/postgres
      - PORT=9999
      - GOTRUE_SMTP_HOST=${GOTRUE_SMTP_HOST}                          # e.g. smtp.gmail.com
      - GOTRUE_SMTP_PORT=${GOTRUE_SMTP_PORT}                          # e.g. 465
      - GOTRUE_SMTP_USER=${GOTRUE_SMTP_USER}                          # email sender, e.g. noreply@appflowy.io
      - GOTRUE_SMTP_PASS=${GOTRUE_SMTP_PASS}                          # email password
      - GOTRUE_MAILER_URLPATHS_CONFIRMATION=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_INVITE=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_RECOVERY=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE=/gotrue/verify
      - GOTRUE_SMTP_ADMIN_EMAIL=${GOTRUE_SMTP_ADMIN_EMAIL}            # email with admin privileges e.g. internal@appflowy.io
      - GOTRUE_SMTP_MAX_FREQUENCY=${GOTRUE_SMTP_MAX_FREQUENCY:-1ns}   # set to 1ns for running tests
      - GOTRUE_MAILER_AUTOCONFIRM=${GOTRUE_MAILER_AUTOCONFIRM:-false} # change this to true to skip email confirmation
      # Google OAuth config
      - GOTRUE_EXTERNAL_GOOGLE_ENABLED=${GOTRUE_EXTERNAL_GOOGLE_ENABLED}
      - GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID=${GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID}
      - GOTRUE_EXTERNAL_GOOGLE_SECRET=${GOTRUE_EXTERNAL_GOOGLE_SECRET}
      - GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI=${GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI}
      # GITHUB OAuth config
      - GOTRUE_EXTERNAL_GITHUB_ENABLED=${GOTRUE_EXTERNAL_GITHUB_ENABLED}
      - GOTRUE_EXTERNAL_GITHUB_CLIENT_ID=${GOTRUE_EXTERNAL_GITHUB_CLIENT_ID}
      - GOTRUE_EXTERNAL_GITHUB_SECRET=${GOTRUE_EXTERNAL_GITHUB_SECRET}
      - GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI=${GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI}
      # Discord OAuth config
      - GOTRUE_EXTERNAL_DISCORD_ENABLED=${GOTRUE_EXTERNAL_DISCORD_ENABLED}
      - GOTRUE_EXTERNAL_DISCORD_CLIENT_ID=${GOTRUE_EXTERNAL_DISCORD_CLIENT_ID}
      - GOTRUE_EXTERNAL_DISCORD_SECRET=${GOTRUE_EXTERNAL_DISCORD_SECRET}
      - GOTRUE_EXTERNAL_DISCORD_REDIRECT_URI=${GOTRUE_EXTERNAL_DISCORD_REDIRECT_URI}
      # Prometheus Metrics
      - GOTRUE_METRICS_ENABLED=true
      - GOTRUE_METRICS_EXPORTER=prometheus
    ports:
      - 9998:9999

  appflowy_cloud:
    restart: on-failure
    environment:
      - RUST_LOG=${RUST_LOG:-debug}
      - APPFLOWY_ENVIRONMENT=production
      - APPFLOWY_DATABASE_URL=postgres://postgres:password@postgres:5432/postgres
      - APPFLOWY_REDIS_URI=redis://redis:6379
      - APPFLOWY_GOTRUE_JWT_SECRET=${GOTRUE_JWT_SECRET}
      - APPFLOWY_GOTRUE_BASE_URL=http://gotrue:9999
      - APPFLOWY_GOTRUE_EXT_URL=${API_EXTERNAL_URL}
      - APPFLOWY_GOTRUE_ADMIN_EMAIL=${GOTRUE_ADMIN_EMAIL}
      - APPFLOWY_GOTRUE_ADMIN_PASSWORD=${GOTRUE_ADMIN_PASSWORD}
      - APPFLOWY_S3_USE_MINIO=${USE_MINIO}
      - APPFLOWY_S3_MINIO_URL=${MINIO_URL:-http://minio:9000}
      - APPFLOWY_S3_ACCESS_KEY=${AWS_ACCESS_KEY_ID}
      - APPFLOWY_S3_SECRET_KEY=${AWS_SECRET_ACCESS_KEY}
      - APPFLOWY_S3_BUCKET=${AWS_S3_BUCKET}
      - APPFLOWY_S3_REGION=${AWS_REGION}
    build:
      context: .
      dockerfile: Dockerfile
    image: appflowyinc/appflowy_cloud:${BACKEND_VERSION:-latest}
    depends_on:
      - redis
      - postgres
      - gotrue
    ports:
      - 8001:8000

  # Optional
  admin_frontend:
    restart: on-failure
    build:
      context: .
      dockerfile: ./admin_frontend/Dockerfile
    image: appflowyinc/admin_frontend:${BACKEND_VERSION:-latest}
    depends_on:
      - gotrue
    ports:
      - 3000:3000

  # Optional
  pgadmin:
    restart: on-failure
    image: dpage/pgadmin4
    depends_on:
      - postgres
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    ports:
      - 5400:80
    volumes:
      - ./docker/pgadmin/servers.json:/pgadmin4/servers.json

volumes:
  postgres_data:
  minio_data:

Server A nginx HTTP:

COMMAND      PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
docker-pr 426033 root    4u  IPv4 3410548      0t0  TCP *:3100 (LISTEN)
docker-pr 426041 root    4u  IPv6 3414024      0t0  TCP *:3100 (LISTEN)

Server B FRP service:

COMMAND     PID USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
frps    1293192 root   10u  IPv6 488387426      0t0  TCP *:3100 (LISTEN)

Server B nginx config:

server {
    # Server name and log config
    ...

    location / {
        proxy_pass http://localhost:3100;
        proxy_set_header HOST $host;
    }

   # Managed by certbot
    ...
}

# HTTP to HTTPS
...
speed2exe commented 11 months ago

@Ecss11 I can't reproduce this. The error suggests that appflowy_cloud is not able to connect to the MinIO server, even though the config seems right. Can you try putting the values directly into the docker-compose.yml instead?

      - APPFLOWY_S3_USE_MINIO=true
      - APPFLOWY_S3_MINIO_URL=http://minio:9000
      - APPFLOWY_S3_ACCESS_KEY=minioadmin
      - APPFLOWY_S3_SECRET_KEY=minioadmin
      - APPFLOWY_S3_BUCKET=appflowy
      - APPFLOWY_S3_REGION=us-east-1
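
As a sanity check, docker compose config prints the compose file with the .env substitutions applied, so you can confirm the values the container will actually receive; a rough sketch, assuming you run it from the deployment directory:

# Render docker-compose.yml with .env interpolation applied and
# inspect the S3-related values passed to appflowy_cloud.
docker compose config | grep APPFLOWY_S3
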
Ecss11 commented 11 months ago

Hi @speed2exe,

Thanks again for the reply. I modified the docker-compose.yml and performed a docker compose up -d. However, the issue is not resolved, and the same error occurs, as follows:


appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T00:48:00.639565Z","level":"INFO","fields":{"message":"Setting up S3 bucket..."},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Setting up S3 bucket...","level":30,"hostname":"c9004da6a837","pid":1,"time":"2023-12-21T00:48:00.639582302Z","target":"appflowy_cloud::application","line":165,"file":"src/application.rs"}
appflowy-cloud-appflowy_cloud-1  | Error: Failed to initialize application state: Got HTTP 502 with content ''
appflowy-cloud-appflowy_cloud-1 exited with code 1
speed2exe commented 11 months ago

@Ecss11 I am guessing you might not be using the latest Docker image. Can you change the line

  appflowy_cloud:
...
    image: appflowyinc/appflowy_cloud:0.0.5

in docker-compose.yml and see if it changes anything?

Ecss11 commented 11 months ago

Sure @speed2exe,

It seems like appflowy_cloud is already at the latest version. No significant changes were observed after I changed the config and ran docker compose pull and then docker compose up -d.

appflowy-cloud-pgadmin-1         | 
appflowy-cloud-pgadmin-1         | ----------
appflowy-cloud-pgadmin-1         | Loading servers with:
appflowy-cloud-pgadmin-1         | User: admin@example.com
appflowy-cloud-gotrue-1          | {"level":"info","msg":"Go runtime metrics collection started","time":"2023-12-21T00:50:39Z"}
appflowy-cloud-gotrue-1          | {"level":"info","msg":"prometheus server listening on 0.0.0.0:9100","time":"2023-12-21T00:50:39Z"}
appflowy-cloud-gotrue-1          | {"component":"pop","level":"info","msg":"Migrations already up to date, nothing to apply","time":"2023-12-21T00:50:39Z"}
appflowy-cloud-gotrue-1          | {"args":[0.015101872],"component":"pop","level":"info","msg":"%.4f seconds","time":"2023-12-21T00:50:39Z"}
appflowy-cloud-gotrue-1          | {"level":"info","msg":"GoTrue migrations applied successfully","time":"2023-12-21T00:50:39Z"}
appflowy-cloud-gotrue-1          | {"component":"api","level":"warning","msg":"DEPRECATION NOTICE: GOTRUE_JWT_ADMIN_GROUP_NAME not supported by Supabase's GoTrue, will be removed soon","time":"2023-12-21T00:50:39Z"}
appflowy-cloud-gotrue-1          | {"level":"info","msg":"GoTrue API started on: :9999","time":"2023-12-21T00:50:39Z"}
appflowy-cloud-nginx-1           | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
appflowy-cloud-nginx-1           | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
appflowy-cloud-nginx-1           | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
appflowy-cloud-nginx-1           | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
appflowy-cloud-nginx-1           | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
appflowy-cloud-nginx-1           | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
appflowy-cloud-nginx-1           | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
appflowy-cloud-nginx-1           | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
appflowy-cloud-nginx-1           | /docker-entrypoint.sh: Configuration complete; ready for start up
appflowy-cloud-pgadmin-1         | SQLite pgAdmin config: /var/lib/pgadmin/pgadmin4.db
appflowy-cloud-pgadmin-1         | ----------
appflowy-cloud-pgadmin-1         | Added 0 Server Group(s) and 1 Server(s).
appflowy-cloud-pgadmin-1         | postfix/postlog: starting the Postfix mail system
appflowy-cloud-pgadmin-1         | [2023-12-20 08:07:26 +0000] [1] [INFO] Starting gunicorn 20.1.0
appflowy-cloud-pgadmin-1         | [2023-12-20 08:07:26 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
appflowy-cloud-pgadmin-1         | [2023-12-20 08:07:26 +0000] [1] [INFO] Using worker: gthread
appflowy-cloud-pgadmin-1         | [2023-12-20 08:07:26 +0000] [118] [INFO] Booting worker with pid: 118
appflowy-cloud-pgadmin-1         | [2023-12-20 08:45:22 +0000] [1] [INFO] Handling signal: term
appflowy-cloud-pgadmin-1         | [2023-12-20 08:45:22 +0000] [118] [INFO] Worker exiting (pid: 118)
appflowy-cloud-pgadmin-1         | [2023-12-20 08:45:23 +0000] [1] [INFO] Shutting down: Master
appflowy-cloud-pgadmin-1         | postfix/postlog: starting the Postfix mail system
appflowy-cloud-pgadmin-1         | [2023-12-20 21:31:27 +0000] [1] [INFO] Starting gunicorn 20.1.0
appflowy-cloud-pgadmin-1         | [2023-12-20 21:31:27 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
appflowy-cloud-minio-1           | WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
appflowy-cloud-minio-1           | MinIO Object Storage Server
appflowy-cloud-minio-1           | Copyright: 2015-2023 MinIO, Inc.
appflowy-cloud-minio-1           | License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
appflowy-cloud-minio-1           | Version: RELEASE.2023-12-20T01-00-02Z (go1.21.5 linux/amd64)
appflowy-cloud-minio-1           | 
appflowy-cloud-minio-1           | Status:         1 Online, 0 Offline. 
appflowy-cloud-minio-1           | S3-API: http://172.22.0.8:9000  http://127.0.0.1:9000 
appflowy-cloud-minio-1           | Console: http://localhost/minio 
appflowy-cloud-minio-1           | 
appflowy-cloud-minio-1           | Documentation: https://min.io/docs/minio/linux/index.html
appflowy-cloud-minio-1           | Warning: The standard parity is set to 0. This can lead to data loss.
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 08:07:01.273 * Server initialized
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 08:07:01.273 * Ready to accept connections tcp
appflowy-cloud-redis-1           | 1:signal-handler (1703061922) Received SIGTERM scheduling shutdown...
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 08:45:23.080 * User requested shutdown...
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 08:45:23.080 * Saving the final RDB snapshot before exiting.
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 08:45:23.083 * DB saved on disk
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 08:45:23.083 # Redis is now ready to exit, bye bye...
appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T04:31:05.683776Z","level":"INFO","fields":{"message":"Connecting to postgres database with setting: DatabaseSetting { pg_conn_opts: PgConnectOptions { host: \"postgres\", port: 5432, socket: None, username: \"postgres\", password: Some(\"password\"), database: Some(\"postgres\"), ssl_mode: Prefer, ssl_root_cert: None, ssl_client_cert: None, ssl_client_key: None, statement_cache_capacity: 100, application_name: None, log_settings: LogSettings { statements_level: Debug, slow_statements_level: Warn, slow_statements_duration: 1s }, extra_float_digits: Some(\"3\"), options: None }, require_ssl: false, max_connections: 20, database_name: \"postgres\" }"},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Connecting to postgres database with setting: DatabaseSetting { pg_conn_opts: PgConnectOptions { host: \"postgres\", port: 5432, socket: None, username: \"postgres\", password: Some(\"password\"), database: Some(\"postgres\"), ssl_mode: Prefer, ssl_root_cert: None, ssl_client_cert: None, ssl_client_key: None, statement_cache_capacity: 100, application_name: None, log_settings: LogSettings { statements_level: Debug, slow_statements_level: Warn, slow_statements_duration: 1s }, extra_float_digits: Some(\"3\"), options: None }, require_ssl: false, max_connections: 20, database_name: \"postgres\" }","level":30,"hostname":"4627c41fddcc","pid":1,"time":"2023-12-21T04:31:05.683819439Z","target":"appflowy_cloud::application","line":332,"file":"src/application.rs"}
appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T04:31:05.694507Z","level":"INFO","fields":{"message":"Setting up S3 bucket..."},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Setting up S3 bucket...","level":30,"hostname":"4627c41fddcc","pid":1,"time":"2023-12-21T04:31:05.694522091Z","target":"appflowy_cloud::application","line":165,"file":"src/application.rs"}
appflowy-cloud-appflowy_cloud-1  | Error: Failed to initialize application state: Got HTTP 502 with content ''
appflowy-cloud-appflowy_cloud-1  | AppFlowy Cloud with RUST_LOG=debug
appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T04:31:06.678538Z","level":"INFO","fields":{"message":"Preparng to run database migrations..."},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Preparng to run database migrations...","level":30,"hostname":"4627c41fddcc","pid":1,"time":"2023-12-21T04:31:06.678570179Z","target":"appflowy_cloud::application","line":160,"file":"src/application.rs"}
appflowy-cloud-redis-1           | 1:C 20 Dec 2023 21:31:21.953 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
appflowy-cloud-redis-1           | 1:C 20 Dec 2023 21:31:21.953 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T04:31:06.678575Z","level":"INFO","fields":{"message":"Connecting to postgres database with setting: DatabaseSetting { pg_conn_opts: PgConnectOptions { host: \"postgres\", port: 5432, socket: None, username: \"postgres\", password: Some(\"password\"), database: Some(\"postgres\"), ssl_mode: Prefer, ssl_root_cert: None, ssl_client_cert: None, ssl_client_key: None, statement_cache_capacity: 100, application_name: None, log_settings: LogSettings { statements_level: Debug, slow_statements_level: Warn, slow_statements_duration: 1s }, extra_float_digits: Some(\"3\"), options: None }, require_ssl: false, max_connections: 20, database_name: \"postgres\" }"},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Connecting to postgres database with setting: DatabaseSetting { pg_conn_opts: PgConnectOptions { host: \"postgres\", port: 5432, socket: None, username: \"postgres\", password: Some(\"password\"), database: Some(\"postgres\"), ssl_mode: Prefer, ssl_root_cert: None, ssl_client_cert: None, ssl_client_key: None, statement_cache_capacity: 100, application_name: None, log_settings: LogSettings { statements_level: Debug, slow_statements_level: Warn, slow_statements_duration: 1s }, extra_float_digits: Some(\"3\"), options: None }, require_ssl: false, max_connections: 20, database_name: \"postgres\" }","level":30,"hostname":"4627c41fddcc","pid":1,"time":"2023-12-21T04:31:06.678618777Z","target":"appflowy_cloud::application","line":332,"file":"src/application.rs"}
appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T04:31:06.688815Z","level":"INFO","fields":{"message":"Setting up S3 bucket..."},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Setting up S3 bucket...","level":30,"hostname":"4627c41fddcc","pid":1,"time":"2023-12-21T04:31:06.688831145Z","target":"appflowy_cloud::application","line":165,"file":"src/application.rs"}
appflowy-cloud-appflowy_cloud-1  | Error: Failed to initialize application state: Got HTTP 502 with content ''
appflowy-cloud-pgadmin-1         | [2023-12-20 21:31:27 +0000] [1] [INFO] Using worker: gthread
appflowy-cloud-pgadmin-1         | [2023-12-20 21:31:27 +0000] [82] [INFO] Booting worker with pid: 82
appflowy-cloud-redis-1           | 1:C 20 Dec 2023 21:31:21.953 * Redis version=7.2.3, bits=64, commit=00000000, modified=0, pid=1, just started
appflowy-cloud-redis-1           | 1:C 20 Dec 2023 21:31:21.953 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.953 * monotonic clock: POSIX clock_gettime
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.954 * Running mode=standalone, port=6379.
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.954 * Server initialized
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.955 * Loading RDB produced by version 7.2.3
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.955 * RDB age 45958 seconds
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.955 * RDB memory usage when created 0.83 Mb
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.955 * Done loading RDB, keys loaded: 0, keys expired: 0.
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.955 * DB loaded from disk: 0.000 seconds
appflowy-cloud-redis-1           | 1:M 20 Dec 2023 21:31:21.955 * Ready to accept connections tcp
appflowy-cloud-postgres-1        | 2023-12-20 08:12:02.629 UTC [62] LOG:  checkpoint starting: time
appflowy-cloud-postgres-1        | 2023-12-20 08:12:34.993 UTC [62] LOG:  checkpoint complete: wrote 325 buffers (2.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=32.285 s, sync=0.064 s, total=32.365 s; sync files=292, longest=0.005 s, average=0.001 s; distance=1949 kB, estimate=1949 kB; lsn=0/16E0460, redo lsn=0/16E0428
appflowy-cloud-postgres-1        | 2023-12-20 08:45:33.518 UTC [1] LOG:  received fast shutdown request
appflowy-cloud-postgres-1        | 2023-12-20 08:45:33.519 UTC [1] LOG:  aborting any active transactions
appflowy-cloud-postgres-1        | 2023-12-20 08:45:33.521 UTC [1] LOG:  background worker "logical replication launcher" (PID 67) exited with exit code 1
appflowy-cloud-postgres-1        | 2023-12-20 08:45:33.522 UTC [62] LOG:  shutting down
appflowy-cloud-postgres-1        | 2023-12-20 08:45:33.524 UTC [62] LOG:  checkpoint starting: shutdown immediate
appflowy-cloud-postgres-1        | 2023-12-20 08:45:33.530 UTC [62] LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.008 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=1754 kB; lsn=0/16E0510, redo lsn=0/16E0510
appflowy-cloud-postgres-1        | 2023-12-20 08:45:33.534 UTC [1] LOG:  database system is shut down
appflowy-cloud-postgres-1        | 
appflowy-cloud-postgres-1        | PostgreSQL Database directory appears to contain a database; Skipping initialization
appflowy-cloud-postgres-1        | 
appflowy-cloud-postgres-1        | 2023-12-20 21:31:22.180 UTC [1] LOG:  starting PostgreSQL 16.1 (Debian 16.1-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
appflowy-cloud-postgres-1        | 2023-12-20 21:31:22.185 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
appflowy-cloud-postgres-1        | 2023-12-20 21:31:22.185 UTC [1] LOG:  listening on IPv6 address "::", port 5432
appflowy-cloud-postgres-1        | 2023-12-20 21:31:22.191 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
appflowy-cloud-postgres-1        | 2023-12-20 21:31:22.197 UTC [29] LOG:  database system was shut down at 2023-12-20 08:45:33 UTC
appflowy-cloud-postgres-1        | 2023-12-20 21:31:22.207 UTC [1] LOG:  database system is ready to accept connections
appflowy-cloud-postgres-1        | 2023-12-20 21:36:22.226 UTC [27] LOG:  checkpoint starting: time
appflowy-cloud-postgres-1        | 2023-12-20 21:36:22.245 UTC [27] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.005 s, sync=0.002 s, total=0.019 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/16E05F8, redo lsn=0/16E05C0
appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T04:31:08.195826Z","level":"INFO","fields":{"message":"Setting up S3 bucket..."},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Setting up S3 bucket...","level":30,"hostname":"4627c41fddcc","pid":1,"time":"2023-12-21T04:31:08.195841135Z","target":"appflowy_cloud::application","line":165,"file":"src/application.rs"}
appflowy-cloud-appflowy_cloud-1  | Error: Failed to initialize application state: Got HTTP 502 with content ''
appflowy-cloud-appflowy_cloud-1 exited with code 1
appflowy-cloud-appflowy_cloud-1  | {"timestamp":"2023-12-21T04:31:10.443915Z","level":"INFO","fields":{"message":"Setting up S3 bucket..."},"target":"appflowy_cloud::application"}
appflowy-cloud-appflowy_cloud-1  | {"v":0,"name":"appflowy_cloud","msg":"Setting up S3 bucket...","level":30,"hostname":"4627c41fddcc","pid":1,"time":"2023-12-21T04:31:10.443930258Z","target":"appflowy_cloud::application","line":165,"file":"src/application.rs"}
appflowy-cloud-appflowy_cloud-1  | Error: Failed to initialize application state: Got HTTP 502 with content ''
appflowy-cloud-appflowy_cloud-1 exited with code 1
speed2exe commented 11 months ago

@Ecss11 docker compose pull does not necessarily pull the latest image. Can you specify the image as 0.0.5 and try again with that?
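
To check which image tag and ID the service is actually running, something like this should work (the container name is taken from your logs above):

# Show the image used by the appflowy_cloud service and its ID.
docker compose images appflowy_cloud
docker inspect --format '{{.Config.Image}} {{.Image}}' appflowy-cloud-appflowy_cloud-1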

Ecss11 commented 11 months ago

Hi @speed2exe,

The image version is set to 0.0.5, but that does not make any difference. Perhaps it's caused by something else?

Screenshot: (screenshot of the same error attached)

speed2exe commented 11 months ago

@Ecss11 I am thinking that it is related to the MinIO issue here: https://github.com/minio/minio-java/issues/958

Ecss11 commented 11 months ago

Thanks @speed2exe,

That seems likely. I'll check the link now, and I hope it resolves this issue.

Ecss11 commented 11 months ago

Hi @speed2exe ,

After some investigation of MinIO, I tried to trace its logs, but unfortunately MinIO doesn't give any useful output.

Below is the output after running mc admin trace ALIAS --insecure -a:

2023-12-21T05:36:43.635 [OS] os.Lstat 127.0.0.1:9000 /data/.minio.sys/format.json 7.5µs
2023-12-21T05:36:43.635 [OS] os.OpenFileR 127.0.0.1:9000 /data 13.1µs
2023-12-21T05:36:43.635 [STORAGE] storage.ListVols 127.0.0.1:9000 /data / 616.576µs
2023-12-21T05:36:43.635 [SCANNER] scanner.ScanCycle 127.0.0.1:9000  724.471µs
2023-12-21T05:36:43.636 [OS] os.OpenFileR 127.0.0.1:9000 /data/.minio.sys/buckets/.bloomcycle.bin/xl.meta 17.699µs
2023-12-21T05:36:43.636 [OS] os.OpenFileR 127.0.0.1:9000 /data/.minio.sys/buckets/.usage.json/xl.meta 20.599µs
2023-12-21T05:36:43.636 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys 7.2µs
2023-12-21T05:36:43.636 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys/tmp 4.6µs
2023-12-21T05:36:43.636 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys 4.999µs
2023-12-21T05:36:43.636 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys/tmp 1.4µs
2023-12-21T05:36:43.636 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys/tmp/aaf6546a-eb0a-434d-a984-2451e5fa8684 80.797µs
2023-12-21T05:36:43.636 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys/tmp/60dc9bf9-8502-4ff2-92f1-da450920f223 86.497µs
2023-12-21T05:36:43.636 [OS] os.OpenFileW 127.0.0.1:9000 /data/.minio.sys/tmp/aaf6546a-eb0a-434d-a984-2451e5fa8684/xl.meta 33.899µs
2023-12-21T05:36:43.636 [OS] os.OpenFileW 127.0.0.1:9000 /data/.minio.sys/tmp/60dc9bf9-8502-4ff2-92f1-da450920f223/xl.meta 21.499µs
2023-12-21T05:36:43.639 [OS] os.Rename2 127.0.0.1:9000 /data/.minio.sys/tmp/aaf6546a-eb0a-434d-a984-2451e5fa8684 -> /data/.minio.sys/buckets/.bloomcycle.bin 30.398µs
2023-12-21T05:36:43.636 [STORAGE] storage.RenameData 127.0.0.1:9000 /data aaf6546a-eb0a-434d-a984-2451e5fa8684 407de11f-dbfc-45c0-99a5-7194ef44c045 .minio.sys buckets/.bloomcycle.bin 2.355407ms
2023-12-21T05:36:43.639 [OS] os.Rename 127.0.0.1:9000 /data/.minio.sys/tmp/aaf6546a-eb0a-434d-a984-2451e5fa8684 -> /data/.minio.sys/tmp/.trash/77ce0036-a7ed-42b0-bbdd-ecb3eafdbcec 20.8µs
2023-12-21T05:36:43.639 [STORAGE] storage.Delete 127.0.0.1:9000 /data .minio.sys/tmp aaf6546a-eb0a-434d-a984-2451e5fa8684 35.898µs
2023-12-21T05:36:43.640 [OS] os.Rename2 127.0.0.1:9000 /data/.minio.sys/tmp/60dc9bf9-8502-4ff2-92f1-da450920f223 -> /data/.minio.sys/buckets/.usage.json 21.199µs
2023-12-21T05:36:43.636 [STORAGE] storage.RenameData 127.0.0.1:9000 /data 60dc9bf9-8502-4ff2-92f1-da450920f223 5a16d90b-2707-4628-acbe-4b726cc44c4f .minio.sys buckets/.usage.json 3.821448ms
2023-12-21T05:36:43.640 [OS] os.Rename 127.0.0.1:9000 /data/.minio.sys/tmp/60dc9bf9-8502-4ff2-92f1-da450920f223 -> /data/.minio.sys/tmp/.trash/aaca07f4-8999-46c5-a0f5-5363d7f1642c 24.999µs
2023-12-21T05:36:43.640 [STORAGE] storage.Delete 127.0.0.1:9000 /data .minio.sys/tmp 60dc9bf9-8502-4ff2-92f1-da450920f223 47.498µs
2023-12-21T05:36:43.775 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys 17.599µs
2023-12-21T05:36:43.776 [OS] os.Mkdir 127.0.0.1:9000 /data/.minio.sys/tmp 5.899µs
2023-12-21T05:36:43.776 [OS] os.OpenFileR 127.0.0.1:9000 /data/.minio.sys/tmp/0ff6d73f-89a0-43c1-8b6d-31d0e6a66f5b 157.394µs
2023-12-21T05:36:43.776 [OS] os.Fdatasync 127.0.0.1:9000 /data/.minio.sys/tmp/0ff6d73f-89a0-43c1-8b6d-31d0e6a66f5b 2.854287ms
2023-12-21T05:36:43.779 [OS] os.OpenFileR 127.0.0.1:9000 /data/.minio.sys/tmp/0ff6d73f-89a0-43c1-8b6d-31d0e6a66f5b 20.199µs
2023-12-21T05:36:43.886 [STORAGE] storage.DiskInfo 127.0.0.1:9000 /data 58.298µs
2023-12-21T05:36:43.905 [STORAGE] storage.DiskInfo 127.0.0.1:9000 /data 2.5µs
speed2exe commented 11 months ago

This is strange, because I can't reproduce it locally or in the dev environment. Notably, the mc admin trace output above only shows MinIO's internal scanner and storage operations, with no incoming S3 API requests, which might mean the request from appflowy_cloud never reaches MinIO at all. Some experimentation might help you find the root cause.
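
For example, a rough set of checks (the network name appflowy-cloud_default follows Compose's default project naming and may differ on your machine; MinIO's liveness endpoint is /minio/health/live):

# 1. Check that MinIO answers its liveness endpoint from the host.
curl -i http://localhost:9000/minio/health/live

# 2. Check that MinIO is reachable from inside the compose network,
#    which is how appflowy_cloud resolves http://minio:9000.
docker run --rm --network appflowy-cloud_default curlimages/curl -i http://minio:9000/minio/health/live

# 3. Check whether any proxy variables leak into the appflowy_cloud
#    container; a proxy answering 502 for internal hostnames would
#    explain an error like this.
docker compose run --rm appflowy_cloud env | grep -i proxy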

speed2exe commented 11 months ago

@Ecss11 You can also try building appflowy_cloud instead of pulling the image; just comment out the image section. I have added more debugging information on the main branch.

  appflowy_cloud:
  ...
    build:
      context: .
      dockerfile: Dockerfile
    # image: appflowyinc/appflowy_cloud:${BACKEND_VERSION:-latest}
   ...
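
After commenting out the image line, the local build can be forced with something like:

# Build the image from source and start the service.
docker compose build appflowy_cloud
docker compose up -d appflowy_cloud
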
Ecss11 commented 11 months ago

Hi @speed2exe ,

Thanks for your assistance; the debug information is the same as before. I also plan on requesting another VM to try the setup there, since you mentioned it works in your local environment, while keeping track of this issue.

Ecss11 commented 11 months ago

Hey @speed2exe ,

Is it possible that it's caused by a geo-location factor, since the server is hosted in mainland China? The same error occurred on the new server that I requested. However, there is no problem on our US instance.

speed2exe commented 11 months ago

@Ecss11 Thanks for your efforts. MinIO should be self-contained and should not depend on geo-location as far as I know. In AppFlowy-Cloud, we use the APIs provided by the s3-rust crate. It could be a compatibility issue: the last time I used it, it required me to set a region even though I was using MinIO.

Ecss11 commented 11 months ago

Hi @speed2exe ,

Thanks for your confirmation. It seems the issue might be related to Amazon access in China; for example, "S3 access should allow alternate URLs (e.g. China)" shows there are some DNS issues in the region. I guess this issue is resolved and can be closed for now.
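
For anyone wanting to check the same hypothesis, a rough test from the affected server might look like this (the endpoint is just an example):

# Does the AWS S3 endpoint resolve and respond from this network?
getent hosts s3.us-east-1.amazonaws.com
curl -sS -o /dev/null -w '%{http_code}\n' https://s3.us-east-1.amazonaws.com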

reecho-tsai commented 7 months ago

Hi @Ecss11 ,

I would like to ask how you finally solved this problem. I did not find any useful information in the link you provided.

Thanks ;)

Ecss11 commented 7 months ago

Hi @reecho-tsai ,

We switched to a machine in the US in the end. The issue seems related to MinIO and the Amazon S3 endpoint in a few regions. However, we still have an old copy of the configuration on the CN machine, and you can try it out yourself.

docker-compose.yaml

version: '3'
services:
  nginx:
    restart: on-failure
    image: nginx
    ports:
      - 3100:80
      - 3101:443
    depends_on: # If you did not deploy any of the services below, comment those out
      - minio
      - appflowy_cloud
      - gotrue
      - admin_frontend
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/ssl/certificate.crt:/etc/nginx/ssl/certificate.crt
      - ./nginx/ssl/private_key.key:/etc/nginx/ssl/private_key.key

  # You do not need this if you have configured to use your own s3 file storage
  minio:
    restart: on-failure
    image: minio/minio
    ports:
      - 9000:9000
      - 9001:9001
    environment:
      - MINIO_BROWSER_REDIRECT_URL=http://localhost/minio
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data

  postgres:
    restart: on-failure
    image: postgres
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_DB=${POSTGRES_DB:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - POSTGRES_HOST=${POSTGRES_HOST:-postgres}
    ports:
      - 5433:5432
    volumes:
      - ./migrations/before:/docker-entrypoint-initdb.d
      - postgres_data:/var/lib/postgresql/data

  redis:
    restart: on-failure
    image: redis
    ports:
      - 6380:6379

  gotrue:
    restart: on-failure
    build:
      context: .
      dockerfile: docker/gotrue.Dockerfile
    depends_on:
      - postgres
    environment:
      # Gotrue config: https://github.com/supabase/gotrue/blob/master/example.env
      - GOTRUE_SITE_URL=appflowy-flutter://                           # redirected to AppFlowy application
      - URI_ALLOW_LIST=*                                              # adjust restrict if necessary
      - GOTRUE_JWT_SECRET=${GOTRUE_JWT_SECRET}                        # authentication secret
      - GOTRUE_DB_DRIVER=postgres
      - API_EXTERNAL_URL=${API_EXTERNAL_URL}
      - DATABASE_URL=postgres://supabase_auth_admin:root@postgres:5432/postgres
      - PORT=9999
      - GOTRUE_SMTP_HOST=${GOTRUE_SMTP_HOST}                          # e.g. smtp.gmail.com
      - GOTRUE_SMTP_PORT=${GOTRUE_SMTP_PORT}                          # e.g. 465
      - GOTRUE_SMTP_USER=${GOTRUE_SMTP_USER}                          # email sender, e.g. noreply@appflowy.io
      - GOTRUE_SMTP_PASS=${GOTRUE_SMTP_PASS}                          # email password
      - GOTRUE_MAILER_URLPATHS_CONFIRMATION=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_INVITE=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_RECOVERY=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE=/gotrue/verify
      - GOTRUE_SMTP_ADMIN_EMAIL=${GOTRUE_SMTP_ADMIN_EMAIL}            # email with admin privileges e.g. internal@appflowy.io
      - GOTRUE_SMTP_MAX_FREQUENCY=${GOTRUE_SMTP_MAX_FREQUENCY:-1ns}   # set to 1ns for running tests
      - GOTRUE_MAILER_AUTOCONFIRM=${GOTRUE_MAILER_AUTOCONFIRM:-false} # change this to true to skip email confirmation
      # Google OAuth config
      - GOTRUE_EXTERNAL_GOOGLE_ENABLED=${GOTRUE_EXTERNAL_GOOGLE_ENABLED}
      - GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID=${GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID}
      - GOTRUE_EXTERNAL_GOOGLE_SECRET=${GOTRUE_EXTERNAL_GOOGLE_SECRET}
      - GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI=${GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI}
      # GITHUB OAuth config
      - GOTRUE_EXTERNAL_GITHUB_ENABLED=${GOTRUE_EXTERNAL_GITHUB_ENABLED}
      - GOTRUE_EXTERNAL_GITHUB_CLIENT_ID=${GOTRUE_EXTERNAL_GITHUB_CLIENT_ID}
      - GOTRUE_EXTERNAL_GITHUB_SECRET=${GOTRUE_EXTERNAL_GITHUB_SECRET}
      - GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI=${GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI}
      # Discord OAuth config
      - GOTRUE_EXTERNAL_DISCORD_ENABLED=${GOTRUE_EXTERNAL_DISCORD_ENABLED}
      - GOTRUE_EXTERNAL_DISCORD_CLIENT_ID=${GOTRUE_EXTERNAL_DISCORD_CLIENT_ID}
      - GOTRUE_EXTERNAL_DISCORD_SECRET=${GOTRUE_EXTERNAL_DISCORD_SECRET}
      - GOTRUE_EXTERNAL_DISCORD_REDIRECT_URI=${GOTRUE_EXTERNAL_DISCORD_REDIRECT_URI}
      # Prometheus Metrics
      - GOTRUE_METRICS_ENABLED=true
      - GOTRUE_METRICS_EXPORTER=prometheus
    ports:
      - 9998:9999

  appflowy_cloud:
    restart: on-failure
    environment:
      - RUST_LOG=${RUST_LOG:-debug}
      - APPFLOWY_ENVIRONMENT=production
      - APPFLOWY_DATABASE_URL=postgres://postgres:password@postgres:5432/postgres
      - APPFLOWY_REDIS_URI=redis://redis:6379
      - APPFLOWY_GOTRUE_JWT_SECRET=${GOTRUE_JWT_SECRET}
      - APPFLOWY_GOTRUE_BASE_URL=http://gotrue:9999
      - APPFLOWY_GOTRUE_EXT_URL=${API_EXTERNAL_URL}
      - APPFLOWY_GOTRUE_ADMIN_EMAIL=${GOTRUE_ADMIN_EMAIL}
      - APPFLOWY_GOTRUE_ADMIN_PASSWORD=${GOTRUE_ADMIN_PASSWORD}
      - APPFLOWY_S3_USE_MINIO=true
      - APPFLOWY_S3_MINIO_URL=http://minio:9000
      - APPFLOWY_S3_ACCESS_KEY=minioadmin
      - APPFLOWY_S3_SECRET_KEY=minioadmin
      - APPFLOWY_S3_BUCKET=appflowy
      - APPFLOWY_S3_REGION=us-east-1
        # - APPFLOWY_S3_USE_MINIO=${USE_MINIO}
        # - APPFLOWY_S3_MINIO_URL=${MINIO_URL:-http://minio:9000}
        # - APPFLOWY_S3_ACCESS_KEY=${AWS_ACCESS_KEY_ID}
        # - APPFLOWY_S3_SECRET_KEY=${AWS_SECRET_ACCESS_KEY}
        # - APPFLOWY_S3_BUCKET=${AWS_S3_BUCKET}
        # - APPFLOWY_S3_REGION=${AWS_REGION}
    build:
      context: .
      dockerfile: Dockerfile
        # image: appflowyinc/appflowy_cloud:latest
    depends_on:
      - redis
      - postgres
      - gotrue
    ports:
      - 8001:8000

  # Optional
  admin_frontend:
    restart: on-failure
    build:
      context: .
      dockerfile: ./admin_frontend/Dockerfile
    image: appflowyinc/admin_frontend:${BACKEND_VERSION:-latest}
    depends_on:
      - gotrue
    ports:
      - 3000:3000

  # Optional
  pgadmin:
    restart: on-failure
    image: dpage/pgadmin4
    depends_on:
      - postgres
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    ports:
      - 5400:80
    volumes:
      - ./docker/pgadmin/servers.json:/pgadmin4/servers.json

volumes:
  postgres_data:
  minio_data:

Dockerfile

FROM lukemathwalker/cargo-chef:latest-rust-1.74.0 as chef

WORKDIR /app
RUN apt update && apt install lld clang -y

FROM chef as planner
COPY . .
# Compute a lock-like file for our project
RUN cargo chef prepare --recipe-path recipe.json

FROM chef as builder
COPY --from=planner /app/recipe.json recipe.json
# Build our project dependencies
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
ENV SQLX_OFFLINE true

# Build the project
RUN cargo build --release --bin appflowy_cloud

FROM debian:bookworm-slim AS runtime
WORKDIR /app
RUN apt-get update -y \
    && apt-get install -y --no-install-recommends openssl \
    # Clean up
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/target/release/appflowy_cloud /usr/local/bin/appflowy_cloud
COPY --from=builder /app/configuration configuration
ENV APP_ENVIRONMENT production
ENV RUST_BACKTRACE 1
CMD ["appflowy_cloud"]

.env

# This file is a template for docker compose deployment
# Copy this file to .env and change the values as needed

# authentication key, change this and keep the key safe and secret
# self defined key, you can use any string
GOTRUE_JWT_SECRET=************

# User sign up will automatically be confirmed if this is set to true.
# If you have OAuth2 set up or smtp configured, you can set this to false
# to enforce email confirmation or OAuth2 login instead.
# If you set this to false, you need to either set up SMTP
GOTRUE_MAILER_AUTOCONFIRM=false

# if you enable mail confirmation, you need to set the SMTP configuration below
GOTRUE_SMTP_HOST=xxx.xxx.com
GOTRUE_SMTP_PORT=465
GOTRUE_SMTP_USER=xxx@xxx.xxx.com
GOTRUE_SMTP_PASS=************
GOTRUE_SMTP_ADMIN_EMAIL=admin@xxx.com

# gotrue admin
GOTRUE_ADMIN_EMAIL=admin@xxx.com
GOTRUE_ADMIN_PASSWORD=************

# clicking on email verification link will redirect to this host
# change this to your own domain where you host the docker-compose or gotrue
API_EXTERNAL_URL=https://xxx.xxx.com

# Google OAuth2
GOTRUE_EXTERNAL_GOOGLE_ENABLED=false
GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID=
GOTRUE_EXTERNAL_GOOGLE_SECRET=
GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI=http://your-host/callback
# GitHub OAuth2
GOTRUE_EXTERNAL_GITHUB_ENABLED=false
GOTRUE_EXTERNAL_GITHUB_CLIENT_ID=
GOTRUE_EXTERNAL_GITHUB_SECRET=
GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI=http://your-host/callback
# Discord OAuth2
GOTRUE_EXTERNAL_DISCORD_ENABLED=false
GOTRUE_EXTERNAL_DISCORD_CLIENT_ID=
GOTRUE_EXTERNAL_DISCORD_SECRET=
GOTRUE_EXTERNAL_DISCORD_REDIRECT_URI=http://your-host/callback
# File Storage
USE_MINIO=true
# MINIO_URL=http://localhost:9000 # change this if you are using a different address for minio
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AWS_S3_BUCKET=appflowy
AWS_REGION=us-east-1

RUST_LOG=debug

# PgAdmin
PGADMIN_DEFAULT_EMAIL=admin@example.com
PGADMIN_DEFAULT_PASSWORD=password

# Portainer (username: admin)
PORTAINER_PASSWORD=password1234

# Grafana Dashboard
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=password

# Cloudflare tunnel token
CLOUDFLARE_TUNNEL_TOKEN=