basecamp / kamal

Deploy web apps anywhere.
https://kamal-deploy.org
MIT License

Run Kamal setup and get a connection error on the database (Postgres) #768

Closed: kenzo2013 closed this issue 2 months ago

kenzo2013 commented 2 months ago

Hi guys,

I have a problem connecting to my database after running kamal setup.

This is my deploy.yml:

# Name of your application. Used to uniquely configure containers.
service: soe

# Name of the container image.
image: noumedem/soe

# Deploy to these servers.
servers:
  web:
    hosts:
      - 51.68.124.156
    labels:
      traefik.http.routers.soe.rule: Host(`www.jeteste.site`)
      traefik.http.routers.soe.entrypoints: websecure
      traefik.http.routers.soe_secure.rule: Host(`www.jeteste.site`)
      traefik.http.routers.soe_secure.tls: true
      traefik.http.routers.soe_secure.tls.certresolver: letsencrypt
    options:
      network: "private"

# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: noumedem

  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD

# Inject ENV variables into containers (secrets come from .env).
# Remember to run `kamal env push` after making changes!
env:
  clear:
    RAILS_ENV: production
    RACK_ENV: production
    RUBY_YJIT_ENABLE: 1
    RAILS_LOG_TO_STDOUT: true
    RAILS_SERVE_STATIC_FILES: true
  secret:
    - RAILS_MASTER_KEY
    - DB_HOST
    - POSTGRES_USER
    - POSTGRES_PASSWORD

# Use a different ssh user than root
ssh:
  user: ubuntu

# Configure builder setup.
builder:
  args:
    RUBY_VERSION: 3.3.0
  # secrets:
  #   - GITHUB_TOKEN
  remote:
    arch: amd64
    #host: ssh://ubuntu@51.68.124.156

# Use accessory services (secrets come from .env).
accessories:
  db:
    image: postgres:16
    host: 51.68.124.156
    #port: 5432
    env:
      clear:
        POSTGRES_USER: "soe"
        POSTGRES_DB: "soe_production" # The database will be created automatically on first boot.
      secret:
        - POSTGRES_PASSWORD
        - POSTGRES_USER
    files:
      - db/production.sql:/docker-entrypoint-initdb.d/setup.sql
    directories:
      - data:/var/lib/postgresql/data
    options:
      network: "private"

# Configure custom arguments for Traefik
traefik:
  options:
    publish:
      - "443:443"
    volume:
      - "/letsencrypt/acme.json:/letsencrypt/acme.json" # To save the configuration file.
    network: "private"
  args:
    entryPoints.web.address: ":80"
    entryPoints.websecure.address: ":443"
    entryPoints.web.http.redirections.entryPoint.to: websecure # We want to force https
    entryPoints.web.http.redirections.entryPoint.scheme: https
    entryPoints.web.http.redirections.entrypoint.permanent: true
    certificatesResolvers.letsencrypt.acme.email: "kkenzo2007@yahoo.fr"
    certificatesResolvers.letsencrypt.acme.storage: "/letsencrypt/acme.json" # Must match the path in `volume`
    certificatesResolvers.letsencrypt.acme.httpchallenge: true
    certificatesResolvers.letsencrypt.acme.httpchallenge.entrypoint: web # Must match the role in `servers`

healthcheck:
  interval: 5s
# Configure a custom healthcheck (default is /up on port 3000)
# healthcheck:
#   path: /healthz
#   port: 4000

# Bridge fingerprinted assets, like JS and CSS, between versions to avoid
# hitting 404 on in-flight requests. Combines all files from new and old
# version inside the asset_path.
#
# If your app is using the Sprockets gem, ensure it sets `config.assets.manifest`.
# See https://github.com/basecamp/kamal/issues/626 for details
#
asset_path: /rails/public/assets

# Configure rolling deploys by setting a wait time between batches of restarts.
# boot:
#   limit: 10 # Can also specify as a percentage of total hosts, such as "25%"
#   wait: 2

# Configure the role used to determine the primary_host. This host takes
# deploy locks, runs health checks during the deploy, and follow logs, etc.
#
# Caution: there's no support for role renaming yet, so be careful to cleanup
#          the previous role on the deployed hosts.
# primary_role: web

# Controls if we abort when see a role with no hosts. Disabling this may be
# useful for more complex deploy configurations.
#
# allow_empty_roles: false

This is the error:

Ensure app can pass healthcheck...
  INFO [68123f89] Running docker run --detach --name healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad --publish 3999:3000 --label service=healthcheck-soe -e KAMAL_CONTAINER_NAME="healthcheck-soe" --env-file .kamal/env/roles/soe-web.env --health-cmd "curl -f http://localhost:3000/up || exit 1" --health-interval "5s" --network "private" noumedem/soe:1f90a98d0314bf57ce89016e8d1fd8439f4919ad on 51.68.124.156
  INFO [68123f89] Finished in 0.950 seconds with exit status 0 (successful).
  INFO [9c7f31ec] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [9c7f31ec] Finished in 0.187 seconds with exit status 0 (successful).
  INFO container not ready (starting), retrying in 1s (attempt 1/7)...
  INFO [f6c13cb9] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [f6c13cb9] Finished in 0.135 seconds with exit status 0 (successful).
  INFO container not ready (starting), retrying in 2s (attempt 2/7)...
  INFO [e0d45284] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [e0d45284] Finished in 0.142 seconds with exit status 0 (successful).
  INFO container not ready (starting), retrying in 3s (attempt 3/7)...
  INFO [adb682b4] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [adb682b4] Finished in 0.095 seconds with exit status 0 (successful).
  INFO container not ready (unhealthy), retrying in 4s (attempt 4/7)...
  INFO [3b9e95ee] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [3b9e95ee] Finished in 0.081 seconds with exit status 0 (successful).
  INFO container not ready (unhealthy), retrying in 5s (attempt 5/7)...
  INFO [6fd94b5c] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [6fd94b5c] Finished in 0.135 seconds with exit status 0 (successful).
  INFO container not ready (unhealthy), retrying in 6s (attempt 6/7)...
  INFO [2b1bc2c5] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [2b1bc2c5] Finished in 0.202 seconds with exit status 0 (successful).
  INFO container not ready (unhealthy), retrying in 7s (attempt 7/7)...
  INFO [a28d5c52] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' on 51.68.124.156
  INFO [a28d5c52] Finished in 0.089 seconds with exit status 0 (successful).
  INFO [da84a6a5] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker logs --tail 50 2>&1 on 51.68.124.156
  INFO [da84a6a5] Finished in 0.245 seconds with exit status 0 (successful).
 ERROR bin/rails aborted!
ActiveRecord::DatabaseConnectionError: There is an issue connecting to your database with your username/password, username: soe. (ActiveRecord::DatabaseConnectionError)

Please check your database configuration to ensure the username/password are valid.

Caused by:
PG::ConnectionBad: connection to server at "51.68.124.156", port 5432 failed: FATAL:  password authentication failed for user "soe" (PG::ConnectionBad)

Tasks: TOP => db:prepare
(See full trace by running task with --trace)
 ERROR {
  "Status": "unhealthy",
  "FailingStreak": 0,
  "Log": [

  ]
}
  INFO [d9813748] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker stop on 51.68.124.156
  INFO [d9813748] Finished in 0.128 seconds with exit status 0 (successful).
  INFO [a13be145] Running docker container ls --all --filter name=^healthcheck-soe-1f90a98d0314bf57ce89016e8d1fd8439f4919ad$ --quiet | xargs docker container rm on 51.68.124.156
  INFO [a13be145] Finished in 0.140 seconds with exit status 0 (successful).
Releasing the deploy lock...
  Finished all in 118.7 seconds
  ERROR (Kamal::Cli::Healthcheck::Poller::HealthcheckError): Exception while executing on host 51.68.124.156: container not ready (unhealthy)

This is my database.yml:

production:
  <<: *default
  database: soe_production
  username: <%= ENV["POSTGRES_USER"] %>
  password: <%= ENV["POSTGRES_PASSWORD"] %>
  host: <%= ENV["DB_HOST"] %>

I added all of the variables to .env and ran kamal env push.
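
For reference, the relevant .env entries look roughly like this (real values replaced with placeholders):

KAMAL_REGISTRY_PASSWORD=<registry-access-token>
RAILS_MASTER_KEY=<contents-of-config/master.key>
DB_HOST=51.68.124.156
POSTGRES_USER=soe
POSTGRES_PASSWORD=<database-password>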

What is going wrong? Please help

abratashov commented 2 months ago

As far as I can see, there is an invalid user/password on your host. You can check it after logging in over SSH:

docker ps -a
docker logs <pg-container-hash>
docker exec -it <pg-container-hash> psql -U <postgres-user> -d postgres

and fix the role.
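
If the role exists but its password is stale (for example, the data volume was initialised before the current POSTGRES_PASSWORD was set), something along these lines should reset it so it matches .env; the role, database, and password below are placeholders, so adjust them to your setup:

# Local socket connections inside the stock postgres image don't prompt for a password:
docker exec -it <pg-container-hash> psql -U soe -d postgres -c "ALTER ROLE soe WITH LOGIN PASSWORD '<password-from-.env>';"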

I've found it easier to add the PG user via production.sql:

CREATE ROLE deployer WITH SUPERUSER LOGIN PASSWORD 'k978_mJurHk7';
CREATE DATABASE kamal_blog_production;
GRANT CREATE ON SCHEMA public TO deployer;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO deployer;
GRANT ALL PRIVILEGES ON DATABASE kamal_blog_production to deployer;

In the production env you can then change the password manually, or load a DB dump manually in the same way over SSH/Docker.
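
One thing to keep in mind: the postgres image only runs scripts from /docker-entrypoint-initdb.d on the very first boot against an empty data directory. If the data volume already exists, you can apply the same SQL by hand; the container path below matches the files: mount in the deploy.yml above, and the user/container names are placeholders:

# Run the mounted init script against the existing database:
docker exec -it <pg-container-hash> psql -U <postgres-user> -d postgres -f /docker-entrypoint-initdb.d/setup.sql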

Also, the full process of deploying a Rails project with Kamal is documented here: https://github.com/abratashov/kamal-blog/blob/main/doc/install_prod.md