patsevanton opened this issue 3 years ago
same issue, did you reach any solution?
That's because you're running pg_isready without passing -U <username>. Can you verify?
I put
…
test: [ "CMD-SHELL", "pg_isready", "-U", "${POSTGRES_USER}" ]
…
and get the same problem.
I replaced ${POSTGRES_USER} with the actual username, and still get the spam in the logs.
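One thing worth checking (an editor's sketch, not from the original comments): with CMD-SHELL the whole command has to be a single shell string, not separate array elements, and the dollar sign is usually doubled so Compose doesn't interpolate the variable on the host. A healthcheck fragment along those lines might look like:

```yaml
# Hypothetical fragment: CMD-SHELL takes one shell string.
# $$ keeps Compose from expanding the variable on the host, so the
# container's own POSTGRES_USER is read when the check runs.
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER}"]
  interval: 10s
  timeout: 5s
  retries: 5
```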
Same thing here. Using test: ["CMD-SHELL", "pg_isready", "-U", "myUserName"]
it keeps printing:
breakoutdb | 2022-01-14 17:27:46.497 UTC [118] FATAL: role "root" does not exist
breakoutdb | 2022-01-14 17:27:51.861 UTC [132] FATAL: role "root" does not exist
breakoutdb | 2022-01-14 17:27:57.336 UTC [147] FATAL: role "root" does not exist
breakoutdb | 2022-01-14 17:28:02.722 UTC [163] FATAL: role "root" does not exist
breakoutdb | 2022-01-14 17:28:08.117 UTC [178] FATAL: role "root" does not exist
breakoutdb | 2022-01-14 17:28:13.475 UTC [193] FATAL: role "root" does not exist
We need to pass the user and database arguments; for example, here is my command:
test: ["CMD", "pg_isready", "-U", "user", "-d", "kong_db"]
it works for me
Even with the user and database arguments, role "root" does not exist
continues here. :(
But thanks for the fast reply.
After struggling for a while, I found that this test command worked for me.
test: [ "CMD", "pg_isready", "-q", "-d", "{YOUR_DATABASE_NAME}", "-U", "{YOUR_DATABASE_USERNAME}" ]
for me, I had to add the literal username and db name for it to work
test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres" ]
got it from here.
I checked:
test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres" ]
and got the error:
kong-postgres | FATAL: role "postgres" does not exist
Then I checked:
test: [ "CMD", "pg_isready", "-q", "-d", "kong", "-U", "kong" ]
No error this time, but I hit another one:
docker-compose up 0.2s
[+] Running 4/3
- Network docker-compose-healthcheck_default Created 0.0s
- Container kong-postgres Created 2.8s
- Container kong-migration Created 0.1s
- Container kong Created 0.1s
Attaching to kong, kong-migration, kong-postgres
kong-postgres | ********************************************************************************
kong-postgres | WARNING: POSTGRES_HOST_AUTH_METHOD has been set to "trust". This will allow
kong-postgres | anyone with access to the Postgres port to access your database without
kong-postgres | a password, even if POSTGRES_PASSWORD is set. See PostgreSQL
kong-postgres | documentation about "trust":
kong-postgres | https://www.postgresql.org/docs/current/auth-trust.html
kong-postgres | In Docker's default configuration, this is effectively any other
kong-postgres | container on the same system.
kong-postgres |
kong-postgres | It is not recommended to use POSTGRES_HOST_AUTH_METHOD=trust. Replace
kong-postgres | it with "-e POSTGRES_PASSWORD=password" instead to set a password in
kong-postgres | "docker run".
kong-postgres | ********************************************************************************
kong-postgres | The files belonging to this database system will be owned by user "postgres".
kong-postgres | This user must also own the server process.
kong-postgres |
kong-postgres | The database cluster will be initialized with locale "en_US.utf8".
kong-postgres | The default database encoding has accordingly been set to "UTF8".
kong-postgres | The default text search configuration will be set to "english".
kong-postgres |
kong-postgres | Data page checksums are disabled.
kong-postgres |
kong-postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
kong-postgres | creating subdirectories ... ok
kong-postgres | selecting default max_connections ... 100
kong-postgres | selecting default shared_buffers ... 128MB
kong-postgres | selecting default timezone ... Etc/UTC
kong-postgres | selecting dynamic shared memory implementation ... posix
kong-postgres | creating configuration files ... ok
kong-postgres | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
kong-postgres | initializing pg_authid ... ok
kong-postgres | setting password ... ok
kong-postgres | initializing dependencies ... ok
kong-postgres | creating system views ... ok
kong-postgres | loading system objects' descriptions ... ok
kong-postgres | creating collations ... ok
kong-postgres | creating conversions ... ok
kong-postgres | creating dictionaries ... ok
kong-postgres | setting privileges on built-in objects ... ok
kong-postgres | creating information schema ... ok
kong-postgres | loading PL/pgSQL server-side language ... ok
kong-postgres | vacuuming database template1 ... ok
kong-postgres | copying template1 to template0 ... ok
kong-postgres | copying template1 to postgres ... ok
kong-postgres | syncing data to disk ... ok
kong-postgres |
kong-postgres | Success. You can now start the database server using:
kong-postgres |
kong-postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
kong-postgres |
kong-postgres |
kong-postgres | WARNING: enabling "trust" authentication for local connections
kong-postgres | You can change this by editing pg_hba.conf or using the option -A, or
kong-postgres | --auth-local and --auth-host, the next time you run initdb.
kong-postgres | waiting for server to start....LOG: database system was shut down at 2022-02-15 04:04:37 UTC
kong-postgres | LOG: MultiXact member wraparound protections are now enabled
kong-postgres | LOG: autovacuum launcher started
kong-postgres | LOG: database system is ready to accept connections
kong-postgres | done
kong-postgres | server started
kong-postgres | CREATE DATABASE
kong-postgres |
kong-postgres |
kong-postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
kong-postgres |
kong-postgres | waiting for server to shut down....LOG: received fast shutdown request
kong-postgres | LOG: aborting any active transactions
kong-postgres | LOG: autovacuum launcher shutting down
kong-postgres | LOG: shutting down
kong-postgres | LOG: database system is shut down
kong-postgres | done
kong-postgres | server stopped
kong-postgres |
kong-postgres | PostgreSQL init process complete; ready for start up.
kong-postgres |
kong-postgres | LOG: database system was shut down at 2022-02-15 04:04:38 UTC
kong-postgres | LOG: MultiXact member wraparound protections are now enabled
kong-postgres | LOG: autovacuum launcher started
kong-postgres | LOG: database system is ready to accept connections
kong-migration | Bootstrapping database...
kong-migration | migrating core on database 'kong'...
kong-migration | core migrated up to: 000_base (executed)
kong-migration | core migrated up to: 003_100_to_110 (executed)
kong-migration | core migrated up to: 004_110_to_120 (executed)
kong-migration | core migrated up to: 005_120_to_130 (executed)
kong-migration | core migrated up to: 006_130_to_140 (executed)
kong-migration | core migrated up to: 007_140_to_150 (executed)
kong-migration | core migrated up to: 008_150_to_200 (executed)
kong-migration | core migrated up to: 009_200_to_210 (executed)
kong-migration | core migrated up to: 010_210_to_211 (executed)
kong-migration | core migrated up to: 011_212_to_213 (executed)
kong-migration | core migrated up to: 012_213_to_220 (executed)
kong-migration | core migrated up to: 013_220_to_230 (executed)
kong-migration | core migrated up to: 014_230_to_270 (executed)
kong-migration | migrating acl on database 'kong'...
kong-migration | acl migrated up to: 000_base_acl (executed)
kong-migration | acl migrated up to: 002_130_to_140 (executed)
kong-migration | acl migrated up to: 003_200_to_210 (executed)
kong-migration | acl migrated up to: 004_212_to_213 (executed)
kong-migration | migrating acme on database 'kong'...
kong-migration | acme migrated up to: 000_base_acme (executed)
kong-migration | migrating basic-auth on database 'kong'...
kong | 2022/02/15 04:04:50 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong | 2022/02/15 04:04:50 [error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: New migrations available; run 'kong migrations up' to proceed
kong | stack traceback:
kong | [C]: in function 'error'
kong | /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: in function 'check_state'
kong | /usr/local/share/lua/5.1/kong/init.lua:506: in function 'init'
kong | init_by_lua:3: in main chunk
kong | nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: New migrations available; run 'kong migrations up' to proceed
kong | stack traceback:
kong | [C]: in function 'error'
kong | /usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:20: in function 'check_state'
kong | /usr/local/share/lua/5.1/kong/init.lua:506: in function 'init'
kong | init_by_lua:3: in main chunk
kong-migration | basic-auth migrated up to: 000_base_basic_auth (executed)
kong-migration | basic-auth migrated up to: 002_130_to_140 (executed)
kong-migration | basic-auth migrated up to: 003_200_to_210 (executed)
kong-migration | migrating bot-detection on database 'kong'...
kong-migration | bot-detection migrated up to: 001_200_to_210 (executed)
kong-migration | migrating hmac-auth on database 'kong'...
kong-migration | hmac-auth migrated up to: 000_base_hmac_auth (executed)
kong-migration | hmac-auth migrated up to: 002_130_to_140 (executed)
kong-migration | hmac-auth migrated up to: 003_200_to_210 (executed)
kong-migration | migrating ip-restriction on database 'kong'...
kong-migration | ip-restriction migrated up to: 001_200_to_210 (executed)
kong-migration | migrating jwt on database 'kong'...
kong-migration | jwt migrated up to: 000_base_jwt (executed)
kong-migration | jwt migrated up to: 002_130_to_140 (executed)
kong-migration | jwt migrated up to: 003_200_to_210 (executed)
kong-migration | migrating key-auth on database 'kong'...
kong-migration | key-auth migrated up to: 000_base_key_auth (executed)
kong-migration | key-auth migrated up to: 002_130_to_140 (executed)
kong-migration | key-auth migrated up to: 003_200_to_210 (executed)
kong-migration | migrating oauth2 on database 'kong'...
kong-migration | oauth2 migrated up to: 000_base_oauth2 (executed)
kong-migration | oauth2 migrated up to: 003_130_to_140 (executed)
kong-migration | oauth2 migrated up to: 004_200_to_210 (executed)
kong-migration | oauth2 migrated up to: 005_210_to_211 (executed)
kong-migration | migrating rate-limiting on database 'kong'...
kong-migration | rate-limiting migrated up to: 000_base_rate_limiting (executed)
kong-migration | rate-limiting migrated up to: 003_10_to_112 (executed)
kong-migration | rate-limiting migrated up to: 004_200_to_210 (executed)
kong-migration | migrating response-ratelimiting on database 'kong'...
kong-migration | response-ratelimiting migrated up to: 000_base_response_rate_limiting (executed)
kong-migration | migrating session on database 'kong'...
kong-migration | session migrated up to: 000_base_session (executed)
kong-migration | session migrated up to: 001_add_ttl_index (executed)
kong-migration | 42 migrations processed
kong-migration | 42 executed
kong-migration | Database is up-to-date
kong exited with code 1
kong-migration exited with code 0
kong | 2022/02/15 04:04:52 [warn] 1#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/kong/nginx.conf:6
kong | 2022/02/15 04:04:52 [notice] 1#0: using the "epoll" event method
kong | 2022/02/15 04:04:52 [notice] 1#0: openresty/1.19.9.1
kong | 2022/02/15 04:04:52 [notice] 1#0: built by gcc 10.3.1 20210424 (Alpine 10.3.1_git20210424)
kong | 2022/02/15 04:04:52 [notice] 1#0: OS: Linux 5.10.60.1-microsoft-standard-WSL2
kong | 2022/02/15 04:04:52 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker processes
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1098
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1099
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1100
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1101
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1102
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1103
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1104
kong | 2022/02/15 04:04:52 [notice] 1#0: start worker process 1105
kong | 2022/02/15 04:04:52 [notice] 1099#0: *3 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1101#0: *4 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1098#0: *1 [lua] warmup.lua:92: single_dao(): Preloading 'services' into the core_cache..., context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1103#0: *5 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1105#0: *8 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1100#0: *2 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1098#0: *1 [lua] warmup.lua:129: single_dao(): finished preloading 'services' into the core_cache (in 0ms), context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1102#0: *7 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*
kong | 2022/02/15 04:04:52 [notice] 1104#0: *6 [kong] init.lua:311 only worker #0 can manage, context: init_worker_by_lua*
kong | 2022/02/15 04:04:57 [crit] 1105#0: *15 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong | 2022/02/15 04:04:57 [crit] 1103#0: *12 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong | 2022/02/15 04:04:57 [crit] 1100#0: *14 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong | 2022/02/15 04:04:57 [crit] 1102#0: *16 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong | 2022/02/15 04:04:57 [crit] 1101#0: *13 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong | 2022/02/15 04:04:57 [crit] 1098#0: *17 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
kong | 2022/02/15 04:04:57 [crit] 1104#0: *18 [lua] balancers.lua:240: create_balancers(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
Error FATAL: role "root" does not exist fixed by https://github.com/peter-evans/docker-compose-healthcheck/pull/17
My docker-compose file looks like:
version: "2.2"
services:
  results:
    image: postgres:12
    env_file:
      - config/server/base.env
      - config/server/${ENV}.env
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
and the error FATAL: role "root" does not exist was still there.
The catch is probably the fact that I'm setting the environment variable using env files. I finally fixed it using this test command:
test: [ "CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
Error FATAL: role "root" does not exist fixed by #17
I can confirm this works! Thanks!
I confirm as well. That works in my case.
Thanks so much!
The catch is probably the fact that I'm setting the environment variable using env files. I finally fixed it using this test command:
test: [ "CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
Thanks, @sp1thas! I've also figured out that the same effect can be achieved this way:
test: [ "CMD-SHELL", "pg_isready -d $POSTGRES_DB -U $POSTGRES_USER"]
We can work with POSTGRES_DB and POSTGRES_USER using just one dollar sign $ and no curly braces {}, the same way we use ordinary environment variables in a shell.
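To make the timing difference concrete (an editor's sketch based on Compose's interpolation rules, not from the thread): a single $ is normally substituted by Compose on the host before the container starts, while $$ escapes to a literal $ that the container's shell expands when the check runs. If the single-$ form works, it is likely because the same variables are also visible where Compose runs (e.g. via a .env file). The hypothetical service names below are placeholders:

```yaml
# Sketch: two similar-looking checks that resolve at different times.
services:
  db_host_interpolated:
    healthcheck:
      # single $: Compose substitutes the value on the host (.env / shell env)
      test: ["CMD-SHELL", "pg_isready -d $POSTGRES_DB -U $POSTGRES_USER"]
  db_container_expanded:
    healthcheck:
      # $$: a literal $ reaches the container; its shell expands the
      # variable from the container environment at check time
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
```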
Also had this issue! Thanks all for the solution; it seems related to formatting, since the old form worked with Postgres 13.7 but not 14.6.
I had previously:
test: ['CMD-SHELL', 'psql', '-h', 'localhost', '-U', '$$POSTGRES_USER', '-c', 'select 1', '-d', '$$POSTGRES_DB']
And I switched to:
test: ['CMD-SHELL', 'psql -h localhost -U $${POSTGRES_USER} -c select 1 -d $${POSTGRES_DB}']
Note that I see everyone using pg_isready, but in my case, after debugging some race conditions in the past, I decided to use psql directly. Here is the comment I left months ago explaining why:
Note: at the start we tried pg_isready, but it isn't reliable, since the postgres container restarts the server during startup (to run init scripts), so we ended up with broken connections... It's best to try a real query to be sure the server is up and running, as advised in https://github.com/docker-library/postgres/issues/146#issuecomment-872486465
someone please update the docs with this fix!
It's working with $$ indeed but also with simple $:
test: [ "CMD-SHELL", "pg_isready -d $POSTGRES_DB -U $POSTGRES_USER" ]
Doesn't work either; the healthcheck just continues to run and spams the logs with:
db_1 | 2023-08-18 20:24:37.733 UTC [362/1] [[unknown]:[unknown]] LOG: connection received: host=127.0.0.1 port=45668
db_1 | 2023-08-18 20:24:37.734 UTC [362/2] [master:[unknown]] LOG: connection authorized: user=master database=master application_name=pg_isready
For me the issue exists only with postgres images >= 14; earlier images work fine.
OK, here is a solution for anybody who needs the healthcheck only for dependency ordering:
tmpfs:
  - /run
healthcheck:
  test: [ "CMD-SHELL", "[ -r /var/run/postgresql/ready ] || ( pg_isready && touch /var/run/postgresql/ready )" ]
It will run only until the server is ready and will not spam the logs.
It will produce FATAL: role "root" does not exist only once (at startup).
After struggling for a while, I found that this test command worked for me.
test: [ "CMD", "pg_isready", "-q", "-d", "{YOUR_DATABASE_NAME}", "-U", "{YOUR_DATABASE_USERNAME}" ]
for me, I had to add the literal username and db name for it to work
test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres" ]
got it from here.
-q stands for --quiet. You've just muted the output.
Had this error when I upgraded to the postgres 16 container. It was coming from a healthcheck:
healthcheck:
  test: /usr/bin/pg_isready || exit 1
  interval: 5s
  timeout: 10s
  retries: 120
It appears that /usr/bin/pg_isready reads the PGUSER environment variable, which I had not specified (it defaults to root when PGUSER is empty); I was only setting POSTGRES_USER. So you have to specify both:
environment:
  BUILD_ENV: docker
  POSTGRES_USER: postgres
  PGUSER: postgres
Further reading: https://stackoverflow.com/questions/60193781/postgres-with-docker-compose-gives-fatal-role-root-does-not-exist-error
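Putting that together, a minimal service definition covering both variables might look like this (editor's sketch; the image tag and credentials are placeholders, and POSTGRES_PASSWORD is added because the official image requires a password unless trust auth is enabled):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres     # role created at first init
      POSTGRES_PASSWORD: example  # placeholder value
      PGUSER: postgres            # default user for pg_isready/psql
    healthcheck:
      test: /usr/bin/pg_isready || exit 1
      interval: 5s
      timeout: 10s
      retries: 120
```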