vitabaks / postgresql_cluster

Automated database platform for PostgreSQL®. A modern, open-source alternative to cloud-managed databases.
https://postgresql-cluster.org
MIT License

S3 Bucket Creation Error - invalid character(s) found in the bucket name #771

Closed david-lovelystay closed 1 month ago

david-lovelystay commented 1 month ago

Hello,

I was creating an AWS cluster using the vitabaks/postgresql_cluster_console:2.0.0 image, but the S3 bucket task failed with the following error:

[screenshot: Ansible S3 bucket task failed with "invalid character(s) found in the bucket name"]

Looking closely, it seems the cluster name is being rendered as test-cluster} instead of test-cluster.

Looking at the Ansible variable here, it seems there's an extra curly bracket that's causing this issue.
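For illustration, the effect of a stray closing brace can be reproduced with Python's str.format, whose brace escaping behaves like the extra } after a Jinja2 expression (the template strings below are hypothetical stand-ins, not the actual variable from the repository):

```python
# Hypothetical templates: because "}}" escapes to a literal "}", a stray
# third brace survives rendering as a literal character in the result.
buggy_template = "{cluster_name}}}"  # extra brace, like "{{ cluster_name }}}"
fixed_template = "{cluster_name}"    # correct form

buggy = buggy_template.format(cluster_name="test-cluster")
fixed = fixed_template.format(cluster_name="test-cluster")

print(buggy)  # test-cluster}  <- "}" is invalid in an S3 bucket name
print(fixed)  # test-cluster
```

S3 bucket names only allow lowercase letters, numbers, dots, and hyphens, which is why the trailing brace makes bucket creation fail.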

Hope this is useful

Thanks

vitabaks commented 1 month ago

Thanks @david-lovelystay

Fixed here https://github.com/vitabaks/postgresql_cluster/pull/772

To use the automation image with this fix, recreate the console container with the addition of the PG_CONSOLE_DOCKER_IMAGE variable.

Example:

docker run -d --name pg-console \
  --publish 80:80 \
  --publish 8080:8080 \
  --env PG_CONSOLE_API_URL=http://localhost:8080/api/v1 \
  --env PG_CONSOLE_AUTHORIZATION_TOKEN=secret_token \
  --env PG_CONSOLE_DOCKER_IMAGE=vitabaks/postgresql_cluster:aws-bucket-name \
  --volume console_postgres:/var/lib/postgresql \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume /tmp/ansible:/tmp/ansible \
  --restart=unless-stopped \
  vitabaks/postgresql_cluster_console:2.0.0
david-lovelystay commented 1 month ago

When running my console with that added env variable, the create-cluster process just hangs until it times out.

I also tried changing the console image itself to the one with the same tag, to the same effect.

I can't seem to get any logs out of it, either from the front end or from the container itself.

What I did notice is that the postgresql_cluster container that should be spawned when creating a cluster never shows up.

vitabaks commented 1 month ago

That's strange. Is the log in the "Operations" section empty?

The console's internal logs are available in the /var/log/supervisor directory:

docker exec pg-console ls -lt /var/log/supervisor         
total 3296
-rw-r--r-- 1 root root 3351172 Sep 25 11:30 pg-console-api-stdout.log
-rw-r--r-- 1 root root    1195 Sep 25 11:13 pg-console-ui-stdout.log
-rw-r--r-- 1 root root    1025 Sep 25 10:43 supervisord.log
-rw-r--r-- 1 root root     371 Sep 25 10:43 pg-console-db-stdout.log
-rw-r--r-- 1 root root     237 Sep 25 10:43 pg-console-db-stderr.log
-rw-r--r-- 1 root root       0 Sep 25 10:43 pg-console-ui-stderr.log
-rw-r--r-- 1 root root       0 Sep 25 10:43 pg-console-api-stderr.log

Ansible logs (in JSON format) are also available in the /tmp/ansible directory:

docker exec pg-console ls -lt /tmp/ansible       
total 184
-rw-r--r-- 1 root root 60127 Sep 25 11:05 postgres-cluster-gcp.json
-rw-r--r-- 1 root root 53832 Sep 25 11:04 postgres-cluster-do.json
-rw-r--r-- 1 root root 57661 Sep 25 10:57 postgres-cluster-01.json
-rw-r--r-- 1 root root  6370 Sep 25 10:54 postgres-cluster-azure.json
david-lovelystay commented 1 month ago

When the request fails, I'm getting the following logs from docker exec pg-console cat /var/log/supervisor/pg-console-api-stdout.log:

{"level":"error","app":"pg_console","version":"2.0.0","module":"log_watcher","cid":"401d750e-e266-45b6-9934-804c03f36f3f","operation_id":3,"error":"Error response from daemon: No such container: bd7a8b42dfd8a74ac915a74f64dd57df31131f0159246f16f3657f8943057886","time":"2024-09-25T11:52:05Z","message":"failed to get containers status"} {"level":"error","app":"pg_console","version":"2.0.0","module":"log_watcher","cid":"e11c73c8-def0-40d7-8bb3-d196024d0097","operation_id":37,"error":"Error response from daemon: No such container: a5472c9a536bace898b39196924e8a3e92a9e171b2c7d84411a53c6603a93b01","time":"2024-09-25T11:52:05Z","message":"failed to get containers status"} {"level":"error","app":"pg_console","version":"2.0.0","module":"log_watcher","cid":"709f68b5-f7ef-4317-a895-fc4fb9d0e203","operation_id":4,"error":"Error response from daemon: No such container: b24666f21665ff902e5ef0a9b6f9768ca38395c93f317ac919342a75dd0ac7f1","time":"2024-09-25T11:52:05Z","message":"failed to get containers status"}

There's also the following entry a bit earlier:

{"level":"debug","app":"pg_console","version":"2.0.0","cid":"b16612ba-2f55-45dc-811a-09d08dd17fe7","method":"POST","path":"/api/v1/clusters","protocol":"HTTP/1.1","request_length":526,"body":{"code":100,"description":"Error response from daemon: No such image: vitabaks/postgresql_cluster:aws-bucket-name","title":"Error response from daemon: No such image: vitabaks/postgresql_cluster:aws-bucket-name"},"headers":{"Access-Control-Allow-Credentials":["true"],"Access-Control-Allow-Headers":["Authorization, Access-Control-Allow-Origin, Access-Control-Allow-Headers, Origin,Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers, X-Log-Completed, X-Cluster-Id"],"Access-Control-Allow-Methods":[" GET, POST, OPTIONS, PATCH, DELETE, PUT"],"Access-Control-Allow-Origin":["*"],"Access-Control-Expose-Headers":["X-Log-Completed, X-Cluster-Id"],"Content-Type":["application/json"],"X-Correlation-Id":["b16612ba-2f55-45dc-811a-09d08dd17fe7"]},"status":400,"time":"2024-09-25T11:51:59Z","message":"[zerologResponse] Response was sent"}

The cluster is apparently created at a database level, but there's an issue with spawning the postgresql_cluster container.

I see no other relevant error logs.
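As an aside, these zerolog entries are one JSON object per line, so the underlying Docker daemon errors can be pulled out with a few lines of Python (a sketch; the sample line is abbreviated from the log above, with the container ID shortened):

```python
import json

# Abbreviated sample entry copied from the log above (one JSON object per line).
log_lines = [
    '{"level":"error","module":"log_watcher","operation_id":3,'
    '"error":"Error response from daemon: No such container: bd7a8b42dfd8",'
    '"message":"failed to get containers status"}',
]

for raw in log_lines:
    entry = json.loads(raw)
    if entry.get("level") == "error":
        # Print the console's own message alongside the Docker daemon error.
        print(f'{entry["message"]}: {entry["error"]}')
```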

vitabaks commented 1 month ago

No such image: vitabaks/postgresql_cluster:aws-bucket-name

Can I assume that the image has not yet been downloaded to your server/computer, for example due to limited Internet bandwidth?

Try waiting for the image to download, or pull it manually:

docker pull vitabaks/postgresql_cluster:aws-bucket-name
david-lovelystay commented 1 month ago

It's been fixed, thanks.

It seems the postgresql_cluster image is quite big and the device didn't have enough disk space to pull it. Sorry for not checking this sooner, and thank you for your help!
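For anyone hitting the same wall: free disk space can be checked before pulling a large image, e.g. with Python's shutil (a sketch; the path and the 10 GiB threshold are arbitrary examples, not documented requirements of postgresql_cluster — Docker typically stores images under /var/lib/docker):

```python
import shutil

# Check "/" so the snippet also runs on machines without /var/lib/docker;
# on a real Docker host, check the filesystem holding /var/lib/docker.
path = "/"
usage = shutil.disk_usage(path)
free_gib = usage.free / 2**30

print(f"{free_gib:.1f} GiB free on {path}")
if free_gib < 10:  # arbitrary example threshold for a large image
    print("Warning: possibly not enough space to pull the image")
```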

Have a nice day

Closing the issue.