Setup
Platform: k8s, arm64
Postgres image: 16.1, 15.5 (presumably others)
What happens:
When POSTGRES_USER is sourced from a Kubernetes Secret, the variable is populated but no role is created. Exec'ing into the pod shows the env vars set correctly, yet connecting with `psql -U $POSTGRES_USER` or `psql -U postgres` fails consistently, no matter how many times I destroy and re-create the pod. I also deleted the PVC on every attempt to make sure there was no leftover data in the Postgres data directory.
If I populate the POSTGRES_USER env variable with a hard-coded value, the install always succeeds and I can run `psql -U $POSTGRES_USER`.
Here are two bits of my k8s manifest to compare:
Not Working:
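(The original snippet did not survive; this is a minimal sketch of what the secret-backed form likely looks like. The Secret name `postgres-credentials` and its keys are assumptions, not from the original report.)

```yaml
# Hypothetical reconstruction of the failing variant:
# POSTGRES_USER is pulled from a Kubernetes Secret via secretKeyRef.
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-credentials   # assumed Secret name
        key: username                # assumed key
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials
        key: password
```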
Working:
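(Likewise lost from the original; a minimal sketch of the hard-coded variant that succeeds. The literal values shown are placeholders, not from the original report.)

```yaml
# Hypothetical reconstruction of the working variant:
# POSTGRES_USER is set as a literal value in the pod spec.
env:
  - name: POSTGRES_USER
    value: myuser        # placeholder value
  - name: POSTGRES_PASSWORD
    value: mypassword    # placeholder value
```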
What I expect to happen:
It shouldn't matter how I set the environment variables; I can use a Secret if I want to.
I can hardly believe this, and I'm happy to test anything you folks want to get to the bottom of it. It makes me wonder whether POSTGRES_USER needs to be populated very early in the deployment and there is some sort of race condition.
Lastly: I've used this method for building up k8s configs before, even with the postgres image, and I have some instances running already in the same cluster.