mastodon / chart

Helm chart for Mastodon deployment in Kubernetes
GNU Affero General Public License v3.0

Configuration issue with mastodon-streaming #86

Closed · abbottmg closed this 1 month ago

abbottmg commented 10 months ago

Has anyone seen an issue with the mastodon-streaming container where it appears to use its own pod's IP address as the postgres host when connecting, despite DB_HOST being properly set in the ConfigMap?

I'm in the middle of testing a transition from my docker-compose environment to a k8s stack where I'm consuming this as a subchart, with a postgres cluster managed by another subchart that drives a CrunchyData PostgresCluster custom resource.

In my values file, I have mastodon.postgresql.postgresqlHostname set to the service name cluster-primary (a service published by the PostgresCluster), and I can see that the value makes its way into the pod environments correctly for both the mastodon-web and mastodon-streaming pods. I pass the appropriate secret ref via mastodon.postgresql.auth.existingSecret and can see DB_PASS set appropriately in the pods as well.
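Concretely, the relevant part of my values file looks roughly like this (the secret name below is a placeholder):

```yaml
# roughly the relevant subchart values; the secret name is a placeholder
mastodon:
  postgresql:
    postgresqlHostname: cluster-primary      # service published by the PostgresCluster
    auth:
      existingSecret: <pguser-secret>        # secret providing the password used for DB_PASS
```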

The mastodon-web rails process seems to be contacting the database correctly, but all requests to wss://.../api/v1/streaming are returning a 401. Reading the logs from the mastodon-streaming pod, I see the following:

```
WARN Worker 1 now listening on 0.0.0.0:4000
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption
```

The first line is on container boot, and subsequent lines seem to match up with requests made by my browser.

10.2.4.187 is the IP of the mastodon-streaming pod, not of any pod related to my postgres cluster. I would expect the address to be either 10.128.54.117, the endpoint associated with the cluster-primary service name set in DB_HOST, or possibly 10.2.5.83, the IP of the pod hosting the actual primary pg instance.

Browsing through the deployment template file and the mastodon-env template file, everything seems correct there, and the proper values make it into the final rendered resources, so perhaps this is a bug in the main mastodon/mastodon project? Is there some other environment variable that could be misleading rails into looking at the wrong IP for postgres?
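Something like the following is enough to confirm what lands in the rendered env ConfigMap (release name, chart path, and values file are placeholders):

```sh
# render the chart locally and inspect what ends up next to DB_HOST
# (release name, chart path, and values file are placeholders)
helm template my-release ./my-chart -f values.yaml | grep -A 2 DB_HOST
```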

EDIT: Just realized the streaming server isn't rails, but a separate JS file run via node. The env setup code there is pretty straightforward, so it's a little perplexing how this mismatch could be happening. I'm going to try setting DATABASE_URL, which seems to override all the other env vars, and see if node picks it up.
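In case it's useful to anyone else, DATABASE_URL takes the standard libpq-style connection URI, so for my setup it would be something like (password is a placeholder):

```
postgres://mastodon:<password>@cluster-primary:5432/mastodon_production
```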

SISheogorath commented 10 months ago

The error message comes from the postgres instance itself: it's telling you that the client with the IP address 10.2.4.187 is being denied because the connection doesn't match any entry in the pg_hba.conf of your postgresql instance.
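For context, pg_hba.conf entries match on connection type, database, user, client address, and auth method, so the "host" in that error is the client's address. An entry that would allow this client could look something like this (the CIDR and auth method are just examples):

```
# TYPE     DATABASE             USER      ADDRESS       METHOD
hostssl    mastodon_production  mastodon  10.2.0.0/16   scram-sha-256
```

Note that the "no encryption" part of the error means the streaming connection wasn't using SSL at all, so a hostssl rule like the one above wouldn't match it either.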

If your postgresql cluster uses TLS, you might need to tell streaming about the correct TLS certificate so it can validate the connection.

abbottmg commented 10 months ago

Ah, I wouldn't have guessed that "host" referred to the client IP there...

The cluster does use TLS; it looks like PGO publishes its own CA within the k8s cluster. I guess that'll be a change within mastodon/mastodon then, with some new values exposed in this chart. I suppose I'll also look into why the rails app functions without explicitly setting that info while the node app doesn't.
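As a rough sketch of what I have in mind (the extraEnvVars/extraVolumes keys here are hypothetical and don't necessarily exist in this chart, and the secret name depends on how PGO publishes its CA), the streaming deployment would need the CA mounted and handed to node via its standard NODE_EXTRA_CA_CERTS variable:

```yaml
# hypothetical values sketch, not current chart options
streaming:
  extraEnvVars:
    - name: NODE_EXTRA_CA_CERTS              # standard Node.js way to add a trusted CA
      value: /etc/ssl/postgres/ca.crt
  extraVolumeMounts:
    - name: pg-ca
      mountPath: /etc/ssl/postgres
      readOnly: true
  extraVolumes:
    - name: pg-ca
      secret:
        secretName: <pgo-cluster-ca-secret>  # placeholder for the CA secret PGO publishes
```

Depending on the Mastodon version, the streaming server may also honour a DB_SSLMODE variable, so that's worth checking in streaming/index.js as well.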

SISheogorath commented 10 months ago

I use the Zalando postgresql-operator myself and apply the following adjustments to make things work:

https://git.shivering-isles.com/shivering-isles/infrastructure-gitops/-/blob/94bab5d8af8e43b98ac78151b62e7d23d9d81ffd/apps/base/mastodon/release.yaml#L42-L82

(Be aware that I use a fork of this Helm chart nowadays, so these adjustments might not be everything that's needed, or fully compatible with this chart.)