Closed: sergeyshaykhullin closed this issue 4 years ago.
All connections are encrypted by default, but check #868.
@FxKu But why can I connect directly to PG with sslmode=disable, yet not via the pooler service? Do you mean that all pgbouncer connections use SSL by default?
I tested ALLOW_NOSSL, but it doesn't work with the pooler. I can still connect without TLS to PG directly, but not through the pooler.
Helm setup
configKubernetes:
  pod_environment_configmap: "{{ release.namespace }}/postgresql-pod-environment"
Configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-pod-environment
  namespace: "{{ release.namespace }}"
data:
  ALLOW_NOSSL: "true"
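If the operator is configured through the OperatorConfiguration CRD instead of Helm values, the same setting should live under configuration.kubernetes; a sketch under that assumption, with the namespace and names purely illustrative:
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
  name: postgres-operator
configuration:
  kubernetes:
    # point the operator at the ConfigMap holding ALLOW_NOSSL (namespace/name are examples)
    pod_environment_configmap: "default/postgresql-pod-environment"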
@sergeyshaykhullin sorry for the delay. Checked our pooler image and SSL is hard coded there. So you could either use your own pooler image or stick to application-side connection pooling.
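If you do go the custom-image route, the pooler image the operator deploys can be overridden in the chart's values; a sketch under that assumption, with registry and tag purely illustrative:
configConnectionPooler:
  # use your own pgbouncer build instead of the default Zalando image (name/tag are examples)
  connection_pooler_image: "my-registry.example.com/pgbouncer-nossl:latest"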
@FxKu, could you point me to your pooler Dockerfile (and related scripts such as the entrypoint)? I tried searching Zalando's repositories, but I couldn't find anything specific to the pooler. Thank you.
Hi, I also have the same setup using ALLOW_NOSSL: true with operator v1.6.2 and spilo 13-2.0-p6.
It works for me, but I once encountered a similar issue:
When there are no replicas running (numberOfInstances: 1) and I connect to pooler-repl, the pooler logs the following error:
2021-04-21 16:46:55.037 UTC [1] WARNING C-0x55d3acfdd9b0: postgres/(nouser)@10.240.2.15:33168 pooler error: pgbouncer cannot connect to server
2021-04-21 16:46:55.040 UTC [1] WARNING C-0x55d3acfdd9b0: (nodb)/(nouser)@10.240.2.15:33170 pooler error: SSL required
Pgbouncer still ignores the ALLOW_NOSSL: true option, but it's quite easy to fix: change client_tls_sslmode to client_tls_sslmode = allow in the file /etc/pgbouncer/pgbouncer.ini.tmpl.
Why is this issue closed?
This is not a usable solution but a complex workaround (having to build our own images, while we don't even have access to the original Zalando pgbouncer Dockerfile).
Enabling/disabling SSL on pgbouncer should be possible from the Helm chart, just like for Postgres itself.
@jeroenjacobs79 Yeah, just ran into the same issue right now. I am avoiding the pooler and instead connecting to the pod with spilo-role=master. It would be good to have a fix here instead of maintaining a separate image.
me too
Same issue over here. If the workaround is clear, why not just implement it and provide it out of the box?
Ugh, this is annoying
fwiw it also messes up Supabase trying to connect to the cluster...
For those who are struggling: you can in fact overwrite pg_hba.conf with your own config that does not reject nossl connections.
To do that, configure it in the database manifest under the patroni section.
This works for me.
Struggling with this, and there are no docs on it. Can you share your YAML for this section so I have an example?
We solved this issue by creating a Dockerfile based on the official Zalando image. This is our Dockerfile:
ARG PGBOUNCER_VERSION=master-27
FROM registry.opensource.zalan.do/acid/pgbouncer:${PGBOUNCER_VERSION}
ARG SSL_MODE=disable
# double quotes so the SSL_MODE build arg is actually expanded by the shell
RUN sed -i "/#/!s/\(tls_sslmode[[:space:]]*=[[:space:]]*\)\(.*\)/\1${SSL_MODE}/" /etc/pgbouncer/pgbouncer.ini.tmpl
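Once built (for example with docker build --build-arg SSL_MODE=allow -t <your-registry>/pgbouncer:custom .) and pushed, the operator still has to be told to use that image, e.g. via the connection_pooler_image setting sketched in the Helm values above; the exact mode you bake in depends on what you want pgbouncer to accept.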
By default the operator adds the entries below to pg_hba (see https://github.com/zalando/postgres-operator/blob/master/manifests/complete-postgres-manifest.yaml#L119). Before:
patroni:
  # https://www.postgresql.org/docs/8.0/client-authentication.html
  pg_hba:
    - local all all trust
    - hostssl all +zalandos 127.0.0.1/32 pam
    - host all all 127.0.0.1/32 md5
    - hostssl all +zalandos ::1/128 pam
    - host all all ::1/128 md5
    - local replication standby trust
    - hostssl replication standby all md5
    - hostnossl all all all reject
    - hostssl all +zalandos all pam
    - hostssl all all all md5
I have added another entry like hostnossl dbname username all trust before hostnossl all all all reject, which fixed the problem. Since pg_hba rules are matched top to bottom and the first match wins, the new entry has to come before the global reject line (a sketch of where this block sits in the full manifest follows the example below).
After
patroni:
  # https://www.postgresql.org/docs/8.0/client-authentication.html
  pg_hba:
    - local all all trust
    - hostssl all +zalandos 127.0.0.1/32 pam
    - host all all 127.0.0.1/32 md5
    - hostssl all +zalandos ::1/128 pam
    - host all all ::1/128 md5
    - local replication standby trust
    - hostssl replication standby all md5
    - hostnossl <db name> <username> all trust
    - hostnossl all all all reject
    - hostssl all +zalandos all pam
    - hostssl all all all md5
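For anyone who, like the question further up, wanted a concrete example of where this goes: the patroni block lives under spec in the postgresql resource itself. A minimal placement sketch, with cluster name, sizes, and entries purely illustrative; note that a pg_hba list you supply replaces the defaults, so in practice you would paste the full "After" list shown above:
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  numberOfInstances: 2
  postgresql:
    version: "13"
  patroni:
    pg_hba:
      # the full list from the "After" example above goes here, e.g.:
      - local all all trust
      - hostnossl mydb myuser all trust
      - hostnossl all all all reject
      - hostssl all all all md5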
I use the Helm chart via Rancher, so I would need to maintain my own version of the chart/repo to make this change, which is a big hassle - I can't just do kubectl apply -f, and the Helm chart would overwrite it on the next update, etc.
The Dockerfile approach might be a more workable solution, though still not ideal: you're just overriding the pgbouncer image, right? I need to check whether I can specify the image for the pooler. Kind of wish the project would expose this as an option. :/
I can't connect to the DB (sslmode=disable) using the "-pooler" service.
But I can connect to the DB directly.