konveyor / tackle2-hub

Tackle (2nd generation) hub component.
Apache License 2.0

Proper SSL Verification for Pathfinder and Keycloak Postgres #240

Closed jmontleon closed 1 year ago

jmontleon commented 1 year ago

The keycloak instances don't appear to be enforcing verify-ca, or even requiring SSL. The default sslmode is prefer, which may or may not use SSL at all. We should fix this to ensure they use SSL and, ideally, verify certs.

The same is true for the pathfinder postgresql client.
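For reference, the libpq sslmode levels in play here, from weakest to strongest; the connection below is only a sketch (host, database, and CA path are illustrative, and it needs a live server):

```shell
# libpq sslmode values, weakest to strongest:
#   prefer      - try SSL, silently fall back to cleartext if it fails (the default)
#   require     - always encrypt, but do not check the server certificate
#   verify-ca   - encrypt and check that the cert chains to a trusted CA
#   verify-full - verify-ca, plus check the hostname against subjectAltName
psql "host=keycloak-postgresql.openshift-mta.svc.cluster.local \
      dbname=keycloak sslmode=verify-full \
      sslrootcert=/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
```

With prefer, a client never errors out just because the server has SSL off, which is why nothing surfaced this gap.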

jmontleon commented 1 year ago

This works as a rough guide on OpenShift: https://access.redhat.com/solutions/4437661

From talking with @fbladilo it sounds like the service-ca operator does not run, at least not without adding it, on vanilla kubernetes, which complicates this for upstream.

First step is to generate certs. Again, on OpenShift we can do this easily by annotating the service:

oc annotate --overwrite service keycloak-postgresql \
  service.beta.openshift.io/serving-cert-secret-name=keycloak-postgresql-crt

The problem here with RHSSO is that if we want to use verify-full, we would need to annotate the keycloak-postgresql service that RHSSO creates rather than ours, because it connects to and sees the hostname keycloak-postgresql.openshift-mta.svc.cluster.local rather than mta-keycloak-postgresql.openshift-mta.svc.cluster.local. If we annotate our service, the RHSSO service hostname is absent from the subjectAltNames and verify-full does not work, although require does (I didn't try verify-ca).
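One way to see which hostnames a serving cert will satisfy under verify-full is to print its subjectAltName extension. Sketched here with a throwaway self-signed cert (the filenames and SAN are illustrative; -addext needs OpenSSL 1.1.1+):

```shell
# Generate a throwaway cert carrying the SAN that verify-full would check,
# then print the subjectAltName extension to confirm which names it covers.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj '/CN=demo' \
  -addext 'subjectAltName=DNS:keycloak-postgresql.openshift-mta.svc.cluster.local'
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

Running the same inspection against the tls.crt from the serving-cert secret would show why annotating our service leaves the RHSSO hostname uncovered.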

Once the certs are in place, generate a custom.conf:

ssl = on
ssl_cert_file = '/opt/app-root/src/certificates/tls.crt'
ssl_key_file = '/opt/app-root/src/certificates/tls.key'
ssl_ca_file = '/run/secrets/kubernetes.io/serviceaccount/service-ca.crt'

Add all this to the postgres container:

oc create configmap psql-config --from-file=custom.conf
oc set volume deployment/mta-keycloak-postgresql --add --secret-name=keycloak-postgresql-crt --mount-path=/opt/app-root/src/certificates
oc set volume deployment/mta-keycloak-postgresql --add --configmap-name=psql-config --mount-path=/opt/app-root/src/postgresql-cfg

After creating the secret, the defaultMode needs to be changed to 416 (the default is 420, which is octal 644; 416 is octal 640) to make postgres happy about the key file permissions.
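The defaultMode field takes a decimal value, which is easy to misread; converting to octal shows why 416 is the right number:

```shell
# Kubernetes secret defaultMode is decimal; convert to octal to see the
# actual file mode. 420 decimal is 0644; 416 decimal is 0640.
printf '%o\n' 420   # -> 644
printf '%o\n' 416   # -> 640
```

If the installed oc version supports it, the mount can likely be created with the right mode up front via the --default-mode flag on oc set volume instead of editing the deployment afterwards (not verified here).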

If going all the way and using a verify mode, add the service CA to keycloak by creating this secret:

apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-ssl-cert-secret
  namespace: openshift-mta
type: Opaque
data:
  root.crt: $value

The root.crt value is the tls.crt value from oc get secret -n openshift-service-ca signing-key -o yaml.
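A sketch of pulling that value out directly rather than reading the YAML by eye (requires cluster access; the jsonpath backslash escapes the dot in the "tls.crt" key):

```shell
# Read the base64-encoded service CA cert from the signing-key secret.
CA_B64=$(oc get secret -n openshift-service-ca signing-key \
  -o jsonpath='{.data.tls\.crt}')
# This value drops in as root.crt in the keycloak-db-ssl-cert-secret data
# above; it is already base64-encoded, so no re-encoding is needed.
echo "$CA_B64"
```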

jortel commented 1 year ago

Why? The tackle network policy prevents access (to all internal services) from outside the namespace. Can you describe the vulnerability you are concerned about?

jmontleon commented 1 year ago

To prevent snooping on traffic passed in the clear. What guarantee do you have that all containers in a given namespace are running on a single node, and therefore that no traffic whatsoever is traversing the wider cluster network between physical hosts? It's a gap in security.

jmontleon commented 1 year ago

Digging a little more. https://www.ibm.com/docs/en/cpfs?topic=compliance-enabling-fips#configipsec

With IPsec enabled, all network traffic between nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. You can provide additional FIPS protection for traffic across different nodes on the cluster by configuring IPSec tunnels.

IPsec is disabled by default when you install OpenShift 4.x clusters. IPsec encryption can be enabled only during cluster installation and cannot be disabled after it is enabled.

I also don't believe most of the other Kubernetes CNI providers encrypt traffic by default, with weave being one that does, and flannel having an experimental feature.

Which is all to say: I believe there is the potential to send unencrypted traffic between pods running on different nodes over the wire. Of course, everyone hopes their network is secure from intruders and malicious employees, but that's unfortunately not always the case, and we should avoid being a vector for attack, especially with one of the databases being an authentication / authorization provider.

jmontleon commented 1 year ago

To further clarify my concern: I also don't believe any of these are using SSL at all at present, in verify-ca mode or otherwise. Without providing a custom.conf to postgres that sets ssl = on and supplies certs, my understanding is that it does not use SSL.
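One way to confirm whether existing connections are actually encrypted is to query pg_stat_ssl from inside the postgres pod. A sketch, assuming the deployment and user names from the steps above (needs a live cluster):

```shell
# pg_stat_ssl reports, per backend pid, whether that connection uses SSL
# and with which protocol version and cipher.
oc rsh deployment/mta-keycloak-postgresql \
  psql -U postgres -c "SELECT pid, ssl, version, cipher FROM pg_stat_ssl;"
```

If keycloak's and pathfinder's backends show ssl = f, they are talking to postgres in the clear.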

jmontleon commented 1 year ago

Moved to https://github.com/konveyor/enhancements/issues/107