pomerium / pomerium-helm

Official helm charts for Pomerium.
https://helm.pomerium.io/

Pomerium doesn't use specified existing certificates #250

Closed · danavatavu closed this 2 years ago

danavatavu commented 2 years ago

What happened?

I installed the latest version of Pomerium with the configuration shown below.

I can see from the logs that all of the Pomerium pods use /etc/ssl/certs/ca-certificates.crt as the system root certificate authority bundle instead of the provided root certificate authority, and then fail with "no TLS certificate found for domain, using self-signed certificate".

What did you expect to happen?

I expected Pomerium to use the provided root CA and the certificates issued by that authority.

Steps to reproduce

  1. Ran x
  2. Clicked y
  3. Saw error z

What's your environment like?

What are your chart values?

# For detailed explanation of each of the configuration settings see
# https://www.pomerium.io/reference/

nameOverride: ""
fullnameOverride: ""

# settings that are shared by all services
config:
  # routes under this wildcard domain are handled by pomerium
  rootDomain: **myRootDomain**
  existingSecret: 
  existingCASecret: **pomerium-tls**
  ca:
    cert: **ca.crt**
    key: **tls.key**
  sharedSecret: ""
  cookieSecret: ""
  forceGenerateServiceSecrets: false
  existingSharedSecret: ""
  generateTLS: **false**
  generateTLSAnnotations: {}
  forceGenerateTLS: false
  generateSigningKey: **true**
  forceGenerateSigningKey: false
  extraOpts: {}
  existingPolicy: ""
  insecure: false
  insecureProxy: false
  administrators: ""
  **routes:
    - from: https://app.myRootDomain
      to: http://appClusterServiceName.svc.cluster.local:80**
  existingSigningKeySecret: ""
  signingKey: ""
  extraSecretLabels: {}
  extraSharedSecretLabels: {}

authenticate:
  name: ""
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: **pomerium-tls**
  existingExternalTLSSecret: ""
  proxied: true
  **idp:
    provider: github
    clientID: "aaaaaaaaaaaaa"
    clientSecret: "bbbbbbbbbbb"
    url: "https://authenticate.myRootDomain/oauth2/callback"  
    scopes: ""
    serviceAccount: ""**  
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    nodePort: ""
    type: ClusterIP
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  ingress:
    # cert-manager example
    # annotations:
    #   cert-manager.io/cluster-issuer: letsencrypt-prod
    annotations: {}
    tls:
      secretName: ""
      # secretName: authenticate-ingress-tls

authorize:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: **pomerium-tls**
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    type: ClusterIP
    clusterIP: None
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""

cache:
  fullnameOverride: ""
  nameOverride: ""

databroker:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: **pomerium-tls**
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    type: ClusterIP
    clusterIP: None
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  storage:
    type: "memory"
    connectionString: ""
    tlsSkipVerify: false
    clientTLS:
      existingSecretName: ""
      existingCASecretKey: ""
      cert: ""
      key: ""
      ca: ""

proxy:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: **pomerium-tls**
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  authenticateServiceUrl: ""
  authorizeInternalUrl: ""
  service:
    annotations: {}
    nodePort: ""
    type: ""
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  redirectServer: true

apiProxy:
  enabled: false
  ingress: true
  fullNameOverride: ""
  name: "kubernetes"

ingressController:
  enabled: false
  ingressClassResource:
    enabled: true
    default: false
    name: pomerium
    controllerName: pomerium.io/ingress-controller
    parameters: {}
  fullnameOverride: ""
  nameOverride: ""
  image:
    repository: "pomerium/ingress-controller"
    tag: "v0.16.0"
  deployment:
    annotations: {}
    extraEnv: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  config:
    namespaces: []
    ingressClass: pomerium.io/ingress-controller
    updateStatus: true
    operatorMode: false
  service:
    annotations: {}
    type: ClusterIP

forwardAuth:
  name: ""
  enabled: false
  # Will not create an ingress. ForwardAuth is only accessible as an internal service.
  internal: false

service:
  # externalPort defaults to 80 or 443 depending on config.insecure
  externalPort: ""
  annotations:
    {}
    # ===  GKE load balancer tweaks; default on until I can figure out
    # how the hell to escape this string from the helm CLI
    # cloud.google.com/app-protocols: '{"https":"HTTPS"}'
  labels: {}
  grpcTrafficPort:
    nameOverride: ""
  httpTrafficPort:
    nameOverride: ""

ingress:
  secretName: ""
  secret:
    name: "pomerium-tls"
    cert: ""
    key: ""
  tls:
    hosts: []
  enabled: true
  hosts: []
  # Sets Ingress/ingressClassName. This way ingress resources can be bound to specific ingress controllers. Kubernetes version >=1.18 required.
  # Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class
  # className: ""
  annotations:
    **kubernetes.io/ingress.class: "nginx-int"**
    #kubernetes.io/ingress.allow-http: "true"
    # === nginx tweaks
    # kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    # ===  GKE load balancer tweaks; default on until I can figure out
    # how the hell to escape this string from the helm CLI
    # kubernetes.io/ingress.allow-http: "false"
  # Ingress pathType (e.g. ImplementationSpecific, Prefix, .. etc.) might also be required by some Ingress Controllers
  pathType: ImplementationSpecific

resources:
  {}
  # limits:
  #   cpu: 1
  #   memory: 600Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi

priorityClassName: ""

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

podAnnotations: {}
podLabels: {}
replicaCount: 1

# For any other settings that are optional. For a complete listing see:
# https://www.pomerium.io/docs/config-reference.html
**extraEnv: 
  # (This will give you details if a user is not able to authenticate; ideally this should be turned off)
  POMERIUM_DEBUG: true
  #LOG_LEVEL: "error"
  #IDP_SCOPES: "openid,profile,email,groups,offline_access"
  #DNS_LOOKUP_FAMILY: "V6_ONLY"
  CERTIFICATE_FILE: "/pomerium/ca/tls.crt"
  CERTIFICATE_KEY_FILE: "/pomerium/ca/tls.key"
  CERTIFICATE_AUTHORITY_FILE: "/pomerium/ca/ca.crt"**

extraEnvFrom: []
extraArgs: {}
extraVolumes: []
extraVolumeMounts: []
extraTLSSecrets: []

annotations: {}
imagePullSecrets: ""

image:
  repository: "pomerium/pomerium"
  tag: "v0.16.0"
  pullPolicy: "IfNotPresent"

metrics:
  enabled: false
  port: 9090

tracing:
  enabled: false
  provider: ""
  debug: false
  jaeger:
    collector_endpoint: ""
    agent_endpoint: ""

serviceMonitor:
  enabled: false
  namespace: ""
  labels:
    release: prometheus

rbac:
  create: true

redis:
  enabled: false
  auth:
    existingSecret: pomerium-redis-password
    existingSecretPasswordKey: password
  generateTLS: true
  forceGenerateTLS: false
  cluster:
    slaveCount: 1
  tls:
    enabled: true
    certificatesSecret: pomerium-redis-tls
    certFilename: tls.crt
    certKeyFilename: tls.key
    certCAFilename: ca.crt

What are the contents of your config secret?

kubectl get secret pomerium -o=jsonpath="{.data['config\.yaml']}" | base64 -D

autocert: false
dns_lookup_family: V4_ONLY
address: :443
grpc_address: :443
certificate_authority_file: "/pomerium/ca/ca.crt"
certificates:
authenticate_service_url: https://authenticate.**myRootDomain**.com
authorize_service_url: https://pomerium-authorize.pomerium.svc.cluster.local
databroker_service_url: https://pomerium-databroker.pomerium.svc.cluster.local
idp_provider: github
idp_scopes:
idp_provider_url: https://authenticate.**myRootDomain**.com/oauth2/callback
cookie_secret: MDpfXQxNXZbXCY=
shared_secret: MjFORUh+V0JGfkNEZE40h5XlE=
idp_client_id: aaaaaaaaaaaa
idp_client_secret: bbbbbbbbbbbbbbbbbbbbbbbbb
routes:
  - from: https://app.myRootDomain
    to: http://appClusterServiceName.svc.cluster.local:80
  - from: https://authenticate.myRootDomain
    to: https://pomerium-authenticate.pomerium.svc.cluster.local
    preserve_host_header: true
    allow_public_unauthenticated_access: true
    tls_server_name: authenticate.myRootDomain.com
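
For reference, the domains actually covered by the certificate in pomerium-tls can be checked like this (a rough sketch; the namespace and the tls.crt data key are assumptions based on the setup above):

kubectl -n pomerium get secret pomerium-tls -o=jsonpath="{.data['tls\.crt']}" \
  | base64 -d | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'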

What did you see in the logs?


Logs from proxy, similar to the ones from databroker, authenticate and authorize.
{"level":"info","service":"proxy","config":"databroker","checksum":"563cf0eb70f92236","time":"2022-01-23T08:00:47Z","message":"config: updated config"}
8:00AM INF starting http redirect server addr=:80 service=autocert-manager
8:00AM INF using /etc/ssl/certs/ca-certificates.crt as the system root certificate authority bundle
8:00AM ERR cryptutil: no TLS certificate found for domain, using self-signed certificate domain=*
8:00AM ERR cryptutil: no TLS certificate found for domain, using self-signed certificate domain=*

{"level":"info","syncer_id":"databroker","syncer_type":"type.googleapis.com/pomerium.config.Config","time":"2022-01-23T08:43:32Z","message":"initial sync"}
8:43AM ERR error during initial sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
8:43AM ERR sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
8:44AM ERR controlplane: error storing configuration event, retrying error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
{"level":"info","syncer_id":"databroker","syncer_type":"type.googleapis.com/pomerium.config.Config","time":"2022-01-23T08:44:41Z","message":"initial sync"}
8:44AM ERR error during initial sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
8:44AM ERR sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"

Additional context

The chart is deployed using Argo CD.

danavatavu commented 2 years ago

Probably similar to https://github.com/pomerium/pomerium-helm/issues/247.

danavatavu commented 2 years ago

The content of the certificate is:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cert-manager.io/v1","kind":"Certificate","metadata":{"annotations":{},"name":"pomerium-cert","namespace":"pomerium"},"spec":{"dnsNames":["pomerium-proxy.pomerium.svc.cluster.local","pomerium-authorize.pomerium.svc.cluster.local","pomerium-databroker.pomerium.svc.cluster.local","pomerium-authenticate.pomerium.svc.cluster.local","authenticate.myRootDomain","*.myRootDomain"],"issuerRef":{"kind":"Issuer","name":"pomerium-issuer"},"secretName":"pomerium-tls","usages":["server auth","client auth"]}}
  creationTimestamp: "2022-01-22T07:25:28Z"
  generation: 1
  name: pomerium-cert
  namespace: pomerium
  resourceVersion: "178194452"
  selfLink: /apis/cert-manager.io/v1/namespaces/pomerium/certificates/pomerium-cert
  uid: a63599c7-3771-47d5-b60a-b9fe99e3feda
spec:
  dnsNames:
  - pomerium-proxy.pomerium.svc.cluster.local
  - pomerium-authorize.pomerium.svc.cluster.local
  - pomerium-databroker.pomerium.svc.cluster.local
  - pomerium-authenticate.pomerium.svc.cluster.local
  - authenticate.**myRootDomain**
  - '*.**myRootDomain**'
  issuerRef:
    kind: Issuer
    name: pomerium-issuer
  secretName: pomerium-tls
  usages:
  - server auth
  - client auth
status:
  conditions:
  - lastTransitionTime: "2022-01-22T07:25:28Z"
    message: Certificate is up to date and has not expired
    reason: Ready
    status: "True"
    type: Ready
  notAfter: "2022-04-22T07:25:28Z"
  notBefore: "2022-01-22T07:25:28Z"
  renewalTime: "2022-03-23T07:25:28Z"
  revision: 1

The certificate is signed by a root CA that was generated with a SelfSigned issuer.

travisgroth commented 2 years ago

8:00AM INF using /etc/ssl/certs/ca-certificates.crt as the system root certificate authority bundle

This is always logged. I don't think that's the issue.

You might be configuring two different CAs if I understand your configuration correctly. You've got config.ca.cert and config.ca.key set, in addition to config.existingCASecret. That is possibly overwriting the ca.crt data field on pomerium-tls - or even the entire secret, depending on what argo does with the result. What happens if you remove the settings config.ca.cert and config.ca.key?
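
One way to check for that kind of conflict (a sketch; the secret name and namespace are taken from the report above) is to compare the subject of ca.crt in pomerium-tls against the issuer of tls.crt; if the CA data has been overwritten, the two no longer match:

kubectl -n pomerium get secret pomerium-tls -o=jsonpath="{.data['ca\.crt']}" \
  | base64 -d | openssl x509 -noout -subject

kubectl -n pomerium get secret pomerium-tls -o=jsonpath="{.data['tls\.crt']}" \
  | base64 -d | openssl x509 -noout -issuer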

danavatavu commented 2 years ago

I removed the settings you suggested and also enabled Pomerium's ingressController, as I realised that our existing ingress controller does SSL termination on the LB and we might run into an ERR_TOO_MANY_REDIRECTS issue.

# For detailed explanation of each of the configuration settings see
# https://www.pomerium.io/reference/

nameOverride: ""
fullnameOverride: ""

# settings that are shared by all services
config:
  # routes under this wildcard domain are handled by pomerium
  rootDomain: myRootDomain
  existingSecret: 
  existingCASecret: pomerium-tls
  ca:
    cert: 
    key: 
  sharedSecret: ""
  cookieSecret: ""
  forceGenerateServiceSecrets: false
  existingSharedSecret: ""
  generateTLS: false
  generateTLSAnnotations: {}
  forceGenerateTLS: false
  generateSigningKey: true
  forceGenerateSigningKey: false
  extraOpts: {}
  existingPolicy: ""
  insecure: false
  insecureProxy: false
  administrators: ""
  routes:
  existingSigningKeySecret: ""
  signingKey: ""
  extraSecretLabels: {}
  extraSharedSecretLabels: {}

authenticate:
  name: ""
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  existingExternalTLSSecret: ""
  proxied: true
  idp:
    provider: github
    clientID: "addddddddddddd"
    clientSecret: "bbbbbbbbbbbbbbbbbbb"
    url: "https://authenticate.myRootDomain/oauth2/callback"  
    scopes: ""
    serviceAccount: ""  
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    nodePort: ""
    type: ClusterIP
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  ingress:
    # cert-manager example
    # annotations:
    annotations: {}
    tls:
      secretName: ""
      # secretName: authenticate-ingress-tls

authorize:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    type: ClusterIP
    clusterIP: None
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""

cache:
  fullnameOverride: ""
  nameOverride: ""

databroker:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    type: ClusterIP
    clusterIP: None
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  storage:
    type: "memory"
    connectionString: ""
    tlsSkipVerify: false
    clientTLS:
      existingSecretName: ""
      existingCASecretKey: ""
      cert: ""
      key: ""
      ca: ""

proxy:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  authenticateServiceUrl: ""
  authorizeInternalUrl: ""
  service:
    annotations: {}
    nodePort: ""
    type: ""
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  redirectServer: true

apiProxy:
  enabled: false
  ingress: true
  fullNameOverride: ""
  name: "kubernetes"

ingressController:
  enabled: true
  ingressClassResource:
    enabled: true
    default: false
    name: pomerium
    controllerName: pomerium.io/ingress-controller
    parameters: {}
  fullnameOverride: ""
  nameOverride: ""
  image:
    repository: "pomerium/ingress-controller"
    tag: "v0.16.0"
  deployment:
    annotations: {}
    extraEnv: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  config:
    namespaces: []
    ingressClass: pomerium.io/ingress-controller
    updateStatus: true
    operatorMode: false
  service:
    annotations: {}
    type: ClusterIP

forwardAuth:
  name: ""
  enabled: false
  # Will not create an ingress. ForwardAuth is only accessible as an internal service.
  internal: false

service:
  # externalPort defaults to 80 or 443 depending on config.insecure
  externalPort: ""
  annotations:
    {}
    # ===  GKE load balancer tweaks; default on until I can figure out
    # how the hell to escape this string from the helm CLI
    # cloud.google.com/app-protocols: '{"https":"HTTPS"}'
  labels: {}
  grpcTrafficPort:
    nameOverride: ""
  httpTrafficPort:
    nameOverride: ""

ingress:
  secretName: ""
  secret:
    name: "pomerium-tls"
    cert: ""
    key: ""
  tls:
    hosts: []
  enabled: true
  hosts: []
  # Sets Ingress/ingressClassName. This way ingress resources can be bound to specific ingress controllers. Kubernetes version >=1.18 required.
  # Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class
  # className: ""
  annotations:
    kubernetes.io/ingress.class: "pomerium"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    #cert-manager.io/cluster-issuer: letsencrypt
    #kubernetes.io/ingress.allow-http: "true"
    # === nginx tweaks
    # kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    # ===  GKE load balancer tweaks; default on until I can figure out
    # how the hell to escape this string from the helm CLI
    # kubernetes.io/ingress.allow-http: "false"
  # Ingress pathType (e.g. ImplementationSpecific, Prefix, .. etc.) might also be required by some Ingress Controllers
  pathType: ImplementationSpecific

resources:
  {}
  # limits:
  #   cpu: 1
  #   memory: 600Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi

priorityClassName: ""

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

podAnnotations: {}
podLabels: {}
replicaCount: 1

# For any other settings that are optional. For a complete listing see:
# https://www.pomerium.io/docs/config-reference.html
extraEnv: 
  # (This will give you details if a user is not able to authenticate; ideally this should be turned off)
  POMERIUM_DEBUG: true
  #LOG_LEVEL: "error"
  #IDP_SCOPES: "openid,profile,email,groups,offline_access"
  #DNS_LOOKUP_FAMILY: "V6_ONLY"
  #CERTIFICATE_FILE: "/pomerium/ca/tls.crt"
  #CERTIFICATE_KEY_FILE: "/pomerium/ca/tls.key"
  #CERTIFICATE_AUTHORITY_FILE: "/pomerium/ca/ca.crt"

extraEnvFrom: []
extraArgs: {}
extraVolumes: []
extraVolumeMounts: []
extraTLSSecrets: []

annotations: {}
imagePullSecrets: ""

image:
  repository: "pomerium/pomerium"
  tag: "v0.16.0"
  pullPolicy: "IfNotPresent"

metrics:
  enabled: false
  port: 9090

tracing:
  enabled: false
  provider: ""
  debug: false
  jaeger:
    collector_endpoint: ""
    agent_endpoint: ""

serviceMonitor:
  enabled: false
  namespace: ""
  labels:
    release: prometheus

rbac:
  create: true

redis:
  enabled: false
  auth:
    existingSecret: pomerium-redis-password
    existingSecretPasswordKey: password
  generateTLS: true
  forceGenerateTLS: false
  cluster:
    slaveCount: 1
  tls:
    enabled: true
    certificatesSecret: pomerium-redis-tls
    certFilename: tls.crt
    certKeyFilename: tls.key
    certCAFilename: ca.crt

In the logs, besides the entries that always appear, there is:

5:18PM ERR error during initial sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
5:18PM ERR sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
5:19PM ERR controlplane: error storing configuration event, retrying error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"

danavatavu commented 2 years ago

Even using the examples from https://cert-manager.io/docs/tutorials/acme/pomerium-ingress/, i.e. self-signed certificates for the internal Pomerium services created by ourselves, does not work. The same error as above appears in the logs. The only way it works is with generateTLS and generateSigningKey set to true.

danavatavu commented 2 years ago

Going forward with what I mentioned above, using the generated TLS secrets, my app is redirected to the OIDC provider and login succeeds, but the redirect callback URI https://authenticate.gloat-dev.gloat.com/oauth2/callback is reported as not secure and returns HTTP ERROR 405.

The following can be seen in the logs:

Authenticate logs:

2:06PM INF http-request authority=authenticate.**myRootDomain** duration=0.615076 forwarded-for=10.20.3.103,10.20.1.68 method=POST path=/oauth2/callback referer=https://sso.jumpcloud.com/ request-id=8f07f4d7-893c-483d-b1a4-5c1d47b3c617 response-code=405 response-code-details=via_upstream service=envoy size=0 upstream-cluster=pomerium-control-plane-http user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"

Authorize logs

travisgroth commented 2 years ago

@danavatavu I see you closed this. Did you find the problem? I'm unable to reproduce using our CRDs from the cert-manager guide, unfortunately. It really seems as though your existingCASecret's ca.crt and existingTLSSecret(s) don't line up.

It might help to post only your value overrides, as there's a lot of extra noise with the defaults included.
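
If the release had been installed with Helm directly, the user-supplied overrides alone could be pulled with something like the following (release name and namespace are assumptions; with Argo CD the equivalent is the values section of the Application spec):

# Show only the values supplied at install time, not the chart defaults
helm -n pomerium get values pomerium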

Here's the configuration I tested. Note: I'm using pomerium-test namespace but otherwise this should be the same.

values.yaml:

authenticate:
  existingTLSSecret: pomerium-tls
  idp:
    provider: XXX
    url: XXX
    clientID: XXX
    clientSecret: XXX
    serviceAccount: XXX
authorize:
  existingTLSSecret: pomerium-tls
  generateSigningKey: true
databroker:
  existingTLSSecret: pomerium-tls
proxy:
  existingTLSSecret: pomerium-tls
config:
  rootDomain: localhost.pomerium.io
  sharedSecret: XXX
  cookieSecret: XXX
  existingCASecret: pomerium-tls
  generateTLS: false
ingress:
  enabled: false
ingressController:
  enabled: true

cert-manager manifests:


apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pomerium-ca
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-ca
spec:
  isCA: true
  secretName: pomerium-ca
  commonName: pomerium ca
  issuerRef:
    name: pomerium-ca
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pomerium-issuer
spec:
  ca:
    secretName: pomerium-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-cert
spec:
  secretName: pomerium-tls
  issuerRef:
    name: pomerium-issuer
    kind: Issuer
  usages:
    - server auth
    - client auth
  dnsNames:
    - pomerium-proxy.pomerium-test.svc.cluster.local
    - pomerium-authorize.pomerium-test.svc.cluster.local
    - pomerium-databroker.pomerium-test.svc.cluster.local
    - pomerium-authenticate.pomerium-test.svc.cluster.local

danavatavu commented 2 years ago

Hi, I closed it by mistake. I wanted to add further details about the behaviour with generateTLS set to true. I will test what you suggested and come back. Regarding posting only part of the parameters because of the noise, I agree; that's how I started, until the chart started asking me for other values as well. Maybe I could split the values file into two files, default and specific, and merge them, but in the end Helm needs all the parameters specified.

danavatavu commented 2 years ago

I can see that you added generateSigningKey under authorize, not config... where is the chart expecting it to be?

travisgroth commented 2 years ago

That's an error - I think it used to be there. It should be config.generateSigningKey. However, the signing key shouldn't cause TLS issues. It is only used to sign the identity JWT.

danavatavu commented 2 years ago

Hi,

I have found the reason why following https://cert-manager.io/docs/tutorials/acme/pomerium-ingress/ to generate the self-signed certificates for the Pomerium services was not working. When deploying with Argo CD, the names of the services (and other Kubernetes resources) are changed based on the Argo CD Application that owns them, unless you override them, which I didn't.
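
A quick way to spot that kind of mismatch (a sketch; namespace and resource names are assumed from the earlier messages) is to compare the Service names that were actually deployed with the DNS names on the cert-manager Certificate:

# List the Services as deployed; with Argo CD their names may carry the
# Application/release name as a prefix
kubectl -n pomerium get svc

# Compare with the DNS names requested on the Certificate
kubectl -n pomerium get certificate pomerium-cert -o=jsonpath="{.spec.dnsNames}"

If the deployed names carry a prefix, either add the prefixed names to the Certificate's dnsNames or pin the chart's names using the fullnameOverride values shown above.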