danavatavu closed this issue 2 years ago.
Probably similar to https://github.com/pomerium/pomerium-helm/issues/247.
The content of the certificate is:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cert-manager.io/v1","kind":"Certificate","metadata":{"annotations":{},"name":"pomerium-cert","namespace":"pomerium"},"spec":{"dnsNames":["pomerium-proxy.pomerium.svc.cluster.local","pomerium-authorize.pomerium.svc.cluster.local","pomerium-databroker.pomerium.svc.cluster.local","pomerium-authenticate.pomerium.svc.cluster.local","authenticate.myRootDomain","*.myRootDomain"],"issuerRef":{"kind":"Issuer","name":"pomerium-issuer"},"secretName":"pomerium-tls","usages":["server auth","client auth"]}}
  creationTimestamp: "2022-01-22T07:25:28Z"
  generation: 1
  name: pomerium-cert
  namespace: pomerium
  resourceVersion: "178194452"
  selfLink: /apis/cert-manager.io/v1/namespaces/pomerium/certificates/pomerium-cert
  uid: a63599c7-3771-47d5-b60a-b9fe99e3feda
spec:
  dnsNames:
  - pomerium-proxy.pomerium.svc.cluster.local
  - pomerium-authorize.pomerium.svc.cluster.local
  - pomerium-databroker.pomerium.svc.cluster.local
  - pomerium-authenticate.pomerium.svc.cluster.local
  - authenticate.myRootDomain
  - '*.myRootDomain'
  issuerRef:
    kind: Issuer
    name: pomerium-issuer
  secretName: pomerium-tls
  usages:
  - server auth
  - client auth
status:
  conditions:
  - lastTransitionTime: "2022-01-22T07:25:28Z"
    message: Certificate is up to date and has not expired
    reason: Ready
    status: "True"
    type: Ready
  notAfter: "2022-04-22T07:25:28Z"
  notBefore: "2022-01-22T07:25:28Z"
  renewalTime: "2022-03-23T07:25:28Z"
  revision: 1
The certificate is signed by a root CA that was generated with a SelfSigned issuer.
8:00AM INF using /etc/ssl/certs/ca-certificates.crt as the system root certificate authority bundle
This is always logged. I don't think that's the issue.
You might be configuring two different CAs, if I understand your configuration correctly. You've got `config.ca.cert` and `config.ca.key` set, in addition to `config.existingCASecret`. That is possibly overwriting the `ca.crt` data field on `pomerium-tls` - or even the entire secret, depending on what Argo does with the result. What happens if you remove the `config.ca.cert` and `config.ca.key` settings?
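One way to check whether the CA and the serving certs line up is the verification pomerium effectively performs at connect time. Below is a local sketch of that check; the `kubectl` extraction commands in the comments assume the `pomerium` namespace and secret name from the manifests above.

```shell
# In-cluster you would first extract the PEMs from the secret, e.g.:
#   kubectl -n pomerium get secret pomerium-tls -o jsonpath='{.data.ca\.crt}'  | base64 -d > ca.crt
#   kubectl -n pomerium get secret pomerium-tls -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
# Local simulation: a leaf cert signed by one CA fails verification against another CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=pomerium ca" -days 1
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr \
  -subj "/CN=pomerium-databroker"
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out tls.crt -days 1
openssl verify -CAfile ca.crt tls.crt            # matching CA: prints "tls.crt: OK"
openssl req -x509 -newkey rsa:2048 -nodes -keyout other.key -out other.crt \
  -subj "/CN=some other ca" -days 1
openssl verify -CAfile other.crt tls.crt || true # mismatched CA: verification fails
```

If the real `tls.crt` from `pomerium-tls` fails `openssl verify` against the real `ca.crt`, that mismatch would explain the CERTIFICATE_VERIFY_FAILED errors.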
I removed what you suggested and also enabled pomerium's ingressController, as I realised that our existing ingress controller does SSL termination on the LB and we might run into an ERR_TOO_MANY_REDIRECTS issue.
# For detailed explanation of each of the configuration settings see
# https://www.pomerium.io/reference/
nameOverride: ""
fullnameOverride: ""
# settings that are shared by all services
config:
  # routes under this wildcard domain are handled by pomerium
  rootDomain: myRootDomain
  existingSecret:
  existingCASecret: pomerium-tls
  ca:
    cert:
    key:
  sharedSecret: ""
  cookieSecret: ""
  forceGenerateServiceSecrets: false
  existingSharedSecret: ""
  generateTLS: false
  generateTLSAnnotations: {}
  forceGenerateTLS: false
  generateSigningKey: true
  forceGenerateSigningKey: false
  extraOpts: {}
  existingPolicy: ""
  insecure: false
  insecureProxy: false
  administrators: ""
  routes:
  existingSigningKeySecret: ""
  signingKey: ""
  extraSecretLabels: {}
  extraSharedSecretLabels: {}
authenticate:
  name: ""
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  existingExternalTLSSecret: ""
  proxied: true
  idp:
    provider: github
    clientID: "addddddddddddd"
    clientSecret: "bbbbbbbbbbbbbbbbbbb"
    url: "https://authenticate.myRootDomain/oauth2/callback"
    scopes: ""
    serviceAccount: ""
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    nodePort: ""
    type: ClusterIP
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  ingress:
    # cert-manager example
    # annotations:
    annotations: {}
    tls:
      secretName: ""
      # secretName: authenticate-ingress-tls
authorize:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    type: ClusterIP
    clusterIP: None
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
cache:
  fullnameOverride: ""
  nameOverride: ""
databroker:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  pdb:
    enabled: false
    minAvailable: 1
  service:
    annotations: {}
    type: ClusterIP
    clusterIP: None
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  storage:
    type: "memory"
    connectionString: ""
    tlsSkipVerify: false
    clientTLS:
      existingSecretName: ""
      existingCASecretKey: ""
      cert: ""
      key: ""
      ca: ""
proxy:
  fullnameOverride: ""
  nameOverride: ""
  existingTLSSecret: pomerium-tls
  tls:
    cert: ""
    key: ""
    defaultSANList: []
    defaultIPList: []
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  pdb:
    enabled: false
    minAvailable: 1
  authenticateServiceUrl: ""
  authorizeInternalUrl: ""
  service:
    annotations: {}
    nodePort: ""
    type: ""
  deployment:
    annotations: {}
    extraEnv: {}
    podAnnotations: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  redirectServer: true
apiProxy:
  enabled: false
  ingress: true
  fullNameOverride: ""
  name: "kubernetes"
ingressController:
  enabled: true
  ingressClassResource:
    enabled: true
    default: false
    name: pomerium
    controllerName: pomerium.io/ingress-controller
    parameters: {}
  fullnameOverride: ""
  nameOverride: ""
  image:
    repository: "pomerium/ingress-controller"
    tag: "v0.16.0"
  deployment:
    annotations: {}
    extraEnv: {}
  serviceAccount:
    annotations: {}
    nameOverride: ""
  config:
    namespaces: []
    ingressClass: pomerium.io/ingress-controller
    updateStatus: true
    operatorMode: false
  service:
    annotations: {}
    type: ClusterIP
forwardAuth:
  name: ""
  enabled: false
  # Will not create an ingress. ForwardAuth is only accessible as internal service.
  internal: false
service:
  # externalPort defaults to 80 or 443 depending on config.insecure
  externalPort: ""
  annotations:
    {}
    # === GKE load balancer tweaks; default on until I can figure out
    # how the hell to escape this string from the helm CLI
    # cloud.google.com/app-protocols: '{"https":"HTTPS"}'
  labels: {}
  grpcTrafficPort:
    nameOverride: ""
  httpTrafficPort:
    nameOverride: ""
ingress:
  secretName: ""
  secret:
    name: "pomerium-tls"
    cert: ""
    key: ""
  tls:
    hosts: []
  enabled: true
  hosts: []
  # Sets Ingress/ingressClassName. This way ingress resources are able to bind to specific ingress-controllers. Kubernetes version >=1.18 required.
  # Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class
  # className: ""
  annotations:
    kubernetes.io/ingress.class: "pomerium"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # cert-manager.io/cluster-issuer: letsencrypt
    # kubernetes.io/ingress.allow-http: "true"
    # === nginx tweaks
    # kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    # === GKE load balancer tweaks; default on until I can figure out
    # how the hell to escape this string from the helm CLI
    # kubernetes.io/ingress.allow-http: "false"
  # Ingress pathType (e.g. ImplementationSpecific, Prefix, etc.) might also be required by some Ingress Controllers
  pathType: ImplementationSpecific
resources:
  {}
  # limits:
  #   cpu: 1
  #   memory: 600Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi
priorityClassName: ""
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
podAnnotations: {}
podLabels: {}
replicaCount: 1
# For any other settings that are optional. For a complete listing see:
# https://www.pomerium.io/docs/config-reference.html
extraEnv:
  # (This will give you details if a user is not able to authenticate; ideally this should be turned off)
  POMERIUM_DEBUG: true
  # LOG_LEVEL: "error"
  # IDP_SCOPES: "openid,profile,email,groups,offline_access"
  # DNS_LOOKUP_FAMILY: "V6_ONLY"
  # CERTIFICATE_FILE: "/pomerium/ca/tls.crt"
  # CERTIFICATE_KEY_FILE: "/pomerium/ca/tls.key"
  # CERTIFICATE_AUTHORITY_FILE: "/pomerium/ca/ca.crt"
extraEnvFrom: []
extraArgs: {}
extraVolumes: []
extraVolumeMounts: []
extraTLSSecrets: []
annotations: {}
imagePullSecrets: ""
image:
  repository: "pomerium/pomerium"
  tag: "v0.16.0"
  pullPolicy: "IfNotPresent"
metrics:
  enabled: false
  port: 9090
tracing:
  enabled: false
  provider: ""
  debug: false
  jaeger:
    collector_endpoint: ""
    agent_endpoint: ""
serviceMonitor:
  enabled: false
  namespace: ""
  labels:
    release: prometheus
rbac:
  create: true
redis:
  enabled: false
  auth:
    existingSecret: pomerium-redis-password
    existingSecretPasswordKey: password
  generateTLS: true
  forceGenerateTLS: false
  cluster:
    slaveCount: 1
  tls:
    enabled: true
    certificatesSecret: pomerium-redis-tls
    certFilename: tls.crt
    certKeyFilename: tls.key
    certCAFilename: ca.crt
In the logs, besides the entries which always appear, there is:
5:18PM ERR error during initial sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
5:18PM ERR sync error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
5:19PM ERR controlplane: error storing configuration event, retrying error="rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED"
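One way to narrow down a CERTIFICATE_VERIFY_FAILED like the ones above is to look at which DNS names the served certificate actually carries, and compare them with the in-cluster names pomerium dials (e.g. pomerium-databroker.pomerium.svc.cluster.local). A sketch follows; the port-forward lines in the comments are an assumption about service name and port, so adjust them to your deployment.

```shell
# Against the live service you could port-forward and dump the presented cert:
#   kubectl -n pomerium port-forward svc/pomerium-databroker 5443:443 &
#   openssl s_client -connect localhost:5443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -ext subjectAltName
# The same inspection on a PEM file, using a throwaway cert as a stand-in:
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.crt -days 1 \
  -subj "/CN=test" \
  -addext "subjectAltName=DNS:pomerium-databroker.pomerium.svc.cluster.local"
openssl x509 -in t.crt -noout -ext subjectAltName
```

If the SAN list on the served certificate is missing the service DNS name being dialed, verification fails even when the CA is correct.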
Even using the examples from https://cert-manager.io/docs/tutorials/acme/pomerium-ingress/, meaning self-signed certificates for the internal pomerium services, created by us, does not work. The same error as above appears in the logs. The only way it works is with generateTLS and generateSigningKey set to true.
Going forward with what I mentioned above, using generated TLS secrets, my app is redirected to the OIDC provider and login succeeds, but the redirect callback URI https://authenticate.gloat-dev.gloat.com/oauth2/callback returns as insecure with HTTP ERROR 405.
The following can be seen in the logs:
Authenticate logs:
2:06PM INF http-request authority=authenticate.myRootDomain duration=0.615076 forwarded-for=10.20.3.103,10.20.1.68 method=POST path=/oauth2/callback referer=https://sso.jumpcloud.com/ request-id=8f07f4d7-893c-483d-b1a4-5c1d47b3c617 response-code=405 response-code-details=via_upstream service=envoy size=0 upstream-cluster=pomerium-control-plane-http user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36"
Authorize logs
@danavatavu I see you closed this. Did you find the problem? I'm unable to reproduce using our CRDs from the cert-manager guide, unfortunately. It really seems as though your `existingCASecret`'s `ca.crt` and `existingTLSSecret`(s) don't line up.
It might help to have only your value overrides, as there's a lot of extra noise with the defaults.
Here's the configuration I tested. Note: I'm using the `pomerium-test` namespace, but otherwise this should be the same.
values.yaml:
authenticate:
  existingTLSSecret: pomerium-tls
  idp:
    provider: XXX
    url: XXX
    clientID: XXX
    clientSecret: XXX
    serviceAccount: XXX
authorize:
  existingTLSSecret: pomerium-tls
  generateSigningKey: true
databroker:
  existingTLSSecret: pomerium-tls
proxy:
  existingTLSSecret: pomerium-tls
config:
  rootDomain: localhost.pomerium.io
  sharedSecret: XXX
  cookieSecret: XXX
  existingCASecret: pomerium-tls
  generateTLS: false
ingress:
  enabled: false
ingressController:
  enabled: true
cert-manager manifests:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pomerium-ca
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-ca
spec:
  isCA: true
  secretName: pomerium-ca
  commonName: pomerium ca
  issuerRef:
    name: pomerium-ca
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pomerium-issuer
spec:
  ca:
    secretName: pomerium-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-cert
spec:
  secretName: pomerium-tls
  issuerRef:
    name: pomerium-issuer
    kind: Issuer
  usages:
  - server auth
  - client auth
  dnsNames:
  - pomerium-proxy.pomerium-test.svc.cluster.local
  - pomerium-authorize.pomerium-test.svc.cluster.local
  - pomerium-databroker.pomerium-test.svc.cluster.local
  - pomerium-authenticate.pomerium-test.svc.cluster.local
Hi, I've closed it by mistake. I wanted to add the behaviour going forward with generateTLS set to true. I will test what you suggested and come back. Regarding using only part of the parameters because the full file is too much noise, I agree; that's the way I started, until the chart started asking me for other values as well. Maybe I can just split the values file into two files, default and specific, and merge them, but in the end helm needs all the parameters specified.
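For what it's worth, helm merges user-supplied values over the chart's built-in defaults, so an overrides-only file is normally enough and nothing has to be repeated. A sketch of what such a file might look like for this setup (values taken from the configuration posted above; apply with something like `helm upgrade --install pomerium pomerium/pomerium -f overrides.yaml`):

```yaml
# overrides.yaml - only the values that differ from the chart defaults;
# everything not listed here falls back to the chart's values.yaml.
config:
  rootDomain: myRootDomain
  existingCASecret: pomerium-tls
  generateTLS: false
authenticate:
  existingTLSSecret: pomerium-tls
  idp:
    provider: github
    clientID: "addddddddddddd"
    clientSecret: "bbbbbbbbbbbbbbbbbbb"
authorize:
  existingTLSSecret: pomerium-tls
databroker:
  existingTLSSecret: pomerium-tls
proxy:
  existingTLSSecret: pomerium-tls
ingressController:
  enabled: true
extraEnv:
  POMERIUM_DEBUG: true
```

When several `-f` files are passed, later files win on conflicting keys, so a default file plus a specific file also works without duplicating everything.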
I can see that you added generateSigningKey under authorize, not config... where is the chart expecting it to be?
That's an error - I think it used to be there. It should be `config.generateSigningKey`. However, the signing key shouldn't cause TLS issues. It is only used to sign the identity JWT.
Hi,
I have found out why following https://cert-manager.io/docs/tutorials/acme/pomerium-ingress/ to generate the self-signed certificates for the pomerium services was not working. When deploying with ArgoCD, the names of the services (and other k8s resources) are changed based on the Argo Application that owns them, unless you override them, which I didn't.
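If that's the cause, one possible workaround is to pin the generated resource names so they keep matching the certificate's dnsNames regardless of the Argo Application / release name. A sketch using the chart's per-service fullnameOverride keys (the keys appear in the values above; that they map one-to-one onto the resulting Service names is an assumption worth verifying against the rendered manifests):

```yaml
# Pin service names so the certificate's dnsNames stay valid
# no matter what name ArgoCD gives the helm release.
authenticate:
  fullnameOverride: pomerium-authenticate
authorize:
  fullnameOverride: pomerium-authorize
databroker:
  fullnameOverride: pomerium-databroker
proxy:
  fullnameOverride: pomerium-proxy
```

The alternative is to regenerate the Certificate's dnsNames to match whatever names ArgoCD produces.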
What happened?
Installing the latest version of pomerium with the following configuration:
I can see from the logs that all the pomerium pods are using /etc/ssl/certs/ca-certificates.crt as the system root certificate authority bundle instead of the provided root certificate authority, failing after that with "no TLS certificate found for domain, using self-signed certificate".
What did you expect to happen?
I was expecting it to use the provided root CA and the certificates generated by this authority.
Steps to reproduce
x
y
z
What's your environment like?
What are your chart values?
What are the contents of your config secret?
kubectl get secret pomerium -o=jsonpath="{.data['config\.yaml']}" | base64 -D
What did you see in the logs?
Additional context
Chart is being deployed using ArgoCD.