cetic / helm-nifi

Helm Chart for Apache Nifi
Apache License 2.0

[cetic/nifi] javax.net.ssl.SSLPeerUnverifiedException: Hostname localhost not verified #168

Closed: esteban1983cl closed this issue 2 years ago

esteban1983cl commented 3 years ago

Describe the bug: I configured this chart the secure way with LDAP enabled. When I log in with the admin user, I get this error message:

javax.net.ssl.SSLPeerUnverifiedException: Hostname localhost not verified: certificate: sha256/QYFkCwWzDqLQUw5wstxc7y5WYKLTziIccXjX78A5gpA= DN: CN=green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local, OU=NIFI subjectAltNames: [green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local]
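
For reference, one way to confirm which hostnames the generated certificate actually covers is to inspect its Subject Alternative Names from inside the pod. A diagnostic sketch (it assumes openssl is available in the server container; the pod and namespace names are taken from the pod listing further below):

❯ kubectl exec -n nifi green-nifi-0 -c server -- sh -c \
    'echo | openssl s_client -connect localhost:9443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"'

If the only SAN is the pod FQDN, as in the exception above, any HTTPS request addressed to localhost will fail hostname verification.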

Version of Helm and Kubernetes: Helm (ArgoCD Embedded):

helm version
version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29", GitTreeState:"clean", GoVersion:"go1.14.11"}

Kubernetes (AWS EKS):

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.20-eks-8c579e", GitCommit:"8c579edfc914f013ff48b2a2b2c1308fdcacc53f", GitTreeState:"clean", BuildDate:"2021-07-31T01:34:13Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

What happened: I configured this chart the secure way with LDAP enabled. When I log in with the admin user, I get this error message:

javax.net.ssl.SSLPeerUnverifiedException: Hostname localhost not verified: certificate: sha256/QYFkCwWzDqLQUw5wstxc7y5WYKLTziIccXjX78A5gpA= DN: CN=green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local, OU=NIFI subjectAltNames: [green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local]

What you expected to happen: To view the NiFi console.

How to reproduce it (as minimally and precisely as possible): Install the chart using ArgoCD; a minimal Application sketch follows.
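
A minimal ArgoCD Application sketch for reproducing the setup (the application name, chart version, and ArgoCD namespace are assumptions; the Helm values are the ones listed under "Anything else we need to know" below):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: green-nifi
  namespace: argocd          # assumed ArgoCD namespace
spec:
  project: default
  source:
    repoURL: https://cetic.github.io/helm-charts
    chart: nifi
    targetRevision: 0.6.1    # assumed chart version; use the version under test
    helm:
      values: |
        # paste the values listed below
  destination:
    server: https://kubernetes.default.svc
    namespace: nifi
  syncPolicy:
    automated: {}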

Anything else we need to know:

Here is some information to help with troubleshooting:

---
# Number of nifi nodes
replicaCount: 1

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.12.1"
  pullPolicy: IfNotPresent

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecret: myRegistryKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled, otherwise
## metrics and traces are accepted from any host able to connect to this host.
#

sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: hard
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      #prometheus.io/scrape: "true"
  serviceAccount:
    create: false
    name: nifi
    annotations: {}
  hostAliases: []
#    - ip: "1.2.3.4"
#      hostnames:
#        - example.com
#        - example

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config

properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: false
  isNode: true # set to false if ldap is enabled
  httpPort: null # set to null if ldap is enabled
  httpsPort: 9443 # set to 9443 if ldap is enabled
  # webProxyHost: green-nifi.nifi:9443
  webProxyHost: nifi.example.com:9443, localhost
  clusterPort: 6007
  clusterSecure: true # set to true if ldap is enabled
  needClientAuth: false
  provenanceStorage: "30 GB"
  siteToSite:
    secure: false
    port: 10000
  authorizer: managed-authorizer
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

  ## Include additional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  ldap:
    enabled: true
    host: ldap://ldap.example.com:389
    searchBase: OU=Usuarios,OU=Chile,DC=example,DC=com
    admin: "cn=SERVICE_USER,ou=Servicios,ou=Usuarios,ou=Chile,dc=example,dc=com"
    pass: p455w0rd
    searchFilter: (sAMAccountName=%s)
    UserIdentityAttribute: cn
    authStrategy: ANONYMOUS # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    identityStrategy: USE_USERNAME
    authExpiration: 12 hours

  oidc:
    enabled: false
    discoveryUrl:
    clientId:
    clientSecret:
    claimIdentifyingUser: email
    ## Request additional scopes, for example profile
    additionalScopes:

## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service)
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: LoadBalancer
  httpPort: 8080
  httpsPort: 9443
  # nodePort: 30236
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Name=NifiLB,nombre=NifiLB,ceco=cgo1007324,ApplicationName=CL-TXD-NIFI,ambiente=staging,aplicacion=CL-TXD-NIFI,pais=cl,plataforma=linux,proyecto=Plataformas TxD,version-so=Amazon Linux 2"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    external-dns.alpha.kubernetes.io/set-identifier: green
    external-dns.alpha.kubernetes.io/aws-weight: "2"
    external-dns.alpha.kubernetes.io/hostname: nifi.example.com

    # loadBalancerIP:
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24
    ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
#  sessionAffinity: ClientIP
#  sessionAffinityConfig:
#    clientIP:
#      timeoutSeconds: 10800

  # Enables additional ports on the nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: false
  annotations:
    kubernetes.io/ingress.class: nginx
  tls: []
  hosts: ["nifi.example.com"]
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 3g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"
  imagePullPolicy: "IfNotPresent"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  storageClass: gp2
  #
  # The default storage class is used if this variable is not set.

  accessModes:  [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi
  # Storage capacity for the Provenance repository. When changing this, also change the properties.provenanceStorage value above.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 1
    memory: 3Gi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

## Enables setting your own affinity. Mutually exclusive with sts.AntiAffinity:
## to use this, set sts.AntiAffinity to a value other than "soft" or "hard"
affinity: {}

nodeSelector:
  node.kubernetes.io/role: "batch"

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
  #   volumeMounts:
  #     - mountPath: /tmp/foo
  #       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be passed to deployment pods
env: []

## Extra environment variables from secrets and config maps
envFrom: []

# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret

## Openshift support
## Use the following variables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    #host: www.test.com
    #path: /nifi

# ca server details
# Setting this to true creates a nifi-toolkit based CA server
# The CA server is used to generate the self-signed certificates required to set up a secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: true
  persistence:
    enabled: true
    storageClass: "aws-efs"
  server: ""
  service:
    port: 9090
  token: mysixteenchars
  admin:
    cn: admin
  serviceAccount:
    create: true
    #name: nifi-ca
  openshift:
    scc:
      enabled: false

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: false
  ## If the Zookeeper chart is disabled, a URL and port are required to connect
  url: "green-zookeeper"
  port: 2181
  replicaCount: 3

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: false
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics
    enabled: false
    # Port used to expose Prometheus metrics
    port: 9092
    serviceMonitor:
      # Enable deployment of Prometheus Operator ServiceMonitor resource
      enabled: false
      # Additional labels for the ServiceMonitor
      labels: {}
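
Note that the inline comments in these values say isNode should be set to false when LDAP is enabled, yet it is set to true here: with replicaCount: 1 this runs a one-node secure cluster, and the exception's format suggests it is raised while the node replicates a request over HTTPS to localhost, which the certificate's SANs do not cover. A sketch of the LDAP-aligned properties, following the chart's own inline comments (whether this resolves the exception in this environment is untested):

properties:
  isNode: false        # per the inline comment above: set to false if ldap is enabled
  httpPort: null
  httpsPort: 9443
  webProxyHost: nifi.example.com:9443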

Check if a pod is in error:

❯ kubectl get po -n nifi
NAME                             READY   STATUS    RESTARTS   AGE
green-nifi-0                     4/4     Running   0          6h26m
green-nifi-ca-54db7bfbbd-l2bmr   1/1     Running   0          2d12h
green-zookeeper-0                1/1     Running   0          3d19h
green-zookeeper-1                1/1     Running   0          3d19h
green-zookeeper-2                1/1     Running   0          3d19h

Inspect the pod, check the "Events" section at the end for anything suspicious.

❯ kubectl describe po green-nifi-0 -n nifi
Events:          <none>

Get logs on a failed container inside the pod (here the server one):

❯ kubectl logs green-nifi-0 -n nifi -c server
updating nifi.remote.input.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.cluster.node.address in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.zookeeper.connect.string in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.proxy.host in /opt/nifi/nifi-current/conf/localhost
updating nifi.web.http.network.interface.default in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.network.interface.lo in /opt/nifi/nifi-current/conf/nifi.properties
NiFi running with PID 24.

Java home: /usr/local/openjdk-8
NiFi home: /opt/nifi/nifi-current

Bootstrap Config File: /opt/nifi/nifi-current/conf/bootstrap.conf

2021-09-06 17:57:05,787 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2021-09-06 17:57:05,787 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/nifi/nifi-current
2021-09-06 17:57:05,787 INFO [main] org.apache.nifi.bootstrap.Command Command: /usr/local/openjdk-8/bin/java -classpath /opt/nifi/nifi-current/./conf:/opt/nifi/nifi-current/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi/nifi-current/./lib/jcl-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/jetty-schemas-3.1.jar:/opt/nifi/nifi-current/./lib/jul-to-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/log4j-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/logback-classic-1.2.3.jar:/opt/nifi/nifi-current/./lib/logback-core-1.2.3.jar:/opt/nifi/nifi-current/./lib/nifi-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-framework-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-nar-utils-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-properties-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-runtime-1.12.1.jar:/opt/nifi/nifi-current/./lib/slf4j-api-1.7.30.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx3g -Xms3g -Djava.security.egd=file:/dev/urandom -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi/nifi-current/./conf/nifi.properties -Dnifi.bootstrap.listen.port=40373 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi/nifi-current/logs org.apache.nifi.NiFi
2021-09-06 17:57:05,803 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 48
2021/09/06 17:57:36 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandaloneCommandLine: Using /opt/nifi/nifi-current/conf/nifi.properties as template.
2021/09/06 17:57:36 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Running standalone certificate generation with output directory /opt/nifi/nifi-current/conf
2021/09/06 17:57:37 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generated new CA certificate /opt/nifi/nifi-current/conf/nifi-cert.pem and key /opt/nifi/nifi-current/conf/nifi-key.key
2021/09/06 17:57:37 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Writing new ssl configuration to /opt/nifi/nifi-current/conf/green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local
2021/09/06 17:57:37 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully generated TLS configuration for green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local 1 in /opt/nifi/nifi-current/conf/green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local
2021/09/06 17:57:37 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Generating new client certificate /opt/nifi/nifi-current/conf/CN=ACCESS_TXD_APPLICATIONS_OU=Servicios_OU=Usuarios_OU=Chile_DC=cencosud_DC=corp.p12
2021/09/06 17:57:38 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: Successfully generated client certificate /opt/nifi/nifi-current/conf/CN=ACCESS_TXD_APPLICATIONS_OU=Servicios_OU=Usuarios_OU=Chile_DC=cencosud_DC=corp.p12
2021/09/06 17:57:38 INFO [main] org.apache.nifi.toolkit.tls.standalone.TlsToolkitStandalone: tls-toolkit standalone completed successfully

Java home: /usr/local/openjdk-8
NiFi home: /opt/nifi/nifi-current

Bootstrap Config File: /opt/nifi/nifi-current/conf/bootstrap.conf

2021-09-06 17:57:38,449 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2021-09-06 17:57:38,449 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/nifi/nifi-current
2021-09-06 17:57:38,450 INFO [main] org.apache.nifi.bootstrap.Command Command: /usr/local/openjdk-8/bin/java -classpath /opt/nifi/nifi-current/./conf:/opt/nifi/nifi-current/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi/nifi-current/./lib/jcl-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/jetty-schemas-3.1.jar:/opt/nifi/nifi-current/./lib/jul-to-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/log4j-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/logback-classic-1.2.3.jar:/opt/nifi/nifi-current/./lib/logback-core-1.2.3.jar:/opt/nifi/nifi-current/./lib/nifi-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-framework-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-nar-utils-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-properties-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-runtime-1.12.1.jar:/opt/nifi/nifi-current/./lib/slf4j-api-1.7.30.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx3g -Xms3g -Djava.security.egd=file:/dev/urandom -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi/nifi-current/./conf/nifi.properties -Dnifi.bootstrap.listen.port=46847 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi/nifi-current/logs org.apache.nifi.NiFi
2021-09-06 17:57:38,463 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 133
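
The tls-toolkit output above confirms that the certificate was generated only for green-nifi-nifi-0.green-nifi-nifi-headless.nifi.svc.cluster.local, which matches the SAN list in the exception; localhost is not covered. To double-check what ended up in the keystore, a sketch (the keystore path and $KEYSTORE_PASS are assumptions; the real path and password are in nifi.properties under nifi.security.keystore and nifi.security.keystorePasswd):

❯ kubectl exec -n nifi green-nifi-0 -c server -- sh -c \
    'keytool -list -v -keystore /opt/nifi/nifi-current/conf/keystore.jks -storepass "$KEYSTORE_PASS" | grep -A1 SubjectAlternativeName'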
banzo commented 3 years ago

#166 has some improvements for LDAP/OIDC but is still in progress.

banzo commented 2 years ago

Release 1.0.0 should fix that; if not, have a look at the updated auth documentation.
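
If applying the fix outside ArgoCD, an upgrade sketch (the release name and namespace are assumed from the pod listing above; 1.0.0 is the chart version):

❯ helm repo update
❯ helm upgrade green-nifi cetic/nifi --version 1.0.0 -n nifi -f values.yaml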

Please reopen if not.