cetic / helm-nifi

Helm Chart for Apache Nifi

[cetic/nifi] TLS Toolkit not generating a valid JKS when activating ldap #217

Open digitalshuffler opened 2 years ago

digitalshuffler commented 2 years ago

Describe the bug: When trying to deploy NiFi with an LDAP configuration, it throws an error saying that the JKS generated by tls-toolkit.sh is not valid, and NiFi refuses to start.

Version of Helm, Kubernetes and the Nifi chart: Kubernetes v1.19.11, NiFi chart version 1.0.2

What happened: NiFi falls into a CrashLoopBackOff error. Full stack trace attached:

2022-01-03 08:24:22,901 WARN [main] o.a.nifi.security.util.SslContextFactory Some truststore properties are populated (/opt/nifi/nifi-current/conf/nifi-deploy-dev-nifi-0.nifi-deploy-dev-nifi-headless.lakehouse-nifi.svc.cluster.local/truststore.jks, ********, JKS) but not valid
2022-01-03 08:24:22,903 ERROR [main] o.a.n.c.s.m.StandardStateManagerProvider Encountered an error configuring TLS for state manager: 
org.apache.nifi.security.util.TlsException: The truststore properties are not valid
        at org.apache.nifi.security.util.SslContextFactory.getTrustManagers(SslContextFactory.java:181)
        at org.apache.nifi.security.util.SslContextFactory.createSslContext(SslContextFactory.java:66)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider.createStateProvider(StandardStateManagerProvider.java:217)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider.createLocalStateProvider(StandardStateManagerProvider.java:125)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider.create(StandardStateManagerProvider.java:105)
        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.<init>(NodeClusterCoordinator.java:134)
        at org.apache.nifi.cluster.spring.NodeClusterCoordinatorFactoryBean.getObject(NodeClusterCoordinatorFactoryBean.java:54)
        at org.apache.nifi.cluster.spring.NodeClusterCoordinatorFactoryBean.getObject(NodeClusterCoordinatorFactoryBean.java:35)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:169)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:101)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1884)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getObjectForBeanInstance(AbstractAutowireCapableBeanFactory.java:1266)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:345)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:330)
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:113)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1689)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1434)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:601)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524)
        at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213)
        at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1160)
        at org.apache.nifi.spring.StandardFlowServiceFactoryBean.getObject(StandardFlowServiceFactoryBean.java:48)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:169)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:101)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1884)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getObjectForBeanInstance(AbstractAutowireCapableBeanFactory.java:1266)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:345)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213)
        at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1160)
        at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:51)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1068)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
        at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:997)
        at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:746)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:911)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:426)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.server.Server.start(Server.java:423)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.server.Server.doStart(Server.java:387)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1129)
        at org.apache.nifi.NiFi.<init>(NiFi.java:159)
        at org.apache.nifi.NiFi.<init>(NiFi.java:71)
        at org.apache.nifi.NiFi.main(NiFi.java:303)
2022-01-03 08:24:22,976 WARN [main] org.eclipse.jetty.webapp.WebAppContext Failed startup of context o.e.j.w.WebAppContext@179ee36b{nifi-api,/nifi-api,file:///opt/nifi/nifi-current/work/jetty/nifi-web-api-1.14.0.war/webapp/,UNAVAILABLE}{./work/nar/extensions/nifi-server-nar-1.14.0.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.14.0.war}
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
        at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:83)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1068)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
        at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:997)
        at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:746)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1449)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1414)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:911)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:426)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.server.Server.start(Server.java:423)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.server.Server.doStart(Server.java:387)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1129)
        at org.apache.nifi.NiFi.<init>(NiFi.java:159)
        at org.apache.nifi.NiFi.<init>(NiFi.java:71)
        at org.apache.nifi.NiFi.main(NiFi.java:303)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowService': FactoryBean threw exception on object creation; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowController' defined in class path resource [nifi-context.xml]: Cannot resolve reference to bean 'clusterCoordinator' while setting bean property 'clusterCoordinator'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'clusterCoordinator': FactoryBean threw exception on object creation; nested exception is java.lang.IllegalStateException: Error configuring TLS for state manager
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:176)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:101)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1884)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getObjectForBeanInstance(AbstractAutowireCapableBeanFactory.java:1266)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:345)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213)
        at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1160)
        at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:51)
        ... 38 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowController' defined in class path resource [nifi-context.xml]: Cannot resolve reference to bean 'clusterCoordinator' while setting bean property 'clusterCoordinator'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'clusterCoordinator': FactoryBean threw exception on object creation; nested exception is java.lang.IllegalStateException: Error configuring TLS for state manager
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:342)
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:113)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1689)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1434)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:601)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524)
        at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213)
        at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1160)
        at org.apache.nifi.spring.StandardFlowServiceFactoryBean.getObject(StandardFlowServiceFactoryBean.java:48)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:169)
        ... 45 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'clusterCoordinator': FactoryBean threw exception on object creation; nested exception is java.lang.IllegalStateException: Error configuring TLS for state manager
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:176)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:101)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1884)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getObjectForBeanInstance(AbstractAutowireCapableBeanFactory.java:1266)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:345)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208)
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:330)
        ... 57 common frames omitted
Caused by: java.lang.IllegalStateException: Error configuring TLS for state manager
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider.createStateProvider(StandardStateManagerProvider.java:220)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider.createLocalStateProvider(StandardStateManagerProvider.java:125)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider.create(StandardStateManagerProvider.java:105)
        at org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator.<init>(NodeClusterCoordinator.java:134)
        at org.apache.nifi.cluster.spring.NodeClusterCoordinatorFactoryBean.getObject(NodeClusterCoordinatorFactoryBean.java:54)
        at org.apache.nifi.cluster.spring.NodeClusterCoordinatorFactoryBean.getObject(NodeClusterCoordinatorFactoryBean.java:35)
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:169)
        ... 63 common frames omitted
Caused by: org.apache.nifi.security.util.TlsException: The truststore properties are not valid
        at org.apache.nifi.security.util.SslContextFactory.getTrustManagers(SslContextFactory.java:181)
        at org.apache.nifi.security.util.SslContextFactory.createSslContext(SslContextFactory.java:66)
        at org.apache.nifi.controller.state.manager.StandardStateManagerProvider.createStateProvider(StandardStateManagerProvider.java:217)
        ... 69 common frames omitted
2022-01-03 08:24:23,887 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=401ms
2022-01-03 08:24:24,088 INFO [main] o.e.j.s.h.C._nifi_content_viewer No Spring WebApplicationInitializer types detected on classpath
2022-01-03 08:24:24,093 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@67763ebe{nifi-content-viewer,/nifi-content-viewer,file:///opt/nifi/nifi-current/work/jetty/nifi-web-content-viewer-1.14.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-server-nar-1.14.0.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-content-viewer-1.14.0.war}
2022-01-03 08:24:24,277 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=90ms
2022-01-03 08:24:24,278 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2022-01-03 08:24:24,279 INFO [main] o.e.j.s.h.ContextHandler._nifi_docs No Spring WebApplicationInitializer types detected on classpath
2022-01-03 08:24:24,384 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@4b4a2fa8{nifi-docs,/nifi-docs,file:///opt/nifi/nifi-current/work/jetty/nifi-web-docs-1.14.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-server-nar-1.14.0.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-docs-1.14.0.war}
2022-01-03 08:24:24,484 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=86ms
2022-01-03 08:24:24,486 INFO [main] o.e.j.server.handler.ContextHandler._ No Spring WebApplicationInitializer types detected on classpath
2022-01-03 08:24:24,676 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@5da6b13d{nifi-error,/,file:///opt/nifi/nifi-current/work/jetty/nifi-web-error-1.14.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-server-nar-1.14.0.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.14.0.war}
2022-01-03 08:24:24,685 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.IllegalStateException: /opt/nifi/nifi-current/conf/nifi-deploy-dev-nifi-0.nifi-deploy-dev-nifi-headless.lakehouse-nifi.svc.cluster.local/keystore.jks is not a valid keystore
        at org.eclipse.jetty.util.security.CertificateUtils.getKeyStore(CertificateUtils.java:50)
        at org.eclipse.jetty.util.ssl.SslContextFactory.loadKeyStore(SslContextFactory.java:1203)
        at org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:322)
        at org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:244)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
        at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
        at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.eclipse.jetty.server.Server.doStart(Server.java:401)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
        at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1129)
        at org.apache.nifi.NiFi.<init>(NiFi.java:159)
        at org.apache.nifi.NiFi.<init>(NiFi.java:71)
        at org.apache.nifi.NiFi.main(NiFi.java:303)
2022-01-03 08:24:24,686 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...

What you expected to happen: Nifi starts correctly

How to reproduce it (as minimally and precisely as possible): Attached values.yaml:

values.yaml:

---
# Number of nifi nodes
replicaCount: 2

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.14.0"
  pullPolicy: "IfNotPresent"

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecret: myRegistrKeySecretName

securityContext: {}
#  runAsUser: 1000
#  fsGroup: 1000

## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled otherwise
## metrics and traces are accepted from any host able to connect to this host.
#

sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      prometheus.io/scrape: "true"      
  serviceAccount:
    create: false
    #name: nifi
    annotations: {}
  hostAliases: {}
  #  - ip: "0.0.0.0"
  #    hostnames:
  #      - nifi.ingress.fqdn

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config

properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must to have minimal 12 length key
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: true
  isNode: true
  httpPort: 8443 
  webProxyHost: "nifi.ingress.fqdn"
  webHttpsHost: 
  clusterPort: 6007
  provenanceStorage: "8 GB"
  # needClientAuth: false
  siteToSite:
    port: 10000
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    org.apache.nifi.web.security: DEBUG
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

  ## Include aditional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=service-account
  SSL:
    keystorePasswd: changeMe
    truststorePasswd: changeMe

  # Automaticaly disabled if OIDC or LDAP enabled 
  singleUser:
    username: thisisainvalidusername
    password: thisisainvalidpassword # Must to have at least 12 characters

  ldap:
    enabled: true
    host: ldap://ldap.url:389
    searchBase: CN=Users,DC=domain,DC=local
    admin: CN=service-account,OU=Some OU with spaces,DC=domain,DC=local
    pass: ***************
    searchFilter: (sAMAccountName=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    IdentityStrategy: USE_USERNAME
    authExpiration: 12 hours

  oidc:
    enabled: false
    discoveryUrl: #http://<oidc_provider_address>:<oidc_provider_port>/auth/realms/<client_realm>/.well-known/openid-configuration
    clientId: #<client_name_in_oidc_provider>
    clientSecret: #<client_secret_in_oidc_provider>
    claimIdentifyingUser: preferred_username
    ## Request additional scopes, for example profile
    additionalScopes:

openldap:
  enabled: false
  persistence:
    enabled: false
  env:
    LDAP_ORGANISATION: # name of your organization e.g. "Example"
    LDAP_DOMAIN: # your domain e.g. "ldap.example.be"
    LDAP_BACKEND: "hdb"
    LDAP_TLS: "true"
    LDAP_TLS_ENFORCE: "false"
    LDAP_REMOVE_CONFIG_AFTER_SETUP: "false"
  adminPassword: #ChengeMe
  configPassword: #ChangeMe
  customLdifFiles:
    1-default-users.ldif: |-
      # You can find an example ldif file at https://github.com/cetic/fadi/blob/master/examples/basic/example.ldif
## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: ClusterIP
  httpPort: 8443
  # nodePort: 30236
  annotations:
    loadBalancerIP: 10.36.121.30
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24
    ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
    # sessionAffinity: ClientIP
    # sessionAffinityConfig:
    #   clientIP:
  #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "100m"
    nginx.org/location-snippets: |
            proxy_set_header HOST "0.0.0.0";
            proxy_set_header X-ProxyHost "0.0.0.0";

  tls: []
  hosts:
    - nifi.ingress.fqdn
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 4g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"
  imagePullPolicy: "IfNotPresent"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: false

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  storageClass: azurefile
  #
  # The default storage class is used if this variable is not set.

  accessModes:  [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 1Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 1Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
  provenanceRepoStorage:
    size: 1Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: #{}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
   cpu: 500m
   memory: 2000Mi
  requests:
   cpu: 100m
   memory: 256Mi

logresources:
  requests:
    cpu: 10m
    memory: 50Mi
  limits:
    cpu: 500m
    memory: 500Mi

## Enables setting your own affinity. Mutually exclusive with sts.AntiAffinity
## You need to set the value of sts.AntiAffinity other than "soft" and "hard"
affinity: {}

nodeSelector: {}

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
  #   volumeMounts:
  #     - mountPath: /tmp/foo
  #       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be pass onto deployment pods
env:
  #- name: NIFI_WEB_PROXY_HOST
  #  value: "nifi.ingress.fqdn"
  #- name: AUTH
  #  value: ldap
  #  value: nifi.ingress.fqdn

## Extra environment variables from secrets and config maps
envFrom: []

# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret

## Openshift support
## Use the following varables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    #host: www.test.com
    #path: /nifi

# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate self-signed certificates required setting up secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: false
  persistence:
    enabled: true
  server: ""
  service:
    port: 9090
  token: sixteenCharacters
  admin:
    cn: admin
  serviceAccount:
    create: false
    #name: nifi-ca
  openshift:
    scc:
      enabled: false

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: true
  ## If the Zookeeper Chart is disabled a URL and port are required to connect
  url: ""
  port: 2181
  replicaCount: 2
  persistence:
    storageClass: azurefile

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: false
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics
    enabled: false
    # Port used to expose Prometheus metrics
    port: 9092
    serviceMonitor:
      # Enable deployment of Prometheus Operator ServiceMonitor resource
      enabled: false
      # namespace: monitoring
      # Additional labels for the ServiceMonitor
      labels: {}

Anything else we need to know:

I think the problem is around the generation of the keystores in the statefulset.yaml template, which executes a certain command when auth.ldap is enabled.

No errors were found when NiFi starts without configuring LDAP.


        {{- if .Values.auth.ldap.enabled }}
          /opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone -n '{{.Release.Name}}-nifi-0.{{.Release.Name}}-nifi-headless.{{.Release.Namespace}}.svc.cluster.local' -C '{{.Values.auth.ldap.admin}}' -o '/opt/nifi/nifi-current/conf/' -P {{.Values.auth.SSL.truststorePasswd}}  -S {{.Values.auth.SSL.keystorePasswd}}  --nifiPropertiesFile /opt/nifi/nifi-current/conf/nifi.properties
          exec bin/nifi.sh run
        {{- end }}
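
As a quick sanity check of what the toolkit actually produced, the store can be opened with keytool from inside the pod (a diagnostic sketch: the path and namespace are taken from the log above, changeMe stands for auth.SSL.keystorePasswd, and you may need -c <container> if the pod runs sidecars):

    kubectl exec -n lakehouse-nifi nifi-deploy-dev-nifi-0 -- \
      keytool -list -storetype JKS \
        -keystore /opt/nifi/nifi-current/conf/nifi-deploy-dev-nifi-0.nifi-deploy-dev-nifi-headless.lakehouse-nifi.svc.cluster.local/keystore.jks \
        -storepass changeMe
    # A "keytool error: ... Invalid keystore format" or a zero-byte file here points at the
    # tls-toolkit call in the statefulset rather than at nifi.properties.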
digitalshuffler commented 2 years ago

Hello, is there any update on this?

We need some help with this issue.

banzo commented 2 years ago

@digitalshuffler can you reproduce the problem with the new release?

anshurajkaushik commented 2 years ago

@banzo: I am facing the same problem with the latest release; below is my values.yaml

---
# Number of nifi nodes
replicaCount: 3

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.14.0"
  pullPolicy: "IfNotPresent"

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecret: myRegistrKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled otherwise
## metrics and traces are accepted from any host able to connect to this host.
#

sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      #prometheus.io/scrape: "true"      
  serviceAccount:
    create: false
    #name: nifi
    annotations: {}
  hostAliases: []
#    - ip: "1.2.3.4"
#      hostnames:
#        - example.com
#        - example

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config

properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMenifipoc # Must to have minimal 12 length key
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: true
  isNode: true
  httpsPort: 8443 
  webProxyHost: # <clusterIP>:<NodePort> (If Nifi service is NodePort or LoadBalancer)
  clusterPort: 6007
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

  ## Include aditional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: changeMe
    truststorePasswd: changeMe

  # Automaticaly disabled if OIDC or LDAP enabled 
  singleUser:
    username: username
    password: changemechangeme # Must to have at least 12 characters

  ldap:
    enabled: false
    host: 
    searchBase: CN=Users,DC=ldap,DC=example,DC=be
    admin: cn=admin,dc=ldap,dc=example,dc=be
    pass: ChangeMe
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    identityStrategy: USE_DN
    authExpiration: 12 hours

  oidc:
    enabled: true
    discoveryUrl: https://login.microsoftonline.com/<tenant_id>/v2.0/.well-known/openid-configuration
    clientId: *******
    clientSecret: ****************
    claimIdentifyingUser: preferred_username
    ## Request additional scopes, for example profile
    additionalScopes:

openldap:
  enabled: false
  persistence:
    enabled: true
  env:
    LDAP_ORGANISATION: # name of your organization e.g. "Example"
    LDAP_DOMAIN: # your domain e.g. "ldap.example.be"
    LDAP_BACKEND: "hdb"
    LDAP_TLS: "true"
    LDAP_TLS_ENFORCE: "false"
    LDAP_REMOVE_CONFIG_AFTER_SETUP: "false"
  adminPassword: #ChengeMe
  configPassword: #ChangeMe
  customLdifFiles:
    1-default-users.ldif: |-
      # You can find an example ldif file at https://github.com/cetic/fadi/blob/master/examples/basic/example.ldif
## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: ClusterIP
  httpsPort: 8443
  # nodePort: 30236
  annotations: {}
    # loadBalancerIP:
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24
    ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
    # sessionAffinity: ClientIP
    # sessionAffinityConfig:
    #   clientIP:
  #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  annotations: 
    kubernetes.io/ingress.class: nginx  
    ingress.kubernetes.io/ssl-redirect: “true”
    nginx.ingress.kubernetes.io/affinity: cookie
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: "localhost:8443"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "https://localhost:8443"
    nginx.ingress.kubernetes.io/proxy-redirect-to: "https://****************com"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
#    nginx.ingress.kubernetes.io/configuration-snippet: |
#      proxy_set_header 'X-ProxyContextPath' '/';
#      proxy_set_header 'X-ProxyPort' '80';
#      proxy_set_header 'X-ProxyScheme' 'http';
#      proxy_set_header X-ProxyPort 443;
#      proxy_set_header X-ProxiedEntitiesChain "<$ssl_client_s_dn>"       
  tls:
    - hosts:
      - *******************.com
      secretName: *************-n-tlssecret

  hosts:
    - ***********com
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"
  imagePullPolicy: "IfNotPresent"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: false

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: standard
  #
  # The default storage class is used if this variable is not set.

  accessModes:  [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

## Enables setting your own affinity. Mutually exclusive with sts.AntiAffinity
## You need to set the value of sts.AntiAffinity other than "soft" and "hard"
affinity: {}

nodeSelector: {}

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
  #   volumeMounts:
  #     - mountPath: /tmp/foo
  #       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be pass onto deployment pods
env: []

## Extra environment variables from secrets and config maps
envFrom: []

# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret

## Openshift support
## Use the following varables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    #host: www.test.com
    #path: /nifi

# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate self-signed certificates required setting up secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: true
  persistence:
    enabled: true
  server: ""
  service:
    port: 9090
  token: sixteenCharacters
  admin:
    cn: admin
  serviceAccount:
    create: false
    #name: nifi-ca
  openshift:
    scc:
      enabled: false

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: true
  ## If the Zookeeper Chart is disabled a URL and port are required to connect
  url: ""
  port: 2181
  replicaCount: 3

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: true
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics
    enabled: false
    # Port used to expose Prometheus metrics
    port: 9092
    serviceMonitor:
      # Enable deployment of Prometheus Operator ServiceMonitor resource
      enabled: false
      # namespace: monitoring
      # Additional labels for the ServiceMonitor
      labels: {}
digitalshuffler commented 2 years ago

I got NiFi up now, but it seems to log a warning constantly. Is it something we should care about? It looks like failed requests due to certificates:

2022-01-10 15:38:25,490 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 1 heartbeats in 24201 nanos
2022-01-10 15:38:25,758 WARN [Process Cluster Protocol Request-6] o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from nifi-deploy-dev-1.nifi-deploy-dev-headless.lakehouse-nifi.svc.cluster.local due to Received fatal alert: certificate_unknown
2022-01-10 15:38:27,342 INFO [Process Cluster Protocol Request-7] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 6a97c5ed-0000-4f77-a2c6-94fe1da8ec52 (type=HEARTBEAT, length=2933 bytes) from localhost:8443 in 19 millis
2022-01-10 15:38:27,344 INFO [Clustering Tasks Thread-3] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2022-01-10 15:38:27,315 and sent to nifi-deploy-dev-0.nifi-deploy-dev-headless.lakehouse-nifi.svc.cluster.local:6007 at 2022-01-10 15:38:27,344; send took 28 millis
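
The certificate_unknown alert usually means the certificate a node presents is not trusted by (or does not match what is expected by) its peer; printing the SubjectAlternativeName of the keystore each pod actually loads makes that visible (run inside each NiFi container; same path and password conventions as in values.yaml):

    keytool -list -v \
      -keystore "${NIFI_HOME}/conf/$(hostname -f)/keystore.jks" \
      -storepass changeMe | grep -A 2 'SubjectAlternativeName'
    # If every pod only lists its own FQDN, and each standalone toolkit run created its own CA,
    # the nodes cannot validate each other's handshakes, hence the warning above.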
digitalshuffler commented 2 years ago

@anshurajkaushik your issue seems related to OIDC configuration, not LDAP.

anshurajkaushik commented 2 years ago

@digitalshuffler @banzo: OK, I got past the initial issues, but I am still seeing the "-nifi.svc.cluster.local due to Received fatal alert: certificate_unknown" issue, and also "Untrusted proxy CN=localhost". @digitalshuffler: are your nodes connected in a cluster? Any suggestions?

digitalshuffler commented 2 years ago

I found a solution after modifying and fixing the chart.

There were two things: after the new release, NiFi comes up with LDAP, but when activating HA mode there is no communication between the nodes.

This is caused by the toolkit generating a different certificate/key for every pod, so one way to solve it is to generate a single JKS shared via persistent storage.

Also, the FQDN used for HA mode with HTTPS should be changed to the full pod FQDN.
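
For the FQDN part, the gist (a sketch only, using stock nifi.properties keys; the complete change is in the templates attached below) is to point the relevant properties at the pod's own fully qualified name before starting NiFi:

    # In the container start command, before "exec bin/nifi.sh run":
    FQDN="$(hostname -f)"   # <pod>.<release>-headless.<namespace>.svc.cluster.local
    sed -i "s|^nifi.web.https.host=.*|nifi.web.https.host=${FQDN}|" "${NIFI_HOME}/conf/nifi.properties"
    sed -i "s|^nifi.cluster.node.address=.*|nifi.cluster.node.address=${FQDN}|" "${NIFI_HOME}/conf/nifi.properties"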

I attach the modified code, because I can't open a pull request or access the repository. You can compare it against release 1.0.4.

templates.tar.gz

I attach a screenshot showing that NiFi in HA mode with LDAP is working: image

anshurajkaushik commented 2 years ago

@digitalshuffler: can you also attach your values.yaml? I am still seeing "Untrusted proxy CN=localhost".

digitalshuffler commented 2 years ago

@anshurajkaushik Make sure that your tls-toolkit invocation generates the stores this way and that your nifi.web.https.host is equal to the pod name:

/opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh standalone \
              -n {{.Release.Name}}-0.{{.Release.Name}}-headless.{{.Release.Namespace}}.svc.cluster.local \
              --subjectAlternativeNames 'localhost,{{.Release.Name}}-[0-{{ $replicas }}].{{.Release.Name}}-headless.{{.Release.Namespace}}.svc.cluster.local' \
              -C '{{.Values.auth.ldap.admin}}' \
              -O -o "${NIFI_HOME}/conf/" \
              -P {{.Values.auth.SSL.truststorePasswd}} \
              -S {{.Values.auth.SSL.keystorePasswd}} \
              --nifiPropertiesFile /opt/nifi/nifi-current/conf/nifi.properties
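
For context, $replicas in the SAN range above is assumed to be the highest pod ordinal of the StatefulSet (replicaCount minus one), defined somewhere in the template, for example:

    {{- /* assumed helper: highest pod ordinal, so every <release>-0..N FQDN ends up in the SANs */ -}}
    {{- $replicas := sub (int .Values.replicaCount) 1 }}

The intent is a single certificate whose SANs cover localhost and every pod FQDN, so the nodes can validate each other's TLS handshakes, which is what the earlier certificate_unknown warnings between nodes were about.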