cetic / helm-nifi

Helm Chart for Apache Nifi
Apache License 2.0

[cetic/nifi] Unable to check Access Status #173

Closed ratnakarreddyg closed 2 years ago

ratnakarreddyg commented 3 years ago

Describe the bug: We have deployed NiFi on a K8s cluster but we are not able to log in. When we try to log in, we get an error like "Unable to check Access Status. User authentication/authorization is only supported when running over HTTPS."

Version of Helm and Kubernetes: Helm v3.6.3, K8s v1.18.8, chart nifi-0.7.9 or nifi-0.7.8

What happened: It is a new installation.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

```bash
git clone https://github.com/cetic/helm-nifi.git nifi
cd nifi
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add dysnix https://dysnix.github.io/charts/
helm repo update
helm dep up
helm install nifi .
```
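
Once the pods are up, the quickest way to hit the login error is to reach the UI through a port-forward. A minimal sketch, assuming the release is named `nifi` (so the first pod is `nifi-0`) and the UI listens on HTTP port 8080 as in the values below:

```bash
# Forward the NiFi UI port from the first pod (pod name assumes the release is called "nifi")
kubectl port-forward pod/nifi-0 8080:8080
# then open http://localhost:8080/nifi and try to log in
```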

Anything else we need to know:

Here is some information that may help with troubleshooting: the `values.yaml` we used.

```yaml
# Set default image, imageTag, and imagePullPolicy.
# ref: https://hub.docker.com/r/apache/nifi/
image:
  repository: artifactorycn.xyx.com:17011/apache/nifi
  tag: "1.12.1"
  pullPolicy: IfNotPresent
  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  # pullSecret: myRegistrKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

# @param useHostNetwork - boolean - optional
# Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
# not be supported. The ports need to be available on all hosts. It can be
# used for custom metrics instead of a service endpoint.
#
# WARNING: Make sure that hosts using this are properly firewalled otherwise
# metrics and traces are accepted from any host able to connect to this host.
#
sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      # prometheus.io/scrape: "true"

serviceAccount:
  create: false
  # name: nifi

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config

properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: false
  isNode: true # set to false if ldap is enabled
  httpPort: 8080 # set to null if ldap is enabled
  httpsPort: null # set to 9443 if ldap is enabled
  webProxyHost:
  clusterPort: 6007
  clusterSecure: false # set to true if ldap is enabled
  needClientAuth: false
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  authorizer: managed-authorizer

  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
  #  nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

  ## Include aditional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  ldap:
    enabled: false
    host: ldap://<hostname>:<port>
    searchBase: CN=Users,DC=example,DC=com
    admin: cn=admin,dc=example,dc=be
    pass: password
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    identityStrategy: USE_DN
    authExpiration: 12 hours

  oidc:
    enabled: false
    discoveryUrl:
    clientId:
    clientSecret:
    claimIdentifyingUser: email
    ## Request additional scopes, for example profile
    additionalScopes:

# Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
# or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
# ref: http://kubernetes.io/docs/user-guide/services/

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: LoadBalancer
  httpPort: 8080
  httpsPort: 9443
  nodePort: 30236
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:

# Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  enabled: true
  annotations: {}
  tls: []
  hosts: []
  path: /

  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: artifactorycn.xyz.com:17011/apache/busybox
  tag: "1.32.1"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
persistence:
  enabled: false

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: cinder
  #
  # The default storage class is used if this variable is not set.

  accessModes: [ReadWriteOnce]

  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi

  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi

  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi

  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi

  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
  provenanceRepoStorage:
    size: 10Gi

  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

nodeSelector: {}

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
  #   volumeMounts:
  #     - mountPath: /tmp/foo
  #       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be pass onto deployment pods
env: []

## Extra environment variables from secrets and config maps
envFrom: []
# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret

## Openshift support
## Use the following varables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    # host: www.test.com
    # path: /nifi

# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate self-signed certificates required setting up secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: false
  persistence:
    enabled: true
  server: ""
  service:
    port: 9090
  token: sixteenCharacters
  admin:
    cn: admin
  serviceAccount:
    create: false
    # name: nifi-ca
  openshift:
    scc:
      enabled: false

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: false
  ## If the Zookeeper Chart is disabled a URL and port are required to connect
  url: "zookeeper.zookeeper"
  port: 2181
  replicaCount: 3

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: true
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics
    enabled: false
    # Port used to expose Prometheus metrics
    port: 9092
    serviceMonitor:
      # Enable deployment of Prometheus Operator ServiceMonitor resource
      enabled: false
      # Additional labels for the ServiceMonitor
      labels: {}
```
Check if a pod is in error:

```bash
kubectl get pod
NAME              READY   STATUS    RESTARTS   AGE
nifi-0            4/4     Running   1          110s
nifi-registry-0   1/1     Running   0          44s
```
Get logs on a failed container inside the pod (here the server one):

```bash
kubectl logs myrelease-nifi-0 server

updating nifi.remote.input.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.cluster.node.address in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.zookeeper.connect.string in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.proxy.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.network.interface.default in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.network.interface.lo in /opt/nifi/nifi-current/conf/nifi.properties
NiFi running with PID 25.

Java home: /usr/local/openjdk-8
NiFi home: /opt/nifi/nifi-current

Bootstrap Config File: /opt/nifi/nifi-current/conf/bootstrap.conf

2021-09-16 06:26:14,426 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2021-09-16 06:26:14,427 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/nifi/nifi-current
2021-09-16 06:26:14,427 INFO [main] org.apache.nifi.bootstrap.Command Command: /usr/local/openjdk-8/bin/java -classpath /opt/nifi/nifi-current/./conf:/opt/nifi/nifi-current/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi/nifi-current/./lib/jcl-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/jetty-schemas-3.1.jar:/opt/nifi/nifi-current/./lib/jul-to-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/log4j-over-slf4j-1.7.30.jar:/opt/nifi/nifi-current/./lib/logback-classic-1.2.3.jar:/opt/nifi/nifi-current/./lib/logback-core-1.2.3.jar:/opt/nifi/nifi-current/./lib/nifi-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-framework-api-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-nar-utils-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-properties-1.12.1.jar:/opt/nifi/nifi-current/./lib/nifi-runtime-1.12.1.jar:/opt/nifi/nifi-current/./lib/slf4j-api-1.7.30.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx2g -Xms2g -Djava.security.egd=file:/dev/urandom -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi/nifi-current/./conf/nifi.properties -Dnifi.bootstrap.listen.port=45466 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi/nifi-current/logs org.apache.nifi.NiFi
2021-09-16 06:26:14,739 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 47
```
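
The bootstrap output above only shows HTTP properties being updated, which matches the HTTP-only configuration. To confirm what the pod actually runs with, you can inspect the rendered `nifi.properties` in the `server` container (container name and file path are taken from the command and log above):

```bash
# Print the web-related properties NiFi is actually using
kubectl exec nifi-0 -c server -- \
  grep -E '^nifi\.web\.' /opt/nifi/nifi-current/conf/nifi.properties
```
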
banzo commented 3 years ago

To work around this, you can use self-signed certificates. There is some work in progress on that in https://github.com/cetic/helm-nifi/pull/170
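
For reference, the values pasted above already contain a `ca` section for the nifi-toolkit based CA server mentioned here. A rough sketch of enabling it together with the HTTPS ports, based only on the keys visible in that values file and not verified against #170:

```bash
# Sketch only: "ca" and "properties" keys are taken from the values file pasted above
helm upgrade nifi . \
  --set ca.enabled=true \
  --set properties.httpPort=null \
  --set properties.httpsPort=9443 \
  --set properties.clusterSecure=true
```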

banzo commented 2 years ago

The new release should fix that; if not, have a look at the updated auth documentation.
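
If the chart was installed from a local clone as in the reproduction steps above, upgrading means pulling the newer chart first. A sketch along these lines should work; the repository URL is an assumption, check the project README for the published one:

```bash
# Repo URL is an assumption; verify it against the cetic/helm-nifi README
helm repo add cetic https://cetic.github.io/helm-charts
helm repo update
helm upgrade nifi cetic/nifi
```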