Closed ratnakarreddyg closed 2 years ago
To work around this, you can use self-signed certificates. There is some work in progress on that in https://github.com/cetic/helm-nifi/pull/170
The new release should fix this; if not, have a look at the updated auth documentation.
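For reference, the self-signed-certificate route mentioned above is driven by the chart's nifi-toolkit CA server plus a TLS listener. A minimal values.yaml sketch, assuming chart defaults for everything not shown (key names are the ones that appear in this chart's values.yaml; exact behavior may differ between chart releases):

```yaml
# Enable the nifi-toolkit certificate authority so the chart can issue
# self-signed certificates for the nodes.
ca:
  enabled: true

properties:
  # Serve the UI over TLS; authentication only works over HTTPS.
  httpPort: null
  httpsPort: 9443
  clusterSecure: true
```

With the plain-HTTP listener disabled and a secured listener in place, the "User authentication/authorization is only supported when running over HTTPS" error should no longer occur.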
Describe the bug We have deployed NiFi on a K8S cluster but we are not able to log in. When we try to log in, we get errors like "Unable to check Access Status. User authentication/authorization is only supported when running over HTTPS."
Version of Helm and Kubernetes: Helm: v3.6.3, K8S: v1.18.8, chart: nifi-0.7.9 or nifi-0.7.8
What happened: It is a new installation.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Here is some information to help with troubleshooting:
values.yaml
(after removing sensitive information)

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
image:
  repository: artifactorycn.xyx.com:17011/apache/nifi
  tag: "1.12.1"
  pullPolicy: IfNotPresent

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  # pullSecret: myRegistrKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000
## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled otherwise
## metrics and traces are accepted from any host able to connect to this host.
#
sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      prometheus.io/scrape: "true"
serviceAccount:
  create: false
  name: nifi

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: false
  isNode: true # set to false if ldap is enabled
  httpPort: 8080 # set to null if ldap is enabled
  httpsPort: null # set to 9443 if ldap is enabled
  webProxyHost:
  clusterPort: 6007
  clusterSecure: false # set to true if ldap is enabled
  needClientAuth: false
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  authorizer: managed-authorizer
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"

  ## Include additional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar
# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  ldap:
    enabled: false
    host: ldap://<hostname>:<port>
    searchBase: CN=Users,DC=example,DC=com
    admin: cn=admin,dc=example,dc=be
    pass: password
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    identityStrategy: USE_DN
    authExpiration: 12 hours
  oidc:
    enabled: false
    discoveryUrl:
    clientId:
    clientSecret:
    claimIdentifyingUser: email
    ## Request additional scopes, for example profile
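Putting the "if ldap is enabled" inline comments from the properties section together with this auth block, a secured LDAP setup would look roughly as follows. This is a sketch assembled from the chart's own annotations; the LDAP host is a placeholder, not from the original:

```yaml
properties:
  isNode: false        # per the inline comment: false when ldap is enabled
  httpPort: null       # disable the plain-HTTP listener
  httpsPort: 9443      # serve the UI over TLS
  clusterSecure: true  # per the inline comment: true when ldap is enabled

auth:
  admin: CN=admin, OU=NIFI
  ldap:
    enabled: true
    host: ldap://ldap.example.com:389   # placeholder host, adjust to your directory
    searchBase: CN=Users,DC=example,DC=com
    userIdentityAttribute: cn
    authStrategy: SIMPLE
```

Since NiFi only performs authentication over HTTPS, leaving httpPort set while enabling LDAP reproduces exactly the "only supported when running over HTTPS" error described above.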
## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service)
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: LoadBalancer
  httpPort: 8080
  httpsPort: 9443
  nodePort: 30236
  annotations: {}
  loadBalancerIP:
  timeoutSeconds: 10800
  ## Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - nodePort: 30701
      - nodePort: 30702
## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  enabled: true
  annotations: {}
  tls: []
  hosts: []
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22
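If the UI is exposed through an ingress controller, properties.webProxyHost should match the ingress host, since NiFi rejects proxied requests from hosts it does not know about. A sketch with an invented host name, purely for illustration:

```yaml
ingress:
  enabled: true
  hosts:
    - nifi.example.com   # hypothetical host, not from the original values
  path: /

properties:
  webProxyHost: nifi.example.com   # must match the ingress host above
```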
# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: artifactorycn.xyz.com:17011/apache/busybox
  tag: "1.32.1"
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
persistence:
  enabled: false

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: cinder
  #
  # The default storage class is used if this variable is not set.

  accessModes: [ReadWriteOnce]

  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi
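The storage-class comments in the persistence section describe two modes. To illustrate the simpler one (referencing a class that already exists on the cluster), a sketch with a hypothetical class name:

```yaml
persistence:
  enabled: true
  # Name of an existing StorageClass; the comment's own example is "cinder".
  # "standard" here is a placeholder for whatever your cluster provides.
  storageClass: standard
  accessModes: [ReadWriteOnce]
  dataStorage:
    size: 1Gi
```

If storageClass is omitted, the cluster's default storage class is used, as the comment notes.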
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources: {}
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi
nodeSelector: {}

tolerations: []

initContainers: {}
# foo-init:  # <- will be used as container name
#   image: "busybox:1.30.1"
#   imagePullPolicy: "IfNotPresent"
#   command: ['sh', '-c', 'echo this is an initContainer']
#   volumeMounts:
#     - mountPath: /tmp/foo
#       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []
terminationGracePeriodSeconds: 30

## Extra environment variables that will be passed onto deployment pods
env: []

## Extra environment variables from secrets and config maps
envFrom: []
# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret
# Openshift support
# Use the following variables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    # host: www.test.com
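Per the comment above, both toggles are needed on OpenShift; a minimal sketch with a placeholder route host:

```yaml
openshift:
  scc:
    enabled: true    # create the Security Context Constraint
  route:
    enabled: true    # expose NiFi through an OpenShift Route
    host: nifi.apps.example.com   # placeholder, adjust to your cluster's wildcard domain
```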
# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate the self-signed certificates required for setting up a secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: false
  persistence:
    enabled: true
  server: ""
  service:
    port: 9090
  token: sixteenCharacters
  admin:
    cn: admin
  serviceAccount:
    create: false
    name: nifi-ca
  openshift:
    scc:
      enabled: false
# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: false
  ## If the Zookeeper Chart is disabled, a URL and port are required to connect
  url: "zookeeper.zookeeper"
  port: 2181
  replicaCount: 3
# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: true
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml
# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics

Get logs on a failed container inside the pod (here the server one):