cetic / helm-nifi

Helm Chart for Apache NiFi

[cetic/nifi] authorizations.xml is not updated from configmap (1.0.0) #222

Open · a4tarasiuk opened 2 years ago

a4tarasiuk commented 2 years ago

Describe the bug: After installation, the user is unable to authorize and gets an "Unknown identity" error. After chart installation, conf/authorizers.xml contains no user information: the userGroupProvider section contains neither the Kubernetes node identity nor the admin user set in values.yaml.
Everything works fine with chart version 0.7.8. I suspect something is wrong with the volumes: with the previous chart version I could edit users.xml and other files and they persisted across reinstalls, but now every file is overwritten.
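
As a quick sanity check, the rejected identity usually shows up in nifi-user.log (a sketch, assuming the pod and log-sidecar names shown in the pod description further down):

kubectl logs nifi-0 user-log -n nifi | grep -i 'unknown user'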

Version of Helm, Kubernetes and the NiFi chart: Helm: 3.6.2
Kubernetes: 1.19.7
Nifi chart: 1.0.0

What happened: After not being able to authorize, I checked the authorizers.xml file and found that it contains neither my user email as the initial admin identity nor the Kubernetes node identity. The authorizers.xml section in the nifi-config ConfigMap is valid and its content is present in authorizers.temp, but authorizers.xml itself contains only default content.
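
To confirm the mismatch, compare the templated file with what NiFi actually loads (a sketch, using the paths and container name from the pod description below):

kubectl exec nifi-0 -c server -n nifi -- cat /opt/nifi/nifi-current/conf/authorizers.temp
kubectl exec nifi-0 -c server -n nifi -- cat /opt/nifi/nifi-current/conf/authorizers.xml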

What you expected to happen: Currently authorizers.xml contains:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <property name="Legacy Authorized Users File"></property>
        <property name="Initial User Identity 0">john</property>
    </userGroupProvider>

    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">john</property>
        <property name="Legacy Authorized Users File"></property>
        <property name="Node Identity"></property>
    </accessPolicyProvider>

    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>

I expect (taken from version 0.7.8):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./auth-conf/users.xml</property>
        <property name="Legacy Authorized Users File"></property>
        <property name="Initial User Identity 0">CN=nifi-0.nifi-headless.nifi.svc.cluster.local, OU=NIFI</property>
        <property name="Initial User Identity admin">my-admin@mail.com</property>
    </userGroupProvider>

    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./auth-conf/authorizations.xml</property>
        <property name="Initial Admin Identity">my-admin@mail.com</property>
        <property name="Legacy Authorized Users File"></property>
        <property name="Node Identity 0">CN=nifi-0.nifi-headless.nifi.svc.cluster.local, OU=NIFI</property>
        <property name="Node Identity"></property>
    </accessPolicyProvider>

    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>

The same happens with the users.xml file: instead of the Kubernetes node user and my user, it contains only the default john user.
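
That can be checked the same way (a sketch, same assumptions as above):

kubectl exec nifi-0 -c server -n nifi -- cat /opt/nifi/nifi-current/conf/users.xml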

How to reproduce it (as minimally and precisely as possible): helm upgrade --install nifi -f values.yaml cetic/nifi -n nifi
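
To rule out the chart templates themselves, the rendered ConfigMap can be inspected before installing (a sketch, reusing the flags from the command above):

helm template nifi cetic/nifi -n nifi -f values.yaml | grep -A 40 'authorizers'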

Anything else we need to know:

values.yaml:

replicaCount: 1

image:
  repository: apache/nifi
  tag: "1.15.2"
  pullPolicy: IfNotPresent

securityContext:
  runAsUser: 1000
  fsGroup: 1000

sts:
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
  serviceAccount:
    create: false
    annotations: {}
  hostAliases: []

properties:
  externalSecure: true
  isNode: true # set to false if ldap is enabled
  httpPort: 8080 # set to null if ldap is enabled
  httpsPort: 9443 # set to 9443 if ldap is enabled
  webProxyHost: <my-value>
  clusterPort: 6007
  clusterSecure: true # set to true if ldap is enabled
  needClientAuth: false
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  authorizer: managed-authorizer
  safetyValve:
    nifi.web.http.network.interface.default: eth0
    nifi.web.http.network.interface.lo: lo
    nifi.sensitive.props.key: <my-value>

auth:
  admin: my-admin@mail.com
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  oidc:
    enabled: true
    discoveryUrl: <my-value>
    clientId: <my-value>
    clientSecret: <my-value>
    claimIdentifyingUser: email
    admin: my-admin@mail.com

headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

service:
  type: NodePort
  httpPort: 8080
  httpsPort: 9443
  annotations: {}

  processors:
    enabled: true
    ports:
      - name: dev
        port: 8090
        targetPort: 8090
      - name: stage
        port: 8091
        targetPort: 8091
      - name: prod
        port: 8092
        targetPort: 8092

jvmMemory: 2g

sidecar:
  image: busybox
  tag: "1.32.0"
  imagePullPolicy: "IfNotPresent"

persistence:
  enabled: true

  accessModes: [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

resources: {}

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

affinity: {}
nodeSelector: {}
tolerations: []
initContainers: {}
extraVolumeMounts: []
extraVolumes: []
extraContainers: []
terminationGracePeriodSeconds: 30
env: []
envFrom: []

openshift:
  scc:
    enabled: false
  route:
    enabled: false

ca:
  enabled: true
  persistence:
    enabled: true
  server: "<my-value>"
  service:
    port: 9090
  token: <my-value>
  admin:
    cn: admin
  serviceAccount:
    create: false
    #name: nifi-ca
  openshift:
    scc:
      enabled: false

zookeeper:
  enabled: true
  url: ""
  port: 2181
  replicaCount: 3

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  enabled: false

metrics:
  prometheus:
    enabled: false

Check if a pod is in error:

0% ❯ kubectl get pods -n nifi
NAME                       READY   STATUS    RESTARTS   AGE
nifi-0                     4/4     Running   0          2m16s
nifi-ca-5b94d598b7-8wb8b   1/1     Running   0          2m16s
nifi-zookeeper-0           1/1     Running   0          2m16s
nifi-zookeeper-1           1/1     Running   0          2m16s
nifi-zookeeper-2           1/1     Running   0          2m16s

Inspect the pod, check the "Events" section at the end for anything suspicious.

0% ❯ kubectl describe pod nifi-0 -n nifi 
Name:           nifi-0
Namespace:      nifi
Priority:       0
Node:           10.224.7.150/10.224.7.150
Start Time:     Tue, 11 Jan 2022 12:53:12 +0200
Labels:         app=nifi
                chart=nifi-1.0.0
                controller-revision-hash=nifi-66d5c999ff
                heritage=Helm
                release=nifi
                statefulset.kubernetes.io/pod-name=nifi-0
Annotations:    security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
Status:         Running
IP:             10.246.0.215
Controlled By:  StatefulSet/nifi
Init Containers:
  zookeeper:
    Container ID:  docker://d8ab69fd9227ce6aeb012aa6c11bd7cd6b4b56a9930fee51d4d0cd141298b3d0
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo trying to contact nifi-zookeeper 2181
      until nc -vzw 1 nifi-zookeeper 2181; do
        echo "waiting for zookeeper..."
        sleep 2
      done

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 11 Jan 2022 12:53:49 +0200
      Finished:     Tue, 11 Jan 2022 12:53:49 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-frw4w (ro)
Containers:
  server:
    Container ID:  docker://e1482739d501b77035de2c8487de9b5c718c55f3e27046e1b92880c9751890e4
    Image:         apache/nifi:1.15.2
    Image ID:      docker-pullable://apache/nifi@sha256:a9e98e32bac251e28c942f13f1c59f2a2d9fc1910ddbc0eea73a994d806de394
    Ports:         9443/TCP, 6007/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      bash
      -ce
      prop_replace () {
        target_file=${NIFI_HOME}/conf/${3:-nifi.properties}
        echo "updating ${1} in ${target_file}"
        if egrep "^${1}=" ${target_file} &> /dev/null; then
          sed -i -e "s|^$1=.*$|$1=$2|"  ${target_file}
        else
          echo ${1}=${2} >> ${target_file}
        fi
      }
      mkdir -p ${NIFI_HOME}/config-data/conf
      FQDN=$(hostname -f)

      cat "${NIFI_HOME}/conf/nifi.temp" > "${NIFI_HOME}/conf/nifi.properties"
        cat "${NIFI_HOME}/conf/authorizers.empty" > "${NIFI_HOME}/conf/authorizers.xml"

      if ! test -f /opt/nifi/data/flow.xml.gz && test -f /opt/nifi/data/flow.xml; then
        gzip /opt/nifi/data/flow.xml
      fi

      prop_replace nifi.remote.input.host ${FQDN}
      prop_replace nifi.cluster.node.address ${FQDN}
      prop_replace nifi.zookeeper.connect.string ${NIFI_ZOOKEEPER_CONNECT_STRING}
      prop_replace nifi.web.http.host ${FQDN}
      # Update nifi.properties for web ui proxy hostname
      prop_replace nifi.web.proxy.host nifi.stage.cp.flyaps.com
      prop_replace nifi.sensitive.props.key "UVXq9wrA=uKs" nifi.properties
      prop_replace nifi.web.http.network.interface.default "eth0" nifi.properties
      prop_replace nifi.web.http.network.interface.lo "lo" nifi.properties

      exec bin/nifi.sh run & nifi_pid="$!"

      function offloadNode() {
          FQDN=$(hostname -f)
          echo "disconnecting node '$FQDN'"
          baseUrl=https://${FQDN}:9443

          keystore=${NIFI_HOME}/config-data/certs/keystore.jks
          keystorePasswd=$(jq -r .keyStorePassword ${NIFI_HOME}/config-data/certs/config.json)
          keyPasswd=$(jq -r .keyPassword ${NIFI_HOME}/config-data/certs/config.json)
          truststore=${NIFI_HOME}/config-data/certs/truststore.jks
          truststorePasswd=$(jq -r .trustStorePassword ${NIFI_HOME}/config-data/certs/config.json)

          secureArgs=" --truststore ${truststore} --truststoreType JKS --truststorePasswd ${truststorePasswd} --keystore ${keystore} --keystoreType JKS --keystorePasswd ${keystorePasswd} --proxiedEntity "flyaps.dev@gmail.com""

          echo baseUrl ${baseUrl}
          echo "gracefully disconnecting node '$FQDN' from cluster"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
          nnid=$(jq --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .nodeId' nodes.json)
          echo "disconnecting node ${nnid}"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi disconnect-node -nnid $nnid -u ${baseUrl} ${secureArgs}
          echo ""
          echo "get a connected node"
          connectedNode=$(jq -r 'first(.cluster.nodes|=sort_by(.address)| .cluster.nodes[] | select(.status=="CONNECTED")) | .address' nodes.json)
          baseUrl=https://${connectedNode}:9443
          echo baseUrl ${baseUrl}
          echo ""
          echo "wait until node has state 'DISCONNECTED'"
          while [[ "${node_state}" != "DISCONNECTED" ]]; do
              sleep 1
              ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
              node_state=$(jq -r --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .status' nodes.json)
              echo "state is '${node_state}'"
          done
          echo ""
          echo "node '${nnid}' was disconnected"
          echo "offloading node"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi offload-node -nnid $nnid -u ${baseUrl} ${secureArgs}
          echo ""
          echo "wait until node has state 'OFFLOADED'"
          while [[ "${node_state}" != "OFFLOADED" ]]; do
              sleep 1
              ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi get-nodes -ot json -u ${baseUrl} ${secureArgs} > nodes.json
              node_state=$(jq -r --arg FQDN "$FQDN" '.cluster.nodes[] | select(.address==$FQDN) | .status' nodes.json)
              echo "state is '${node_state}'"
          done
      }

      deleteNode() {
          echo "deleting node"
          ${NIFI_TOOLKIT_HOME}/bin/cli.sh nifi delete-node -nnid ${nnid} -u ${baseUrl} ${secureArgs}
          echo "node deleted"
      }

      trap 'echo Received trapped signal, beginning shutdown...;offloadNode;./bin/nifi.sh stop;deleteNode;exit 0;' TERM HUP INT;
      trap ":" EXIT

      echo NiFi running with PID ${nifi_pid}.
      wait ${nifi_pid}

    State:          Running
      Started:      Tue, 11 Jan 2022 12:53:50 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :9443 delay=90s timeout=1s period=60s #success=1 #failure=3
    Readiness:      tcp-socket :9443 delay=60s timeout=1s period=20s #success=1 #failure=3
    Environment:
      NIFI_ZOOKEEPER_CONNECT_STRING:  nifi-zookeeper:2181
    Mounts:
      /opt/nifi/content_repository from content-repository (rw)
      /opt/nifi/data from data (rw)
      /opt/nifi/data/flow.xml from flow-content (rw,path="flow.xml")
      /opt/nifi/flowfile_repository from flowfile-repository (rw)
      /opt/nifi/nifi-current/auth-conf/ from auth-conf (rw)
      /opt/nifi/nifi-current/conf/authorizers.empty from authorizers-empty (rw,path="authorizers.empty")
      /opt/nifi/nifi-current/conf/authorizers.temp from authorizers-temp (rw,path="authorizers.temp")
      /opt/nifi/nifi-current/conf/bootstrap-notification-services.xml from bootstrap-notification-services-xml (rw,path="bootstrap-notification-services.xml")
      /opt/nifi/nifi-current/conf/bootstrap.conf from bootstrap-conf (rw,path="bootstrap.conf")
      /opt/nifi/nifi-current/conf/login-identity-providers.xml from login-identity-providers-xml (rw,path="login-identity-providers.xml")
      /opt/nifi/nifi-current/conf/nifi.temp from nifi-properties (rw,path="nifi.temp")
      /opt/nifi/nifi-current/conf/state-management.xml from state-management-xml (rw,path="state-management.xml")
      /opt/nifi/nifi-current/conf/zookeeper.properties from zookeeper-properties (rw,path="zookeeper.properties")
      /opt/nifi/nifi-current/config-data from config-data (rw)
      /opt/nifi/nifi-current/logs from logs (rw)
      /opt/nifi/provenance_repository from provenance-repository (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-frw4w (ro)
  app-log:
    Container ID:  docker://1f6d35f6dceb26e6db1aaa1de0b92dbd925e604b674fe720d4bb9326f3e88a95
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Args:
      tail
      -n+1
      -F
      /var/log/nifi-app.log
    State:          Running
      Started:      Tue, 11 Jan 2022 12:53:51 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/log from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-frw4w (ro)
  bootstrap-log:
    Container ID:  docker://2c4c66d230ed734c3141cf84afc8484af6fa37a7bea65a540649bffc2b155bbf
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Args:
      tail
      -n+1
      -F
      /var/log/nifi-bootstrap.log
    State:          Running
      Started:      Tue, 11 Jan 2022 12:53:51 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/log from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-frw4w (ro)
  user-log:
    Container ID:  docker://6db231422f314a508c400bd12d79631c929bc09882a76527730f4ff0a7a5fc57
    Image:         busybox:1.32.0
    Image ID:      docker-pullable://busybox@sha256:bde48e1751173b709090c2539fdf12d6ba64e88ec7a4301591227ce925f3c678
    Port:          <none>
    Host Port:     <none>
    Args:
      tail
      -n+1
      -F
      /var/log/nifi-user.log
    State:          Running
      Started:      Tue, 11 Jan 2022 12:53:51 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /var/log from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-frw4w (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  flowfile-repository:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  flowfile-repository-nifi-0
    ReadOnly:   false
  content-repository:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  content-repository-nifi-0
    ReadOnly:   false
  provenance-repository:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  provenance-repository-nifi-0
    ReadOnly:   false
  auth-conf:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  auth-conf-nifi-0
    ReadOnly:   false
  logs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  logs-nifi-0
    ReadOnly:   false
  config-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  config-data-nifi-0
    ReadOnly:   false
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-nifi-0
    ReadOnly:   false
  bootstrap-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  nifi-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  authorizers-temp:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  authorizers-empty:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  bootstrap-notification-services-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  login-identity-providers-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  state-management-xml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  zookeeper-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  flow-content:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nifi-config
    Optional:  false
  default-token-frw4w:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-frw4w
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age        From                     Message
  ----    ------                  ----       ----                     -------
  Normal  Scheduled               <unknown>                           Successfully assigned nifi/nifi-0 to 10.224.7.150
  Normal  SuccessfulAttachVolume  2m50s      attachdetach-controller  AttachVolume.Attach succeeded for volume "ocid1.volume.oc1.uk-london-1.abwgiljsxy6mnlhquaxeynl325c3kkzxt473nv6iq5nhhdsbl5jkpfg66dra"
  Normal  SuccessfulAttachVolume  2m50s      attachdetach-controller  AttachVolume.Attach succeeded for volume "ocid1.volume.oc1.uk-london-1.abwgiljsp7gob3a33iajdzydyju5oxu533ylxsnwnhg5ugmennr7mnmgm5ga"
  Normal  SuccessfulAttachVolume  2m50s      attachdetach-controller  AttachVolume.Attach succeeded for volume "ocid1.volume.oc1.uk-london-1.abwgiljspzapxsajwop43dldmx25aymsc5j24e6sorecmgvqydt3zl3h33na"
  Normal  SuccessfulAttachVolume  2m45s      attachdetach-controller  AttachVolume.Attach succeeded for volume "ocid1.volume.oc1.uk-london-1.abwgiljselr74fivus6zt35i2bufbjw4smd6yud6izcby7fqcm5vezcbwboq"
  Normal  SuccessfulAttachVolume  2m45s      attachdetach-controller  AttachVolume.Attach succeeded for volume "ocid1.volume.oc1.uk-london-1.abwgiljswyy7tmraqtixgwpgnn4ua3nsfrcpfpcwk7kxmdxelybwm5behxla"
  Normal  SuccessfulAttachVolume  2m40s      attachdetach-controller  AttachVolume.Attach succeeded for volume "ocid1.volume.oc1.uk-london-1.abwgiljsyyyxtmrwvylkmehymgmin3gyg6kbr6smnzvxprrcw6lnnqnjfxlq"
  Normal  SuccessfulAttachVolume  2m40s      attachdetach-controller  AttachVolume.Attach succeeded for volume "ocid1.volume.oc1.uk-london-1.abwgiljs5dwjar63b6lqatdq5ayfwleyw3gjnisdqf5lsx4voec4defoleqq"
  Normal  Pulled                  2m30s      kubelet, 10.224.7.150    Container image "busybox:1.32.0" already present on machine
  Normal  Created                 2m30s      kubelet, 10.224.7.150    Created container zookeeper
  Normal  Started                 2m30s      kubelet, 10.224.7.150    Started container zookeeper
  Normal  Created                 2m29s      kubelet, 10.224.7.150    Created container app-log
  Normal  Created                 2m29s      kubelet, 10.224.7.150    Created container server
  Normal  Started                 2m29s      kubelet, 10.224.7.150    Started container server
  Normal  Pulled                  2m29s      kubelet, 10.224.7.150    Container image "busybox:1.32.0" already present on machine
  Normal  Pulled                  2m29s      kubelet, 10.224.7.150    Container image "apache/nifi:1.15.2" already present on machine
  Normal  Started                 2m28s      kubelet, 10.224.7.150    Started container app-log
  Normal  Pulled                  2m28s      kubelet, 10.224.7.150    Container image "busybox:1.32.0" already present on machine
  Normal  Created                 2m28s      kubelet, 10.224.7.150    Created container bootstrap-log
  Normal  Started                 2m28s      kubelet, 10.224.7.150    Started container bootstrap-log
  Normal  Pulled                  2m28s      kubelet, 10.224.7.150    Container image "busybox:1.32.0" already present on machine
  Normal  Created                 2m28s      kubelet, 10.224.7.150    Created container user-log
  Normal  Started                 2m28s      kubelet, 10.224.7.150    Started container user-log

Get logs on a failed container inside the pod (here the server one):

0% ❯ kubectl logs nifi-0 server -n nifi
updating nifi.remote.input.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.cluster.node.address in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.zookeeper.connect.string in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.proxy.host in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.sensitive.props.key in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.network.interface.default in /opt/nifi/nifi-current/conf/nifi.properties
updating nifi.web.http.network.interface.lo in /opt/nifi/nifi-current/conf/nifi.properties
NiFi running with PID 25.

Java home: /usr/local/openjdk-8
NiFi home: /opt/nifi/nifi-current

Bootstrap Config File: /opt/nifi/nifi-current/conf/bootstrap.conf

2022-01-11 10:53:51,667 INFO [main] org.apache.nifi.bootstrap.Command Generating Self-Signed Certificate: Expires on 2022-03-12
2022-01-11 10:53:53,311 INFO [main] org.apache.nifi.bootstrap.Command Generated Self-Signed Certificate SHA-256: A200296FEC20A874E573C868DB99AD819886CA4212D8F503FF1376C22503228B
2022-01-11 10:53:53,324 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2022-01-11 10:53:53,324 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/nifi/nifi-current
2022-01-11 10:53:53,325 INFO [main] org.apache.nifi.bootstrap.Command Command: /usr/local/openjdk-8/bin/java -classpath /opt/nifi/nifi-current/./conf:/opt/nifi/nifi-current/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi/nifi-current/./lib/jcl-over-slf4j-1.7.32.jar:/opt/nifi/nifi-current/./lib/jetty-schemas-3.1.jar:/opt/nifi/nifi-current/./lib/jul-to-slf4j-1.7.32.jar:/opt/nifi/nifi-current/./lib/log4j-over-slf4j-1.7.32.jar:/opt/nifi/nifi-current/./lib/logback-classic-1.2.9.jar:/opt/nifi/nifi-current/./lib/logback-core-1.2.9.jar:/opt/nifi/nifi-current/./lib/nifi-api-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-framework-api-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-nar-utils-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-properties-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-property-utils-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-runtime-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-server-api-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-stateless-api-1.15.2.jar:/opt/nifi/nifi-current/./lib/nifi-stateless-bootstrap-1.15.2.jar:/opt/nifi/nifi-current/./lib/slf4j-api-1.7.32.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx2g -Xms2g -Djava.security.egd=file:/dev/urandom -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi/nifi-current/./conf/nifi.properties -Dnifi.bootstrap.listen.port=43375 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi/nifi-current/logs org.apache.nifi.NiFi 
2022-01-11 10:53:53,338 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 47
jjjedlicka commented 2 years ago

Is there any progress on this? It is kind of a show-stopper, as we cannot configure our initial admin user to set up more users. I can see everything going into authorizers.temp, just not making it into authorizers.xml.

wknickless commented 2 years ago

@jjjedlicka you might try the version I've submitted under PR #218; it includes GitHub Actions tests that confirm the initial admin can be used. Either way, please leave a comment there: if it works for you, let @banzo and @zakaria2905 know so they can feel more comfortable merging it, and if it doesn't, tell me so I can add a test for your use case and get it working.

jjjedlicka commented 2 years ago

@wknickless we are using version: 1.0.6, appVersion: 1.14.0. I am somewhat new to Helm, but it looks like the check-in is on the cert-manager branch. How do I pull in a branch that is not part of the Helm application version or the available chart versions?

wknickless commented 2 years ago

@jjjedlicka you should be able to do something like:

git clone https://github.com/wknickless/helm-nifi.git helm-nifi
cd helm-nifi
git checkout 2bb07f92bc2d74f8f6e0be1efe038b2147444dda
helm dep update
helm install nifi . -f /path/to/my-local-nifi-values-file.yaml
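
If the chart is already installed, the same local checkout can be applied in place instead (a sketch, reusing the release name and namespace from the reproduction step above):

helm upgrade --install nifi . -n nifi -f /path/to/my-local-nifi-values-file.yaml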