cetic / helm-nifi

Helm Chart for Apache Nifi

This node is currently not connected to the cluster. #205

Closed shuhaib3 closed 2 years ago

shuhaib3 commented 2 years ago

[Screenshot: NiFi UI showing "This node is currently not connected to the cluster"]

I am getting this error in my UI. I have set up a secured cluster with 3 replicas and OIDC authentication enabled.

Below is the warning I am getting:

WARN [Process Cluster Protocol Request-8] o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from nifi-0.nifi-headless.default.svc.cluster.local due to Received fatal alert: certificate_unknown
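
(For context, a setup like the one described above is usually driven by values roughly like the following sketch. The key names are assumptions based on the chart's values.yaml and may differ between chart versions; hostnames and secrets are placeholders.)

replicaCount: 3                  # three NiFi nodes, as described above

properties:
  isNode: true                   # run each pod as a cluster node

auth:
  oidc:
    enabled: true
    discoveryUrl: https://idp.example.com/.well-known/openid-configuration  # placeholder IdP
    clientId: nifi               # placeholder
    clientSecret: <client-secret>  # placeholder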

shuhaib3 commented 2 years ago

@banzo, can you please help here?

Sarankrishna commented 2 years ago

I am also having the same issue. Can anybody help to fix this issue?

wknickless commented 2 years ago

I just created https://github.com/wknickless/helm-nifi/blob/pnlo/2-way-cluster/tests/05-2-way-cluster-values.yaml and have replicated the problem. At startup the app log reports:

2022-01-02 15:05:59,582 WARN [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Failed processing protocol message from nifi-1.nifi-headless.default.svc.cluster.local due to Received fatal alert: certificate_unknown

Pretty sure this means the intra-cluster TLS setup is broken.
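
(For reference, node-to-node TLS in NiFi is controlled by the nifi.properties entries below; paths and passwords here are placeholders. A certificate_unknown alert during cluster protocol handling usually means the certificate presented by one node is not trusted by the truststore of the receiving node, for example because the per-pod certificates were not all issued by the same CA.)

# cluster protocol must use TLS in a secured cluster
nifi.cluster.protocol.is.secure=true
# this node's own certificate (placeholder path/passwords)
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=<keystore-password>
# the CA(s) this node trusts when peers connect
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=<truststore-password>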

github-actions[bot] commented 2 years ago

This issue is stale because it has not seen recent activity. Remove stale label or comment or this will be closed.

lfreinag commented 2 years ago

I don't know if this helps, but I modified nifi.properties:

nifi.cluster.is.node={{.Values.properties.isNode}}
nifi.cluster.flow.election.max.candidates={{.Values.properties.maxCandidates}}
nifi.zookeeper.connect.string={{.Values.properties.zookeeperConnectString}}

And in my values.yaml file I have:

isNode: true
maxCandidates: 1
zookeeperConnectString: "nifi-zookeeper-1.nifi-zookeeper.nifi-cluster.svc.cluster.local:2181,nifi-zookeeper-2.nifi-zookeeper.nifi-cluster.svc.cluster.local:2181"

The connection string follows the format <zookeeper-pod>.<zookeeper-service>.<namespace>.svc.cluster.local:2181.

I managed at least to get a cluster with one node running. Still trying to figure out how to add the second node, though.

[Screenshot from 2022-04-21: cluster running with one node]

wknickless commented 2 years ago

@lfreinag multi-NiFi-node support is broken in the current version of the chart (v1.0.4). If you're willing to try using cert-manager in your Kubernetes cluster, you might try the branch in pull request #218.
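
(For anyone trying that branch, enabling the cert-manager based TLS looks roughly like the sketch below. The certManager key names are assumptions based on that pull request and may have changed, and cert-manager itself, including its CRDs, has to be installed in the Kubernetes cluster first.)

certManager:
  enabled: true                 # let the chart request per-pod certificates from cert-manager
  clusterDomain: cluster.local  # must match the cluster's DNS domain
  keystorePasswd: <password>    # placeholder
  truststorePasswd: <password>  # placeholder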

lfreinag commented 2 years ago

Hi @wknickless. I have tried your branch now and managed to make it work on our cluster. I had to use these settings:

ca:
  persistence:
    enabled: false

I ran into some problems with the volume attachments, but I'm not really sure whether that is caused by my cluster or by some configuration in the Helm chart. Have you heard of this before?

Multi-NiFi-node is still not working, but I know why now: the chart needs to assign different ports to each NiFi pod in order for them to be picked up by the cluster. I will try to see if I can do something about that 😉
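
(For reference, the address and ports each node advertises to the cluster come from the nifi.properties entries below; in the chart they are normally templated per pod from the StatefulSet/headless-service name. The hostname follows the naming seen in the logs above, and the port numbers are just example values.)

# UI/API address and port for this pod
nifi.web.https.host=nifi-0.nifi-headless.default.svc.cluster.local
nifi.web.https.port=8443
# address and port used for node-to-node cluster protocol traffic
nifi.cluster.node.address=nifi-0.nifi-headless.default.svc.cluster.local
nifi.cluster.node.protocol.port=11443
# address and port advertised for site-to-site traffic
nifi.remote.input.host=nifi-0.nifi-headless.default.svc.cluster.local
nifi.remote.input.socket.port=10000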