Describe the bug
I've set up a single job to move around 2 MB of data from one store to another each day. The job itself works fine; however, the pod's memory usage jumps by around 100 MB each run and never comes back down. The increase repeats every day the job runs, so memory usage grows steadily over time.
Version of Helm, Kubernetes and the NiFi chart:
Helm Version: 3.11.2
Kubernetes Version: 1.23.8
Chart version: 1.1.3
What happened:
Every time NiFi runs our daily job, it holds onto roughly 100 MB of additional memory and doesn't release it. Nothing else is running on this NiFi instance.
What you expected to happen:
I expect the memory to be periodically released.
How to reproduce it (as minimally and precisely as possible):
Schedule a simple pipeline that moves a small amount of data once a day and observe the pod's memory usage after each run.
Anything else we need to know:
Grafana graph of the NiFi pod's memory usage:
Is this a known issue, or might one of the processors I'm using have a known memory leak?