Open jrbentzon opened 1 month ago
Need to investigate. I don't see this behavior on our cluster, but we did not perform the massive migration of our streams. I will add more streams and take a look at how much memory Arcane.Operator consumes. A problem may also exist with the cache and event deduplication.
Description
We've seen our Arcane operator's memory usage grow until it hits our limits, at which point it gets restarted by Kubernetes. There are only 5 streams attached to the operator, so it looks like it could be a memory leak.
Steps to reproduce the issue
Monitor Arcane Operator Memory
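One way to capture the memory trend over time is to poll `kubectl top` periodically and log the readings. This is a minimal sketch; it requires metrics-server in the cluster, and the namespace and label selector below are assumptions, not values from this report — adjust them to your deployment:

```shell
# Assumed namespace and label selector -- replace with your own.
NAMESPACE=arcane
SELECTOR=app=arcane-operator

# Append a timestamped memory reading for each matching pod every 60s.
# Column $1 is the pod name, $3 is memory (e.g. "512Mi").
while true; do
  kubectl top pod -n "$NAMESPACE" -l "$SELECTOR" --no-headers \
    | awk -v ts="$(date -u +%FT%TZ)" '{print ts, $1, $3}' >> operator-memory.log
  sleep 60
done
```

Plotting `operator-memory.log` should show whether usage climbs steadily between restarts or plateaus.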
Describe the results you expected
More or less constant memory usage over time
System information
v0.0.10