Closed: nkreiger closed this issue 6 months ago
Thanks for reporting @nkreiger. Apparently there is an open issue on the OpenJDK project: https://bugs.openjdk.org/browse/JDK-8192647. Can you share a little more information regarding:
Hi @pierDipi I have 220 triggers, 6 brokers right now.
Resources aren't defined right now on the kafka-broker-dispatcher, so I assume it would scale as needed, or until the node ran out of memory?
This is on Knative version v1.12; it's a new setup.
My retention period is 1 week. I accidentally applied 60 duplicate triggers, which possibly caused the crash and the resulting restart, and it looks like the dispatcher then re-sent all events from the last week.
Do you know how to set the dispatcher to read from the latest offset instead of firing all retained events?
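Not an official Knative answer, but at the plain-Kafka level a replay like this can be stopped by resetting the trigger's consumer group to the latest offset before the dispatcher reconnects. A sketch using the standard Kafka CLI — the bootstrap server, topic, and consumer group names below are placeholders, since the real group name depends on how the Knative Kafka Broker names trigger consumer groups:

```shell
# Stop the consumers first (e.g. scale down the dispatcher), then reset
# the (hypothetical) trigger consumer group to the latest offset so it
# skips everything still inside the retention window.
kafka-consumer-groups.sh \
  --bootstrap-server my-cluster-kafka:9092 \
  --group my-trigger-consumer-group \
  --topic knative-broker-default-demo \
  --reset-offsets --to-latest --execute
```

Running with --dry-run instead of --execute prints the target offsets without applying them, which is a safer first step.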
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
Describe the bug
I want to be transparent: I am by no means a Kafka expert, and I am trying to better understand what I could have done to cause this error.
What I did was re-apply a large number of triggers under new names so that I could clear out the old ones. However, the second I applied them, they started streaming thousands of events all at once, which may have caused the memory error. That led to a restart and replay by the receiver, which also seems to start sending events from the beginning of the retention period.
Expected behavior
Applying new triggers would start at the latest offset.
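For context, whether a brand-new consumer group starts at the tail or the head of a topic is governed by Kafka's auto.offset.reset consumer property (latest vs. earliest). A quick way to see the two behaviors, with placeholder broker/topic/group names:

```shell
# A new consumer group with auto.offset.reset=latest only sees records
# produced after it joins; earliest would replay the whole retention window.
kafka-console-consumer.sh \
  --bootstrap-server my-cluster-kafka:9092 \
  --topic knative-broker-default-demo \
  --group fresh-trigger-group \
  --consumer-property auto.offset.reset=latest
```

If the dispatcher's consumers are effectively starting from earliest for new groups, that would match the replay described above.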
To Reproduce
Working on it.
Knative release version
v1.12
Additional context
Hoping to better understand whether this is a bug or a gap in my understanding of the underlying infrastructure and how it works.