Zeebe variable greater than 1 MB causes event batch to get stuck

Open derekjlowe opened 1 year ago

We had a Zeebe variable exceed the 1 MB limit imposed by Kafka. This caused the event batch to get stuck; we think Zeebe is continually resubmitting the same batch. In our case we then write those events to a separate data store. We plan on imposing a limit on the variable size, but is there a way to get the batch out of the stuck state without losing the events in our production environment?

It looks like this issue was caused because our isolation level on Kafka was not set to read_committed. After making this change, the events stopped duplicating.

Changing Kafka to read_committed prevented us from grabbing the events and putting them into our datastore, but the error keeps coming and eventually prevents any events from coming in from our Zeebe engines.
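The isolation-level change discussed in the comments — reading only records from committed transactions, so retried or aborted transactional writes are not consumed twice — boils down to a single consumer property. A minimal sketch using confluent-kafka-style property names; the broker address and consumer group are placeholder assumptions, not values from this issue:

```python
# Hypothetical consumer settings; broker address and group id are placeholders.
consumer_config = {
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "zeebe-event-exporter",      # hypothetical consumer group name
    # Only deliver messages from committed transactions; aborted or
    # resubmitted transactional batches are skipped instead of duplicated.
    "isolation.level": "read_committed",
}

# With confluent-kafka this dict would be passed to Consumer(consumer_config);
# the default is read_uncommitted, which is what allowed the duplicates here.
```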
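The variable-size limit the reporter plans to impose could be checked before a variable ever reaches the workflow engine. A minimal sketch, assuming JSON-serialized variables and a placeholder 1 MB ceiling (the actual limit is governed by the broker's message.max.bytes setting); `variables_fit` is a hypothetical helper, not part of any Zeebe or Kafka API:

```python
import json

# Placeholder ceiling; the real limit comes from the broker's
# message.max.bytes configuration, so treat this value as an assumption.
MAX_VARIABLE_BYTES = 1024 * 1024  # 1 MB

def variables_fit(variables: dict, limit: int = MAX_VARIABLE_BYTES) -> bool:
    """Return True if the serialized variables stay within the size limit."""
    payload = json.dumps(variables).encode("utf-8")
    return len(payload) <= limit

# Example: reject an oversized variable before publishing it.
small = {"orderId": "1234"}
big = {"blob": "x" * (2 * 1024 * 1024)}  # roughly 2 MB of data
```

Rejecting (or truncating, or storing externally and passing a reference) an oversized variable at the edge avoids the stuck-batch state described above, since the offending record never enters the event stream.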