aditya-msd opened this issue 1 year ago
Hey @aditya-msd,
currently there is no way to drop those messages, because this error rarely happens, and when it does, the processor must fix the underlying issue (otherwise it would corrupt or lose its own data). However, it may make sense to handle messages that cause this behavior somehow. What would be your approach to "side step" the processing, as you said? Should the message be silently dropped? Maybe we could add a callback that is triggered when that error happens? Or any other ideas?
For side-stepping:
I can calculate the size of the message that is going to be sent, but since it is the compressed data that goes to Kafka, I am unable to put an upper limit on it. Per the earlier logs, the WorkerState size was 89983 bytes. My topic is configured with max.message.bytes=10000, which means the compressed data size is what is being checked.
Right now I only know the error has occurred from the logs. Within the code, how do I catch this error so I can notify or manipulate the internal state as required?
Also, you mentioned we could add a callback that is triggered when that error happens. Can you provide a code snippet showing where to add this, or any references that use this approach? Otherwise I have to find some other means of detecting it.
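As a sketch of what such a callback could look like: nothing below exists in goka today; the `emitFunc` type, the wrapper, and the locally defined error are all hypothetical stand-ins (the real client error would be the Kafka library's "message too large" error, e.g. sarama's `ErrMessageSizeTooLarge`):

```go
package main

import (
	"errors"
	"fmt"
)

// errMessageTooLarge stands in for the broker error surfaced by the
// Kafka client; defined locally so this sketch is self-contained.
var errMessageTooLarge = errors.New("kafka server: Message was too large, server rejected it to avoid allocation error")

// emitFunc abstracts the real emit call (hypothetical signature).
type emitFunc func(key string, value []byte) error

// guardedEmit wraps an emit and routes size errors to onTooLarge
// instead of letting them shut down the processor.
func guardedEmit(emit emitFunc, onTooLarge func(key string, value []byte)) emitFunc {
	return func(key string, value []byte) error {
		err := emit(key, value)
		if errors.Is(err, errMessageTooLarge) {
			onTooLarge(key, value) // side-step: log, trim state, dead-letter topic, ...
			return nil             // swallow the error so processing continues
		}
		return err
	}
}

func main() {
	// Fake emit that rejects anything over 10 bytes, mimicking max.message.bytes.
	fake := func(key string, value []byte) error {
		if len(value) > 10 {
			return errMessageTooLarge
		}
		return nil
	}
	emit := guardedEmit(fake, func(key string, value []byte) {
		fmt.Printf("dropped oversized message key=%s size=%d\n", key, len(value))
	})
	fmt.Println(emit("small", []byte("ok")))
	fmt.Println(emit("big", make([]byte, 100)))
}
```

Silently dropping inside the callback is one policy; the callback could equally record the key for later repair or forward the raw bytes to a dead-letter topic.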
My point actually was that there is currently no way to detect or handle a failing emit. The processor shuts down, that's it. But we could build one, it doesn't sound too hard to do. That's why I was wondering if you already had an idea of what a solution could look like. Anyway, once we find the time we'll take a look and maybe an obvious solution pops up.
I am getting `kafka server: Message was too large, server rejected it to avoid allocation error`. The logs below indicate the same. This causes the processor to not commit, go into a loop of sorts, and block the other messages in the topic. Is there any way to detect or catch this error?
Increasing the topic's max message size solves the issue, but what I would like is to detect and prevent the loop: if this error happens, I could side-step the processing of that message and continue as usual.
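For reference, the broker-side limit discussed above can be raised per topic with the standard Kafka CLI; topic name and bootstrap address below are placeholders. Note that the producer's `max.request.size` and the consumers' fetch sizes may also need adjusting to match:

```shell
# Raise max.message.bytes for a single topic to 1 MiB (placeholder values)
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config max.message.bytes=1048576

# Verify the override took effect
kafka-configs.sh --bootstrap-server localhost:9092 --describe \
  --entity-type topics --entity-name my-topic
```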