Closed jinmu0410 closed 1 year ago
Local mode works fine, but this problem occurs on YARN.
Spark 3.1.2 and YARN 3.1
I see this problem has occurred before; has it been resolved since? In my case, this bug is triggered every time.
Is there any news on this topic? Thanks in advance
The issue might be caused by Spark shutting down too quickly and not allowing the agent to finish its job.
Seems like similar problem happens here: https://github.com/AbsaOSS/spline-spark-agent/issues/478
Could you try setting a high hadoop.service.shutdown.timeout
and let us know if it helps?
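For reference, a sketch of how that timeout could be raised, assuming it is picked up from core-site.xml (where Hadoop's ShutdownHookManager reads it by default); the 60s value is just a guess to try, not a recommended setting:

```xml
<!-- core-site.xml: give shutdown hooks more time to finish before Hadoop kills them -->
<property>
  <name>hadoop.service.shutdown.timeout</name>
  <value>60s</value>
</property>
```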
I think it may be a problem with selecting the Spark version when compiling.
This way is wrong.
This choice is right. After choosing the right compilation package, I did not encounter the Kafka problem.
Sorry, this problem occurred again.
Could you try setting a high hadoop.service.shutdown.timeout and let us know if it helps?
I tried it, but it doesn't seem to work.
I looked at the code and found a key place:
sys.addShutdownHook(sf.close()) causes producer.close() to be called.
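To illustrate the hazard here with a minimal Python sketch (this is not the agent's actual Scala code; `RecordSender` is a hypothetical stand-in for the Kafka-backed sender): when both a shutdown hook and an explicit call may close the producer, close() needs to be idempotent, otherwise the second call, or a too-early first call, breaks in-flight sends.

```python
import threading


class RecordSender:
    """Hypothetical stand-in for a Kafka-backed record sender."""

    def __init__(self):
        self._closed = False
        self._lock = threading.Lock()
        self.close_calls = 0  # track how many times close() was invoked

    def send(self, msg):
        # Sending after close is exactly the failure seen on YARN when the
        # shutdown hook fires too soon.
        if self._closed:
            raise RuntimeError("producer already closed")
        return f"sent: {msg}"

    def close(self):
        # Idempotent close: the shutdown hook and an explicit close() may
        # both fire; only the first call actually releases the producer.
        with self._lock:
            self.close_calls += 1
            if self._closed:
                return
            self._closed = True


sender = RecordSender()
sender.send("plan+event")
sender.close()  # explicit close after the message is sent
sender.close()  # shutdown hook fires later; safe no-op
```

The guard makes the second close a no-op, but it does not fix the other half of the problem: if the hook closes the producer before the last message is sent, the send itself fails.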
I am now actively calling sf.close @cerveada
What do you mean by "actively calling"?
After Kafka sends the message, I then call SplineRecordSenderFactory.close.
Ok, but then you can't send another message, right?
No. I changed some code to unify the plan and the event into one Kafka message, for some special business logic later.
I found an issue in the agent that caused the Kafka producer to be closed twice, and the first time was too soon.
The issue: #639
This issue was fixed in agent version 1.1.0.