Open xzhang2sc opened 2 months ago
@je-ik Is this something you could help with, or could you give any guidance on how to fix this issue?
I have a suspicion that the job needs permission to access Pub/Sub metrics (oldest unacked message age) to work properly; I'm verifying that.
I found it was able to acknowledge old messages after I got permission to access Pub/Sub metrics. However, the numbers are not adding up. [Update: I don't think accessing Pub/Sub metrics is helping.]
Over the past 30 minutes the ack message count has stayed well above 150/s, so in total it should have acked at least 150 × 60 × 30 = 270k messages, but the number of unacked messages only dropped by about 8k. The publish rate is about 10/s, which is negligible.
I found this assumption quite problematic, and the consequences of a wrong watermark are dramatic:
"This assumes Pubsub delivers the oldest (in Pubsub processing time) available message at least once a minute"
If Pub/Sub didn't deliver an old message during the past minute, the estimated watermark will be wrong. Once the watermark has advanced past old messages, they don't get acked properly and are delivered repeatedly.
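To make the failure mode concrete, here is a simplified illustration of the kind of heuristic described above. This is a sketch only, with made-up names; it is not the actual PubsubUnboundedSource implementation:

```java
import org.joda.time.Duration;
import org.joda.time.Instant;

// Illustrative only: a watermark estimator that trusts recently delivered
// message timestamps. Names and structure are hypothetical, not Beam's code.
class NaiveWatermarkEstimator {
  private Instant oldestTimestampSeenRecently; // oldest event time seen in the last minute

  void onMessageDelivered(Instant eventTime) {
    if (oldestTimestampSeenRecently == null || eventTime.isBefore(oldestTimestampSeenRecently)) {
      oldestTimestampSeenRecently = eventTime;
    }
  }

  Instant estimateWatermark(Instant now) {
    // If Pub/Sub really delivered the oldest outstanding message within the
    // last minute, this estimate is safe. If it did not, the watermark can
    // advance past messages that are still unacked, which matches the
    // repeated-redelivery behavior described above.
    return oldestTimestampSeenRecently != null
        ? oldestTimestampSeenRecently
        : now.minus(Duration.standardMinutes(1)); // assumed fallback, for illustration
  }
}
```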
In summary, I think there are two problems:
What is your ack deadline in Pub/Sub? FlinkRunner can ack messages only after a checkpoint; the default ack deadline is 10 seconds and your checkpoint interval is aligned with that (--checkpointingInterval=10000). This could cause the issues you observe; you might try to either decrease the checkpoint interval or increase the ack deadline.
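As a sketch of the option being referred to here, the Flink checkpoint interval can also be set programmatically on the pipeline options; the 5000 ms value below is just an example, not a recommendation from this thread:

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class CheckpointIntervalExample {
  public static void main(String[] args) {
    // FlinkRunner acks Pub/Sub messages only after a successful checkpoint,
    // so the checkpoint interval should stay well below the subscription's
    // ack deadline.
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(FlinkPipelineOptions.class);
    options.setStreaming(true);
    options.setCheckpointingInterval(5_000L); // milliseconds; example value only
  }
}
```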
My ACK deadline is 600s, so that shouldn't be the issue.
@liferoad @je-ik PubsubIO is basically unusable on the Flink runner, but maybe I'm missing some configuration. Is it possible to bump up the priority of this issue?
Adding @Abacn @kennknowles, who might have more context.
What happened?
I'm using "org.apache.beam:beam-runners-flink-1.18:2.57.0". When I read from Pub/Sub, I found it's not able to acknowledge messages that were generated before the job starts. As a result, those messages are sent to Flink repeatedly and the number of unacked messages stays flat. I also observed an issue similar to https://github.com/apache/beam/issues/31510: the ack message count can be higher than the message publish rate.
It can be reproduced with the following code; it simply reads from Pub/Sub and prints out a string.
args
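The original snippet and args aren't reproduced above, so here is a minimal sketch of such a pipeline on the Flink runner; the project and subscription names are placeholders, not the ones from the issue:

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class PubsubPrintPipeline {
  public static void main(String[] args) {
    // Streaming job on the Flink runner; pass --runner=FlinkRunner and
    // --checkpointingInterval=... on the command line as in the issue.
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(FlinkPipelineOptions.class);
    options.setStreaming(true);

    Pipeline pipeline = Pipeline.create(options);

    pipeline
        .apply("ReadFromPubsub",
            // Placeholder subscription path; replace with a real one.
            PubsubIO.readStrings()
                .fromSubscription("projects/my-project/subscriptions/my-subscription"))
        .apply("PrintMessage",
            MapElements.into(TypeDescriptors.strings())
                .via((String msg) -> {
                  System.out.println(msg);
                  return msg;
                }));

    pipeline.run();
  }
}
```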
Issue Priority
Priority: 2 (default / most bugs should be filed as P2)
Issue Components