dpkp / kafka-python

Python client for Apache Kafka
http://kafka-python.readthedocs.io/
Apache License 2.0

Manual Offset Commit/Heartbeat Deadlock #2413

Open mro-rhansen2 opened 10 months ago

mro-rhansen2 commented 10 months ago

I believe that I have found the reason for the deadlock that has been alluded to in a few other issues on the board.

#2373
#2099
#2042
#1989

The offset commit blocks by design, on the assumption that the operation will resume without issue once the underlying network problem has been resolved. The problem is that the consumer does not hold an exclusive client lock while it waits. This opens a race condition between the main thread and the heartbeat thread, because the two threads do not acquire the locks in a consistent order.

The order of operations is as follows:

1) Consumer handles a message.
2) Network error occurs.
3) Consumer tries to commit the offset. The commit blocks in an infinite loop and releases the KafkaClient lock on each attempt: https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/consumer.py#L512
4) While trying to identify a new coordinator, BaseCoordinator takes the KafkaClient lock and then the internal coordinator lock: https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L245
5) The HeartbeatThread runs and takes only the coordinator lock: https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L958
6) The HeartbeatThread detects a consumer timeout and tries to shut down the coordinator, taking the client lock only after having already taken the coordinator lock in the previous step (the inverse of the order in which the main thread takes the locks during the blocking commit): https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L993 https://github.com/dpkp/kafka-python/blob/0864817de97549ad71e7bc2432c53108c5806cf1/kafka/coordinator/base.py#L766
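
To make the inversion in the steps above concrete, here is a minimal, self-contained sketch. It is not kafka-python code: `client_lock` and `coordinator_lock` are stand-ins for the KafkaClient lock and the coordinator's internal lock, and the sleeps just widen the race window so the demo deadlocks reliably.

```python
import threading
import time

client_lock = threading.RLock()       # stand-in for the KafkaClient lock
coordinator_lock = threading.RLock()  # stand-in for BaseCoordinator's internal lock

def commit_path():
    # Main thread (steps 3-4): client lock first, then coordinator lock.
    with client_lock:
        time.sleep(0.1)               # widen the race window for the demo
        with coordinator_lock:
            pass                      # e.g. look up the coordinator, send the commit

def heartbeat_path():
    # HeartbeatThread (steps 5-6): coordinator lock first, then client lock.
    with coordinator_lock:
        time.sleep(0.1)
        with client_lock:
            pass                      # e.g. shut down the coordinator via the client

t1 = threading.Thread(target=commit_path, daemon=True)
t2 = threading.Thread(target=heartbeat_path, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
print("deadlocked:", t1.is_alive() and t2.is_alive())  # True when the race hits
```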

It is admittedly a very tight window for a race condition, but it does exist, based on my own experience as well as that of others in the community. The problem can be avoided either by giving the consumer exclusive access to the KafkaClient while it is trying to commit the offset, or by ensuring that the heartbeat thread has exclusive access to the client while it performs its checks.
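
A minimal illustration of the second option, in the same stand-in terms as the sketch above rather than as a patch against kafka-python: if the heartbeat path acquires the client lock before the coordinator lock (the same order the commit path uses), the inversion, and with it the deadlock, disappears.

```python
import threading

client_lock = threading.RLock()       # stand-in for the KafkaClient lock
coordinator_lock = threading.RLock()  # stand-in for the coordinator lock

def commit_path():
    with client_lock:                 # unchanged: client lock, then coordinator lock
        with coordinator_lock:
            pass

def heartbeat_path_fixed():
    with client_lock:                 # changed: client lock first...
        with coordinator_lock:        # ...then coordinator lock, matching commit_path()
            pass                      # heartbeat / session-timeout checks go here

# With one global acquisition order there is no circular wait: whichever thread
# wins client_lock finishes its critical section and the other simply waits its turn.
t1 = threading.Thread(target=commit_path)
t2 = threading.Thread(target=heartbeat_path_fixed)
t1.start(); t2.start()
t1.join(); t2.join()
print("both finished without deadlock")
```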

It should also be noted that, while I have only spelled out the race condition as it exists between the commit and heartbeat operations, I wouldn't be surprised if the heartbeat was also interfering with other operations because of this issue.

mro-rhansen2 commented 10 months ago

After further investigation, I found another avenue where the same deadlock can occur during consumer.poll(). If the timing is just right, the HeartbeatThread can deadlock with the invocation of ensure_coordinator_ready() that consumer.poll() makes immediately before trying to read data from the buffer.
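
For anyone trying to confirm whether they are hitting this hang during poll(), a stack-dump watchdog is usually enough to catch both threads in the act. This is a generic diagnostic sketch, not anything from kafka-python; the 60-second threshold and the `handle()` callback in the usage comments are placeholders.

```python
import faulthandler
import sys

STALL_TIMEOUT_S = 60  # assumed threshold; tune to the consumer's normal cadence

def arm_watchdog():
    # If no progress is made for STALL_TIMEOUT_S, dump every thread's stack to
    # stderr. With the deadlock described above, the dump shows the main thread
    # blocked inside poll()/commit() on one lock and the heartbeat thread
    # blocked waiting on the other.
    faulthandler.dump_traceback_later(STALL_TIMEOUT_S, repeat=True, file=sys.stderr)

def feed_watchdog():
    # Call after each successful poll()/commit() to push the deadline forward.
    faulthandler.cancel_dump_traceback_later()
    faulthandler.dump_traceback_later(STALL_TIMEOUT_S, repeat=True, file=sys.stderr)

# Usage sketch around a kafka-python consumer loop:
# consumer = KafkaConsumer(..., enable_auto_commit=False)
# arm_watchdog()
# for msg in consumer:
#     handle(msg)        # hypothetical application callback
#     consumer.commit()
#     feed_watchdog()
```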

Is there any particular reason for the finer-grained locking scheme within the HeartbeatThread after line 958? It feels more like code bumming than anything else. The HeartbeatThread isn't doing much, so it is probably fine to play it safe and hold the client lock rather than try to let other threads have the client in case the HeartbeatThread doesn't actually need it. Clients that need the degree of concurrency such an optimization caters to would be better off looking to asyncio.