thelastpickle / cassandra-reaper

Automated Repair Awesomeness for Apache Cassandra
http://cassandra-reaper.io/
Apache License 2.0

Segment can hang for longer than `hangingRepairTimeoutMins` #1290

Open lapo-luchini opened 1 year ago

lapo-luchini commented 1 year ago


Rarely, but sometimes, I get segments that hang for much longer than the hangingRepairTimeoutMins I configured (e.g. 3 days versus a configured value of "120", i.e. 2 hours).

I can get back to normal by aborting those segments manually (and/or sometimes by killing and restarting Reaper), but I wonder whether this happens only to me and/or whether I misunderstood something about that parameter.
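For reference, hangingRepairTimeoutMins is the Reaper YAML setting expressed in minutes; here is a minimal, purely illustrative Java sketch of the deadline a fixed timeout like this would imply (class and variable names are made up for the example, not Reaper's internals):

import java.time.Duration;
import java.time.Instant;

// Illustrative only: how a hangingRepairTimeoutMins of 120 translates into a
// deadline for a running segment. Reaper's real SegmentRunner logic may differ.
public class HangingSegmentCheck {

  public static void main(String[] args) {
    long hangingRepairTimeoutMins = 120;                              // configured value from the report
    Instant segmentStart = Instant.now().minus(Duration.ofHours(3));  // hypothetical start, 3 hours ago

    Instant deadline = segmentStart.plus(Duration.ofMinutes(hangingRepairTimeoutMins));
    boolean hanging = Instant.now().isAfter(deadline);

    System.out.printf("deadline=%s hanging=%s%n", deadline, hanging);
  }
}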

Could this be related to these log lines?

WARN   [2023-04-19 13:02:26,929] [andxor:271db840-de81-11ed-a6d8-41233aeced5a:271ddf57-de81-11ed-a6d8-41233aeced5a] i.c.s.SegmentRunner - repair session failed for segment with id '271ddf57-de81-11ed-a6d8-41233aeced5a' and repair number '2043'
WARN   [2023-04-19 13:05:05,189] [andxor:271db840-de81-11ed-a6d8-41233aeced5a:271ddf57-de81-11ed-a6d8-41233aeced5a] i.c.s.SegmentRunner - repair session failed for segment with id '271ddf57-de81-11ed-a6d8-41233aeced5a' and repair number '5625'
WARN   [2023-04-19 13:44:15,784] [andxor:271db840-de81-11ed-a6d8-41233aeced5a:271ddf57-de81-11ed-a6d8-41233aeced5a] i.c.s.SegmentRunner - Failed to connect to a coordinator node for segment 271ddf57-de81-11ed-a6d8-41233aeced5a
com.datastax.driver.core.exceptions.OperationTimedOutException: [/2a01:4f8:[…]:9042] Timed out waiting for server response
        at com.datastax.driver.core.exceptions.OperationTimedOutException.copy(OperationTimedOutException.java:43)
        at com.datastax.driver.core.exceptions.OperationTimedOutException.copy(OperationTimedOutException.java:25)
        at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:35)
        at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:293)
        at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:58)
        at io.cassandrareaper.storage.CassandraStorage.renewRunningRepairsForNodes(CassandraStorage.java:2150)
        at io.cassandrareaper.service.SegmentRunner.renewLead(SegmentRunner.java:884)
        at io.cassandrareaper.service.SegmentRunner.processTriggeredSegment(SegmentRunner.java:374)
        at io.cassandrareaper.service.SegmentRunner.runRepair(SegmentRunner.java:313)
        at io.cassandrareaper.service.SegmentRunner.run(SegmentRunner.java:161)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
        at com.codahale.metrics.InstrumentedScheduledExecutorService$InstrumentedRunnable.run(InstrumentedScheduledExecutorService.java:241)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66)
        at java.lang.Thread.run(Thread.java:750)
Caused by: com.datastax.driver.core.exceptions.OperationTimedOutException: [/2a01:4f8:[…]:9042] Timed out waiting for server response
        at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onTimeout(RequestHandler.java:979)
        at com.datastax.driver.core.Connection$ResponseHandler$1.run(Connection.java:1636)
        at io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715)
        at io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34)
        at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703)
        at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790)
        at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        ... 1 common frames omitted
WARN   [2023-04-19 13:44:25,831] [andxor:271db840-de81-11ed-a6d8-41233aeced5a:271ddf57-de81-11ed-a6d8-41233aeced5a] i.c.s.SegmentRunner - Open files amount for process: 143

Issue is synchronized with a Jira Story by Unito. Issue Number: REAP-62

adejanovski commented 1 year ago

Since the latest versions, repairs are adaptive, which means the segment timeout gets extended on each retry, in the hope that the segment will eventually pass. Maybe you have segments with a high failure count?
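A rough sketch of what such an adaptive extension could look like, purely for illustration (the linear formula below is an assumption, not Reaper's actual implementation):

import java.time.Duration;

// Sketch only: extend the base segment timeout for each recorded failure, so a
// segment with many failed attempts gets more time before being treated as
// hanging. The formula is an assumption, not Reaper's actual code.
final class AdaptiveTimeoutSketch {

  static Duration effectiveTimeout(Duration baseTimeout, int failCount) {
    return baseTimeout.multipliedBy(1L + failCount);
  }

  public static void main(String[] args) {
    Duration base = Duration.ofMinutes(120);  // the hangingRepairTimeoutMins from the report
    for (int failures = 0; failures <= 5; failures++) {
      System.out.printf("failures=%d -> timeout=%d min%n",
          failures, effectiveTimeout(base, failures).toMinutes());
    }
  }
}

Under a formula like this, a base of 120 minutes plus a handful of failed attempts already pushes the effective timeout into many hours, which would explain segments that look hung far past the configured value.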

lapo-luchini commented 1 year ago

Ah, yes! I hadn't noticed, thanks. The previous lines in the log were indeed:

i.c.s.SegmentRunner - repair session failed for segment with id '271ddf57-de81-11ed-a6d8-41233aeced5a' and repair number '1023'
i.c.s.SegmentRunner - repair session failed for segment with id '271ddf57-de81-11ed-a6d8-41233aeced5a' and repair number '5623'
i.c.s.SegmentRunner - repair session failed for segment with id '271ddf57-de81-11ed-a6d8-41233aeced5a' and repair number '5624'
i.c.s.SegmentRunner - repair session failed for segment with id '271ddf57-de81-11ed-a6d8-41233aeced5a' and repair number '2043'
i.c.s.SegmentRunner - repair session failed for segment with id '271ddf57-de81-11ed-a6d8-41233aeced5a' and repair number '5625'
i.c.s.SegmentRunner - Failed to connect to a coordinator node for segment 271ddf57-de81-11ed-a6d8-41233aeced5a

Is it possible to check the current expiration time?
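One way to at least inspect per-segment state (start time, fail count) from outside is Reaper's REST API; here is a minimal sketch, assuming Reaper listens on localhost:8080 and exposes the segments of a run under GET /repair_run/{id}/segments (host, port and the exact fields returned are assumptions to verify against the actual deployment):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: dump the JSON describing a repair run's segments from Reaper's REST API.
// Host, port and endpoint path are assumptions; adjust for the actual deployment.
public class ListSegments {

  public static void main(String[] args) throws Exception {
    String reaperUrl = "http://localhost:8080";                   // assumed Reaper address
    String repairRunId = "271db840-de81-11ed-a6d8-41233aeced5a";  // run id taken from the log above

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(reaperUrl + "/repair_run/" + repairRunId + "/segments"))
        .GET()
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

    // Inspect the returned JSON for each segment's state, start time and fail
    // count (field names may vary by Reaper version) and compare the elapsed
    // time against hangingRepairTimeoutMins manually.
    System.out.println(response.body());
  }
}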

lapo-luchini commented 1 year ago

Hmm, I was a bit hasty in closing the issue… the UUIDs don't match, and the stuck segment has zero failures:

[screenshot: Reaper UI segment list showing the stuck segment with zero failures]

(it is now 15:47, so over 3 hours after that segment started)

lapo-luchini commented 1 year ago

repairs are now adaptive

Over a single segment, or overall? (I have failures on other segments, not on that one.)