Hi @kamenik,
I've seen this before indeed. It seems like something got stuck on your cluster, maybe a validation compaction, and that prevents Reaper from running any more repairs. Please do a rolling restart of your cluster to get it back into a clean state and then try running a repair again. The negative timeout value bug is fixed in the current master branch (updated today), so you can use it instead of your current version to prevent that particular error from popping up again (it can have some other nasty effects as well).
Let me know how it goes after the rolling restart and whether the problem still shows up with the latest master.
Thanks
I have deleted reaper_db and restarted the Cassandra cluster, but it is still the same. Good news: the negative timeout value bug disappeared :).
Could you try to set the logging level to DEBUG in Reaper and send me the output after a few minutes of running? Also, could you give me the output of nodetool tpstats on the node that is reported as already having a running repair? I would also need the output of bin/spreaper list-runs to check the status of running/past repairs.
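For reference, this is roughly what I'm after (a sketch only: I'm assuming the usual Dropwizard-style logging block in cassandra-reaper.yaml and the bundled spreaper script):

# In cassandra-reaper.yaml, raise the log level (exact block depends on your config file):
#   logging:
#     level: DEBUG
nodetool tpstats          # on the node reported as already having a running repair
bin/spreaper list-runs    # lists the status of running and past repair runs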
Thanks
OK, this is after the restart and with a clean reaper_db.
nodetool_tpstats.txt spreaper_list_runs.txt cassandra-reaper.log.zip
According to what I see in your outputs, things are currently going ok. Reaper won't allow multiple repair sessions on the same node, so if it tries to start repairing a new segment which involves a node that is already repairing another segment, then it will be postponed.
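To illustrate that rule, here is a minimal sketch of the check (illustrative only, not the actual SegmentRunner code; the names are made up):

import java.util.List;
import java.util.Set;

class PostponeCheck {
  // A segment may start only if none of its replica nodes is already busy
  // repairing another segment; otherwise the segment is postponed and retried later.
  static boolean canStart(List<String> segmentReplicas, Set<String> busyNodes) {
    for (String replica : segmentReplicas) {
      if (busyNodes.contains(replica)) {
        return false; // postpone
      }
    }
    return true;
  }
}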
What we currently see is very different from what you had previously, with Reaper claiming it saw a repair running on a node while having no trace of it in its own database, and then trying to kill it. That was likely related to the negative timeout bug.
So far, I'd say things look good.
Please update the ticket with the progress.
Thanks
Thank you for the help. I would also like to ask about Reaper performance. I ran one non-scheduled repair with default values, and surprisingly it took much longer than a full repair on every server and even consumed much more CPU. The first five peaks are full repairs; the data from 15:45 to 19:00 are the Reaper repair. What do you think about it?
There are several possibilities here. Could you share the specific command line you've used for your full repair? Did you run it on all nodes at the same time? Can you confirm you're using Cassandra 3.10?
Another possibility could be that there are performance problems with the Cassandra backend in Reaper. Was the Reaper UI open on the "Repair runs" screen the whole time? If so, can you check CPU consumption when that window isn't open? While a repair is running in Reaper, could you share the output of:
You can try to use another backend, like H2 for example, and compare results. You can also lower the intensity to 0.5 in order to check how it affects the load.
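As a rough sketch of where that lives, both settings are in cassandra-reaper.yaml (key names and accepted values may differ between Reaper versions, so treat this as an assumption and check the example yaml files shipped with your build):

storageType: database      # assumption: the H2/JDBC storage is selected this way in your version; "cassandra" is what you run today
repairIntensity: 0.5       # default intensity applied to new repair runs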
Does the first part of the graph (the 5 spikes) show the total time for the full repair to run (on all nodes) ?
nodetool --host NODE_IP repair --full
It is called from one server for all nodes, one after another. We have Cassandra 3.10 on all servers. On the graph, the spikes between 14:05 and 15:15 are the full repair of all five nodes.
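Concretely it is just the same command in a loop from one box, something like this (host names below are placeholders):

NODES="node1 node2 node3 node4 node5"   # placeholder host names for the five nodes
for NODE_IP in $NODES; do
  nodetool --host "$NODE_IP" repair --full
done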
I will try to run it with a different backend, we will see. Trying H2 now; it has lots of these stack traces in the log, but it is running.
WARN [2017-05-02 09:56:45,787] [woc:2:1408] c.s.r.s.SegmentRunner - SegmentRunner declined to repair segment 1408 because of an error collecting information from one of the hosts (192.168.20.17): {}
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Ljava.lang.String;
at com.spotify.reaper.storage.postgresql.RepairParametersMapper.map(RepairParametersMapper.java:33)
at com.spotify.reaper.storage.postgresql.RepairParametersMapper.map(RepairParametersMapper.java:28)
at org.skife.jdbi.v2.Query$4.munge(Query.java:183)
at org.skife.jdbi.v2.QueryResultSetMunger.munge(QueryResultSetMunger.java:41)
at org.skife.jdbi.v2.SQLStatement.internalExecute(SQLStatement.java:1344)
at org.skife.jdbi.v2.Query.fold(Query.java:173)
at org.skife.jdbi.v2.Query.list(Query.java:82)
at org.skife.jdbi.v2.sqlobject.ResultReturnThing$IterableReturningThing.result(ResultReturnThing.java:253)
at org.skife.jdbi.v2.sqlobject.ResultReturnThing.map(ResultReturnThing.java:43)
at org.skife.jdbi.v2.sqlobject.QueryHandler.invoke(QueryHandler.java:41)
at org.skife.jdbi.v2.sqlobject.SqlObject.invoke(SqlObject.java:224)
at org.skife.jdbi.v2.sqlobject.SqlObject$3.intercept(SqlObject.java:133)
at org.skife.jdbi.v2.sqlobject.CloseInternalDoNotUseThisClass$$EnhancerByCGLIB$$e2535170.getRunningRepairsForCluster(<generated>)
at com.spotify.reaper.storage.PostgresStorage.getOngoingRepairsInCluster(PostgresStorage.java:362)
at com.spotify.reaper.service.SegmentRunner$1.initialize(SegmentRunner.java:169)
at com.spotify.reaper.service.SegmentRunner$1.initialize(SegmentRunner.java:165)
at org.apache.commons.lang3.concurrent.LazyInitializer.get(LazyInitializer.java:100)
at com.spotify.reaper.service.SegmentRunner.canRepair(SegmentRunner.java:300)
at com.spotify.reaper.service.SegmentRunner.runRepair(SegmentRunner.java:178)
at com.spotify.reaper.service.SegmentRunner.run(SegmentRunner.java:95)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Yesterday's results: there was no traffic on the cluster, only repairing. All repairs were parallel, full, intensity 0.95; the only difference was the backend. The C* data (csstats.txt) are from roughly the middle of the first C* run. It seems there is no (or only a small) difference between the run with the UI open and the one without it.
Hi @kamenik ,
thanks for the results ! There's obviously a big overhead related to the Cassandra backend that we're going to optimize ASAP. I'll get back to you as soon as we have something to test on that front.
Thanks again for the time you spent on this !
No problem :). I ran it with the Cassandra backend only today, and it seems there is some problem with the intensity settings too. You can see it on the graph: I set intensity to 0.95, 0.5, 0.25, 0.125, 0.0625 (only the beginning of that last run is shown). Also, it says all segments are repaired some time before the run switches to state DONE (marked by the red lines); is there some DB cleanup at the end?
> Thank you for the help. I would also like to ask about Reaper performance.
At this point the ticket has changed from being about postponed segments, now resolved, to Reaper performance with the Cassandra backend.
Could we either close this ticket and move the comments into a new one, or rename this ticket?
Following up on this in #94
@kamenik : I've created a branch that fixes the performance issues you've been experiencing with the Cassandra backend.
Could you build and try the following branch? https://github.com/thelastpickle/cassandra-reaper/tree/alex/fix-parallel-repair-computation
TL;DR : the number of parallel repairs was computed based on the number of tokens and not the number of nodes. If you use vnodes, Reaper will compute a high value and you'll have 15 threads competing to run repairs on your cluster. I've fixed this using the number of nodes instead and added a local cache to lighten the load on C*.
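As a rough illustration of the change (not the actual Reaper code; the names here are made up):

class ParallelRepairLimit {
  // Before: the limit was derived from the token count, which gets huge with vnodes.
  // After: it is derived from the node count, so only a handful of segments can run
  // at once and no node gets hammered by competing repair threads.
  static int maxParallelRepairs(int nodeCount, int replicaCount) {
    return Math.max(1, nodeCount / Math.max(1, replicaCount));
  }
}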
Thanks
@adejanovski: Thanks, it is much better now :). The big peak at the beginning is the full repair; the rest is Reaper with the C* backend. Interestingly, there is no difference between the runs with intensity 0.95 and 0.1.
Intensity 0.95
Intensity 0.1
Great to see the improvements on your charts.
Intensity probably doesn't make a difference here because your segments are very fast to run (within seconds, I guess). If you spend 1s repairing, then intensity 0.1 will wait for 1s * (1/0.1 - 1) = 9s. Then you have a 30s pause between each segment, which mostly explains why it takes much longer with Reaper than the full nodetool repair in your case. That 30s pause will be configurable in the yaml file once I merge the mck/cassandra-improvements-94 branch, which should happen this week. With 1000 segments, that 30s pause alone already brings the repair time to more than 8h.
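Putting numbers on that, with the figures from your runs (a toy calculation, not Reaper code):

class RepairPacing {
  public static void main(String[] args) {
    double segmentSeconds = 1.0;      // assume a segment repairs in about 1s
    double intensity = 0.1;
    double intensityDelay = segmentSeconds * (1.0 / intensity - 1.0);   // 9s of waiting
    double fixedPauseSeconds = 30.0;  // current hard-coded pause between segments
    int segments = 1000;
    double hoursFromPauseAlone = segments * fixedPauseSeconds / 3600.0; // ~8.3h
    System.out.printf("intensity delay: %.0fs, pause overhead: %.1fh%n",
        intensityDelay, hoursFromPauseAlone);
  }
}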
The upcoming merge will bring many more improvements on the data model and make good use of the Cassandra row cache for segments.
Hi guys,
I am trying to use Reaper on a test cluster, but I am stuck with this issue. After a few test runs it starts to postpone segment repairs; it seems that it tries to start the same segment twice, the second run fails, and it gets postponed. I tried to delete the reaper_db keyspace to reset it, but it did not help. Any idea?
A few lines from the beginning of the log: