Open
sAws opened this issue 4 years ago

Hello. I'm using `riak-admin repair-2i` on one server, and I see this in the log:

[log output not preserved in the original post]

What can be done?
The combination of a cluster-wide 2i repair with pending handoffs in a mixed-version cluster is not tested. 2i repair is only tested with all nodes on a common version and the cluster in a stable state (and only with the eleveldb backend; it doesn't work with other 2i-supporting backends).
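As a quick sanity check, you can confirm the node is actually configured for the eleveldb backend before expecting `repair-2i` to work. A minimal sketch, assuming a default package install with its config under `/etc/riak/`:

```sh
# repair-2i is only expected to work on the eleveldb backend, so
# confirm what this node is configured with (path is the default
# package location; adjust for your install):
grep '^storage_backend' /etc/riak/riak.conf
# expected: storage_backend = leveldb
```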
I think it was release 2.2 that introduced a version uplift to the AAE trees, so it might be failing to acquire the tree locks because of a version mismatch. Several scenarios were tested for that uplift, but cluster-wide 2i repair wasn't one of them.
There's unlikely to be a quick and simple answer to "what can be done?". You may just have a lot of trees locked for rebuilds; as the rebuilds complete you will be free to run 2i repair again. But the problem might be more involved, and the only way to be sure that 2i repair behaves as expected is to run it in its tested state (common version, stable cluster, eleveldb backend). One way to watch for the rebuilds is sketched below.
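A sketch of how to watch for that, using the stock `riak-admin` tooling (the exact output format may vary across versions):

```sh
# The "Entropy Trees" section of aae-status reports when each
# partition's tree was last built; trees mid-rebuild show as not built.
riak-admin aae-status

# Once the trees covering the affected partitions report as built,
# retry the cluster-wide repair:
riak-admin repair-2i
```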
Sorry that this is a bit of an unhelpful answer. Perhaps someone else might have the time to dig deeper and give you a better answer.
Thanks! It prompted me to try `riak-admin down`. I will write back with the result tomorrow.
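For anyone finding this later, the sequence being tried here is roughly the following (the node name is a placeholder for the unreachable member):

```sh
# Mark a stopped/unreachable node as down so the ring can converge:
riak-admin down riak@node2.example.com

# Then check how the cluster now sees the ring:
riak-admin ring-status
```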
`riak-admin down` didn't help. But now I see this error:

```
Partition: 662242929415565384811044689824565743281594433536
Error: {no_aae_pid,undefined_aae_pid}
```
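One thing worth ruling out (a guess based on the error text, not a confirmed diagnosis): the `no_aae_pid` atom suggests the partition's AAE hashtree process isn't running, which would be the case if active anti-entropy is disabled on that node. A minimal check, assuming a riak.conf-based install:

```sh
# 2i repair goes through the partition's AAE hashtree; if AAE is off
# (or the tree process hasn't started), there is no pid to return.
grep '^anti_entropy' /etc/riak/riak.conf
# expected: anti_entropy = active

# aae-status should list the partition once its tree process is up:
riak-admin aae-status
```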