dao-jun opened 1 week ago
I think that we need to find another solution. ReadWriteLock adds a lot more overhead than StampedLock.
Yes, but RoaringBitmap is not designed for concurrency at all, and this PR is a quick fix; we can make further improvements in the future.
I wonder if it would be a viable option to catch exceptions and retry with a read lock if that happens?
Then we may catch a lot of exceptions when a broker is handling high throughput; I'm not sure whether that cost is lower than a RWLock.
That's a valid concern, we should investigate the different choices and experiment.
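The catch-and-retry idea above could be sketched roughly as follows. This is only an illustration, not Pulsar's actual code: the `OptimisticBitSet` class and its methods are hypothetical names, and it wraps a plain `java.util.BitSet` behind a `StampedLock`, falling back from an optimistic read to a full read lock when the optimistic attempt fails or throws.

```java
import java.util.BitSet;
import java.util.concurrent.locks.StampedLock;

// Illustrative sketch only. Reads are attempted optimistically; if a
// concurrent write invalidates the stamp, or the underlying BitSet throws
// because it was observed in a torn state, the read retries under readLock().
public class OptimisticBitSet {
    private final StampedLock lock = new StampedLock();
    private final BitSet bitSet = new BitSet();

    public void set(int bitIndex) {
        long stamp = lock.writeLock();
        try {
            bitSet.set(bitIndex);
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public boolean get(int bitIndex) {
        long stamp = lock.tryOptimisticRead();
        try {
            boolean value = bitSet.get(bitIndex);
            if (lock.validate(stamp)) {
                return value;
            }
        } catch (RuntimeException e) {
            // A concurrent resize may leave the optimistic read observing
            // inconsistent internals; fall through to the locked read.
        }
        stamp = lock.readLock();
        try {
            return bitSet.get(bitIndex);
        } finally {
            lock.unlockRead(stamp);
        }
    }
}
```

Whether the retry path fires often enough under broker-level throughput to erase the optimistic-read advantage is exactly the open question here, so this pattern would need benchmarking before adoption.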
I think that we should revert the migration to RoaringBitSet in branch-3.0, branch-3.2 and branch-3.3 so that we don't need to rush with the solution.
I reverted the changes in branch-3.0, branch-3.2 and branch-3.3. Here's the PR to revert the change in master branch: #22968 . It's better to have a fresh start with a proper fix that is validated so that it doesn't cause performance regressions and also addresses the concurrency issues. The concern about switching to ReadWriteLock is about it causing a performance regression. It's possible that it's not a valid concern, but let's validate that before applying the solution.
I did a less rigorous test:
@Test
public void test() {
    long start = System.currentTimeMillis();
    CountDownLatch latch = new CountDownLatch(2);
    ConcurrentRoaringBitSet bitSet = new ConcurrentRoaringBitSet();
    new Thread(() -> {
        for (int i = 0; i < 100000000; i++) {
            bitSet.set(1);
        }
        latch.countDown();
    }).start();
    new Thread(() -> {
        for (int i = 0; i < 100000000; i++) {
            bitSet.get(1);
        }
        latch.countDown();
    }).start();
    try {
        latch.await();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("Time: " + (System.currentTimeMillis() - start));
}
I started 2 threads calling the get/set methods of the ReadWriteLock/StampedLock based ConcurrentRoaringBitSet, each thread looping 100 million times. For the ReadWriteLock-based ConcurrentRoaringBitSet, the total duration is around 9.5s. For the StampedLock-based ConcurrentRoaringBitSet, the total duration is around 8.5s.
Maybe we don't need to worry about the performance regression?
When we do read-only operations on the StampedLock-based ConcurrentRoaringBitSet, it is faster than the ReadWriteLock-based one (about 5 times faster), but in our case the usage of ConcurrentRoaringBitSet is mixed read and write (about 1:1).
In Pulsar we have https://github.com/apache/pulsar/tree/master/microbench module with JMH. I think JMH is better for comparisons. For Pulsar, the efficiency also matters so the comparison might not be that simple.
btw. In Pulsar, ConcurrentOpenLongPairRangeSet is only used in RangeSetWrapper, and the only usage of that is in ManagedCursorImpl for individualDeletedMessages. In many cases, the operations on individualDeletedMessages are already protected by the ReadWriteLock field lock in ManagedCursorImpl. It might be better to make the lock usage consistent. We wouldn't need ConcurrentRoaringBitSet in the Pulsar code base in that case, as long as we document that ConcurrentOpenLongPairRangeSet isn't really thread safe. The thread safe solution could use the old solution.
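The "consistent lock usage" idea could look roughly like the sketch below. The `CursorLike` class and its method names are illustrative stand-ins, not Pulsar's real classes: the point is that the owning class (in Pulsar that would be ManagedCursorImpl with its ReadWriteLock field `lock`) guards every access externally, so the underlying structure can remain plain and non-thread-safe.

```java
import java.util.BitSet;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: the owner holds the ReadWriteLock, and the
// wrapped structure (a plain BitSet here, standing in for a non-thread-safe
// range set) is never touched outside a lock-guarded region.
public class CursorLike {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final BitSet individuallyDeleted = new BitSet(); // not thread safe by itself

    public void markDeleted(int index) {
        lock.writeLock().lock();
        try {
            individuallyDeleted.set(index);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public boolean isDeleted(int index) {
        lock.readLock().lock();
        try {
            return individuallyDeleted.get(index);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

With this shape there is a single locking discipline to audit, instead of internal locks in the set plus external locks in the cursor.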
It makes sense, I addressed this, PTAL
@dao-jun Looks good, I'll soon review in more detail. Please update the PR title and description so that it describes the motivation and modifications of this PR more accurately.
Since the previous change #22908 was rolled back by #22968, please rebase the changes.
Please use a write lock for the individualDeletedMessages.resetDirtyKeys() call in the buildIndividualDeletedMessageRanges method.
This is actually a real bug in the current implementation and needs to be fixed even if we wouldn't switch to use RoaringBitMap's RoaringBitSet.
Rename ConcurrentOpenLongPairRangeSet to OpenLongPairRangeSet and mark it as NotThreadSafe.
I guess this change and the switch to use RoaringBitSet (in version 1.1.0) was lost in rebasing?
One possibility would be to complete this PR by switching to the non-thread-safe version of ConcurrentOpenLongPairRangeSet using an ordinary BitSet in this PR, and then switch to RoaringBitSet in a follow-up PR.
It's possible that using StampedLock in ConcurrentBitSet results in similar problems as we had with StampedLock in ConcurrentRoaringBitSet.
By looking at the code of BitSet, it seems that assertions in this method could fail in ConcurrentBitSet:
private void checkInvariants() {
    assert(wordsInUse == 0 || words[wordsInUse - 1] != 0);
    assert(wordsInUse >= 0 && wordsInUse <= words.length);
    assert(wordsInUse == words.length || words[wordsInUse] == 0);
}
However the problems are hidden since assertions aren't commonly enabled in production.
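To see why those assertions can fail under concurrency: when the highest set word is cleared, BitSet recalculates wordsInUse downward, so a reader holding a stale view can observe a zeroed word below the cached wordsInUse. The sketch below is only an illustration of the invariant itself; `invariantsHold` is a hypothetical helper that mirrors the assertions above on an exported copy of the words array, not on BitSet's private state.

```java
import java.util.BitSet;

public class InvariantDemo {
    // Mirrors BitSet#checkInvariants, but on a caller-supplied words array.
    static boolean invariantsHold(long[] words, int wordsInUse) {
        return (wordsInUse == 0 || words[wordsInUse - 1] != 0)
                && (wordsInUse >= 0 && wordsInUse <= words.length)
                && (wordsInUse == words.length || words[wordsInUse] == 0);
    }

    public static void main(String[] args) {
        BitSet bs = new BitSet();
        bs.set(130);                     // bit 130 lives in word index 2
        long[] words = bs.toLongArray(); // length equals wordsInUse here
        System.out.println(invariantsHold(words, words.length)); // true
        // Simulate the torn state a concurrent reader could observe:
        // the highest word was cleared, but wordsInUse was not yet updated.
        words[words.length - 1] = 0;
        System.out.println(invariantsHold(words, words.length)); // false
    }
}
```

In single-threaded use the invariants always hold; it is only the unsynchronized interleaving of a clearing write with a read that produces the torn state, which is why the breakage surfaces only as rare NPEs or index errors when assertions are disabled.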
Yes, individualDeletedMessages.resetDirtyKeys() is a WRITE operation, but it just requires a READ lock.
Attention: Patch coverage is 91.48936% with 4 lines in your changes missing coverage. Please review. Project coverage is 73.43%. Comparing base (bbc6224) to head (66b228c). Report is 424 commits behind head on master.
LGTM, good work @dao-jun
Motivation
In https://github.com/apache/pulsar/pull/22908 we introduced ConcurrentRoaringBitSet, which is based on StampedLock and RoaringBitmap, to optimize the memory usage and GC pauses of BitSet.
However, there is a concurrency issue in ConcurrentRoaringBitSet: it throws NPE when calling ConcurrentRoaringBitSet#get and ConcurrentRoaringBitSet#set from multiple threads. The situation is similar to https://github.com/apache/pulsar/issues/18388. See: RoaringBitmap#add, RoaringBitmap#get.
Modifications
- ConcurrentBitSet
- Rename ConcurrentOpenLongPairRangeSet to OpenLongPairRangeSet and mark it as NotThreadSafe.
- Access ManagedCursorImpl#individualDeletedMessages in ReadWriteLock scope.