pqab opened 7 months ago
@semistone could you share the code to reproduce the issue?
Our test batch-receives up to 1000 events, then processes those 1000 events concurrently and in parallel:
Flux.fromIterable(events)
    .parallel()
    .runOn(Schedulers.fromExecutor(this.executorService))
    .flatMap(event -> {
        // handle event and ack
    })
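For context, events here would come from a batch receive; a minimal sketch, assuming the consumer was built with a BatchReceivePolicy capped at 1000 messages (topic and subscription names are placeholders):

// Consumer configured to hand back at most 1000 messages per batch
Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("sub")
        .batchReceivePolicy(BatchReceivePolicy.builder()
                .maxNumMessages(1000)
                .build())
        .subscribe();

// Messages implements Iterable<Message<byte[]>>, so it can feed Flux.fromIterable
Messages<byte[]> events = consumer.batchReceive();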
Originally we used acknowledgeAsync, and it seemed to have an issue:
ackMono = Mono.fromCompletionStage(() -> consumer.acknowledgeAsync(message))
        .doOnSuccess(v -> ackCount.incrementAndGet());
so we replaced it with:
@SneakyThrows
private void acknowledge(Message<?> message) {
    ackLock.lock();
    try {
        consumer.acknowledge(message);
        ackCount.incrementAndGet();
    } finally {
        ackLock.unlock();
    }
}
Mono<Void> runnable = Mono.fromRunnable(() -> this.acknowledge(message));
ackMono = runnable.subscribeOn(Schedulers.fromExecutor(this.executorService));
which forces all acknowledgements to be serialized through a ReentrantLock; with that change it seemed to work.
I could try to write test code later if needed.
Mono.fromCompletionStage(() -> consumer.acknowledgeAsync(message))

A Mono doesn't do anything unless it is subscribed. It would be useful to have a simple Java class or test case that runs the logic that you are using.
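To illustrate the laziness, a minimal sketch reusing the consumer, message, and ackCount names from the snippet above:

// The supplier passed to fromCompletionStage is deferred:
// acknowledgeAsync is NOT called when the Mono is assembled here...
Mono<Void> ackMono = Mono.fromCompletionStage(() -> consumer.acknowledgeAsync(message))
        .doOnSuccess(v -> ackCount.incrementAndGet());

// ...it only runs once the Mono is subscribed.
ackMono.subscribe();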
I'd recommend using pulsar-client-reactive with Project Reactor and other Reactive Streams implementations. It contains a proper solution for acknowledgements.
Acknowledgement / Negative Acknowledgement is handled as a value (instead of a side-effect):
example: https://github.com/apache/pulsar-client-reactive/tree/main?tab=readme-ov-file#consuming-messages
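For reference, a rough sketch along the lines of that README example; the class names below (AdaptedReactivePulsarClientFactory, ReactiveMessageConsumer, MessageResult, from the org.apache.pulsar.reactive.client packages) reflect my reading of the library and may differ by version:

ReactivePulsarClient reactiveClient = AdaptedReactivePulsarClientFactory.create(pulsarClient);

ReactiveMessageConsumer<String> messageConsumer = reactiveClient
        .messageConsumer(Schema.STRING)
        .topic("my-topic")
        .subscriptionName("sub")
        .build();

// The acknowledgement is the handler's return value, not a side-effect
messageConsumer.consumeMany(messageFlux ->
                messageFlux.map(message ->
                        MessageResult.acknowledge(message.getMessageId(), message.getValue())))
        .subscribe();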
We published around 1M messages and were able to reproduce with this code: https://github.com/apache/pulsar/compare/v3.1.2...semistone:pulsar:test/flux-test
bin/pulsar-perf consume persistent://tenant1/namespace1/topic1 --auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationTls --auth-params '{"tlsCertFile":"conf/superuser.cer","tlsKeyFile":"conf/superuser.key.pem"}' --test-reactor -sp Earliest -st Key_Shared -ss sub1
The unacked message count keeps increasing and the available permits go negative, which means the consumer can't poll any more events unless we restart it to trigger redelivery.
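For diagnosis, one way to watch those counters is the Java admin client; a sketch, assuming a PulsarAdmin instance named admin and the topic/subscription from the command above:

TopicStats stats = admin.topics().getStats("persistent://tenant1/namespace1/topic1");
SubscriptionStats sub = stats.getSubscriptions().get("sub1");
System.out.println("unackedMessages: " + sub.getUnackedMessages());
for (ConsumerStats c : sub.getConsumers()) {
    // a negative value here means the consumer can no longer poll
    System.out.println("availablePermits: " + c.getAvailablePermits());
}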
3.0.0
Is there a chance to use 3.0.2? A lot of bugs have been fixed in 3.0.1 and 3.0.2. This applies to both the broker and the client.
We published around 1M messages and were able to reproduce with this code: v3.1.2...semistone:pulsar:test/flux-test
Thanks for sharing the repro app. I'll give it a try soon. One question about the repro: you have -st Key_Shared. Does the problem reproduce with the Shared subscription type?
Yes, I ran it again with the Shared subscription type and it also happens; I think the subscription type doesn't matter.
The original report was from our application using the 3.0.0 client with a 3.1.2 broker; we are going to upgrade the client to 3.0.2 for now. The reproduce code above was run from the server, so both client and broker are 3.1.2.
This is possibly related to #22601 / #22810.
We tested again and it still happens, but we found that by default the Pulsar client doesn't wait for the ack response; it returns an already-completed CompletableFuture directly.
So we turned on https://pulsar.apache.org/api/client/3.0.x/org/apache/pulsar/client/api/ConsumerBuilder.html#isAckReceiptEnabled(boolean)
When that option is enabled, the tracker takes a read lock (https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/PersistentAcknowledgmentsGroupingTracker.java#L261), and the concurrency issue disappears.
If I test without that option, the client doesn't wait for the acknowledgements, so my performance test looks like it has a memory leak because too many currentIndividualAckFuture instances queue up.
So maybe the concurrency issue is still there, or maybe too many currentIndividualAckFuture objects simply pile up.
At least enabling isAckReceiptEnabled fixes this issue for us; and I'd guess that always taking that read lock, with or without isAckReceiptEnabled, would fix it as well.
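For anyone else hitting this, a minimal sketch of enabling the option (topic and subscription names are placeholders):

// Wait for the broker's ack receipt before completing the
// CompletableFuture returned by acknowledgeAsync
Consumer<byte[]> consumer = pulsarClient.newConsumer()
        .topic("my-topic")
        .subscriptionName("sub")
        .isAckReceiptEnabled(true)
        .subscribe();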
msgBacklog & unacked messages should be 0 for both the acknowledge & acknowledgeAsync subscriptions
Regarding unacked message counts, #22657 is possibly related; see https://github.com/apache/pulsar/issues/22657#issuecomment-2150533309
There has been an ack issue with batch index acknowledgements, #22353. That must be a different issue.
I made an attempt to reproduce this using the Pulsar client directly. The problem didn't reproduce with https://github.com/lhotari/pulsar-playground/blob/master/src/main/java/com/github/lhotari/pulsar/playground/TestScenarioAckIssue.java (the test code is fairly complex because of the counters used to validate behavior and my attempts to increase the chances of race conditions).
I'll attempt to reproduce with the provided changes to pulsar-perf.
We published around 1M messages and were able to reproduce with this code: v3.1.2...semistone:pulsar:test/flux-test
I rebased it over master in https://github.com/apache/pulsar/compare/master...lhotari:pulsar:lh-issue21958-flux-test
I'm not able to reproduce.
Compiling the Pulsar branch with the rebased flux-test patch for pulsar-perf:
git clone --depth 1 -b lh-issue21958-flux-test https://github.com/lhotari/pulsar
cd pulsar
mvn -Pcore-modules,-main -T 1C clean install -DskipTests -Dspotbugs.skip=true -Dcheckstyle.skip=true -Dlicense.skip=true -DnarPluginPhase=none
Running Pulsar:
rm -rf data
PULSAR_STANDALONE_USE_ZOOKEEPER=1 bin/pulsar standalone -nss -nfw 2>&1 | tee standalone.log
Running the consumer:
bin/pulsar-perf consume test --test-reactor -sp Earliest -st Key_Shared -ss sub1
Running the producer:
bin/pulsar-perf produce test -mk random -r 50000
@semistone @pqab Are you able to reproduce with the master branch version of Pulsar? How about other branches/releases? Is this issue resolved?
One possible variation to this scenario would be to test together with topic unloading events.
Related issue #22709
Let me check tomorrow.
@semistone thanks, that would be helpful. If it reproduces only within a cluster with multiple nodes and other traffic, that could mean a load-balancing event is triggering the problem. Currently, in-flight acknowledgements can get lost when this happens. Usually this gets recovered, but it's possible there's a race condition where the acknowledgements get lost and the message doesn't get redelivered during the unload/reconnection event triggered by load balancing. It should also be possible to simulate this scenario by triggering topic unloads with the admin API, as sketched below.
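A rough sketch of triggering an unload through the Java admin client (service URL and topic are placeholders; unload throws PulsarAdminException, so call it from a method that handles it):

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")
        .build();
// Unloading the topic forces consumers to reconnect, similar to what a
// load-balancing event would do
admin.topics().unload("persistent://public/default/my-topic");
admin.close();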
Because our cluster is already in production, I can't test it there; instead I tested apache-pulsar-3.3.1 standalone on my local machine.
It seems more difficult to reproduce this issue compared to when we first reported it, but after retrying many times I could still see something strange. My steps are:
bin/pulsar-admin topics unsubscribe persistent://public/default/my-topic -s=sub
bin/pulsar-perf consume my-topic --test-reactor -sp Earliest -st Key_Shared -ss sub -n 20
After retrying many times, I saw the consumer stop only once, like this:
2024-09-06T17:33:01,433+0900 [main] INFO org.apache.pulsar.testclient.PerformanceConsumer - Throughput received: 1045 msg --- 0.000 msg/s --- 0.000 Mbit/s --- Latency: mean: 0.000 ms - med: 0 - 95pct: 0 - 99pct: 0 - 99.9pct: 0 - 99.99pct: 0 - Max: 0
But I could still see a backlog, like "msgBacklog" : 996329; unlike before, when unackedMessages was non-zero, this time unackedMessages is 0.
I also saw it once stop consuming for about 1 minute and then recover; and if we push more messages, it continues to consume.
I will try to test the master branch and double-check for any mistakes in my testing later.
@semistone there are multiple issues contributing to this, here's one update: https://github.com/apache/pulsar/issues/22709#issuecomment-2335104724
I have created a proposal, "PIP-377: Automatic retry for failed acknowledgements": https://github.com/apache/pulsar/pull/23267 (rendered doc). Discussion thread: https://lists.apache.org/thread/7sg7hfv9dyxto36dr8kotghtksy1j0kr
Search before asking

Version

3.0.0

Minimal reproduce step

1. Publish 600k messages
2. Start 2 consumers with different subscription names, subscribing from Earliest: one with async ack, the other with sync ack (see the sketch below)
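A minimal sketch of the two ack styles (client setup, topic, and subscription names are illustrative):

Consumer<byte[]> syncConsumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("sub-sync")
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .subscribe();

Consumer<byte[]> asyncConsumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("sub-async")
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .subscribe();

Message<byte[]> msg = syncConsumer.receive();
syncConsumer.acknowledge(msg);              // blocking ack

Message<byte[]> msg2 = asyncConsumer.receive();
asyncConsumer.acknowledgeAsync(msg2)        // non-blocking ack
        .thenRun(() -> System.out.println("ack callback received"));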
What did you expect to see?

msgBacklog & unacked messages should be 0 for both the acknowledge & acknowledgeAsync subscriptions.

What did you see instead?

There are a few messages left in the backlog & unacked messages even though we received the ack callback when using acknowledgeAsync; acknowledge is working fine. Topic stats for the acknowledgeAsync subscription are attached for reference.

Anything else?

We ran it multiple times, and every time a few backlog & unacked messages are left for the acknowledgeAsync subscription.

Are you willing to submit a PR?