Pull Request description

During TBMQ shutdown, last will messages may still be scheduled for delivery after the scheduler has already been stopped, resulting in:
java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4c7825d8
[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@25380416
[Wrapped task = org.thingsboard.mqtt.broker.service.mqtt.will.DefaultLastWillService$$Lambda$3299/0x00007fc82926e418@7b067819]]
rejected from java.util.concurrent.ScheduledThreadPoolExecutor@47e838a3
[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 16]
at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
at java.base/java.util.concurrent.Executors$DelegatedScheduledExecutorService.schedule(Executors.java:813)
at org.thingsboard.mqtt.broker.service.mqtt.will.DefaultLastWillService.scheduleLastWill(DefaultLastWillService.java:121)
at org.thingsboard.mqtt.broker.service.mqtt.will.DefaultLastWillService.removeAndExecuteLastWillIfNeeded(DefaultLastWillService.java:108)
at org.thingsboard.mqtt.broker.actors.client.service.disconnect.DisconnectServiceImpl.clearClientSession(DisconnectServiceImpl.java:163)
at org.thingsboard.mqtt.broker.actors.client.service.disconnect.DisconnectServiceImpl.disconnect(DisconnectServiceImpl.java:91)
at org.thingsboard.mqtt.broker.actors.client.service.ActorProcessorImpl.disconnect(ActorProcessorImpl.java:336)
at org.thingsboard.mqtt.broker.actors.client.service.ActorProcessorImpl.onDisconnect(ActorProcessorImpl.java:331)
at org.thingsboard.mqtt.broker.actors.client.ClientActor.doProcess(ClientActor.java:129)
at org.thingsboard.mqtt.broker.actors.service.ContextAwareActor.process(ContextAwareActor.java:44)
at org.thingsboard.mqtt.broker.actors.TbActorMailbox.processMailbox(TbActorMailbox.java:145)
at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1395)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
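Conceptually, the fix is to make last-will scheduling tolerant of the executor being stopped, either by checking its state or by catching the rejection instead of letting it propagate out of the disconnect path. Below is a minimal sketch of such a guard, assuming a hypothetical LastWillScheduler wrapper; the class, field, and method names are illustrative and not the actual DefaultLastWillService code.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LastWillScheduler {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /**
     * Schedules a last-will publish while tolerating the race with broker shutdown.
     * If the executor has already been stopped, the task is skipped instead of
     * propagating a RejectedExecutionException up the client disconnect path.
     */
    public void scheduleLastWill(Runnable publishTask, long delayMs) {
        if (scheduler.isShutdown()) {
            // Broker is shutting down; delivering the last will is no longer possible.
            return;
        }
        try {
            scheduler.schedule(publishTask, delayMs, TimeUnit.MILLISECONDS);
        } catch (RejectedExecutionException e) {
            // The executor may terminate between the isShutdown() check and schedule();
            // swallow the rejection so disconnect handling is not disrupted.
        }
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

The double guard (the isShutdown() check plus catching RejectedExecutionException) covers the window in which the executor terminates between the check and the schedule() call, which is the race visible in the stack trace above.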