- uses AspectJ to trace `io.vertx.ext.mongo.impl.PublisherAdapter#handleIn` and `io.vertx.ext.mongo.impl.PublisherAdapter#requestMore` method calls
- uses `java.lang.reflect` tricks to trace capacity changes of the underlying `io.vertx.core.streams.impl.InboundBuffer` of the `io.vertx.ext.mongo.impl.PublisherAdapter`
- uses AspectJ to detect when `drain()` occurs in the `InboundBuffer` of the `PublisherAdapter`
- uses AspectJ to detect when `pause()` or `resume()` occurs in the `PublisherAdapter`
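The reflection trick in the second bullet amounts to reading a private field of the buffer object. A minimal stand-alone sketch of that technique (the nested `Buffer` class here is a hypothetical stand-in for the real `InboundBuffer`, which lives in Vert.x core):

```java
import java.lang.reflect.Field;

public class ReflectPeek {

    // Hypothetical stand-in for io.vertx.core.streams.impl.InboundBuffer's
    // private internal state; the real class keeps its capacity private too.
    public static class Buffer {
        private long capacity = 16;
    }

    // Read a private field of any object by name, bypassing access checks.
    public static long peekCapacity(Object target, String fieldName) throws Exception {
        Field f = target.getClass().getDeclaredField(fieldName);
        f.setAccessible(true); // bypass 'private', as the reproducer's tracing does
        return f.getLong(target);
    }

    public static void main(String[] args) throws Exception {
        // prints "queue capacity changed: 16"
        System.out.println("queue capacity changed: " + peekCapacity(new Buffer(), "capacity"));
    }
}
```

Polling (or intercepting writes to) such a field is how the reproducer can log the `queue capacity changed: xyz` messages shown below.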
In a few seconds you can see `queue capacity changed: xyz` messages:
```
10500 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
queue capacity changed: 13528
10600 documents processed
10700 documents processed
10800 documents processed
10900 documents processed
11000 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
queue capacity changed: 20292
11100 documents processed
11200 documents processed
11300 documents processed
11400 documents processed
11500 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
queue capacity changed: 30438
11600 documents processed
11700 documents processed
11800 documents processed
11900 documents processed
12000 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
12100 documents processed
12200 documents processed
12300 documents processed
12400 documents processed
12500 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
12600 documents processed
12700 documents processed
12800 documents processed
12900 documents processed
13000 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
queue capacity changed: 45657
13100 documents processed
13200 documents processed
13300 documents processed
13400 documents processed
13500 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
13600 documents processed
13700 documents processed
13800 documents processed
13900 documents processed
14000 documents processed
14100 documents processed
writing bulk of docs with categories...
bulk written
[PublisherAdapter] InboundBuffer fetch drained!!
14200 documents processed
14300 documents processed
14400 documents processed
14500 documents processed
14600 documents processed
writing bulk of docs with categories...
```
until:
```
java.lang.OutOfMemoryError: Java heap space
	at io.vertx.core.impl.btc.BlockedThreadChecker$1.run (BlockedThreadChecker.java:55)
	at java.util.TimerThread.mainLoop (Timer.java:556)
	at java.util.TimerThread.run (Timer.java:506)
java.lang.OutOfMemoryError: Java heap space
```
More information in the readme.md of the reproducer.
I would expect the `InboundBuffer` not to grow (much) beyond `batchSize`.
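The growth pattern in the log above is what any unbounded queue does when items arrive faster than they are drained. A minimal stand-alone simulation of that mechanism (plain Java; this is an illustration of the failure mode, not the reproducer's actual Vert.x code):

```java
import java.util.ArrayDeque;

public class BacklogDemo {

    // Producer enqueues `fill` items per tick; consumer dequeues at most
    // `drain` per tick. Without back-pressure, backlog grows without bound
    // whenever fill > drain.
    public static int backlogAfter(int ticks, int fill, int drain) {
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int t = 0; t < ticks; t++) {
            for (int i = 0; i < fill; i++) queue.add(t);
            for (int i = 0; i < drain && !queue.isEmpty(); i++) queue.poll();
        }
        return queue.size();
    }

    public static void main(String[] args) {
        // 5 items in, 1 out per tick: net +4 per tick, so the backlog
        // grows linearly and never shrinks.
        System.out.println(backlogAfter(1000, 5, 1)); // prints 4000
    }
}
```

Proper back-pressure (pausing the producer until the consumer catches up, which is what `pause()`/`resume()` on the stream are for) keeps the backlog bounded near the batch size instead.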
Version

4.3.2

Context

The following code is very likely to create an `OutOfMemoryError`. Note that 512 elements do fit into memory without any problem.

Do you have a reproducer?

https://github.com/bfreuden/vertx-reproducers/tree/master/mongo-client-oome

Steps to reproduce

My setup:

Extra

The reproducer (described at the top of this report):