Add `.batch_size(batch_size)` to `#__find_in_batches` (Mongoid). Fixes #1037.
Although `.each_slice(batch_size)` is useful in order to limit how many documents are sent to Elasticsearch at a time, it does not limit the batch size of MongoDB's `getMore` commands. By default, iterating over a MongoDB collection will first return 101 documents, and then subsequent batches of 16 MiB:
https://www.mongodb.com/docs/manual/tutorial/iterate-a-cursor/#cursor-batches
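The distinction can be seen with plain Ruby, no MongoDB involved; `each_slice` only groups whatever the underlying enumerable yields:

```ruby
# each_slice groups an enumerable into fixed-size batches: it bounds how many
# documents each Elasticsearch bulk request receives, but says nothing about
# how the upstream cursor fetches its data from MongoDB.
docs = (1..10).to_a            # stand-in for documents yielded by a cursor
batches = docs.each_slice(3).to_a
# batches => [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
```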
For example, a MongoDB collection containing documents averaging 1 KiB might return more than 16,000 documents at a time. Although Mongoid's documentation claims a default batch size of 1,000 documents, that does not appear to be the case. Also, Mongoid's `.no_timeout` is currently broken and does nothing: https://github.com/mongodb/mongo-ruby-driver/pull/2557
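That estimate is simple arithmetic on the 16 MiB batch limit:

```ruby
# A 16 MiB getMore batch filled with documents averaging 1 KiB each.
batch_bytes    = 16 * 1024 * 1024
avg_doc_bytes  = 1024
docs_per_batch = batch_bytes / avg_doc_bytes
# docs_per_batch => 16384
```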
It is therefore likely that more than 10 minutes pass between two `getMore` commands and that the MongoDB cursor expires. Adding `.batch_size(batch_size)` to the query makes sure that MongoDB documents are retrieved at the same rate as they are processed and indexed in Elasticsearch, and allows applications affected by the `.no_timeout` issue to reduce the batch size to avoid cursor timeouts.
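A minimal sketch of the idea (not the actual adapter code; `FakeScope` is a hypothetical stand-in for a Mongoid criteria, included only so the example runs without a database):

```ruby
# Hypothetical stand-in for a Mongoid criteria: it records the batch size
# passed to it and yields its documents, mimicking a driver cursor.
class FakeScope
  include Enumerable
  attr_reader :requested_batch_size

  def initialize(docs)
    @docs = docs
  end

  # Mongoid criteria expose a chainable #batch_size like this.
  def batch_size(n)
    @requested_batch_size = n
    self
  end

  def each(&block)
    @docs.each(&block)
  end
end

# Sketch of the adapter loop: passing batch_size to the query keeps MongoDB's
# getMore batches in step with the slices sent to Elasticsearch.
def find_in_batches(scope, batch_size:)
  scope.batch_size(batch_size).each_slice(batch_size) do |batch|
    yield batch
  end
end

scope = FakeScope.new((1..7).to_a)
sizes = []
find_in_batches(scope, batch_size: 3) { |batch| sizes << batch.size }
# sizes => [3, 3, 1]; scope.requested_batch_size => 3
```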