Hello,

When I start a river, the initial import fails after importing about 30% of the documents. It shows a cursor timeout error and then starts syncing from the oplog. This is the error it reports:
[2015-10-12 23:16:15,657][ERROR][org.elasticsearch.river.mongodb.CollectionSlurper] Exception while looping in cursor
com.mongodb.MongoException: getMore: cursor didn't exist on server, possible restart or timeout?
at com.mongodb.QueryResultIterator.throwOnQueryFailure(QueryResultIterator.java:246)
at com.mongodb.QueryResultIterator.init(QueryResultIterator.java:224)
at com.mongodb.QueryResultIterator.initFromQueryResponse(QueryResultIterator.java:184)
at com.mongodb.QueryResultIterator.getMore(QueryResultIterator.java:149)
at com.mongodb.QueryResultIterator.hasNext(QueryResultIterator.java:135)
at com.mongodb.DBCursor._hasNext(DBCursor.java:626)
at com.mongodb.DBCursor.hasNext(DBCursor.java:657)
at org.elasticsearch.river.mongodb.CollectionSlurper.importCollection(CollectionSlurper.java:140)
at org.elasticsearch.river.mongodb.CollectionSlurper.importInitial(CollectionSlurper.java:77)
at org.elasticsearch.river.mongodb.MongoDBRiver$1.run(MongoDBRiver.java:305)
at java.lang.Thread.run(Thread.java:745)
This way I still get changes from the oplog, but most of the documents are missing from Elasticsearch. Is there a way to specify the cursor timeout during the initial import, so that I can extend the cursor's lifetime? Or is there another solution to this?
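For what it's worth, one workaround I have been considering (this is my own assumption, not something the river documents as a setting): the "cursor didn't exist on server" error usually means MongoDB's server-side idle cursor timeout (10 minutes by default) reaped the cursor, so it might be possible to raise that limit globally on the mongod instead of configuring anything in the river:

```shell
# Assumption: the failure is MongoDB's server-side idle cursor timeout.
# Raise it from the default 10 minutes to 1 hour (value in milliseconds).
# Requires MongoDB 2.6+ and admin privileges; affects all cursors on the server.
mongo admin --eval 'db.adminCommand({ setParameter: 1, cursorTimeoutMillis: 3600000 })'
```

The cleaner fix would be for the river to open its initial-import cursor with the legacy driver's `Bytes.QUERYOPTION_NOTIMEOUT` option, but as far as I can tell that is not exposed as a river configuration option, so the server-side parameter is the only knob available from the outside.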