richardwilly98 / elasticsearch-river-mongodb

MongoDB River Plugin for ElasticSearch

cannot start river after mapper exception #498

Closed fatihpolatli closed 9 years ago

fatihpolatli commented 9 years ago

Hi,

I installed the river and it imported all the data, but then it stopped. Looking at the log, I saw a NumberFormatException thrown in the mapper.

Since then, whenever I try to start ES, the MongoDB river fails to start. Here is the log:

[2015-03-24 11:34:59,323][INFO ][node ] [Wysper] version[1.4.2], pid[3088], build[927caff/2014-12-16T14:11:12Z]
[2015-03-24 11:34:59,325][INFO ][node ] [Wysper] initializing ...
[2015-03-24 11:34:59,454][INFO ][plugins ] [Wysper] loaded [mongodb-river, mapper-attachments], sites [river-mongodb]
[2015-03-24 11:35:04,944][INFO ][node ] [Wysper] initialized
[2015-03-24 11:35:04,945][INFO ][node ] [Wysper] starting ...
[2015-03-24 11:35:07,254][INFO ][transport ] [Wysper] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.249:9300]}
[2015-03-24 11:35:13,656][INFO ][discovery ] [Wysper] elasticsearch/WT0aXga-RnmwyaCG9yw_kg
[2015-03-24 11:35:17,448][INFO ][cluster.service ] [Wysper] new_master [Wysper][WT0aXga-RnmwyaCG9yw_kg][MTRCW077][inet[/192.168.1.249:9300]], reason: zen-disco-join (elected_as_master)
[2015-03-24 11:35:17,561][INFO ][gateway ] [Wysper] recovered [0] indices into cluster_state
[2015-03-24 11:35:20,662][INFO ][http ] [Wysper] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.249:9200]}
[2015-03-24 11:35:20,664][INFO ][node ] [Wysper] started
[2015-03-24 11:35:26,758][INFO ][cluster.metadata ] [Wysper] [_river] creating index, cause [auto(index api)], shards [1]/[1], mappings [mongodb]
[2015-03-24 11:35:27,439][INFO ][cluster.metadata ] [Wysper] [_river] update_mapping mongodb
[2015-03-24 11:35:28,443][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] MongoDB River Plugin - version[2.0.7] - hash[92a76fb] - time[2015-03-23T12:35:39Z]
[2015-03-24 11:35:28,449][INFO ][river.mongodb.util ] setRiverStatus called with mongodb - RUNNING
[2015-03-24 11:35:28,458][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] River mongodb startup pending
[2015-03-24 11:35:28,463][INFO ][cluster.metadata ] [Wysper] [_river] update_mapping mongodb
[2015-03-24 11:35:28,475][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] Starting river mongodb
[2015-03-24 11:35:28,477][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] MongoDB options: secondaryreadpreference [false], drop_collection [false], include_collection [], throttlesize [5000], gridfs [false], filter [null], db [test], collection [users], script [null], indexing to [mongotest]/[users]
[2015-03-24 11:35:28,627][INFO ][cluster.metadata ] [Wysper] [mongotest] creating index, cause [api], shards [5]/[1], mappings []
[2015-03-24 11:35:28,990][INFO ][river.mongodb ] [Wysper] Creating MongoClient for [[127.0.0.1:27017]]
[2015-03-24 11:35:29,464][INFO ][cluster.metadata ] [Wysper] [_river] update_mapping mongodb
[2015-03-24 11:35:30,667][INFO ][org.elasticsearch.river.mongodb.MongoConfigProvider] MongoDB version - 3.0.0
[2015-03-24 11:35:30,710][INFO ][org.elasticsearch.river.mongodb.CollectionSlurper] MongoDBRiver is beginning initial import of test.users
[2015-03-24 11:35:30,741][INFO ][org.elasticsearch.river.mongodb.CollectionSlurper] Number of documents indexed in initial import of test.users: 17
[2015-03-24 11:35:30,795][DEBUG][action.bulk ] [Wysper] [mongotest][1] failed to execute bulk item (index) index {[mongotest][users][550fc2c9fb9f955aeb18ae05], source[{"phone":"asdasd","_id":"550fc2c9fb9f955aeb18ae05","email":"asdasdas","age":"asdas","name":"asdasda","lastname":"asdada"}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [age]
    at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:415)
    at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:707)
    at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:500)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:541)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:490)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:413)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:435)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:150)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:511)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: For input string: "asdas"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Long.parseLong(Long.java:441)
    at java.lang.Long.parseLong(Long.java:483)
    at org.elasticsearch.common.xcontent.support.AbstractXContentParser.longValue(AbstractXContentParser.java:145)
    at org.elasticsearch.index.mapper.core.LongFieldMapper.innerParseCreateField(LongFieldMapper.java:300)
    at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:235)
    at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:405)
    ... 12 more
[2015-03-24 11:35:30,807][INFO ][cluster.metadata ] [Wysper] [mongotest] update_mapping users
[2015-03-24 11:35:30,799][DEBUG][action.bulk ] [Wysper] [mongotest][0] failed to execute bulk item (index) index {[mongotest][users][550c4056fb9f955aeb18ae03], source[{"phone":"asdasd","_id":"550c4056fb9f955aeb18ae03","email":"asdasdas","age":"asdas","name":"asdasda","lastname":"asdada"}]}
    [identical MapperParsingException / NumberFormatException stack trace as above]
[2015-03-24 11:35:30,831][DEBUG][action.bulk ] [Wysper] [mongotest][0] failed to execute bulk item (index) index {[mongotest][users][550fc24bfb9f955aeb18ae04], source[{"phone":"asdasd","_id":"550fc24bfb9f955aeb18ae04","email":"asdasdas","age":"asdas","name":"asdasda","lastname":"asdada"}]}
    [identical stack trace as above]
[2015-03-24 11:35:30,796][DEBUG][action.bulk ] [Wysper] [mongotest][3] failed to execute bulk item (index) index {[mongotest][users][550fc3ddfb9f955aeb18ae06], source[{"phone":"asdasd","_id":"550fc3ddfb9f955aeb18ae06","email":"asdasdas","age":"asdas","name":"asdasda","lastname":"asdada"}]}
    [identical stack trace as above]
[2015-03-24 11:35:30,853][ERROR][org.elasticsearch.river.mongodb.MongoDBRiverBulkProcessor] Bulk processor failed.
failure in bulk execution:
[8]: index [mongotest], type [users], id [550c4056fb9f955aeb18ae03], message [MapperParsingException[failed to parse [age]]; nested: NumberFormatException[For input string: "asdas"]; ]
[9]: index [mongotest], type [users], id [550fc24bfb9f955aeb18ae04], message [MapperParsingException[failed to parse [age]]; nested: NumberFormatException[For input string: "asdas"]; ]
[10]: index [mongotest], type [users], id [550fc2c9fb9f955aeb18ae05], message [MapperParsingException[failed to parse [age]]; nested: NumberFormatException[For input string: "asdas"]; ]
[11]: index [mongotest], type [users], id [550fc3ddfb9f955aeb18ae06], message [MapperParsingException[failed to parse [age]]; nested: NumberFormatException[For input string: "asdas"]; ]
[2015-03-24 11:35:30,856][INFO ][river.mongodb.util ] setRiverStatus called with mongodb - IMPORT_FAILED
[2015-03-24 11:35:30,863][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] Closing river mongodb
[2015-03-24 11:35:30,864][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] Stopping river mongodb
[2015-03-24 11:35:30,865][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] Stopped river mongodb
[2015-03-24 11:35:30,865][INFO ][org.elasticsearch.river.mongodb.Indexer] river-mongodb indexer interrupted
[2015-03-24 11:35:30,865][INFO ][org.elasticsearch.river.mongodb.CollectionSlurper] river-mongodb slurper interrupted
[2015-03-24 11:35:30,867][INFO ][river.mongodb ] [Wysper] Creating MongoClient for [[MTRCW077:27017]]
[2015-03-24 11:35:30,879][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] Started river mongodb
[2015-03-24 11:35:30,880][INFO ][org.elasticsearch.river.mongodb.OplogSlurper] Slurper is stopping. River has status STOPPED
[2015-03-24 11:35:30,888][INFO ][cluster.metadata ] [Wysper] [_river] update_mapping mongodb
[2015-03-24 11:38:01,514][INFO ][node ] [Fearmaster] version[1.4.2], pid[6032], build[927caff/2014-12-16T14:11:12Z]
[2015-03-24 11:38:01,516][INFO ][node ] [Fearmaster] initializing ...
[2015-03-24 11:38:01,636][INFO ][plugins ] [Fearmaster] loaded [mongodb-river, mapper-attachments], sites [river-mongodb]
[2015-03-24 11:38:06,766][INFO ][node ] [Fearmaster] initialized
[2015-03-24 11:38:06,766][INFO ][node ] [Fearmaster] starting ...
[2015-03-24 11:38:10,329][INFO ][transport ] [Fearmaster] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.249:9300]}
[2015-03-24 11:38:16,640][INFO ][discovery ] [Fearmaster] elasticsearch/CPpDSndnShqW4Xht19nuyA
[2015-03-24 11:38:20,428][INFO ][cluster.service ] [Fearmaster] new_master [Fearmaster][CPpDSndnShqW4Xht19nuyA][MTRCW077][inet[/192.168.1.249:9300]], reason: zen-disco-join (elected_as_master)
[2015-03-24 11:38:21,457][INFO ][http ] [Fearmaster] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.249:9200]}
[2015-03-24 11:38:21,459][INFO ][node ] [Fearmaster] started
[2015-03-24 11:38:21,886][INFO ][gateway ] [Fearmaster] recovered [2] indices into cluster_state
[2015-03-24 11:38:23,155][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] MongoDB River Plugin - version[2.0.7] - hash[92a76fb] - time[2015-03-23T12:35:39Z]
[2015-03-24 11:38:23,161][ERROR][org.elasticsearch.river.mongodb.MongoDBRiver] Cannot start river mongodb. Current status is IMPORT_FAILED
[2015-03-24 11:38:46,031][INFO ][node ] [Time Bomb] version[1.4.2], pid[244], build[927caff/2014-12-16T14:11:12Z]
[2015-03-24 11:38:46,033][INFO ][node ] [Time Bomb] initializing ...
[2015-03-24 11:38:46,166][INFO ][plugins ] [Time Bomb] loaded [mongodb-river, mapper-attachments], sites [river-mongodb]
[2015-03-24 11:38:51,935][INFO ][node ] [Time Bomb] initialized
[2015-03-24 11:38:51,936][INFO ][node ] [Time Bomb] starting ...
[2015-03-24 11:38:53,315][INFO ][transport ] [Time Bomb] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.249:9300]}
[2015-03-24 11:38:55,610][INFO ][discovery ] [Time Bomb] elasticsearch/RNMP8wSBRxib5ay6VmeymQ
[2015-03-24 11:38:59,397][INFO ][cluster.service ] [Time Bomb] new_master [Time Bomb][RNMP8wSBRxib5ay6VmeymQ][MTRCW077][inet[/192.168.1.249:9300]], reason: zen-disco-join (elected_as_master)
[2015-03-24 11:39:00,486][INFO ][http ] [Time Bomb] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.249:9200]}
[2015-03-24 11:39:00,487][INFO ][node ] [Time Bomb] started
[2015-03-24 11:39:00,780][INFO ][gateway ] [Time Bomb] recovered [2] indices into cluster_state
[2015-03-24 11:39:01,643][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] MongoDB River Plugin - version[2.0.7] - hash[92a76fb] - time[2015-03-23T12:35:39Z]
[2015-03-24 11:39:01,649][ERROR][org.elasticsearch.river.mongodb.MongoDBRiver] Cannot start river mongodb. Current status is IMPORT_FAILED
[2015-03-24 11:42:27,620][INFO ][node ] [Royal Roy] version[1.4.2], pid[2716], build[927caff/2014-12-16T14:11:12Z]
[2015-03-24 11:42:27,622][INFO ][node ] [Royal Roy] initializing ...
[2015-03-24 11:42:27,758][INFO ][plugins ] [Royal Roy] loaded [mongodb-river, mapper-attachments], sites [river-mongodb]
[2015-03-24 11:42:33,130][INFO ][node ] [Royal Roy] initialized
[2015-03-24 11:42:33,131][INFO ][node ] [Royal Roy] starting ...
[2015-03-24 11:42:36,541][INFO ][transport ] [Royal Roy] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.249:9300]}
[2015-03-24 11:42:40,934][INFO ][discovery ] [Royal Roy] elasticsearch/A57olq63SyqgWV2LwWRmpg
[2015-03-24 11:42:44,721][INFO ][cluster.service ] [Royal Roy] new_master [Royal Roy][A57olq63SyqgWV2LwWRmpg][MTRCW077][inet[/192.168.1.249:9300]], reason: zen-disco-join (elected_as_master)
[2015-03-24 11:42:45,794][INFO ][http ] [Royal Roy] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.249:9200]}
[2015-03-24 11:42:45,795][INFO ][node ] [Royal Roy] started
[2015-03-24 11:42:46,147][INFO ][gateway ] [Royal Roy] recovered [2] indices into cluster_state
[2015-03-24 11:42:47,073][INFO ][org.elasticsearch.river.mongodb.MongoDBRiver] MongoDB River Plugin - version[2.0.7] - hash[92a76fb] - time[2015-03-23T12:35:39Z]
[2015-03-24 11:42:47,079][ERROR][org.elasticsearch.river.mongodb.MongoDBRiver] Cannot start river mongodb. Current status is IMPORT_FAILED

ewgRa commented 9 years ago

The river can't import the records from MongoDB.

MongoDB is schemaless, which means that, for example, record1.field can hold "1.99" while record2.field holds "stringvalue".

When the river imports record1, Elasticsearch maps "field" as a number type. When it then tries record2, "field" contains a string, which the river doesn't know how to index as a number.

Possible workarounds: explicitly map "field" as a string type, or use the "script" configuration option and transform the string values into numbers in the script.
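For the first workaround, the target index can be created up front with an explicit mapping, so Elasticsearch never dynamically guesses a numeric type for "age". A sketch in ES 1.x mapping syntax, using the index and type names visible in the log above (mongotest/users); create the index before starting the river:

    PUT /mongotest
    {
      "mappings": {
        "users": {
          "properties": {
            "age": { "type": "string" }
          }
        }
      }
    }

With this mapping in place, every "age" value is indexed as a string, so documents with mixed types no longer trigger a NumberFormatException during the initial import.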

fatihpolatli commented 9 years ago

I guess this is because of the mapper, but maybe it shouldn't be strict on the field type. What do you think? Am I wrong?

ewgRa commented 9 years ago

"maybe it shouldnt be sctrict on field type" - this is how Elasticsearch works, for each field it have type. You can check index metadata and will see something like this:

"mappings": {
  "notification": {
     "properties": {
         "price": {
            "type": "long"
         },

If you try to save a string value in price, Elasticsearch will give you an error. This is not about the river; this is how ES works.
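The conflict is easy to reproduce without the river at all. A sketch against a local ES 1.x node, using a hypothetical index name test_idx (not from this thread):

    # first document: dynamic mapping infers "price" as a long
    curl -XPUT 'http://localhost:9200/test_idx/notification/1' -d '{"price": 199}'

    # second document: rejected with MapperParsingException,
    # caused by NumberFormatException, just like in the river log
    curl -XPUT 'http://localhost:9200/test_idx/notification/2' -d '{"price": "stringvalue"}'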

fatihpolatli commented 9 years ago

Got it, thanks... ;)

ewgRa commented 9 years ago

@fatihpolatli please close the issue if the problem is solved.
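One thing worth adding for anyone who hits the stuck state in the log above: fixing the mapping alone is not enough, because the river refuses to start while its stored status is IMPORT_FAILED. A common recovery (assumption: the river definition can simply be recreated with its original settings) is to delete the river and register it again:

    # remove the failed river definition (and its stored status)
    curl -XDELETE 'http://localhost:9200/_river/mongodb/'

    # re-create the river; body sketched from the settings visible in the log
    curl -XPUT 'http://localhost:9200/_river/mongodb/_meta' -d '{
      "type": "mongodb",
      "mongodb": { "db": "test", "collection": "users" },
      "index": { "name": "mongotest", "type": "users" }
    }'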