tarann closed this issue 8 years ago.
Hi @tarann ,
Thank you for reporting this issue! You are probably using the latest mongolastic. I haven't tested it with ES 2.4, but it seems this is due to: An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist.
I will update the pom and bump a new version soon.
Could you please use v1.3.11 in the meantime?
Hi @ozlerhakan, thank you for your help. It's better now: an index is created in ES, but it's empty: http://hpics.li/3abcf66. This is what I get with v1.3.11:
root@wheezy:/home/jp/Téléchargements# java -jar mongolastic.jar -f mongo_to_ES.yml
0 [main] INFO com.kodcu.config.FileConfiguration -
Config Output:
{elastic=Elastic{host='localhost', port=9300, clusterName=null, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='valueable_dev', name='valueable_dev'}, ctype=Namespace{as='product', name='product'}, dropDataset=true}, mongo=Mongo{host='localhost', port=27017, query='{}', auth=null}}
231 [main] INFO org.elasticsearch.plugins - [Hack] modules [], plugins [], sites []
265 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
287 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [percolate], type [fixed], size [1], queue_size [1k]
311 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [2], keep_alive [5m]
313 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
313 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [index], type [fixed], size [1], queue_size [200]
314 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
314 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [suggest], type [fixed], size [1], queue_size [1k]
314 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [generic], type [cached], keep_alive [30s]
319 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
324 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [search], type [fixed], size [2], queue_size [1k]
327 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
328 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [2], keep_alive [5m]
329 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
329 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [get], type [fixed], size [1], queue_size [1k]
330 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [bulk], type [fixed], size [1], queue_size [50]
330 [main] DEBUG org.elasticsearch.threadpool - [Hack] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
810 [main] DEBUG org.elasticsearch.common.network - configuration:
lo
inet 127.0.0.1 netmask:255.0.0.0 scope:host
inet6 ::1 prefixlen:128 scope:host
UP LOOPBACK mtu:16436 index:1
eth0
inet 10.0.2.15 netmask:255.255.255.0 broadcast:10.0.2.255 scope:site
inet6 fe80::a00:27ff:feb8:e83f prefixlen:64 scope:link
hardware 08:00:27:B8:E8:3F
UP MULTICAST mtu:1500 index:2
eth1
inet 192.168.56.102 netmask:255.255.255.0 broadcast:192.168.56.255 scope:site
inet6 fe80::a00:27ff:fe65:a25 prefixlen:64 scope:link
hardware 08:00:27:65:0A:25
UP MULTICAST mtu:1500 index:3
840 [main] DEBUG org.elasticsearch.common.netty - using gathering [true]
890 [main] DEBUG org.elasticsearch.client.transport - [Hack] node_sampler_interval[5s]
933 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Using select timeout of 500
934 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Epoll-bug workaround enabled = false
953 [main] DEBUG org.elasticsearch.client.transport - [Hack] adding address [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
989 [elasticsearch[Hack][management][T#1]] DEBUG org.elasticsearch.transport.netty - [Hack] connected to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1108 [main] DEBUG org.elasticsearch.transport.netty - [Hack] connected to node [{Gargoyle}{MWNYleM7TOeWyoCBB3VLnA}{127.0.0.1}{127.0.0.1:9300}]
1170 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
1171 [main] INFO org.mongodb.driver.cluster - Adding discovered server localhost:27017 to client view of cluster
1297 [main] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
1397 [main] INFO org.mongodb.driver.cluster - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
1405 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:1}] to localhost:27017
1405 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster - Checking status of localhost:27017
1406 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 0, 3]}, minWireVersion=0, maxWireVersion=3, maxDocumentSize=16777216, roundTripTimeNanos=693785}
1411 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Discovered cluster type of STANDALONE
1413 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=STANDALONE, servers=[{address=localhost:27017, type=STANDALONE, roundTripTime=0,7 ms, state=CONNECTED}]
1418 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:2}] to localhost:27017
1423 [main] DEBUG org.mongodb.driver.protocol.command - Sending command {count : BsonString{value='product'}} to database valueable_dev on connection [connectionId{localValue:2, serverValue:2}] to server localhost:27017
1434 [main] DEBUG org.mongodb.driver.protocol.command - Command execution completed
1435 [main] INFO com.kodcu.provider.MongoToElasticProvider - Mongo collection count: 6
1453 [main] DEBUG org.mongodb.driver.protocol.query - Sending query of namespace valueable_dev.product on connection [connectionId{localValue:2, serverValue:2}] to server localhost:27017
1985 [main] DEBUG org.mongodb.driver.protocol.query - Query completed
1996 [main] DEBUG org.mongodb.driver.protocol.getmore - Getting more documents from namespace valueable_dev.product with cursor 73685289658 on connection [connectionId{localValue:2, serverValue:2}] to server localhost:27017
2253 [main] DEBUG org.mongodb.driver.protocol.getmore - Get-more completed
2257 [main] INFO com.kodcu.service.ElasticBulkService - Transferring data began to elasticsearch.
2952 [main] DEBUG org.elasticsearch.common.compress.lzf - using decoder[VanillaChunkDecoder]
4197 [elasticsearch[Hack][listener][T#1]] ERROR com.kodcu.listener.BulkProcessorListener - failure in bulk execution:
[0]: index [valueable_dev], type [product], id [56b9d3dfcce61970398b456b], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[1]: index [valueable_dev], type [product], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[2]: index [valueable_dev], type [product], id [56e03adbcce6191e448b4572], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[3]: index [valueable_dev], type [product], id [56e2cbfbcce6191e448b45cd], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[4]: index [valueable_dev], type [product], id [56e94c24cce61937438b4605], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[5]: index [valueable_dev], type [product], id [577b7dcccce61964298b4568], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
4198 [main] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:2, serverValue:2}] to localhost:27017 because the pool has been closed.
4198 [main] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:2, serverValue:2}
4199 [main] DEBUG org.elasticsearch.transport.netty - [Hack] disconnecting from [{Gargoyle}{MWNYleM7TOeWyoCBB3VLnA}{127.0.0.1}{127.0.0.1:9300}] due to explicit disconnect call
4200 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:1, serverValue:1}
4212 [main] DEBUG org.elasticsearch.transport.netty - [Hack] disconnecting from [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}] due to explicit disconnect call
4271 [main] INFO com.kodcu.main.Mongolastic - Load duration: 4269ms
Ah, you get an exception during the migration: ERROR com.kodcu.listener.BulkProcessorListener - failure in bulk execution.
Could you show me the 6 documents located in your mongo collection? You may need an explicit mapping in Elasticsearch.
The docs are really huge, but I can show you one:
http://www.k-upload.fr/afficher-fichier-2016-09-07-134ee3369outputfile.json.html
Yes, the doc is indeed huge, but if you look into the log:
[0]: index [valueable_dev], type [product], id [56b9d3dfcce61970398b456b], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[1]: index [valueable_dev], type [product], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[2]: index [valueable_dev], type [product], id [56e03adbcce6191e448b4572], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[3]: index [valueable_dev], type [product], id [56e2cbfbcce6191e448b45cd], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[4]: index [valueable_dev], type [product], id [56e94c24cce61937438b4605], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[5]: index [valueable_dev], type [product], id [577b7dcccce61964298b4568], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
For example, it says that in doc 56b9d3dfcce61970398b456b the field quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567 must always have the same type: the auto-generated index mapping expects a double, but this document supplies something else (an object).
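In other words, Elasticsearch's dynamic mapping fixes a field's type from the first document it indexes, and any later document whose value has a different shape is rejected. A minimal sketch of that inference rule (plain Python for illustration, not the Elasticsearch client):

```python
# Illustrates why Elasticsearch's dynamic mapping rejects the documents above:
# the first value seen for a field fixes its type; a later value of a
# different shape (e.g. an object where a double was seen) is a conflict.

def infer_type(value):
    """Map a JSON value to a coarse mapping type, as dynamic mapping does."""
    if isinstance(value, dict):
        return "object"
    if isinstance(value, bool):  # check bool before int (bool is an int subclass)
        return "boolean"
    if isinstance(value, float):
        return "double"
    if isinstance(value, int):
        return "long"
    return "string"

def check_field(docs, field):
    """Return the inferred type, or raise on a type conflict across docs."""
    inferred = None
    for doc in docs:
        if field not in doc:
            continue
        t = infer_type(doc[field])
        if inferred is None:
            inferred = t
        elif inferred != t:
            raise ValueError(
                f"mapper [{field}] of different type, "
                f"current_type [{inferred}], merged_type [{t}]")
    return inferred

# One doc stores 'contribution' as a double, the next as a nested object:
docs = [
    {"contribution": 0.35},
    {"contribution": {"563cb563d0c6b9251e8b4567": 0.35}},
]
# check_field(docs, "contribution") would raise:
# ValueError: mapper [contribution] of different type, current_type [double], merged_type [object]
```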
OK, I don't need this data; I will clean this collection and keep only what I want. It will probably work after that!
Please let us know @tarann. You may need to set the dropDataset option to false so that the target index is not dropped; that way you can create your own index mapping first and then migrate the data from mongo to ES with dropDataset: false.
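If you go that route, the explicit mapping has to be created together with the index in ES 2.x before running the migration. As a hedged sketch only (index and type names are taken from this thread; whether disabling `quote_tpl` via `enabled: false` is acceptable for your queries, and whether it sidesteps the double-vs-object conflict in ES 2.4, should be verified), the body of a `PUT /valueable_dev` index-creation request might look like:

```json
{
  "mappings": {
    "product": {
      "properties": {
        "quote_tpl": {
          "type": "object",
          "enabled": false
        }
      }
    }
  }
}
```

With `enabled: false`, Elasticsearch keeps the field in `_source` but does not parse or index anything under it, which is one way to stop dynamic mapping from fighting over the conflicting sub-fields reported in the errors above.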
Sorry for the late answer, I had other things to do. I don't know how to manage this mapping: some fields are sometimes an object, sometimes an array, and sometimes just a single value. I tried to put a mapping in place and set dropDataset: false, but it doesn't work (not surprising for a field that changes its type). Let's see what tomorrow brings.
Hi @ozlerhakan, I used a new collection to feed my Elastic index, but when a field is empty (in mongo, I mean) its type differs from when it is filled, and I get an error like this:
0 [main] INFO com.kodcu.config.FileConfiguration -
Config Output:
{elastic=Elastic{host='localhost', port=9300, clusterName=null, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='valueable_dev', name='valueable_dev'}, ctype=Namespace{as='search', name='search'}, dropDataset=false}, mongo=Mongo{host='localhost', port=27017, query='{}', auth=null}}
247 [main] INFO org.elasticsearch.plugins - [Ikthalon] modules [], plugins [], sites []
284 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
300 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [percolate], type [fixed], size [1], queue_size [1k]
324 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [2], keep_alive [5m]
325 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
327 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [index], type [fixed], size [1], queue_size [200]
327 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
328 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [suggest], type [fixed], size [1], queue_size [1k]
328 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [generic], type [cached], keep_alive [30s]
333 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
333 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [search], type [fixed], size [2], queue_size [1k]
333 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
334 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [2], keep_alive [5m]
334 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
334 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [get], type [fixed], size [1], queue_size [1k]
335 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [bulk], type [fixed], size [1], queue_size [50]
335 [main] DEBUG org.elasticsearch.threadpool - [Ikthalon] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
843 [main] DEBUG org.elasticsearch.common.network - configuration:
lo
inet 127.0.0.1 netmask:255.0.0.0 scope:host
inet6 ::1 prefixlen:128 scope:host
UP LOOPBACK mtu:16436 index:1
eth0
inet 10.0.2.15 netmask:255.255.255.0 broadcast:10.0.2.255 scope:site
inet6 fe80::a00:27ff:feb8:e83f prefixlen:64 scope:link
hardware 08:00:27:B8:E8:3F
UP MULTICAST mtu:1500 index:2
eth1
inet 192.168.56.102 netmask:255.255.255.0 broadcast:192.168.56.255 scope:site
inet6 fe80::a00:27ff:fe65:a25 prefixlen:64 scope:link
hardware 08:00:27:65:0A:25
UP MULTICAST mtu:1500 index:3
882 [main] DEBUG org.elasticsearch.common.netty - using gathering [true]
933 [main] DEBUG org.elasticsearch.client.transport - [Ikthalon] node_sampler_interval[5s]
982 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Using select timeout of 500
982 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Epoll-bug workaround enabled = false
1011 [main] DEBUG org.elasticsearch.client.transport - [Ikthalon] adding address [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1049 [elasticsearch[Ikthalon][management][T#1]] DEBUG org.elasticsearch.transport.netty - [Ikthalon] connected to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1166 [main] DEBUG org.elasticsearch.transport.netty - [Ikthalon] connected to node [{Tyrannus}{JfOf1kuBRt-OWroXacR8Ug}{127.0.0.1}{127.0.0.1:9300}]
1227 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
1232 [main] INFO org.mongodb.driver.cluster - Adding discovered server localhost:27017 to client view of cluster
1347 [main] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
1446 [main] INFO org.mongodb.driver.cluster - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
1455 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:3}] to localhost:27017
1456 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster - Checking status of localhost:27017
1456 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 0, 3]}, minWireVersion=0, maxWireVersion=3, maxDocumentSize=16777216, roundTripTimeNanos=399583}
1458 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster - Discovered cluster type of STANDALONE
1459 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=STANDALONE, servers=[{address=localhost:27017, type=STANDALONE, roundTripTime=0,4 ms, state=CONNECTED}]
1466 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:4}] to localhost:27017
1470 [main] DEBUG org.mongodb.driver.protocol.command - Sending command {count : BsonString{value='search'}} to database valueable_dev on connection [connectionId{localValue:2, serverValue:4}] to server localhost:27017
1474 [main] DEBUG org.mongodb.driver.protocol.command - Command execution completed
1474 [main] INFO com.kodcu.provider.MongoToElasticProvider - Mongo collection count: 6
1495 [main] DEBUG org.mongodb.driver.protocol.query - Sending query of namespace valueable_dev.search on connection [connectionId{localValue:2, serverValue:4}] to server localhost:27017
1515 [main] DEBUG org.mongodb.driver.protocol.query - Query completed
1527 [main] INFO com.kodcu.service.ElasticBulkService - Transferring data began to elasticsearch.
1572 [main] DEBUG org.elasticsearch.common.compress.lzf - using decoder[VanillaChunkDecoder]
1892 [elasticsearch[Ikthalon][listener][T#1]] ERROR com.kodcu.listener.BulkProcessorListener - failure in bulk execution:
[1]: index [valueable_dev], type [search], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse [cotation_percent]]; nested: IllegalArgumentException[unknown property [$numberLong]];]
1893 [main] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:2, serverValue:4}] to localhost:27017 because the pool has been closed.
1893 [main] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:2, serverValue:4}
1894 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:1, serverValue:3}
1895 [main] DEBUG org.elasticsearch.transport.netty - [Ikthalon] disconnecting from [{Tyrannus}{JfOf1kuBRt-OWroXacR8Ug}{127.0.0.1}{127.0.0.1:9300}] due to explicit disconnect call
1905 [main] DEBUG org.elasticsearch.transport.netty - [Ikthalon] disconnecting from [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}] due to explicit disconnect call
1983 [main] INFO com.kodcu.main.Mongolastic - Load duration: 1982ms
However, mongolastic works fine with other collections. I think you can close this issue. Thank you for your help.
Hi @tarann ,
This error really depends on the document structure. The message says:
index [valueable_dev], type [search], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse [cotation_percent]]; nested: IllegalArgumentException[unknown property [$numberLong]];]
ES probably doesn't recognize the type of the cotation_percent field. I'm closing this issue. Thanks for using mongolastic, and feel free to make any suggestions for the tool.
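This particular failure has a common cause: MongoDB's extended JSON represents 64-bit integers as `{"$numberLong": "123"}`, and Elasticsearch sees that wrapper as an object with an unknown property. The config output above also shows a `longToString` option, which may be related. As a generic pre-processing sketch (not part of mongolastic), such wrappers could be unwrapped before indexing:

```python
def unwrap_extended_json(value):
    """Recursively replace MongoDB extended-JSON wrappers such as
    {"$numberLong": "123"} with plain values Elasticsearch can map."""
    if isinstance(value, dict):
        if set(value) == {"$numberLong"}:
            return int(value["$numberLong"])
        return {k: unwrap_extended_json(v) for k, v in value.items()}
    if isinstance(value, list):
        return [unwrap_extended_json(v) for v in value]
    return value

doc = {"cotation_percent": {"$numberLong": "42"}, "name": "p1"}
clean = unwrap_extended_json(doc)
# clean["cotation_percent"] is now the plain integer 42
```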
Hi, I'm on ES 2.4 and mongo 3.0.3 on Debian 7; here is the configuration file:
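(The attached file itself did not survive here. Reconstructed from the "Config Output" printed in the log above, it likely resembled the following; key names are inferred from that output, not copied from the user's original file:)

```yaml
misc:
  dindex:
    name: valueable_dev
    as: valueable_dev
  ctype:
    name: product
    as: product
  batch: 200
  dropDataset: true
mongo:
  host: localhost
  port: 27017
elastic:
  host: localhost
  port: 9300
```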
I get this: