ozlerhakan / mongolastic

:traffic_light: A dataset migration tool from MongoDB to Elasticsearch and vice versa.
MIT License
137 stars 34 forks

Disconnecting from ES #16

Closed tarann closed 8 years ago

tarann commented 8 years ago

Hi, I'm running ES 2.4 and MongoDB 3.0.3 on Debian 7. Here is the configuration file:

misc:
 dindex:
  name: valueable_dev
 ctype:
  name: product
mongo:
 host: localhost
 port: 27017
elastic:
 host: localhost
 port: 9300

I get this:

root@wheezy:/home/jp/Téléchargements# java -jar mongolastic.jar -f mongo_to_elastic.yml 
0 [main] INFO com.kodcu.config.FileConfiguration  - 
Config Output:
{elastic=Elastic{host='localhost', port=9300, clusterName=null, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='valueable_dev', name='valueable_dev'}, ctype=Namespace{as='product', name='product'}, dropDataset=true}, mongo=Mongo{host='localhost', port=27017, query='{}', auth=null}}

310 [main] INFO org.elasticsearch.plugins  - [Unseen] modules [], plugins [], sites []
350 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
365 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [percolate], type [fixed], size [1], queue_size [1k]
398 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [2], keep_alive [5m]
399 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
405 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [index], type [fixed], size [1], queue_size [200]
408 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
409 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [suggest], type [fixed], size [1], queue_size [1k]
409 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [generic], type [cached], keep_alive [30s]
413 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
414 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [search], type [fixed], size [2], queue_size [1k]
414 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
414 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [2], keep_alive [5m]
415 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
416 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [get], type [fixed], size [1], queue_size [1k]
417 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [bulk], type [fixed], size [1], queue_size [50]
417 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
983 [main] DEBUG org.elasticsearch.common.network  - configuration:

lo
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        inet6 ::1 prefixlen:128 scope:host
        UP LOOPBACK mtu:16436 index:1

eth0
        inet 10.0.2.15 netmask:255.255.255.0 broadcast:10.0.2.255 scope:site
        inet6 fe80::a00:27ff:feb8:e83f prefixlen:64 scope:link
        hardware 08:00:27:B8:E8:3F
        UP MULTICAST mtu:1500 index:2

eth1
        inet 192.168.56.102 netmask:255.255.255.0 broadcast:192.168.56.255 scope:site
        inet6 fe80::a00:27ff:fe65:a25 prefixlen:64 scope:link
        hardware 08:00:27:65:0A:25
        UP MULTICAST mtu:1500 index:3

1033 [main] DEBUG org.elasticsearch.common.netty  - using gathering [true]
1096 [main] DEBUG org.elasticsearch.client.transport  - [Unseen] node_sampler_interval[5s]
1140 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Using select timeout of 500
1140 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Epoll-bug workaround enabled = false
1180 [main] DEBUG org.elasticsearch.client.transport  - [Unseen] adding address [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1222 [elasticsearch[Unseen][management][T#1]] DEBUG org.elasticsearch.transport.netty  - [Unseen] connected to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1375 [elasticsearch[Unseen][transport_client_worker][T#1]{New I/O worker #1}] INFO org.elasticsearch.client.transport  - [Unseen] failed to get local cluster state for {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}, disconnecting...
RemoteTransportException[[Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]]]; nested: TransportSerializationException[Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]]; nested: ExceptionInInitializerError; nested: IllegalArgumentException[An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [es090, completion090, XBloomFilter]];
Caused by: TransportSerializationException[Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]]; nested: ExceptionInInitializerError; nested: IllegalArgumentException[An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [es090, completion090, XBloomFilter]];
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:180)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:138)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ExceptionInInitializerError
    at org.elasticsearch.Version.fromId(Version.java:572)
    at org.elasticsearch.Version.readVersion(Version.java:312)
    at org.elasticsearch.cluster.node.DiscoveryNode.readFrom(DiscoveryNode.java:339)
    at org.elasticsearch.cluster.node.DiscoveryNode.readNode(DiscoveryNode.java:322)
    at org.elasticsearch.cluster.node.DiscoveryNodes.readFrom(DiscoveryNodes.java:594)
    at org.elasticsearch.cluster.node.DiscoveryNodes$Builder.readFrom(DiscoveryNodes.java:674)
    at org.elasticsearch.cluster.ClusterState.readFrom(ClusterState.java:699)
    at org.elasticsearch.cluster.ClusterState$Builder.readFrom(ClusterState.java:677)
    at org.elasticsearch.action.admin.cluster.state.ClusterStateResponse.readFrom(ClusterStateResponse.java:58)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:178)
    ... 23 more
Caused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [es090, completion090, XBloomFilter]
    at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
    at org.apache.lucene.codecs.PostingsFormat.forName(PostingsFormat.java:112)
    at org.elasticsearch.common.lucene.Lucene.<clinit>(Lucene.java:65)
    ... 33 more
1386 [elasticsearch[Unseen][transport_client_worker][T#1]{New I/O worker #1}] DEBUG org.elasticsearch.transport.netty  - [Unseen] disconnecting from [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}] due to explicit disconnect call
1397 [elasticsearch[Unseen][transport_client_worker][T#1]{New I/O worker #1}] WARN org.elasticsearch.transport.netty  - [Unseen] exception caught on transport layer [[id: 0x6324c4d8, /127.0.0.1:34158 :> localhost/127.0.0.1:9300]], closing connection
java.lang.IllegalStateException: Message not fully read (response) for requestId [0], handler [org.elasticsearch.client.transport.TransportClientNodesService$SniffNodesSampler$1$1@4f36d02], error [false]; resetting
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:146)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
1477 [main] INFO org.mongodb.driver.cluster  - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
1480 [main] INFO org.mongodb.driver.cluster  - Adding discovered server localhost:27017 to client view of cluster
1594 [main] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
1712 [main] INFO org.mongodb.driver.cluster  - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
1735 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:1, serverValue:6}] to localhost:27017
1736 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Checking status of localhost:27017
1738 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 0, 3]}, minWireVersion=0, maxWireVersion=3, maxDocumentSize=16777216, roundTripTimeNanos=2199767}
1745 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Discovered cluster type of STANDALONE
1747 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=STANDALONE, servers=[{address=localhost:27017, type=STANDALONE, roundTripTime=2,2 ms, state=CONNECTED}]
1757 [main] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:2, serverValue:7}] to localhost:27017
1763 [main] DEBUG org.mongodb.driver.protocol.command  - Sending command {count : BsonString{value='product'}} to database valueable_dev on connection [connectionId{localValue:2, serverValue:7}] to server localhost:27017
1772 [main] DEBUG org.mongodb.driver.protocol.command  - Command execution completed
1772 [main] INFO com.kodcu.provider.MongoToElasticProvider  - Mongo collection count: 6
1774 [main] INFO com.kodcu.main.Mongolastic  - Load duration: 1771ms
Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]]
    at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
    at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
    at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1226)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56)
    at com.kodcu.service.ElasticBulkService.dropDataSet(ElasticBulkService.java:94)
    at com.kodcu.provider.Provider.transfer(Provider.java:22)
    at com.kodcu.main.Mongolastic.proceedService(Mongolastic.java:61)
    at java.util.Optional.ifPresent(Optional.java:159)
    at com.kodcu.main.Mongolastic.start(Mongolastic.java:50)
    at com.kodcu.main.Mongolastic.main(Mongolastic.java:38)
root@wheezy:/home/jp/Téléchargements#
ozlerhakan commented 8 years ago

Hi @tarann ,

Thank you for reporting this issue! You are probably using the latest mongolastic. I haven't tested it with ES 2.4, but it seems this is caused by: An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist. I will update the pom and release a new version soon.

ozlerhakan commented 8 years ago

Could you please try v1.3.11?

tarann commented 8 years ago

Hi @ozlerhakan, thank you for your help. It's better: an index is now created in ES, but it's empty: http://hpics.li/3abcf66 This is what I get with v1.3.11:

root@wheezy:/home/jp/Téléchargements# java -jar mongolastic.jar -f mongo_to_ES.yml 
0 [main] INFO com.kodcu.config.FileConfiguration  - 
Config Output:
{elastic=Elastic{host='localhost', port=9300, clusterName=null, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='valueable_dev', name='valueable_dev'}, ctype=Namespace{as='product', name='product'}, dropDataset=true}, mongo=Mongo{host='localhost', port=27017, query='{}', auth=null}}

231 [main] INFO org.elasticsearch.plugins  - [Hack] modules [], plugins [], sites []
265 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
287 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [percolate], type [fixed], size [1], queue_size [1k]
311 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [2], keep_alive [5m]
313 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
313 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [index], type [fixed], size [1], queue_size [200]
314 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
314 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [suggest], type [fixed], size [1], queue_size [1k]
314 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [generic], type [cached], keep_alive [30s]
319 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
324 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [search], type [fixed], size [2], queue_size [1k]
327 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
328 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [2], keep_alive [5m]
329 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
329 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [get], type [fixed], size [1], queue_size [1k]
330 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [bulk], type [fixed], size [1], queue_size [50]
330 [main] DEBUG org.elasticsearch.threadpool  - [Hack] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
810 [main] DEBUG org.elasticsearch.common.network  - configuration:

lo
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        inet6 ::1 prefixlen:128 scope:host
        UP LOOPBACK mtu:16436 index:1

eth0
        inet 10.0.2.15 netmask:255.255.255.0 broadcast:10.0.2.255 scope:site
        inet6 fe80::a00:27ff:feb8:e83f prefixlen:64 scope:link
        hardware 08:00:27:B8:E8:3F
        UP MULTICAST mtu:1500 index:2

eth1
        inet 192.168.56.102 netmask:255.255.255.0 broadcast:192.168.56.255 scope:site
        inet6 fe80::a00:27ff:fe65:a25 prefixlen:64 scope:link
        hardware 08:00:27:65:0A:25
        UP MULTICAST mtu:1500 index:3

840 [main] DEBUG org.elasticsearch.common.netty  - using gathering [true]
890 [main] DEBUG org.elasticsearch.client.transport  - [Hack] node_sampler_interval[5s]
933 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Using select timeout of 500
934 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Epoll-bug workaround enabled = false
953 [main] DEBUG org.elasticsearch.client.transport  - [Hack] adding address [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
989 [elasticsearch[Hack][management][T#1]] DEBUG org.elasticsearch.transport.netty  - [Hack] connected to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1108 [main] DEBUG org.elasticsearch.transport.netty  - [Hack] connected to node [{Gargoyle}{MWNYleM7TOeWyoCBB3VLnA}{127.0.0.1}{127.0.0.1:9300}]
1170 [main] INFO org.mongodb.driver.cluster  - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
1171 [main] INFO org.mongodb.driver.cluster  - Adding discovered server localhost:27017 to client view of cluster
1297 [main] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
1397 [main] INFO org.mongodb.driver.cluster  - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
1405 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:1, serverValue:1}] to localhost:27017
1405 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Checking status of localhost:27017
1406 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 0, 3]}, minWireVersion=0, maxWireVersion=3, maxDocumentSize=16777216, roundTripTimeNanos=693785}
1411 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Discovered cluster type of STANDALONE
1413 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=STANDALONE, servers=[{address=localhost:27017, type=STANDALONE, roundTripTime=0,7 ms, state=CONNECTED}]
1418 [main] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:2, serverValue:2}] to localhost:27017
1423 [main] DEBUG org.mongodb.driver.protocol.command  - Sending command {count : BsonString{value='product'}} to database valueable_dev on connection [connectionId{localValue:2, serverValue:2}] to server localhost:27017
1434 [main] DEBUG org.mongodb.driver.protocol.command  - Command execution completed
1435 [main] INFO com.kodcu.provider.MongoToElasticProvider  - Mongo collection count: 6
1453 [main] DEBUG org.mongodb.driver.protocol.query  - Sending query of namespace valueable_dev.product on connection [connectionId{localValue:2, serverValue:2}] to server localhost:27017
1985 [main] DEBUG org.mongodb.driver.protocol.query  - Query completed
1996 [main] DEBUG org.mongodb.driver.protocol.getmore  - Getting more documents from namespace valueable_dev.product with cursor 73685289658 on connection [connectionId{localValue:2, serverValue:2}] to server localhost:27017
2253 [main] DEBUG org.mongodb.driver.protocol.getmore  - Get-more completed
2257 [main] INFO com.kodcu.service.ElasticBulkService  - Transferring data began to elasticsearch.
2952 [main] DEBUG org.elasticsearch.common.compress.lzf  - using decoder[VanillaChunkDecoder] 
4197 [elasticsearch[Hack][listener][T#1]] ERROR com.kodcu.listener.BulkProcessorListener  - failure in bulk execution:
[0]: index [valueable_dev], type [product], id [56b9d3dfcce61970398b456b], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[1]: index [valueable_dev], type [product], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[2]: index [valueable_dev], type [product], id [56e03adbcce6191e448b4572], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[3]: index [valueable_dev], type [product], id [56e2cbfbcce6191e448b45cd], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[4]: index [valueable_dev], type [product], id [56e94c24cce61937438b4605], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[5]: index [valueable_dev], type [product], id [577b7dcccce61964298b4568], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
4198 [main] INFO org.mongodb.driver.connection  - Closed connection [connectionId{localValue:2, serverValue:2}] to localhost:27017 because the pool has been closed.
4198 [main] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:2, serverValue:2}
4199 [main] DEBUG org.elasticsearch.transport.netty  - [Hack] disconnecting from [{Gargoyle}{MWNYleM7TOeWyoCBB3VLnA}{127.0.0.1}{127.0.0.1:9300}] due to explicit disconnect call
4200 [cluster-ClusterId{value='57cfc73f330e110f0717c7c1', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:1, serverValue:1}
4212 [main] DEBUG org.elasticsearch.transport.netty  - [Hack] disconnecting from [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}] due to explicit disconnect call
4271 [main] INFO com.kodcu.main.Mongolastic  - Load duration: 4269ms
ozlerhakan commented 8 years ago

Ah, you get an exception during the migration: ERROR com.kodcu.listener.BulkProcessorListener - failure in bulk execution.

Could you show me the 6 documents in your Mongo collection? You may need to define a mapping in Elasticsearch.

tarann commented 8 years ago

The docs are really huge, but I can show you one:

http://www.k-upload.fr/afficher-fichier-2016-09-07-134ee3369outputfile.json.html

ozlerhakan commented 8 years ago

Yes, the doc is indeed huge, but if you look at the log:

[0]: index [valueable_dev], type [product], id [56b9d3dfcce61970398b456b], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[1]: index [valueable_dev], type [product], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567] of different type, current_type [double], merged_type [ObjectMapper]];]
[2]: index [valueable_dev], type [product], id [56e03adbcce6191e448b4572], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[3]: index [valueable_dev], type [product], id [56e2cbfbcce6191e448b45cd], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[4]: index [valueable_dev], type [product], id [56e94c24cce61937438b4605], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]
[5]: index [valueable_dev], type [product], id [577b7dcccce61964298b4568], message [MapperParsingException[failed to parse]; nested: IllegalArgumentException[mapper [quote_tpl.families.subfamilies.items.criterias.score] of different type, current_type [string], merged_type [ObjectMapper]];]

For example, it says that in doc 56b9d3dfcce61970398b456b the field quote_tpl.families.subfamilies.contribution.563cb563d0c6b9251e8b4567 must always have the same type: the auto-generated index mapping expects a double, but the incoming value isn't one.
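One way around this (a sketch only, not tested against your dataset) is to create the index yourself before the migration and disable parsing of the conflicting object subtree. In ES 2.x an object field can be declared with enabled: false, for example:

```json
PUT /valueable_dev
{
  "mappings": {
    "product": {
      "properties": {
        "quote_tpl": {
          "type": "object",
          "enabled": false
        }
      }
    }
  }
}
```

With enabled: false the quote_tpl subtree is kept in _source but never mapped or indexed, so documents where contribution or score switch between double, string, and object no longer fail to parse. The trade-off is that you cannot search inside quote_tpl.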

tarann commented 8 years ago

OK, I don't need these fields. I will clean this collection and keep only what I want; it will probably work after that!

ozlerhakan commented 8 years ago

Please let us know, @tarann. You may need to use the dropDataset option so that the target index is not dropped: create your own index mapping first, then migrate the data from Mongo to ES with dropDataset: false.
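For reference, assuming dropDataset sits under misc as the Config Output above suggests, the second run would use a file along these lines (a sketch; adjust the names to your setup):

```yaml
misc:
 dindex:
  name: valueable_dev
 ctype:
  name: product
 dropDataset: false
mongo:
 host: localhost
 port: 27017
elastic:
 host: localhost
 port: 9300
```

With dropDataset: false, mongolastic leaves the existing index (and therefore your hand-made mapping) in place and only writes the documents into it.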

tarann commented 8 years ago

Sorry for the late answer, I had other things to do. I don't know how to manage this mapping: some fields are sometimes an object, sometimes an array, and sometimes a single value. I tried to put a mapping in place and set dropDataset: false, but it doesn't work (not surprising for a field that changes its type). Let's see what tomorrow brings.

tarann commented 8 years ago

Hi @ozlerhakan, I used a new collection to feed my Elastic index, but when a field is empty its type differs from when it is filled (in Mongo, I mean), and I get an error like this:

0 [main] INFO com.kodcu.config.FileConfiguration  - 
Config Output:
{elastic=Elastic{host='localhost', port=9300, clusterName=null, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='valueable_dev', name='valueable_dev'}, ctype=Namespace{as='search', name='search'}, dropDataset=false}, mongo=Mongo{host='localhost', port=27017, query='{}', auth=null}}

247 [main] INFO org.elasticsearch.plugins  - [Ikthalon] modules [], plugins [], sites []
284 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
300 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [percolate], type [fixed], size [1], queue_size [1k]
324 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [2], keep_alive [5m]
325 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
327 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [index], type [fixed], size [1], queue_size [200]
327 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
328 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [suggest], type [fixed], size [1], queue_size [1k]
328 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [generic], type [cached], keep_alive [30s]
333 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
333 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [search], type [fixed], size [2], queue_size [1k]
333 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
334 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [2], keep_alive [5m]
334 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
334 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [get], type [fixed], size [1], queue_size [1k]
335 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [bulk], type [fixed], size [1], queue_size [50]
335 [main] DEBUG org.elasticsearch.threadpool  - [Ikthalon] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
843 [main] DEBUG org.elasticsearch.common.network  - configuration:

lo
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        inet6 ::1 prefixlen:128 scope:host
        UP LOOPBACK mtu:16436 index:1

eth0
        inet 10.0.2.15 netmask:255.255.255.0 broadcast:10.0.2.255 scope:site
        inet6 fe80::a00:27ff:feb8:e83f prefixlen:64 scope:link
        hardware 08:00:27:B8:E8:3F
        UP MULTICAST mtu:1500 index:2

eth1
        inet 192.168.56.102 netmask:255.255.255.0 broadcast:192.168.56.255 scope:site
        inet6 fe80::a00:27ff:fe65:a25 prefixlen:64 scope:link
        hardware 08:00:27:65:0A:25
        UP MULTICAST mtu:1500 index:3

882 [main] DEBUG org.elasticsearch.common.netty  - using gathering [true]
933 [main] DEBUG org.elasticsearch.client.transport  - [Ikthalon] node_sampler_interval[5s]
982 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Using select timeout of 500
982 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Epoll-bug workaround enabled = false
1011 [main] DEBUG org.elasticsearch.client.transport  - [Ikthalon] adding address [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1049 [elasticsearch[Ikthalon][management][T#1]] DEBUG org.elasticsearch.transport.netty  - [Ikthalon] connected to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1166 [main] DEBUG org.elasticsearch.transport.netty  - [Ikthalon] connected to node [{Tyrannus}{JfOf1kuBRt-OWroXacR8Ug}{127.0.0.1}{127.0.0.1:9300}]
1227 [main] INFO org.mongodb.driver.cluster  - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
1232 [main] INFO org.mongodb.driver.cluster  - Adding discovered server localhost:27017 to client view of cluster
1347 [main] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
1446 [main] INFO org.mongodb.driver.cluster  - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
1455 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:1, serverValue:3}] to localhost:27017
1456 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Checking status of localhost:27017
1456 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 0, 3]}, minWireVersion=0, maxWireVersion=3, maxDocumentSize=16777216, roundTripTimeNanos=399583}
1458 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Discovered cluster type of STANDALONE
1459 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=STANDALONE, servers=[{address=localhost:27017, type=STANDALONE, roundTripTime=0,4 ms, state=CONNECTED}]
1466 [main] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:2, serverValue:4}] to localhost:27017
1470 [main] DEBUG org.mongodb.driver.protocol.command  - Sending command {count : BsonString{value='search'}} to database valueable_dev on connection [connectionId{localValue:2, serverValue:4}] to server localhost:27017
1474 [main] DEBUG org.mongodb.driver.protocol.command  - Command execution completed
1474 [main] INFO com.kodcu.provider.MongoToElasticProvider  - Mongo collection count: 6
1495 [main] DEBUG org.mongodb.driver.protocol.query  - Sending query of namespace valueable_dev.search on connection [connectionId{localValue:2, serverValue:4}] to server localhost:27017
1515 [main] DEBUG org.mongodb.driver.protocol.query  - Query completed
1527 [main] INFO com.kodcu.service.ElasticBulkService  - Transferring data began to elasticsearch.
1572 [main] DEBUG org.elasticsearch.common.compress.lzf  - using decoder[VanillaChunkDecoder] 
1892 [elasticsearch[Ikthalon][listener][T#1]] ERROR com.kodcu.listener.BulkProcessorListener  - failure in bulk execution:
[1]: index [valueable_dev], type [search], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse [cotation_percent]]; nested: IllegalArgumentException[unknown property [$numberLong]];]
1893 [main] INFO org.mongodb.driver.connection  - Closed connection [connectionId{localValue:2, serverValue:4}] to localhost:27017 because the pool has been closed.
1893 [main] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:2, serverValue:4}
1894 [cluster-ClusterId{value='57d27de4330e110ebf255a42', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:1, serverValue:3}
1895 [main] DEBUG org.elasticsearch.transport.netty  - [Ikthalon] disconnecting from [{Tyrannus}{JfOf1kuBRt-OWroXacR8Ug}{127.0.0.1}{127.0.0.1:9300}] due to explicit disconnect call
1905 [main] DEBUG org.elasticsearch.transport.netty  - [Ikthalon] disconnecting from [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}] due to explicit disconnect call
1983 [main] INFO com.kodcu.main.Mongolastic  - Load duration: 1982ms

However, mongolastic works fine with other collections. I think you can close this issue. Thank you for your help.

ozlerhakan commented 8 years ago

Hi @tarann ,

This error really depends on the document structure. The message, index [valueable_dev], type [search], id [56c2db63cce61937438b4569], message [MapperParsingException[failed to parse [cotation_percent]]; nested: IllegalArgumentException[unknown property [$numberLong]];], indicates that ES probably doesn't recognize the type of the cotation_percent field. I'm closing this issue. Thanks for using mongolastic, and feel free to make any suggestions for the tool.
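For context: MongoDB's extended JSON represents 64-bit integers as a wrapper object, so when such a value reaches a numeric ES field, the mapper sees an object with an unknown property rather than a number. A document shaped like the following sketch (the value is hypothetical, the id is taken from the error above) would trigger exactly this MapperParsingException:

```json
{
  "_id": "56c2db63cce61937438b4569",
  "cotation_percent": { "$numberLong": "100" }
}
```

The config dump earlier in the thread also shows a longToString flag under the elastic section (currently false); if it behaves as its name suggests, enabling it may serialize such long values as plain strings instead. That is an assumption about the option, not a verified fix.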