rvijayakumar82 opened 9 years ago
@jprante : I would appreciate an early response to this issue, as we are planning our production data migration soon.
Thanks in advance!
Is there anything in the server logs?
Some bulk requests take very long to process, so when the importer tries to close, the server has difficulty responding in time.
There may be many reasons (problems on cluster nodes, data mappings, etc.) that can only be revealed by looking at the cluster node logs.
@jprante : Thanks for the quick reply.
There is nothing unusual in the server log. Please find the log details from the ES server below.
[2015-09-08 06:03:40,121][INFO ][cluster.metadata ] [
Then I'm clueless.
Please check if all the documents you expect are indexed.
If not, just repeat the indexing. Maybe the disconnection event is not reproducible.
@jprante : Not all the data got migrated. Indexing stops once the importer receives "org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes were available".
We get this error when we index a large volume of data. Also, the jdbc-importer log clearly shows "disconnecting from [Machine name] due to explicit disconnect call". I believe this call comes from the "afterFetch()" method in "StandardSink".
Is there a way to tell the jdbc-importer to check for any ongoing write process and close the client connection only once that process is finished?
Did you notice the line "afterFetch: stop bulk"? The JDBC importer waits for 60 seconds so the cluster has ample time to complete the indexing. Then it closes down and writes the line "afterFetch: before ingest shutdown", which means it is about to disconnect.
In your case, bulk requests are still being processed after 60 seconds, so something on the cluster is very slow. There are documented settings of the JDBC importer for reducing the bulk indexing load. You should also check whether you want to disable index throttling, or adjust the segment merging parameters of the cluster, to increase your cluster's capacity. Or you could add a node, which is very simple.
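For reference, a sketch of what such a configuration could look like. Parameter names follow the JDBC importer documentation, but the JDBC URL, credentials, SQL, and values are placeholders; treat this as an assumption to verify against the docs:

```
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:oracle:thin:@//dbhost:1521/SID",
    "user": "...",
    "password": "...",
    "sql": "select * from mytable",
    "index": "index_stg",
    "max_bulk_actions": 2000,
    "max_concurrent_bulk_requests": 2
  }
}
```

Lowering max_bulk_actions and max_concurrent_bulk_requests reduces the concurrent load the importer puts on the cluster. Index throttling in ES 1.x can be disabled per index via the index setting index.store.throttle.type set to "none".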
@jprante : Thanks for your reply. We are using the keyword tokenizer with an nGram filter to generate a token stream covering the original text, in order to support partial-match search. Could this cause the issue? Also, could you please share details on the items below, which you mentioned:
1. The documented settings of the JDBC importer for reducing the bulk indexing load.
2. Disabling index throttling, or adjusting the segment merging parameters of the cluster, to increase cluster capacity.
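As a rough illustration of why the keyword tokenizer plus nGram filter is expensive: with min_gram=2 and max_gram=20 (the settings visible in the index-creation log below), a single untokenized field value of length L expands into the sum over n = 2..20 of (L - n + 1) terms. A quick sketch (the helper function is hypothetical, purely for illustration):

```python
def ngram_count(length, min_gram=2, max_gram=20):
    """Number of n-grams one keyword token of the given length produces
    for gram sizes min_gram..max_gram (illustrative, not importer code)."""
    return sum(length - n + 1 for n in range(min_gram, min(max_gram, length) + 1))

# A 50-character field value becomes hundreds of terms per document,
# multiplying the indexing and segment-merge work on the cluster.
print(ngram_count(50))  # → 760
```

That term blow-up alone can explain very slow bulk responses on a modest cluster.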
Hi Jorg,
I am getting an "org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes were available" error while bulk indexing data into Elasticsearch from an Oracle DB. I am using elasticsearch-jdbc 1.7.0.0 with Elasticsearch 1.7.0.
From my analysis of the source, it looks like the client receives the shutdown call from the StandardSink class before all the bulk indexing threads have completed. I believe some threads take longer to push their data into ES, but the client is closed by the shutdown call in the meantime.
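The race described above — the shutdown call closing the client while bulk requests are still in flight — could in principle be avoided by having close() wait until an in-flight counter drains. A minimal sketch of that pattern (Python for brevity; BulkClient is a toy stand-in, not the importer's actual class):

```python
import threading
import time


class BulkClient:
    """Toy model of a bulk client that tracks in-flight requests
    (illustrative only; not the elasticsearch-jdbc implementation)."""

    def __init__(self):
        self._inflight = 0
        self._cond = threading.Condition()

    def submit(self, work):
        # Register an in-flight bulk request and run it on its own thread.
        with self._cond:
            self._inflight += 1
        t = threading.Thread(target=self._run, args=(work,))
        t.start()
        return t

    def _run(self, work):
        try:
            work()
        finally:
            with self._cond:
                self._inflight -= 1
                self._cond.notify_all()

    def close(self, timeout=60.0):
        # Wait for all in-flight bulks to finish, or give up at the deadline.
        # Returns True only if everything drained before closing.
        deadline = time.monotonic() + timeout
        with self._cond:
            while self._inflight > 0:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                self._cond.wait(remaining)
            return self._inflight == 0
```

A fast bulk drains before close() returns True; a bulk slower than the timeout leaves close() returning False — which mirrors the importer's 60-second wait expiring while requests are still outstanding.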
I have enabled debug logging for elasticsearch-jdbc; the output can be found below.
Log Info:
[20:07:18,149][DEBUG][importer.jdbc ][main] prepare started
[20:07:18,212][INFO ][importer.jdbc ][main] index name = index_stg, concrete index name = index_stg
[20:07:18,213][DEBUG][importer.jdbc ][main] prepare ended
[20:07:18,215][DEBUG][importer ][main] executing
[20:07:18,244][INFO ][importer.jdbc ][pool-2-thread-1] strategy standard: settings = {}, context = org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext@3ee489aa
[20:07:18,245][INFO ][importer.jdbc.context.standard][pool-2-thread-1] metrics thread started
[20:07:18,245][DEBUG][importer.jdbc.context.standard][pool-2-thread-1] before fetch
[20:07:18,252][INFO ][importer.jdbc.context.standard][pool-2-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@74dc0c5d
[20:07:18,355][INFO ][importer.jdbc.context.standard][pool-2-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@24ec87dc
[20:07:18,407][INFO ][BaseTransportClient ][pool-2-thread-1] creating transport client, java version 1.7.0_65, effective settings {cluster.name=, host.0=, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
[20:07:18,487][DEBUG][org.elasticsearch.plugins][pool-2-thread-1] [importer] [/app/elasticsearch-jdbc-1.7.0.0/bin/plugins] directory does not exist.
[20:07:18,491][DEBUG][org.elasticsearch.plugins][pool-2-thread-1] [importer] lucene property is not set in plugin es-plugin.properties file. Skipping test.
[20:07:18,493][DEBUG][org.elasticsearch.plugins][pool-2-thread-1] [importer] [/app/elasticsearch-jdbc-1.7.0.0/bin/plugins/support-1.7.0.0-8e7ca71/_site] directory does not exist.
[20:07:18,494][DEBUG][org.elasticsearch.plugins][pool-2-thread-1] [importer] [/app/elasticsearch-jdbc-1.7.0.0/bin/plugins] directory does not exist.
[20:07:18,495][INFO ][org.elasticsearch.plugins][pool-2-thread-1] [importer] loaded [support-1.7.0.0-8e7ca71], sites []
[20:07:18,513][DEBUG][org.elasticsearch.common.compress.lzf][pool-2-thread-1] using encoder [VanillaChunkDecoder] and decoder[{}]
[20:07:18,527][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [generic], type [cached], keep_alive [30s]
[20:07:18,535][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [index], type [fixed], size [4], queue_size [200]
[20:07:18,538][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [bulk], type [fixed], size [4], queue_size [50]
[20:07:18,538][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [get], type [fixed], size [4], queue_size [1k]
[20:07:18,538][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [search], type [fixed], size [7], queue_size [1k]
[20:07:18,539][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [suggest], type [fixed], size [4], queue_size [1k]
[20:07:18,539][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [percolate], type [fixed], size [4], queue_size [1k]
[20:07:18,539][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
[20:07:18,540][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [listener], type [fixed], size [2], queue_size [null]
[20:07:18,541][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [flush], type [scaling], min [1], size [2], keep_alive [5m]
[20:07:18,541][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [merge], type [scaling], min [1], size [2], keep_alive [5m]
[20:07:18,541][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [refresh], type [scaling], min [1], size [2], keep_alive [5m]
[20:07:18,541][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [warmer], type [scaling], min [1], size [2], keep_alive [5m]
[20:07:18,541][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [snapshot], type [scaling], min [1], size [2], keep_alive [5m]
[20:07:18,542][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
[20:07:18,542][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [8], keep_alive [5m]
[20:07:18,542][DEBUG][org.elasticsearch.threadpool][pool-2-thread-1] [importer] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [8], keep_alive [5m]
[20:07:19,305][DEBUG][org.elasticsearch.common.netty][pool-2-thread-1] using gathering [true]
[20:07:19,314][DEBUG][org.elasticsearch.client.transport][pool-2-thread-1] [importer] node_sampler_interval[5s]
[20:07:19,362][DEBUG][org.elasticsearch.netty.channel.socket.nio.SelectorUtil][pool-2-thread-1] Using select timeout of 500
[20:07:19,362][DEBUG][org.elasticsearch.netty.channel.socket.nio.SelectorUtil][pool-2-thread-1] Epoll-bug workaround enabled = false
[20:07:19,410][INFO ][BaseTransportClient ][pool-2-thread-1] trying to connect to [inet[/:9300]]
[20:07:19,410][DEBUG][org.elasticsearch.client.transport][pool-2-thread-1] [importer] adding address [[#transport#-1][][inet[/:9300]]]
[20:07:19,473][DEBUG][org.elasticsearch.transport.netty][pool-2-thread-1] [importer] connected to node [[#transport#-1][][inet[/:9300]]]
[20:07:19,569][DEBUG][org.elasticsearch.transport.netty][pool-2-thread-1] [importer] connected to node [[][Jr8tuDOuQBi98B9xot-YMA][][inet[/:9300]]{master=true}]
[20:07:19,570][INFO ][BaseTransportClient ][pool-2-thread-1] connected to [[][Jr8tuDOuQBi98B9xot-YMA][][inet[/:9300]]{master=true}]
[20:07:19,589][INFO ][importer.jdbc.sink.standard][pool-2-thread-1] creating index index_stg with settings = {analysis.tokenizer.commatokenizer.pattern=,, analysis.analyzer.comma_analyzer.type=custom, analysis.filter.nGram_filter.token_chars.1=digit, analysis.filter.nGram_filter.token_chars.0=letter, analysis.filter.nGram_filter.token_chars.3=symbols, analysis.filter.nGram_filter.token_chars.2=punctuation, analysis.filter.nGram_filter.type=nGram, analysis.analyzer.comma_analyzer.filter.1=asciifolding, analysis.analyzer.comma_analyzer.filter.0=lowercase, analysis.analyzer.nGram_analyzer.filter.2=nGram_filter, analysis.analyzer.nGram_analyzer.filter.1=asciifolding, index.number_of_shards=5, analysis.analyzer.comma_analyzer.tokenizer=commatokenizer, analysis.analyzer.nGram_analyzer.filter.0=lowercase, analysis.analyzer.nGram_analyzer.tokenizer=keyword, index.number_of_replica=1, analysis.filter.nGram_filter.min_gram=2, analysis.tokenizer.commatokenizer.type=pattern, analysis.analyzer.nGram_analyzer.type=custom, analysis.filter.nGram_filter.max_gram=20} and mappings = {}
[20:07:19,591][INFO ][BaseIngestTransportClient][pool-2-thread-1] settings = {analysis={filter={nGram_filter={min_gram=2, type=nGram, max_gram=20, token_chars=[letter, digit, punctuation, symbols]}}, analyzer={nGram_analyzer={type=custom, filter=[lowercase, asciifolding, nGram_filter], tokenizer=keyword}, comma_analyzer={type=custom, filter=[lowercase, asciifolding], tokenizer=commatokenizer}}, tokenizer={commatokenizer={type=pattern, pattern=,}}}, index={number_of_replica=1, number_of_shards=5}}
[20:07:19,592][INFO ][BaseIngestTransportClient][pool-2-thread-1] found mapping for
[20:07:20,262][INFO ][BaseIngestTransportClient][pool-2-thread-1] index index_stg created
[20:07:20,352][DEBUG][importer.jdbc.context.standard][pool-2-thread-1] fetch
[20:07:59,578][DEBUG][BulkTransportClient ][elasticsearch[importer][bulk_processor][T#1]] before bulk [1] [actions=3899] [bytes=2070168] [concurrent requests=1]
[20:08:02,396][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [2] [actions=10000] [bytes=5412963] [concurrent requests=3]
[20:08:04,416][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [3] [actions=10000] [bytes=5323264] [concurrent requests=4]
[20:08:04,683][DEBUG][BulkTransportClient ][elasticsearch[importer][bulk_processor][T#1]] before bulk [4] [actions=1060] [bytes=554085] [concurrent requests=4]
[20:08:06,126][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [5] [actions=10000] [bytes=5185523] [concurrent requests=6]
[20:08:07,558][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [6] [actions=10000] [bytes=5160638] [concurrent requests=7]
[20:08:08,987][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [7] [actions=10000] [bytes=5196594] [concurrent requests=8]
[20:08:09,690][DEBUG][BulkTransportClient ][elasticsearch[importer][bulk_processor][T#1]] before bulk [8] [actions=4454] [bytes=2298793] [concurrent requests=8]
[20:08:11,450][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [9] [actions=10000] [bytes=5295231] [concurrent requests=10]
[20:08:18,253][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":80094,"elapsed":60013,"bytes":17006364,"avg":212.0,"dps":1334.610834319231,"mbps":0.2767363295244364}
[20:08:18,254][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":60013,"submitted":69413,"succeeded":0,"failed":0,"bytes":36497259,"avg":525.0,"dps":1156.6327295752587,"mbps":0.5939022293867579}
[20:09:18,245][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":80094,"elapsed":120006,"bytes":17006364,"avg":212.0,"dps":667.4166291685416,"mbps":0.1383912249700015}
[20:09:18,245][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":120005,"submitted":69413,"succeeded":0,"failed":0,"bytes":36497259,"avg":525.0,"dps":578.4175659347527,"mbps":0.29700307897327194}
[20:09:23,707][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#5]{New I/O worker #5}] after bulk [4] [succeeded=1060] [failed=0] [78965ms] [concurrent requests=9]
[20:09:25,720][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [10] [actions=10000] [bytes=5428581] [concurrent requests=10]
[20:10:05,750][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#5]{New I/O worker #5}] after bulk [1] [succeeded=4959] [failed=0] [125989ms] [concurrent requests=9]
[20:10:07,730][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [11] [actions=10000] [bytes=5166947] [concurrent requests=10]
[20:10:18,245][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":101353,"elapsed":180006,"bytes":21801568,"avg":215.0,"dps":563.0534537737631,"mbps":0.11827713381776163}
[20:10:18,245][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":180005,"submitted":89413,"succeeded":4959,"failed":0,"bytes":47092787,"avg":526.0,"dps":496.72509096969526,"mbps":0.25548762425870114}
[20:11:03,781][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#4]{New I/O worker #4}] after bulk [3] [succeeded=14959] [failed=0] [179145ms] [concurrent requests=9]
[20:11:05,370][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [12] [actions=10000] [bytes=5134947] [concurrent requests=10]
[20:11:13,536][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#6]{New I/O worker #6}] after bulk [5] [succeeded=24959] [failed=0] [187298ms] [concurrent requests=9]
[20:11:15,199][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [13] [actions=10000] [bytes=5190999] [concurrent requests=10]
[20:11:18,245][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":121376,"elapsed":240006,"bytes":26072491,"avg":214.0,"dps":505.72069031607543,"mbps":0.10608658530281534}
[20:11:18,245][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":240005,"submitted":109413,"succeeded":24959,"failed":0,"bytes":57418733,"avg":524.0,"dps":455.87800254161374,"mbps":0.2336325553438991}
[20:11:20,547][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#4]{New I/O worker #4}] after bulk [6] [succeeded=34959] [failed=0] [192884ms] [concurrent requests=9]
[20:11:22,771][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [14] [actions=10000] [bytes=5209954] [concurrent requests=10]
[20:11:30,687][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#6]{New I/O worker #6}] after bulk [2] [succeeded=44959] [failed=0] [208100ms] [concurrent requests=9]
[20:11:33,324][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [15] [actions=10000] [bytes=5172827] [concurrent requests=10]
[20:11:33,615][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#6]{New I/O worker #6}] after bulk [8] [succeeded=49413] [failed=0] [203879ms] [concurrent requests=9]
[20:11:35,535][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [16] [actions=10000] [bytes=5176173] [concurrent requests=10]
[20:12:18,245][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":154769,"elapsed":300005,"bytes":33041803,"avg":213.0,"dps":515.8880685321911,"mbps":0.10755615987129381}
[20:12:18,245][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":300005,"submitted":139413,"succeeded":49413,"failed":0,"bytes":72977687,"avg":523.0,"dps":464.70225496241727,"mbps":0.23755361564286429}
--Repeating Logs are trimmed--
[23:51:57,926][DEBUG][importer.jdbc.context.standard][pool-2-thread-1] after fetch
[23:51:57,959][DEBUG][importer.jdbc.sink.standard][pool-2-thread-1] afterFetch: flush ingest
[23:51:57,959][DEBUG][BulkTransportClient ][pool-2-thread-1] flushing bulk processor
[23:51:57,970][DEBUG][BulkTransportClient ][pool-2-thread-1] before bulk [323] [actions=7910] [bytes=3806632] [concurrent requests=9]
[23:52:18,245][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":3015695,"elapsed":13500006,"bytes":560876847,"avg":185.0,"dps":223.384715532719,"mbps":0.040572670552771424}
[23:52:18,245][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":13500005,"submitted":2975053,"succeeded":2887143,"failed":0,"bytes":1453149989,"avg":488.0,"dps":220.37421467621678,"mbps":0.10511787115136717}
[23:52:25,121][DEBUG][BulkTransportClient ][elasticsearch[importer][transport_client_worker][T#4]{New I/O worker #4}] after bulk [315] [succeeded=2897143] [failed=0] [379849ms] [concurrent requests=8]
[23:53:18,245][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":3015695,"elapsed":13560006,"bytes":560876847,"avg":185.0,"dps":222.3962880252413,"mbps":0.0403931455412658}
[23:53:18,245][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":13560005,"submitted":2975053,"succeeded":2897143,"failed":0,"bytes":1453149989,"avg":488.0,"dps":219.3991078911844,"mbps":0.10465274799919413}
[23:53:25,173][DEBUG][importer.jdbc.sink.standard][pool-2-thread-1] afterFetch: stop bulk
[23:53:25,477][DEBUG][importer.jdbc.sink.standard][pool-2-thread-1] afterFetch: refresh index
[23:54:18,245][INFO ][metrics.source.json ][pool-4-thread-1] {"totalrows":3015695,"elapsed":13620006,"bytes":560876847,"avg":185.0,"dps":221.41656912632786,"mbps":0.04021520224722643}
[23:54:18,245][INFO ][metrics.sink.json ][pool-4-thread-1] {"elapsed":13620005,"submitted":2975053,"succeeded":2897143,"failed":0,"bytes":1453149989,"avg":488.0,"dps":218.43259235220546,"mbps":0.10419172284685743}
[23:54:31,804][DEBUG][importer.jdbc.sink.standard][pool-2-thread-1] afterFetch: before ingest shutdown
[23:54:31,804][DEBUG][BulkTransportClient ][pool-2-thread-1] closing bulk processor...
[23:54:31,805][DEBUG][BulkTransportClient ][pool-2-thread-1] shutting down...
[23:54:31,805][DEBUG][BaseTransportClient ][pool-2-thread-1] shutdown started
[23:54:31,841][DEBUG][org.elasticsearch.transport.netty][pool-2-thread-1] [importer] disconnecting from [[][Jr8tuDOuQBi98B9xot-YMA][][inet[/:9300]]{master=true}] due to explicit disconnect call
[23:54:31,941][DEBUG][org.elasticsearch.transport.netty][pool-2-thread-1] [importer] disconnecting from [[#transport#-1][][inet[/:9300]]] due to explicit disconnect call
[23:54:31,955][ERROR][BulkTransportClient ][elasticsearch[importer][listener][T#2]] bulk [319] error
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes were available: [[][Jr8tuDOuQBi98B9xot-YMA][][inet[/:9300]]{master=true}]
at org.elasticsearch.client.transport.TransportClientNodesService$RetryListener.onFailure(TransportClientNodesService.java:242) ~[elasticsearch-jdbc-1.7.0.0-uberjar.jar:?]
at org.elasticsearch.action.TransportActionNodeProxy$1.handleException(TransportActionNodeProxy.java:78) ~[elasticsearch-jdbc-1.7.0.0-uberjar.jar:?]
at org.elasticsearch.transport.TransportService$Adapter$3.run(TransportService.java:468) ~[elasticsearch-jdbc-1.7.0.0-uberjar.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_65]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [][inet[/:9300]][indices:data/write/bulk] disconnected
--Identical NoNodeAvailableException stack traces for bulk [320], [323], [317], [318], [321], [322], and [316] errors are trimmed--
[23:54:32,050][DEBUG][BaseTransportClient ][pool-2-thread-1] shutdown complete
[23:54:32,050][DEBUG][BulkTransportClient ][pool-2-thread-1] shutting down completed
[23:54:32,050][DEBUG][importer.jdbc.sink.standard][pool-2-thread-1] afterFetch: after ingest shutdown
[23:54:32,050][DEBUG][importer ][pool-2-thread-1] close (no op)
[23:54:32,051][DEBUG][importer ][main] execution completed
[23:54:32,051][DEBUG][importer ][main] cleanup (no op)