jprante / elasticsearch-jdbc

JDBC importer for Elasticsearch
Apache License 2.0

failed to get node info localhost/127.0.0.1:9300 disconnecting... #796

Open yaxitashah opened 8 years ago

yaxitashah commented 8 years ago

Hello,

Following are the configuration and command I am running:

```sh
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "root",
    "password" : "",
    "sql" : "SELECT id as _id, id, name, email FROM test",
    "index" : "users",
    "type" : "user",
    "autocommit" : "true"
  }
}' | java -cp "/opt/lampp/htdocs/xampp/elasticsearch-2.2.1/plugins/elasticsearch-jdbc-1.7.1.0/lib/*" \
  -Dlog4j.configurationFile="file:///opt/lampp/htdocs/xampp/elasticsearch-2.2.1/plugins/elasticsearch-jdbc-1.7.1.0/bin/log4j2.xml" \
  org.xbib.tools.Runner org.xbib.tools.JDBCImporter
```
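A common failure mode with inline configs like this is malformed JSON caused by shell quoting; the importer then fails with unrelated-looking errors. A quick sanity check (a generic sketch, independent of the importer itself) is to run the document through a JSON parser before piping it:

```python
import json

# The same config document that is piped to the JDBC importer above.
config_text = """
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:mysql://localhost:3306/test",
    "user": "root",
    "password": "",
    "sql": "SELECT id as _id, id, name, email FROM test",
    "index": "users",
    "type": "user",
    "autocommit": "true"
  }
}
"""

# json.loads raises ValueError with the position of the first syntax error,
# which is easier to debug than the importer's downstream failures.
config = json.loads(config_text)
print(config["jdbc"]["index"])  # → users
```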

But I am getting lots of errors. Following is my jdbc.log file:

jdbc.txt

Please help me to resolve this.

Thanks, Yaxita Shah

tenderwinner commented 8 years ago

Use Elasticsearch 2.2.0.1.

yaxitashah commented 8 years ago

@tenderwinner: I changed it. I am now using Elasticsearch 2.2.0.1 and elasticsearch-jdbc 2.2.0, and I get the following errors:

```
[15:43:38,903][INFO ][importer.jdbc          ][pool-3-thread-1] strategy standard: settings = {autocommit=true, elasticsearch.cluster=elasticsearch, elasticsearch.host=localhost, elasticsearch.port=9300, index=users1, metrics.enabled=true, password=, sql=SELECT id as _id, id, name,email FROM test, type=user1, url=jdbc:mysql://localhost:3306/ElasticSearch, user=}, context = org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext@7ab544f3
[15:43:38,906][INFO ][importer.jdbc.context.standard][pool-3-thread-1] metrics thread started
[15:43:38,910][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@65a5d69a
[15:43:38,915][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@5e3b072
[15:43:38,952][INFO ][org.xbib.elasticsearch.helper.client.BaseTransportClient][pool-3-thread-1] creating transport client on Linux Java HotSpot(TM) 64-Bit Server VM Oracle Corporation 1.8.0_74-b02 25.74-b02 with effective settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, flush_interval=5s, host.0=localhost, max_actions_per_request=10000, max_concurrent_requests=8, max_volume_per_request=10mb, name=importer, port=9300, sniff=false}
[15:43:38,972][INFO ][org.elasticsearch.plugins][pool-3-thread-1] [importer] modules [], plugins [helper], sites []
[15:43:39,380][INFO ][org.xbib.elasticsearch.helper.client.BaseTransportClient][pool-3-thread-1] trying to connect to [localhost/127.0.0.1:9300]
[15:43:39,469][WARN ][org.elasticsearch.client.transport][pool-3-thread-1] [importer] node {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300} not part of the cluster Cluster [elasticsearch], ignoring...
[15:43:39,494][ERROR][importer.jdbc          ][pool-3-thread-1] error while processing request: no cluster nodes available, check settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, flush_interval=5s, host.0=localhost, max_actions_per_request=10000, max_concurrent_requests=8, max_volume_per_request=10mb, name=importer, port=9300, sniff=false}
org.elasticsearch.client.transport.NoNodeAvailableException: no cluster nodes available, check settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, flush_interval=5s, host.0=localhost, max_actions_per_request=10000, max_concurrent_requests=8, max_volume_per_request=10mb, name=importer, port=9300, sniff=false}
        at org.xbib.elasticsearch.helper.client.BulkTransportClient.init(BulkTransportClient.java:165) ~[elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.elasticsearch.helper.client.ClientBuilder.toBulkTransportClient(ClientBuilder.java:102) ~[elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink.createIngest(StandardSink.java:348) ~[elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink.beforeFetch(StandardSink.java:100) ~[elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.beforeFetch(StandardContext.java:180) ~[elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.execute(StandardContext.java:161) ~[elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.tools.JDBCImporter.process(JDBCImporter.java:179) ~[elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:165) [elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:51) [elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:50) [elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:16) [elasticsearch-jdbc-2.2.0.0-uberjar.jar:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_74]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_74]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_74]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_74]
[15:43:44,334][WARN ][org.elasticsearch.client.transport][elasticsearch[importer][generic][T#1]] [importer] node {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300} not part of the cluster Cluster [elasticsearch], ignoring...
```

Why is it showing "no cluster nodes available"?
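For reference, the `not part of the cluster Cluster [elasticsearch], ignoring...` warning in the log above usually means the name the node reports differs from the `cluster.name` the client expects. The node's actual cluster name can be read from the JSON that `curl localhost:9200` returns; a small sketch (the response body below is a sample in the ES 2.x shape, not taken from this setup):

```python
import json

# Sample body of `curl localhost:9200` (ES 2.x response shape).
response_body = """
{
  "name" : "Gorgilla",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "2.3.1" },
  "tagline" : "You Know, for Search"
}
"""

info = json.loads(response_body)
# The importer's elasticsearch.cluster setting must match this value exactly.
print(info["cluster_name"])  # → elasticsearch
```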

yaxitashah commented 8 years ago

It is working now. I uninstalled Elasticsearch, installed it again, and then it worked. :)

cdechery commented 8 years ago

I'm getting the same message. After seeing this thread I realized I was using ES 2.3.2, which is not yet supported, so I downgraded to 2.3.1. But still, I get a connection error.

ES is up, I can verify it with a simple "curl localhost:9200" call.

jprante commented 8 years ago

You did not specify the cluster name. The default is `elasticsearch`.
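In the importer config, the cluster name is set in the `elasticsearch` block; a minimal fragment (`my-cluster` is a placeholder for the actual cluster name):

```json
"elasticsearch" : {
  "cluster" : "my-cluster",
  "host" : "localhost",
  "port" : 9300
}
```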

cdechery commented 8 years ago

Yes, I have. I tried different settings (autodiscover, cluster, etc.) in the config file; here is the current one:

```sh
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:oracle:thin:@//oradx01a_vip:1551/SSOWEB",
    "connection_properties" : {
      "oracle.jdbc.TcpNoDelay" : false,
      "useFetchSizeWithLongColumn" : false,
      "oracle.net.CONNECT_TIMEOUT" : 10000,
      "oracle.jdbc.ReadTimeout" : 50000
    },
    "user" : "uestat",
    "password" : "poc#2011",
    "sql" : "select * from tb_auditoria_requisicao t",
    "index" : "minhaoi_audit",
    "type" : "requisicao",
    "elasticsearch" : {
      "cluster" : "elasticsearch",
      "host" : "127.0.0.1",
      "port" : 9200
    },
    "autodiscover" : true,
    "max_bulk_actions" : 20000,
    "max_concurrent_bulk_requests" : 10,
    "index_settings" : {
      "index" : {
        "number_of_shards" : 1,
        "number_of_replica" : 0
      }
    }
  }
}' | java \
  -cp "${lib}/*" \
  -Dlog4j.configurationFile=${bin}/log4j2.xml \
  org.xbib.tools.Runner \
  org.xbib.tools.JDBCImporter
```

This is the error I get:

```
[14:18:06,271][INFO ][org.elasticsearch.org.xbib.elasticsearch.helper.client.TransportClient][pool-3-thread-1] [importer] failed to get node info for {#transport#-1}{127.0.0.1}{127.0.0.1:9200}, disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][127.0.0.1:9200][cluster:monitor/nodes/liveness] request_id [0] timed out after [5001ms]
        at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) ~[elasticsearch-2.3.1.jar:2.3.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
[14:18:06,287][ERROR][importer.jdbc          ][pool-3-thread-1] error while processing request: no cluster nodes available, check settings {autodiscover=true, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, flush_interval=5s, host.0=127.0.0.1, max_actions_per_request=20000, max_concurrent_requests=10, max_volume_per_request=10mb, name=importer, port=9200, sniff=false}
```

Proof that Elasticsearch is up and running, the right version, at the correct host/port:

```
[cdechery@frofens logs]$ curl localhost:9200
{
  "name" : "Gorgilla",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.1",
    "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
    "build_timestamp" : "2016-04-04T12:25:05Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
```
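One detail worth noting in the config and log above (an observation, not a fix confirmed in this thread): the importer's transport client is timing out against `127.0.0.1:9200`, which is Elasticsearch's HTTP port (the one `curl` talks to), whereas the Java transport protocol the importer uses listens on 9300 by default. A config pointing the importer at the transport port would contain:

```json
"elasticsearch" : {
  "cluster" : "elasticsearch",
  "host" : "127.0.0.1",
  "port" : 9300
}
```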

scripta55 commented 8 years ago

@cdechery did you manage to solve this issue?

cdechery commented 8 years ago

On my own, no. Maybe it is a proxy issue, but it isn't clear from the log files.
