ArtyomBaranovskiy closed this issue 9 years ago.
I was struggling with the same issue. Try changing the network.bind_host and network.host parameters in elasticsearch.yml and restarting the server.
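For example, in elasticsearch.yml (a minimal sketch; 0.0.0.0 binds on all interfaces and is only an illustration, adjust to your network):
network.bind_host: 0.0.0.0
network.host: 0.0.0.0
This makes the transport port (9300) reachable from remote clients such as the JDBC importer.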
Dear Jörg, dear all, I have the same issue, with an Ubuntu distribution on the Amazon cloud and a local MySQL database.
My json file:
{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/sms_dw",
    "user" : "user",
    "password" : "passwordforuser",
    "sql" : "select *, id as _id from beitrag",
    "index" : "sms",
    "type" : "beitrag",
    "elasticsearch.autodiscover" : "true"
  }
}
My importer script:
#!/bin/sh
JAVA_HOME=/usr/lib/jvm/java-8-oracle
JDBC_IMPORTER_HOME=/usr/share/elasticsearch/elasticsearch-jdbc-1.6.0.0
bin=$JDBC_IMPORTER_HOME/bin
lib=$JDBC_IMPORTER_HOME/lib
java \
-cp "${lib}/*" \
-Dlog4j.configurationFile=${bin}/log4j2.xml \
org.xbib.tools.Runner \
org.xbib.tools.JDBCImporter \
sms_dw.beitrag.json
jdbc.log:
[11:33:05,093][INFO ][importer.jdbc ][main] index name = sms, concrete index name = sms
[11:33:05,115][INFO ][importer.jdbc ][pool-2-thread-1] strategy standard: settings = {type=beitrag, url=jdbc:mysql://localhost:3306/sms_dw, sql=select *, id as _id from beitrag, user=user, index=sms, elasticsearch.autodiscover=true, password=passwordforuser}, context = org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext@38ff3be5
[11:33:05,117][INFO ][importer.jdbc.context.standard][pool-2-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@ebc0279
[11:33:05,124][INFO ][importer.jdbc.context.standard][pool-2-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@2f4f895c
[11:33:05,164][INFO ][BaseTransportClient ][pool-2-thread-1] creating transport client, java version 1.7.0_51, effective settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=true, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
[11:33:05,219][INFO ][org.elasticsearch.plugins][pool-2-thread-1] [importer] loaded [support-1.6.0.0-d7bb0e9], sites []
[11:33:05,942][INFO ][BaseTransportClient ][pool-2-thread-1] trying to connect to [inet[localhost/127.0.0.1:9300]]
[11:33:06,033][INFO ][org.elasticsearch.client.transport][pool-2-thread-1] [importer] failed to get node info for [#transport#-1][IPADRESS_WITH_IP_PREFIX][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.NodeDisconnectedException: [][inet[localhost/127.0.0.1:9300]][cluster:monitor/nodes/info] disconnected
[11:33:06,037][ERROR][importer ][pool-2-thread-1] error while getting next input: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=true, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
org.elasticsearch.client.transport.NoNodeAvailableException: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=true, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
at org.xbib.elasticsearch.support.client.BaseTransportClient.createClient(BaseTransportClient.java:53) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.elasticsearch.support.client.BaseIngestTransportClient.newClient(BaseIngestTransportClient.java:22) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.elasticsearch.support.client.transport.BulkTransportClient.newClient(BulkTransportClient.java:88) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext$1.create(StandardContext.java:440) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink.beforeFetch(StandardSink.java:94) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.beforeFetch(StandardContext.java:207) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.execute(StandardContext.java:188) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.tools.JDBCImporter.process(JDBCImporter.java:117) ~[elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.tools.Importer.newRequest(Importer.java:241) [elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.tools.Importer.newRequest(Importer.java:57) [elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:86) [elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:17) [elasticsearch-jdbc-1.6.0.0-uberjar.jar:?]
at java.util.concurrent.FutureTask.run(Unknown Source) [?:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.7.0_51]
at java.lang.Thread.run(Unknown Source) [?:1.7.0_51]
[11:33:06,049][WARN ][BulkTransportClient ][Thread-1] no client
I would be very happy if you could give me a hint. Thanks and best regards, Ali Sarioglu
Dear ArtyomBaranovskiy, as long as you are running Shield, the nodes are not available to the JDBC plugin.
Hi @alisarioglu, thank you for your comment!
I'm wondering why you think Shield was installed on my node. Actually, I didn't install Shield on the Elasticsearch node, and I don't think it's installed there by default.
Hi @khozzy, I'm sorry for the late response - I hadn't noticed your comment. Do you mean the elasticsearch.yml file for my target ES node, or is the file somewhere in the JDBC Importer? I'm asking because other clients are successfully using that particular ES cluster, so its configuration file should be fine.
Hi @ArtyomBaranovskiy, just for your information: it was the problem in my case, and I got the same error message that nodes are not found. Yes, you're right, Shield is not installed by default. Cheers, Ali
Hey guys, I have the same issue. I have one script running every 30 minutes and it's fine, but when I try to import another table with a different script, this exact message comes up as well. Did any of you experience this?
I have the same issue. In my development environment the script works, but in the production environment it goes wrong.
my script is:
JDBC_IMPORTER_HOME=/data/soft/elasticsearch-jdbc-1.7.0.1
es_jdbc_bin=$JDBC_IMPORTER_HOME/bin
es_jdbc_lib=$JDBC_IMPORTER_HOME/lib
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "elasticsearch" : {
      "cluster" : "es-online",
      "host" : "10.10.XXX.XXX",
      "port" : "9300",
      "autodiscover" : false
    },
    "url" : "jdbc:mysql://XXX.XXX.XXX.211:3306/abcd",
    "driver" : "com.mysql.jdbc.Driver",
    "user" : "mysql_user",
    "password" : "mysql_password",
    "sql" : "SELECT `m`.*, `m`.`id` AS `_id` FROM `member` AS `m`",
    "index" : "abcd",
    "type" : "member"
  }
}' | java \
-cp "${es_jdbc_lib}/*" \
-Dlog4j.configurationFile=${es_jdbc_bin}/log4j2.xml \
org.xbib.tools.Runner \
org.xbib.tools.JDBCImporter
[UPDATE] on 2015-08-01
I have found the key to solving my problem: the version of the JDBC importer. In the production environment I was running Elasticsearch 1.7.1 and dumping data from MySQL with JDBC importer 1.7.0.1. Something always went wrong until I installed Elasticsearch 1.7.0 instead of 1.7.1.
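If you want to verify the versions quickly: the importer connects with Elasticsearch's own transport client, which is picky about matching server versions, so it is worth comparing the server's reported version with the importer release (the host below is the placeholder from my config):
curl -XGET 'http://10.10.XXX.XXX:9200/'
The "version" : { "number" : ... } field in the response should line up with the Elasticsearch version the importer was built against.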
I have just checked version 1.5.2 - it is failing with the same error, NoNodeAvailableException: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, ...}. Unfortunately, I couldn't find an environment where it works (except the case with another local Elasticsearch node). Does anyone know if it is possible to instrument recent versions of the JDBC river to use Elasticsearch's bulk API? Here is my config for reference:
{
  "type": "jdbc",
  "jdbc": [{
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "url": "jdbc:sqlserver://;databaseName=;integratedSecurity=true;applicationName=JdbcRiver",
    "sql": [
      {
        "statement": "SELECT ...",
        "parameter": ["$now", "...", "...", "...", "..."]
      }
    ],
    "elasticsearch": {
      "cluster": "xxx",
      "host": "xxx"
    },
    "index": "xxx"
  }]
}
@ArtyomBaranovskiy Of course, the JDBC river uses the bulk indexing API. JDBC river 1.5.2 can only work as a river within a node; that means it cannot connect to a remote cluster.
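For reference, running it as a river means registering it inside the node over the river API, roughly like this (names and credentials are placeholders):
curl -XPUT 'http://localhost:9200/_river/my_jdbc_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "",
    "password" : "",
    "sql" : "select * from orders"
  }
}'
The node that runs the river instance is the node that executes the fetch, which is why it cannot target a remote cluster.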
@jprante, could you please clarify: did you mean that I cannot specify a custom cluster name and node name for storing the data from the database (especially when multicast is disabled in the cluster)? My understanding was that the JDBC river creates a temporary node to connect to a cluster and uses Elasticsearch's native API, so how does this limit its usage to the local cluster only? Anyway, I would like to note that I could not connect to a remote cluster in either the 1.5 or the 1.6 version of the JDBC Importer. As for the bulk API, the question was more about the HTTP bulk API: is this possible in newer versions of the JDBC Importer?
The JDBC river does not create a temporary node. It runs inside an existing node - see the river concept: https://www.elastic.co/guide/en/elasticsearch/rivers/current/index.html
The JDBC importer uses a TransportClient instance to connect. HTTP bulk is not possible, and is not necessary, since the JDBC importer uses the BulkProcessor class of Elasticsearch.
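One practical implication of this (assuming default ports): the TransportClient connects to the transport port, 9300, not the HTTP port, 9200, so it is the transport port that must be reachable from the importer host. A quick check, where eshost is a placeholder:
telnet eshost 9300
A responding HTTP API on 9200 does not guarantee that the transport port is open.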
@jprante, all right, we are discussing slightly different things here: I was mostly talking about running the JDBC river 1.5 in feeder mode - it seems to not work correctly in the latest version, nor in version 1.6. I just can't make the JDBC Importer connect to a remote cluster, because it seems to be ignoring the "elasticsearch" node in the settings. Do you have an idea why that could be?
Same with @simonkuang
With Elasticsearch-1.7.1 and jdbc-importer-1.7.0.1 ---> Got the same error. Fixed after downgrading to Elasticsearch-1.7.0
I encountered the same error and can confirm that downgrading Elasticsearch to 1.7.0 fixed it with no further changes.
@ArtyomBaranovskiy I had a similar issue. I found the cause and fixed it: in the configuration below, the "elasticsearch" object is required to be inside the "jdbc" object instead of outside it. This is somewhat different from older versions of the feeder, where the configuration was outside. When I changed this, things started to work!
bin=$JDBC_IMPORTER_HOME/bin
lib=$JDBC_IMPORTER_HOME/lib
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "",
    "password" : "",
    "sql" : "select *, id as _id from orders",
    "elasticsearch" : { "host" : "", "cluster" : "", "port" : "" }
  }
}' | java \
-cp "${lib}/*" \
-Dlog4j.configurationFile=${bin}/log4j2.xml \
org.xbib.tools.Runner \
org.xbib.tools.JDBCImporter
Guys, thanks everyone for your assistance - I have finally figured out why I was getting the errors: in my configuration files for the JDBC Importer I was using the old-style format (jdbc: [{}] instead of the recently introduced jdbc: {}), which is why my elasticsearch configuration was ignored.
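For anyone else hitting this, a minimal sketch of the difference (URLs and names are placeholders):
Old array style, where newer importers silently ignore the nested elasticsearch settings:
{
  "type" : "jdbc",
  "jdbc" : [{
    "url" : "jdbc:mysql://localhost:3306/test",
    "elasticsearch" : { "cluster" : "mycluster", "host" : "eshost" }
  }]
}
New object style, which is parsed correctly:
{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "elasticsearch" : { "cluster" : "mycluster", "host" : "eshost" }
  }
}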
+1, same here: Elasticsearch 1.7.1 with jdbc-importer 1.7.0.1 gave the same error, fixed after downgrading to Elasticsearch 1.7.0.
@simonkuang, @yingnansong, I think you can try JDBC Importer 1.7.1 with elasticsearch 1.7.1: I've recently noticed that it's already available http://xbib.org/repository/org/xbib/elasticsearch/importer/elasticsearch-jdbc/1.7.1.0/
export JDBC_IMPORTER_HOME=${HOME}/search1/elasticsearch-2.0.0/plugins/elasticsearch-jdbc-2.0.0.0
bin=$JDBC_IMPORTER_HOME/bin
lib=$JDBC_IMPORTER_HOME/lib
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://192.168.1.99:3306/ay_test",
    "driver" : "com.mysql.jdbc.Driver",
    "user" : "root",
    "password" : "111111",
    "locale" : "en_US",
    "sql" : "select *, \"myjdbc\" as _index, \"mytype\" as _type, id as _id from t_car_brand",
    "elasticsearch" : {
      "cluster" : "elasticsearch",
      "host" : "192.168.1.32",
      "port" : 9300
    },
    "index" : "myjdbc",
    "type" : "mytype",
    "index_settings" : {
      "index" : {
        "number_of_shards" : 1
      }
    }
  }
}' | java \
-cp "${lib}/*" \
-Dlog4j.configurationFile=${bin}/log4j2.xml \
org.xbib.tools.Runner \
org.xbib.tools.JDBCImporter
curl -XGET 'http://192.168.1.32:9200/myjdbc/_refresh'
@alisarioglu hi, I have the same question; how did you deal with it?
@skyfall86, Shield (its new name is Security) was installed. After uninstalling Shield, it worked.
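If it helps anyone: removal goes through the standard plugin tool, and the exact syntax depends on the Elasticsearch version (shown here for the 1.x and 2.x lines):
bin/plugin --remove shield     # Elasticsearch 1.x
bin/plugin remove shield       # Elasticsearch 2.x
Restart the node afterwards so the transport layer comes back up without Shield.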
I have the same issue in my development environment and found that "telnet xx.xx.xx.xx 9300" does not work, so I changed the config from
# network.host: 192.168.0.1
to
network.host: 0.0.0.0
in the $ES_HOME/config/elasticsearch.yml file to solve this problem.
Hi,
I cannot run the latest version (1.6.0) of the importer on Windows. Could you please assist me with a fix or explain what I am doing wrong? It seems that the default Elasticsearch connection settings are not overwritten by my custom ones.
Config file:
{
  "type": "jdbc",
  "jdbc": [{
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "url": "...connection string...",
    "sql": [
      {
        "statement": "... long sql query ...",
        "parameter": ["... some params ..."]
      }
    ],
    "elasticsearch" : {
      "cluster": "custom cluster name",
      "host": ["custom node name:9300"]
    },
    "index": "... index name ..."
  }]
}
Error message:
... ERROR importer - error while getting next input: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}
org.elasticsearch.client.transport.NoNodeAvailableException: no cluster nodes available, check settings {cluster.name=elasticsearch, port=9300, sniff=false, autodiscover=false, name=importer, client.transport.ignore_cluster_name=false, client.transport.ping_timeout=5s, client.transport.nodes_sampler_interval=5s}