TruyenNT opened this issue 8 years ago
OK, I see, I have to investigate. Any messages in the server logs?
Hi Jprante, I run a single node, and this is my information:
curl localhost:9200
{
  "name" : "Doorman",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.0.0",
    "build_hash" : "de54438d6af8f9340d50c5c786151783ce7d6be5",
    "build_timestamp" : "2015-10-22T08:09:48Z",
    "build_snapshot" : false,
    "lucene_version" : "5.2.1"
  },
  "tagline" : "You Know, for Search"
}
OS: CentOS Linux release 7.1.1503 (Core)
And this is the netstat output:
netstat -naltp | grep java
tcp6       0      0 :::9200       :::*       LISTEN       4480/java
tcp6       0      0 :::9300       :::*       LISTEN       4480/java
Are there any messages in the Elasticsearch server logs? What is the health of the cluster?
The JDBC importer cannot proceed because of cluster blocks.
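For readers hitting the same wall: cluster blocks and overall health can be inspected over the standard HTTP API. A minimal sketch, assuming Elasticsearch is listening on localhost:9200:

```shell
# Query cluster health; a "red" status or active blocks would explain
# why the importer's bulk requests are rejected. Prints a notice instead
# of failing when the node is unreachable.
health=$(curl -s 'http://localhost:9200/_cluster/health?pretty' || true)
echo "${health:-could not reach Elasticsearch on localhost:9200}"
```

On a healthy single node the response reports status "yellow" or "green" with no relocating or initializing shards.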
Hi, sorry about the late feedback.
This is my Elasticsearch log when I run the import again:
[2015-11-20 16:39:07,520][INFO ][cluster.metadata ] [Turner D. Century] [cus] creating index, cause [api], templates [], shards [5]/[1], mappings []
[2015-11-20 16:39:07,668][INFO ][index.shard ] [Turner D. Century] [cus][3] updating refresh_interval from [1s] to [-1]
[2015-11-20 16:39:07,686][INFO ][index.shard ] [Turner D. Century] [cus][2] updating refresh_interval from [1s] to [-1]
[2015-11-20 16:39:07,687][INFO ][index.shard ] [Turner D. Century] [cus][1] updating refresh_interval from [1s] to [-1]
[2015-11-20 16:39:07,687][INFO ][index.shard ] [Turner D. Century] [cus][0] updating refresh_interval from [1s] to [-1]
[2015-11-20 16:39:08,514][INFO ][cluster.metadata ] [Turner D. Century] [cus] create_mapping [cus]
[2015-11-20 16:39:08,539][INFO ][cluster.metadata ] [Turner D. Century] [cus] update_mapping [cus]
[2015-11-20 16:39:08,575][INFO ][cluster.metadata ] [Turner D. Century] [cus] update_mapping [cus]
[2015-11-20 16:39:08,708][INFO ][index.shard ] [Turner D. Century] [cus][3] updating refresh_interval from [-1] to [1s]
[2015-11-20 16:39:08,708][INFO ][index.shard ] [Turner D. Century] [cus][2] updating refresh_interval from [-1] to [1s]
[2015-11-20 16:39:08,709][INFO ][index.shard ] [Turner D. Century] [cus][1] updating refresh_interval from [-1] to [1s]
[2015-11-20 16:39:08,709][INFO ][index.shard ] [Turner D. Century] [cus][0] updating refresh_interval from [-1] to [1s]
[2015-11-20 16:39:08,709][INFO ][index.shard ] [Turner D. Century] [cus][4] updating refresh_interval from [-1] to [1s]
And this is the jdbc.log from the import:
[16:39:06,243][INFO ][importer.jdbc ][pool-3-thread-1] strategy standard: settings = {index=cus, password=123456, sql=select * from Customers, type=cus, url=jdbc:mysql://192.168.179.140:3306/northwind, user=es}, context = org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext@5633ae25
[16:39:06,247][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@57980093
[16:39:06,254][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@517ac9b3
[16:39:06,301][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] creating transport client on Linux Java HotSpot(TM) 64-Bit Server VM Oracle Corporation 1.8.0_65-b17 25.65-b01 with effective settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, name=importer, port=9300, sniff=false}
[16:39:06,340][INFO ][org.elasticsearch.plugins][pool-3-thread-1] [importer] loaded [], sites []
[16:39:07,221][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] trying to connect to [localhost/127.0.0.1:9300]
[16:39:07,475][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] connected to [{Turner D. Century}{122baGqUQGq7mqCduesfhw}{192.168.179.140}{localhost/127.0.0.1:9300}]
[16:39:07,488][INFO ][importer.jdbc.sink.standard][pool-3-thread-1] creating index cus with settings = and mappings =
[16:39:07,655][INFO ][org.xbib.elasticsearch.support.client.BaseIngestTransportClient][pool-3-thread-1] index cus created
[16:39:13,843][WARN ][org.xbib.elasticsearch.support.client.transport.BulkTransportClient][Thread-1] no client
Hi, and this is the Elasticsearch log when I start the service:
[2015-11-20 17:53:56,377][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2015-11-20 17:53:56,378][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2015-11-20 17:53:56,378][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2015-11-20 17:53:56,378][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2015-11-20 17:53:56,378][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2015-11-20 17:53:56,698][INFO ][node ] [Wild Thing] version[2.0.0], pid[2587], build[de54438/2015-10-22T08:09:48Z]
[2015-11-20 17:53:56,699][INFO ][node ] [Wild Thing] initializing ...
[2015-11-20 17:53:57,822][INFO ][plugins ] [Wild Thing] loaded [license, marvel, analysis-icu], sites [head, river-jdbc, river-jdbc]
[2015-11-20 17:53:57,922][INFO ][env ] [Wild Thing] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [32.2gb], net total_space [37.4gb], spins? [unknown], types [rootfs]
[2015-11-20 17:54:02,376][INFO ][node ] [Wild Thing] initialized
[2015-11-20 17:54:02,377][INFO ][node ] [Wild Thing] starting ...
[2015-11-20 17:54:02,614][WARN ][common.network ] [Wild Thing] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {192.168.179.140}
[2015-11-20 17:54:02,642][INFO ][discovery ] [Wild Thing] elasticsearch/Kv4C1yO5R76Ld3OIPplYkg
[2015-11-20 17:54:05,781][INFO ][cluster.service ] [Wild Thing] new_master {Wild Thing}{Kv4C1yO5R76Ld3OIPplYkg}{192.168.179.140}{192.168.179.140:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-11-20 17:54:06,016][WARN ][common.network ] [Wild Thing] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {192.168.179.140}
[2015-11-20 17:54:06,017][INFO ][node ] [Wild Thing] started
[2015-11-20 17:54:07,890][INFO ][license.plugin.core ] [Wild Thing] license [ef3aa1fd-b527-4719-baff-483ab33f2883] - valid
[2015-11-20 17:54:07,896][ERROR][license.plugin.core ] [Wild Thing] #
#
[2015-11-20 17:54:08,009][INFO ][gateway ] [Wild Thing] recovered [12] indices into cluster_state
I cannot see a failure in these logs...
Hi Jprante, as far as I can see from the logs, it seems the index was created, but the data can't be imported into it. Can you show more detail about the "no client" error log?
[16:39:07,655][INFO ][org.xbib.elasticsearch.support.client.BaseIngestTransportClient][pool-3-thread-1] index cus created
[16:39:13,843][WARN ][org.xbib.elasticsearch.support.client.transport.BulkTransportClient][Thread-1] no client
Please enable DEBUG level in the logs, then you can see what is happening. The JDBC importer disconnects after 5 seconds, which is the node fault detection time period, so there is something wrong with the cluster.
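For reference, the importer reads its log configuration from bin/log4j2.xml. A minimal sketch of a DEBUG-level configuration, assuming the stock Log4j 2 layout shipped with elasticsearch-jdbc (the appender name and pattern here are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="[%d{HH:mm:ss,SSS}][%-5p][%-25c][%t] %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- Raising the root level to debug shows why the transport client disconnects -->
    <Root level="debug">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```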
[20:35:28,791][ERROR][importer ][main] java.util.ServiceConfigurationError: org.xbib.elasticsearch.jdbc.strategy.Context: Provider org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext could not be instantiated: java.lang.NoClassDefFoundError: java/util/concurrent/atomic/LongAdder
java.util.concurrent.ExecutionException: java.util.ServiceConfigurationError: org.xbib.elasticsearch.jdbc.strategy.Context: Provider org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext could not be instantiated: java.lang.NoClassDefFoundError: java/util/concurrent/atomic/LongAdder
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252) ~[?:1.7.0_25]
at java.util.concurrent.FutureTask.get(FutureTask.java:111) ~[?:1.7.0_25]
at org.xbib.pipeline.SimplePipelineExecutor.waitFor(SimplePipelineExecutor.java:125) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.pipeline.MetricSimplePipelineExecutor.waitFor(MetricSimplePipelineExecutor.java:59) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.tools.Importer.execute(Importer.java:261) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.tools.Importer.run(Importer.java:147) [elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.tools.Runner.main(Runner.java:28) [elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
Caused by: java.util.ServiceConfigurationError: org.xbib.elasticsearch.jdbc.strategy.Context: Provider org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext could not be instantiated: java.lang.NoClassDefFoundError: java/util/concurrent/atomic/LongAdder
at java.util.ServiceLoader.fail(ServiceLoader.java:224) ~[?:1.7.0_25]
at java.util.ServiceLoader.access$100(ServiceLoader.java:181) ~[?:1.7.0_25]
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377) ~[?:1.7.0_25]
at java.util.ServiceLoader$1.next(ServiceLoader.java:445) ~[?:1.7.0_25]
at org.xbib.elasticsearch.common.util.StrategyLoader.newContext(StrategyLoader.java:41) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.tools.JDBCImporter.process(JDBCImporter.java:110) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.tools.Importer.newRequest(Importer.java:215) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.tools.Importer.newRequest(Importer.java:54) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:50) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:16) ~[elasticsearch-jdbc-2.0.0.0-uberjar.jar:?]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) ~[?:1.7.0_25]
at java.util.concurrent.FutureTask.run(FutureTask.java:166) ~[?:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[?:1.7.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[?:1.7.0_25]
at java.lang.Thread.run(Thread.java:724) ~[?:1.7.0_25]
Caused by: java.lang.NoClassDefFoundError: java/util/concurrent/atomic/LongAdder
at org.xbib.metrics.MeanMetric.
I don't know what this is (java.util.concurrent.atomic.LongAdder).
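For context: java.util.concurrent.atomic.LongAdder only exists since Java 8, and the stack trace above shows a 1.7.0_25 runtime, so this particular NoClassDefFoundError points at running the importer on Java 7. A sketch of a version guard (check_java_version is a hypothetical helper, not part of the importer):

```shell
# Rejects Java 1.0 through 1.7 version strings; LongAdder (and therefore
# elasticsearch-jdbc 2.x) needs a Java 8+ runtime.
check_java_version() {
  case "$1" in
    1.[0-7].*) echo "too old (LongAdder needs Java 8+)" ;;
    *)         echo "ok" ;;
  esac
}
check_java_version "$(java -version 2>&1 | awk -F '"' '/version/ {print $2; exit}')"
```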
[21:00:31,806][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] trying to connect to [localhost/127.0.0.1:9300]
[21:00:31,902][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] connected to [{Screaming Mimi}{a4KH30Z2SfyrkMDZFjVV2g}{10.165.55.188}{localhost/127.0.0.1:9300}]
[21:00:31,907][INFO ][importer.jdbc.sink.standard][pool-3-thread-1] creating index myjdbc with settings = {index.number_of_shards=1} and mappings = {mytype={"mytype":{"properties":{"location":{"type":"geo_point"}}}}}
[21:00:31,909][INFO ][org.xbib.elasticsearch.support.client.BaseIngestTransportClient][pool-3-thread-1] settings = {index={number_of_shards=1}}
[21:00:31,909][INFO ][org.xbib.elasticsearch.support.client.BaseIngestTransportClient][pool-3-thread-1] found mapping for mytype
[21:00:31,990][INFO ][org.xbib.elasticsearch.support.client.BaseIngestTransportClient][pool-3-thread-1] index myjdbc created
[21:00:38,475][WARN ][org.xbib.elasticsearch.support.client.transport.BulkTransportClient][Thread-1] no client
@TruyenNT, please help me out, I have the same issue as you. How did you fix it? Below are the details. I am also running on a single node. I also tried running on port 9300, but nothing worked.
java version "9.0.1"
Java(TM) SE Runtime Environment (build 9.0.1+11)
Java HotSpot(TM) 64-Bit Server VM (build 9.0.1+11, mixed mode)
{
"name" : "LFD-node",
"cluster_name" : "LFD-cluster",
"version" : {
"number" : "2.3.1",
"build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
"build_timestamp" : "2016-04-04T12:25:05Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
},
"tagline" : "You Know, for Search"
}
elasticsearch-jdbc-2.3.1.0
mysql-jdbc.sh:
bin=/etc/elasticsearch/elasticsearch-jdbc-2.3.1.0/bin
lib=/etc/elasticsearch/elasticsearch-jdbc-2.3.1.0/lib
echo '
{
"type" : "jdbc",
"jdbc" : {
"url" : "jdbc:mysql://localhost:3306/ElasticSearchDatabase",
"user" : "",
"password" : "",
"sql" : "select * from test",
"treat_binary_as_string" : true,
"max_bulk_actions" : 20000,
"max_concurrent_bulk_requests" : 10,
"index" : "users",
"type":"users",
"autocommit":"true",
"metrics": {
"enabled" : true
},
"elasticsearch" : {
"cluster" : "LFD-cluster",
"host" : "localhost",
"port" : 9200
}
}
}
' | java -cp "${lib}/*" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter"
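Two things worth double-checking in a script like the one above: the heredoc must be valid JSON (a single missing comma silently breaks the import), and the importer's transport client speaks the native protocol on port 9300, not the HTTP port 9200 (its own logs show it connecting to 127.0.0.1:9300). A sketch of a pre-flight check, assuming python3 is available for JSON validation; the config values are illustrative:

```shell
# Validate the importer config before piping it to the JVM; note the
# "elasticsearch" block points at the transport port 9300.
config='{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/ElasticSearchDatabase",
    "sql" : "select * from test",
    "index" : "users",
    "type" : "users",
    "elasticsearch" : {
      "cluster" : "LFD-cluster",
      "host" : "localhost",
      "port" : 9300
    }
  }
}'
echo "$config" | python3 -m json.tool > /dev/null && echo "config JSON is valid"
```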
When I request [curl -XGET http://localhost:9200/users/_search/?pretty] I get:
{
"error" : {
"root_cause" : [ {
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "users",
"index" : "users"
} ],
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "users",
"index" : "users"
},
"status" : 404
}
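A quick way to see whether the importer ever got as far as creating the index is to list all indices, assuming Elasticsearch is reachable on localhost:9200:

```shell
# _cat/indices lists every index with its health, document count and size;
# if "users" is missing here, the import failed before index creation.
indices=$(curl -s 'http://localhost:9200/_cat/indices?v' || true)
echo "${indices:-could not reach Elasticsearch on localhost:9200}"
```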
@TruyenNT @jprante
Dear Jprante, I use ES 2.0.0 and Java 8. When I import data from MySQL, the index is created, but importing the data fails. Below are the logs:
[13:39:23,501][INFO ][importer.jdbc ][main] index name = customers, concrete index name = customers
[13:39:23,530][INFO ][importer.jdbc ][pool-3-thread-1] strategy standard: settings = {index=customers, max_bulk_actions=20000, max_concurrent_bulk_requests=10, password=123456, sql=select * from Customers, treat_binary_as_string=true, url=jdbc:mysql://192.168.179.140:3306/northwind, user=es}, context = org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext@139e5d8e
[13:39:23,535][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@6e9ed9d0
[13:39:23,542][INFO ][importer.jdbc.context.standard][pool-3-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@7e30c562
[13:39:23,574][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] creating transport client on Linux Java HotSpot(TM) 64-Bit Server VM Oracle Corporation 1.8.0_65-b17 25.65-b01 with effective settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, name=importer, port=9300, sniff=false}
[13:39:23,601][INFO ][org.elasticsearch.plugins][pool-3-thread-1] [importer] loaded [], sites []
[13:39:57,566][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] trying to connect to [localhost/127.0.0.1:9300]
[13:39:57,763][INFO ][org.xbib.elasticsearch.support.client.BaseTransportClient][pool-3-thread-1] connected to [{Doorman}{lDVOIh_VSbW0TAHeIwIUUw}{192.168.179.140}{localhost/127.0.0.1:9300}]
[13:39:57,768][INFO ][importer.jdbc.sink.standard][pool-3-thread-1] creating index customers with settings = and mappings =
[13:39:57,856][INFO ][org.xbib.elasticsearch.support.client.BaseIngestTransportClient][pool-3-thread-1] index customers created
[13:40:03,740][WARN ][org.xbib.elasticsearch.support.client.transport.BulkTransportClient][Thread-1] no client
Can you show me how to solve this?
This is my code:
bin=/opt/elasticsearch-jdbc-2.0.0.0/bin
lib=/opt/elasticsearch-jdbc-2.0.0.0/lib
echo '
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:mysql://192.168.179.140:3306/northwind",
        "user" : "es",
        "password" : "123456",
        "sql" : "select * from Customers",
        "treat_binary_as_string" : true,
        "max_bulk_actions" : 20000,
        "max_concurrent_bulk_requests" : 10,
        "index" : "customers"
    }
}
' | java \
    -cp "${lib}/*" \
    -Dlog4j.configurationFile=${bin}/log4j2.xml \
    org.xbib.tools.Runner \
    org.xbib.tools.JDBCImporter
And this is my Java version:
java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
I'm waiting for your feedback ASAP :(