jprante / elasticsearch-jdbc

JDBC importer for Elasticsearch
Apache License 2.0

ElasticsearchIllegalStateException: client is closed #438

Open Armstrongya opened 9 years ago

Armstrongya commented 9 years ago

Hi, has anybody met this error before? I have used ES as a search service on my website for two months, and it worked fine. But today I found this error in the ES log.
I installed Elasticsearch 1.3.2 on two servers, which form an ES cluster, and use the River-JDBC plugin 1.3.0.4 to incrementally pull data from a MySQL table into an ES index every 10 minutes. Since this error appeared, I can still search the previously indexed data, but newly added data in MySQL is no longer indexed.

I also searched for similar issues, https://github.com/jprante/elasticsearch-river-jdbc/issues/312 and https://github.com/jprante/elasticsearch-river-jdbc/issues/264, but haven't found a proper solution. @jprante said this issue is not related to the River-JDBC plugin but is an error in the ES cluster. What can I do to fix this?
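For context, a river of this kind is registered roughly as in the sketch below; the URL, credentials, SQL, index, and type are hypothetical stand-ins for my actual setup, and the cron expression approximates the 10-minute interval:

    # register a JDBC river that polls MySQL every 10 minutes (all values hypothetical)
    curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
        "type" : "jdbc",
        "jdbc" : {
            "url" : "jdbc:mysql://localhost:3306/mydb",
            "user" : "es",
            "password" : "secret",
            "sql" : "select id as _id, lgtype from mytable",
            "index" : "myindex",
            "type" : "mytype",
            "schedule" : "0 0-59/10 0-23 ? * *"
        }
    }'

The error from the ES log: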


[2015-01-12 15:40:11,984][ERROR][Feeder ] error while getting next input: org.elasticsearch.ElasticsearchIllegalStateException: client is closed
java.io.IOException: org.elasticsearch.ElasticsearchIllegalStateException: client is closed
        at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.fetch(SimpleRiverSource.java:292)
        at org.xbib.elasticsearch.plugin.feeder.jdbc.JDBCFeeder.fetch(JDBCFeeder.java:335)
        at org.xbib.elasticsearch.plugin.feeder.jdbc.JDBCFeeder.executeTask(JDBCFeeder.java:179)
        at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:362)
        at org.xbib.elasticsearch.plugin.feeder.AbstractFeeder.newRequest(AbstractFeeder.java:53)
        at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:87)
        at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:14)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchIllegalStateException: client is closed
        at org.xbib.elasticsearch.support.client.node.BulkNodeClient.bulkIndex(BulkNodeClient.java:237)
        at org.xbib.elasticsearch.support.client.node.BulkNodeClient.bulkIndex(BulkNodeClient.java:37)
        at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverMouth.index(SimpleRiverMouth.java:139)
        at org.xbib.elasticsearch.plugin.jdbc.RiverMouthKeyValueStreamListener.end(RiverMouthKeyValueStreamListener.java:38)
        at org.xbib.elasticsearch.plugin.jdbc.RiverMouthKeyValueStreamListener.end(RiverMouthKeyValueStreamListener.java:11)
        at org.xbib.elasticsearch.plugin.jdbc.PlainKeyValueStreamListener.values(PlainKeyValueStreamListener.java:124)
        at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.processRow(SimpleRiverSource.java:726)
        at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.nextRow(SimpleRiverSource.java:679)
        at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.merge(SimpleRiverSource.java:425)
        at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.execute(SimpleRiverSource.java:325)
        at org.xbib.elasticsearch.river.jdbc.strategy.simple.SimpleRiverSource.fetch(SimpleRiverSource.java:287)
        ... 11 more
sawickil commented 9 years ago

First, try the latest version from the 1.3.x branch, which is 1.3.4.7. I guess you may also need to upgrade ES to 1.3.4 (I'm not sure).

A simple question: is it possible that the river run took more than 10 minutes (your interval length), so the next iteration started before the previous one finished? See #407 and #398.
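If overlapping runs turn out to be the cause, one option is to limit the scheduler thread pool to a single thread so that scheduled runs cannot execute concurrently. A sketch only; the connection details are hypothetical, and you should check your plugin version's README for the exact parameter name (I believe it is threadpoolsize, which defaults to 4):

    # limit the river's scheduler to one thread so runs cannot overlap (hypothetical values)
    curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
        "type" : "jdbc",
        "jdbc" : {
            "url" : "jdbc:mysql://localhost:3306/mydb",
            "user" : "es",
            "password" : "secret",
            "sql" : "select * from mytable",
            "schedule" : "0 0-59/10 0-23 ? * *",
            "threadpoolsize" : 1
        }
    }'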

Armstrongya commented 9 years ago

@sawickil Thank you for your advice. I read your comments in #407 and #398. My JDBC river's schedule thread pool size is the default of 4. I rechecked my ES log and found that before the client was closed, ES hit a MapperParsingException; it retried several times and finally closed. Here is the log:


org.elasticsearch.index.mapper.MapperParsingException: object mapping for [lgtype] tried to parse as object, but got EOF, has a concrete value been provided to it?
        at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:499)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:534)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:483)
        at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:397)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:421)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:158)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:522)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:421)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
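From what I understand, this error occurs when a field is mapped as an object but a document supplies a concrete value for it instead. A minimal sketch that reproduces a similar failure against a scratch index; the index, type, and mapping below are made up for illustration and are not my actual mapping:

    # create an index where "lgtype" is mapped as an object (hypothetical mapping)
    curl -XPUT 'localhost:9200/scratch' -d '{
        "mappings" : {
            "doc" : {
                "properties" : {
                    "lgtype" : {
                        "properties" : {
                            "name" : { "type" : "string" }
                        }
                    }
                }
            }
        }
    }'
    # a document with an object value indexes fine
    curl -XPUT 'localhost:9200/scratch/doc/1' -d '{ "lgtype" : { "name" : "a" } }'
    # a concrete (scalar) value for the same field is rejected with a MapperParsingException
    curl -XPUT 'localhost:9200/scratch/doc/2' -d '{ "lgtype" : "a" }'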
Armstrongya commented 9 years ago

I finally solved this problem. As far as I can tell, it has no relationship to the JDBC-River plugin or the ES cluster version. It was caused by invalid column data in the MySQL table, which couldn't be parsed into a valid JSON document, so the MapperParsingException happened and the client was finally closed.

I modified the invalid data in the MySQL table and restarted the Elasticsearch cluster, and it runs again.
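One simple way to verify the importer has resumed after such a fix is to watch the document count grow across two scheduled runs (the index name below is a placeholder):

    # run before and after the next scheduled river run; the count should increase
    curl -XGET 'localhost:9200/myindex/_count?pretty'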

Thanks to @sawickil and @jprante for their kind help.