elastic / elasticsearch-js

Official Elasticsearch client library for Node.js
https://ela.st/js-client
Apache License 2.0

Elasticsearch - No Living Connections #196

Closed nexflo closed 5 years ago

nexflo commented 9 years ago

This has been mentioned before here and there, but I'm certain this is still a bug, or connections aren't pooled correctly?

I'm doing the following: it indexes 6,000,000 docs, one by one (I don't want to use bulk imports, so let's ignore that topic for a second). I'm even waiting for each index call to complete before calling the next one, and still, at around 16,000 entries, I always get the no living connections error.

I tried keepAlive: true/false etc., but it seems it's a connection pooling issue, no? (iprange = array of items, esearch = Elasticsearch client.)

var len = iprange.length; // total number of docs to index (~6,000,000)

function doItem(i){
    process.stdout.write("Importing item:  " + i + "\r");
    var doc = {};
    doc.index = "geoip";
    doc.type = "location";
    doc.body = iprange[i];
    esearch.index(doc, function (err, resp) {
        if(err)
            trace.info(err);

        i++;

        if(i < len) {
            doItem(i);
        }
    });
}

doItem(1);
spalger commented 9 years ago

What is the problem? Is there an error logged by the client or by elasticsearch?

thecaddy commented 9 years ago

I've had this issue as well. If the app does not use the client for a prolonged period of time, I will receive this error, and the only solution has been to restart the app, because it will not reconnect to Elasticsearch. keepAlive is defaulted to true.

nexflo commented 9 years ago

Hey @spalger, the Elasticsearch client says:

Elasticsearch ERROR: 2015-03-07T16:07:37Z
  Error: Request error, retrying -- connect EADDRNOTAVAIL
      at Log.error (/Users/XXX/Sites/XXX/node_modules/elasticsearch/src/lib/log.js:213:60)
      at checkRespForFailure (/Users/XXX/Sites/XXX/node_modules/elasticsearch/src/lib/transport.js:195:18)
      at HttpConnector.<anonymous> (/Users/XXX/Sites/XXX/node_modules/elasticsearch/src/lib/connectors/http.js:154:7)
      at ClientRequest.bound (/Users/XXX/Sites/XXX/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)
      at ClientRequest.emit (events.js:107:17)
      at Socket.socketErrorListener (_http_client.js:272:9)
      at Socket.emit (events.js:107:17)
      at net.js:451:14
      at process._tickCallback (node.js:355:11)

17:07:37.499 location.js:399 | { [Error: No Living connections] message: 'No Living connections', [stack]: [Getter/Setter] } 
17:07:37.507 location.js:399 | { [Error: No Living connections] message: 'No Living connections', [stack]: [Getter/Setter] } 
17:07:37.511 location.js:399 | { [Error: No Living connections] message: 'No Living connections', [stack]: [Getter/Setter] } 
17:07:37.515 location.js:399 | { [Error: No Living connections] message: 'No Living connections', [stack]: [Getter/Setter] } 
17:07:37.519 location.js:399 | { [Error: No Living connections] message: 'No Living connections', [stack]: [Getter/Setter] } 
17:07:37.524 location.js:399 | { [Error: No Living connections] message: 'No Living connections', [stack]: [Getter/Setter] } 

The Elasticsearch server doesn't spit out any error logs. For me this happens every time somewhere around the 16,420th indexed item.

(That number is so close to 16,384, which is 2^14, that I believe all available client sockets could be filled up. But shouldn't the ES client reuse existing sockets where available? Plus maxSockets defaults to 10, and keepAlive true/false doesn't make a difference.)

As the data I'm indexing is the public MaxMind GeoIP database, I could also send over the code for you to reproduce this?

thecaddy commented 9 years ago

The error I see:

error: Error: No Living connections
    at sendReqWithConnection (.../node_modules/elasticsearch/src/lib/transport.js:174:15)
    at next (.../node_modules/elasticsearch/src/lib/connection_pool.js:213:7)
    at process._tickCallback (node.js:355:11)

Elasticsearch WARNING: 2015-03-07T00:28:39Z
  Unable to revive connection
spalger commented 9 years ago

@nexflo I would love to play with the code if you don't mind pushing it online somewhere.

nexflo commented 9 years ago

Heya, I just extracted a sample here (I didn't test it, but I guess it should work):

https://gist.github.com/nexflo/fc2a763a408cd31f27cd (you will need to download the GeoIP database to get the dataset; it's free)

For testing, I guess one could also create a simple loop which adds over 20,000 random items into an index; we would probably see the timeouts there as well.
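
For what it's worth, a minimal sketch of such a reproduction loop, assuming the same esearch client as in the snippet above; the index and type names are made up:

function indexRandom(i, total, done) {
    if (i >= total) return done();
    // Index one random document, then recurse once the response arrives.
    esearch.index({
        index: 'repro',
        type: 'item',
        body: { value: Math.random(), n: i }
    }, function (err) {
        if (err) console.error(err);
        indexRandom(i + 1, total, done);
    });
}

indexRandom(0, 20000, function () {
    console.log('done indexing 20,000 random docs');
});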

nexflo commented 9 years ago

@spalger is there anything else I can do to help you resolve/pin down this issue?

spalger commented 9 years ago

Looking into this now.

rafacustodio commented 9 years ago

I have this issue as well. My application is a RabbitMQ consumer, and when I use the Node.js cluster module (for more concurrency), at some point it starts showing the "No living connections" error.

darylrobbins commented 9 years ago

I am seeing the same issue after about 5,000 or so updates (upsert with a dynamic Groovy script). I've specified a single address in my 3-node cluster. The connection shouldn't be idle for more than a couple of minutes tops while it reads in a big file.

The interesting thing is that the connection eventually recovers after a few minutes. Several batches fail from the error but a few minutes later, it continues successfully with another batch. Then after a while, it starts failing again -- and the cycle repeats.

Before I start getting No living connections, I see the following error:

Elasticsearch ERROR: 2015-03-31T00:27:07Z
  Error: Request error, retrying -- read ECONNRESET
    at Log.error (/app/node_modules/elasticsearch/src/lib/log.js:213:60)
    at checkRespForFailure (/app/node_modules/elasticsearch/src/lib/transport.js:195:18)
    at HttpConnector.<anonymous> (/app/node_modules/elasticsearch/src/lib/connectors/http.js:154:7)
    at ClientRequest.bound (/app/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)
    at ClientRequest.emit (events.js:95:17)
    at Socket.socketErrorListener (http.js:1551:9)
    at Socket.emit (events.js:95:17)
    at net.js:440:14
    at process._tickDomainCallback (node.js:463:13)
shandyba commented 9 years ago

Can confirm this happens to us as well. The facts: it has been happening for at least half a year already, with regularly updated JavaScript Elasticsearch clients in use. It was the same with Node 0.10.x, and no change was observed after the shift to 0.12.x. It has been happening with Windows x86 and x64 as well as Linux x64 Node builds. For some reason we can't catch it in our local environments. In production it happens on those back-end servers which perform lots of sequential background operations with ES (similar to what is described in this thread). On those production servers which just serve visitors (e.g. use ES for customer-initiated search requests) it is pretty much not observable, even under high load conditions.

nexflo commented 9 years ago

@spalger Any updates on this? Anything we can do to support?

ltamrazov commented 9 years ago

Also getting this. I've seen similar issues in many places; my research thus far has taken me to:

#142 #90 #69 #46

Running: io.js 1.2.0, Elasticsearch 1.4.3, client 4.0.2

My findings thus far: I'm doing about 1M queries (not indexing). Initially, it started blowing up around 16K with this message:

Elasticsearch ERROR: 2015-04-16T21:38:41Z
  Error: Request error, retrying -- connect EADDRNOTAVAIL 127.0.0.1:9200 - Local (0.0.0.0:0)
      at Log.error (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/log.js:213:60)
      at checkRespForFailure (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/transport.js:195:18)
      at HttpConnector.<anonymous> (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/connectors/http.js:154:7)
      at ClientRequest.wrapper (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/node_modules/lodash/index.js:3189:19)
      at emitOne (events.js:75:13)
      at ClientRequest.emit (events.js:150:7)
      at Socket.socketErrorListener (_http_client.js:249:9)
      at emitOne (events.js:75:13)
      at Socket.emit (events.js:150:7)
      at net.js:432:14
      at process._tickCallback (node.js:337:11)

Then I've read:

Elasticsearch WARNING: 2015-04-16T21:38:41Z Unable to revive connection: http://localhost:9200/

Elasticsearch WARNING: 2015-04-16T21:38:41Z No living connections

Then I read in the documentation: maxSockets — Maximum number of concurrent requests that can be made to any node, defaults to 10. So I thought maybe that's the issue. Using async I created a queue to control concurrency (see the sketch below). I started off doing 2 queries at a time, and at about 16K queries got the same error.
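
As a hedged illustration of that queue approach, assuming the async library, a client instance, and a queries array (all placeholders); the concurrency of 2 matches the experiment above:

var async = require('async');

// Worker runs one search per task; at most 2 requests are in flight at a time.
var searchQueue = async.queue(function (params, done) {
    client.search(params, done);
}, 2);

queries.forEach(function (q) {
    searchQueue.push(q, function (err) {
        if (err) console.error(err);
    });
});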

I thought it might help if I increased the number of nodes to handle the load, but I still got the same thing.

Following #46, I commented out agent: this.agent in http.js. At that point, I am able to increase concurrency to 10, which is also the default maxSockets. I can even increase it past that to 20, and then to 50. At 100 concurrent requests, it blows up again with a different error:

Elasticsearch ERROR: 2015-04-16T21:43:48Z
  Error: Request error, retrying -- connect ECONNRESET 127.0.0.1:9200
      at Log.error (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/log.js:213:60)
      at checkRespForFailure (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/transport.js:195:18)
      at HttpConnector.<anonymous> (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/connectors/http.js:154:7)
      at ClientRequest.wrapper (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/node_modules/lodash/index.js:3189:19)
      at emitOne (events.js:75:13)
      at ClientRequest.emit (events.js:150:7)
      at Socket.socketErrorListener (_http_client.js:249:9)
      at emitOne (events.js:75:13)
      at Socket.emit (events.js:150:7)
      at net.js:432:14
      at process._tickCallback (node.js:337:11)

This surprised me, since I assumed the client would handle concurrency in a similar way, using some sort of back-burner or queue. Is that not the case?

If I remove the queue altogether (but keep agent: this.agent commented out), I again get a different error around 16K:

Elasticsearch ERROR: 2015-04-16T21:11:50Z
  Error: Request error, retrying -- connect ECONNREFUSED 127.0.0.1:9200
      at Log.error (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/log.js:213:60)
      at checkRespForFailure (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/transport.js:195:18)
      at HttpConnector.<anonymous> (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/src/lib/connectors/http.js:154:7)
      at ClientRequest.wrapper (/Users/Levon/Documents/my-dev/test/server/node_modules/elasticsearch/node_modules/lodash/index.js:3189:19)
      at emitOne (events.js:75:13)
      at ClientRequest.emit (events.js:150:7)
      at Socket.socketErrorListener (_http_client.js:249:9)
      at emitOne (events.js:75:13)
      at Socket.emit (events.js:150:7)
      at net.js:432:14
      at process._tickCallback (node.js:337:11)

I also tried switching keepAlive: false; this did not have any effect.

Thanks!

bugs181 commented 9 years ago

The fix for us was the temporary workaround noted in #142:

Client.apis[config.apiVersion].ping.spec.requestTimeout = customDefaultMs;

As far as I can tell, it has solved our issues. I haven't tested any further due to a change in our backend that we have yet to test.
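
For context, a hedged sketch of how that one-liner might be applied before constructing a client; the apiVersion, host, and timeout value here are placeholders rather than anything prescribed in #142:

var elasticsearch = require('elasticsearch');

var config = { apiVersion: '1.5', host: 'http://localhost:9200' }; // placeholder config
var customDefaultMs = 30000; // hypothetical timeout value

// Raise the default ping timeout before any client is instantiated.
elasticsearch.Client.apis[config.apiVersion].ping.spec.requestTimeout = customDefaultMs;

var client = new elasticsearch.Client(config);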

shandyba commented 9 years ago

Ok,

We have some recent findings. They don't explain the issue by themselves, but they seem to fix the problem in our production environment.

As I said, we've been having the problem for quite a long time. We got so used to it that at some point we decided to suppress the ES logger output, as it flooded our production logging with what we perceived at the time as useless tons of identical No living connections errors with stack traces etc. So we masked the ES client output and just relied on the actual JavaScript err object passed out of ES client calls.

A few days ago, having read through all the topics related to the problem, we decided to come back to the very beginning and try to dig into the ES client sources in the hope of identifying the issue.

So, we unmasked ES logging. It was absolutely surprising to find out that all of the No living connections errors were preceded by... Error: Request error, retrying -- connect EADDRNOTAVAIL. Just as @nexflo mentioned around a month and a half ago.

We replaced hostnames with IP addresses in the ES client's hosts configuration and, all of a sudden, no more issues at all!

Comparing the previously failing environments with healthy ones uncovered that the DNS configurations were different on those boxes. The failing boxes had intermittent DNS reply failures, which led to No living connections in the ES client. It looks like DNS was failing only under a frequent-requests scenario; that's why the issue was reproduced only by those ES clients that performed massive sequential queries.

So, to conclude the observations in our case:

Intermittent failures in DNS server replies caused No living connections in the ES client. Why that led the ES client into a barely recoverable state is a big question (a bug?).

On the other hand, DNS replies were not cached (neither at the Node.js nor at the ES client level); otherwise identical DNS requests would not have been made with that frequency.

So far, we have worked around the issue by resolving hostnames manually in our code prior to passing the hosts configuration to the ES client. That fixed the case for us: more than 48 hours of uptime without a single No living connections message yet.

We'll keep an eye on our production and report if we have something new on the case.
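
For illustration, a minimal sketch of that workaround using Node's built-in dns module; the hostname and port are placeholders:

var dns = require('dns');
var elasticsearch = require('elasticsearch');

// Resolve the ES hostname once up front and hand the client an IP address
// instead of a DNS name, so no per-request lookups depend on the flaky DNS server.
dns.lookup('es.example.internal', function (err, address) {
    if (err) throw err;

    var client = new elasticsearch.Client({
        host: 'http://' + address + ':9200'
    });

    // ... use client as usual
});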

hellboy81 commented 9 years ago

Probably I have the same issue:

nock('http://es-instance:9200')
.log(console.log)
.post('/index/type?op_type=create')
.reply(201, {.. } {...})

Elasticsearch ERROR: 2015-05-12T09:15:19Z
  Error: Request error, retrying -- Nock: No match for request POST http://es-instance:9200/index/type?op_type=create

Elasticsearch WARNING: 2015-05-12T09:15:19Z
  Unable to revive connection: http://es-instance:9200/

Elasticsearch WARNING: 2015-05-12T09:15:19Z
  No living connections

2015-05-12T09:15:19.949Z [ERROR] - D-G0140: No Living connections 
Error: No Living connections
    at sendReqWithConnection (/.../node_modules/elasticsearch/src/lib/transport.js:174:15)
    at next (/.../src/lib/connection_pool.js:213:7)
    at process._tickDomainCallback (node.js:463:13)

How can I fix it?

juliendangers commented 9 years ago

It's simply because your URL '/index/type?op_type=create' does not match the one called by the library; log your HTTP requests to see the correct URL. By the way, this has nothing to do with this issue (you can use IRC for help ;) )

maziyarpanahi commented 9 years ago

Hi,

By monitoring TCP connections (run this on your ES machine: tcptrack -i eth0) you would notice that at some point, around 16K indexed documents, the number of connections goes from around 100 to a few thousand, until the indexing speed comes down to 1 or 2 per second! I added keep-alive manually, even though it says the default is true, and also removed my cluster module (I use it to get the documents from my RabbitMQ) and instead ran as many Node processes as I wanted manually. It fixed the connection problem immediately.

I hope this helps some people. Always keep an eye on tcptrack -i eth0 on your ES cluster.
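
For reference, here is what setting those options explicitly might look like; a sketch with a placeholder host, using the documented keepAlive and maxSockets client options:

var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({
    host: 'http://localhost:9200', // placeholder
    keepAlive: true,               // the documented default, set explicitly as described above
    maxSockets: 10                 // default number of concurrent requests per node
});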

ltamrazov commented 9 years ago

For us the issue was with the number of open ports. Using nettop we saw that when a connection is closed, it ends up in the TIME_WAIT state, which makes the port unusable. On OS X there are 16K ephemeral ports available by default (which seems to correspond to the point where ES throws the error for most people), and once they are all in TIME_WAIT, ES throws that error. We increased it to 32K: sudo sysctl -w net.inet.ip.portrange.first=32768

It might be a bit of a hack / workaround, but it has resolved this issue for us. Here is a Stack Overflow question explaining the port configuration:

http://stackoverflow.com/questions/1216267/ab-program-freezes-after-lots-of-requests-why

On Linux systems (not sure about Windows) you can also play with kernel settings to: 1) allow ports to be reused while in the TIME_WAIT state, 2) set TIME_WAIT to 0 (this is dangerous if you don't know all the consequences / what you are doing).

imranm commented 9 years ago

Hi All,

Please let me know if anyone has a solution to this issue, which I am still not able to resolve:

Elasticsearch ERROR: 2015-06-10T12:13:11Z
  Error: Request error, retrying -- connect ETIMEDOUT
      at Log.error (d:\omniboard\git\omniboard\node_modules\elasticsearch\src\lib\log.js:213:60)
      at checkRespForFailure (d:\omniboard\git\omniboard\node_modules\elasticsearch\src\lib\transport.js:193:18)
      at HttpConnector.<anonymous> (d:\omniboard\git\omniboard\node_modules\elasticsearch\src\lib\connectors\http.js:145:7)
      at ClientRequest.bound (d:\omniboard\git\omniboard\node_modules\elasticsearch\node_modules\lodash-node\modern\internals\baseBind.js:56:17
      at ClientRequest.emit (events.js:107:17)
      at TLSSocket.socketErrorListener (_http_client.js:271:9)
      at TLSSocket.emit (events.js:129:20)
      at net.js:459:14
      at process._tickCallback (node.js:355:11)

Thanks in advance!

gchauvet commented 9 years ago

Hi,

Same issue for me:

Elasticsearch ERROR: 2015-06-11T15:36:54Z
  Error: Request error, retrying -- connect EADDRNOTAVAIL
      at Log.error (/home/gchauvet/Documents/Walnut/services/elasticsearch-dilicom/node_modules/elasticsearch/src/lib/log.js:213:60)
      at checkRespForFailure (/home/gchauvet/Documents/Walnut/services/elasticsearch-dilicom/node_modules/elasticsearch/src/lib/transport.js:192:18)
      at HttpConnector.<anonymous> (/home/gchauvet/Documents/Walnut/services/elasticsearch-dilicom/node_modules/elasticsearch/src/lib/connectors/http.js:153:7)
      at ClientRequest.wrapper (/home/gchauvet/Documents/Walnut/services/elasticsearch-dilicom/node_modules/elasticsearch/node_modules/lodash/index.js:3128:19)
      at ClientRequest.emit (events.js:107:17)
      at Socket.socketErrorListener (_http_client.js:271:9)
      at Socket.emit (events.js:107:17)
      at net.js:459:14
      at process._tickCallback (node.js:355:11)

Elasticsearch WARNING: 2015-06-11T15:36:54Z
  Unable to revive connection: http://localhost:9200/

Elasticsearch WARNING: 2015-06-11T15:36:54Z
  No living connections

... endless loop of warnings

My settings:

var client = new elasticsearch.Client({
    host: 'localhost:9200',
    requestTimeout: Infinity, // Tested
    keepAlive: true // Tested
});

Note: there is no issue with Node 0.10; see #201

ivarprudnikov commented 9 years ago

Hi, I have an almost identical issue. In my case I listen to a stream of information which comes from Firebase and was pushing it straight to ES via a constructed client:

var client = new elasticsearch.Client({
    host: config.elasticsearch.host,
    log: config.elasticsearch.logLevel
});

But I used the create action quite aggressively and would call it for each new document; in my case there were initially 815 documents, and for each one I would end up calling:

ids.forEach(function(itemId,idx){
    client.create({
        index: indexName,
        type: type,
        id: itemId,
        body: itemBody,
        ignore: [409]
    }).then(function(resp){
        // noop
    }, function(rejection){
        console.log(('>>> ' + (new Date()).toUTCString()));
        console.log('' + JSON.stringify(rejection));
        console.log('<<<');
    });
});

While testing against a local Elasticsearch instance I had no issues, but when I switched to a remote ES on found.no and used HTTPS with basic auth I immediately got:

Error: Request error, retrying -- connect EADDRNOTAVAIL

followed by

Unable to revive connection: ...

My solution was to switch to bulk indexing and problems went away.

var ops = [];
ids.forEach(function(itemId,idx){
    var operation = {
        create: { _index: indexName, _type: type, _id: itemId }
    };
    var operationDoc = itemBody;
    ops.push(operation);
    ops.push(operationDoc);
});

client.bulk({
    body: ops,
    ignore: [409]
}).then(function(resp){
    // noop
    console.log('bulk upload done, operation count', ops.length);
}, function(rejection){
    console.log(('>>> ' + (new Date()).toUTCString()));
    console.log('' + JSON.stringify(rejection));
    console.log('<<<');
});
astro commented 9 years ago

For the record, on Linux this works for me:

sysctl net.ipv4.tcp_tw_reuse=1

As trouble seems to come from too many connections, my guess is that keepAlive isn't working properly?

pwlmaciejewski commented 9 years ago

@astro's fix with setting net.ipv4.tcp_tw_reuse works for me on my linux machine as well :+1:

francesconero commented 9 years ago

This issue seems to have to do with the default node http agent and the way it reuses sockets. Switching the agent used by the library to agentkeepalive for example fixes the issue for me.

Using the default http agent I could see the number of TIME_WAIT sockets skyrocket under heavy load, while agentkeepalive actually reused the sockets without letting them go to TIME_WAIT.

Maybe leaving the possibility of using an alternative http agent at config time could be a solution?

jsnoble commented 8 years ago

@francesconero doing that seems to work as well; we see no more issues. This needs to get into the standard library though. Any chance you can make a PR for the change so we don't have to keep forking?

francesconero commented 8 years ago

@jsnoble there is actually a way to pass a custom http agent through the config during the client initialization. No need to fork the library.

The following creates a connection class that uses agentkeepalive:

var elasticsearch = require('elasticsearch');
var util = require('util');
var HttpConnector = require('elasticsearch/src/lib/connectors/http');
var customHttpAgent = require('agentkeepalive');

function CustomESHTTPConnector(host, config) {
    HttpConnector.call(this, host, config);
}

util.inherits(CustomESHTTPConnector, HttpConnector);

CustomESHTTPConnector.prototype.createAgent = function (config) {
    return new customHttpAgent(this.makeAgentConfig(config));
};

module.exports = CustomESHTTPConnector;

Then pass it as the connectionClass param to the es client during initialization.

See here for more information.

I think it's a better solution than forking, since if one day the default node http agent fixes this "problem" we can easily revert to it. No?

If someone more expert could verify that this is a viable solution, it would be better though.
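
For completeness, a hedged sketch of wiring such a connector into a client via the connectionClass option; the module path and host below are placeholders:

var elasticsearch = require('elasticsearch');
var CustomESHTTPConnector = require('./custom-es-http-connector'); // the class defined above

var client = new elasticsearch.Client({
    host: 'http://localhost:9200',
    connectionClass: CustomESHTTPConnector
});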

sdamon commented 8 years ago

I got the same problem with the latest version ... thanks @francesconero, your solution works for me.

This problem should be fixed by ES!

spalger commented 8 years ago

@francesconero @sdamon just merged #329 which allows specifying a function to completely override the Agent used by the HttpConnector.

With this config the code above will look something like this (docs):

var elasticsearch = require('elasticsearch');
var AgentKeepAlive = require('agentkeepalive');

var client = new elasticsearch.Client({
  createNodeAgent(connection, config) {
    return new AgentKeepAlive(connection.makeAgentConfig(config));
  }
});

Update: this isn't released yet (it's in master) but if you could give it a shot and let me know if it works for you I'd be comfortable pushing it to npm today.

ggn06awu commented 8 years ago

Arrived here again after finding that this library doesn't connect to Amazon's Elasticsearch Service, failing with the same error.

W20160116-11:07:45.187(0)? (STDERR) Trace: The Elasticsearch cluster is down! Trying again shortly. Retrying in 5 seconds. { [Error: No Living connections] message: 'No Living connections' }
W20160116-11:07:45.187(0)? (STDERR)     at runWithEnvironment (packages/meteor/dynamics_nodejs.js:110:1)

I can curl ES from the same box; I will try to dig deeper in the coming days. This isn't the first time I've had this error, however; this library's connection creation and pooling seems mighty fragile!

chiefy commented 8 years ago

@ggn06awu you need to turn off sniffing. I have the latest version of the library working just fine w/ AWS ES service.
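
In case it helps, a sketch of what a config without sniffing might look like; the endpoint is a placeholder, and these sniff options are already off by default, so the point is simply not to enable them against AWS ES:

var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({
    host: 'https://my-domain.us-east-1.es.amazonaws.com', // placeholder endpoint
    sniffOnStart: false,
    sniffInterval: false,
    sniffOnConnectionFault: false
});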

karlAlnebratt commented 8 years ago

@spalger We had the same issue:

Elasticsearch ERROR: 2016-01-20T10:44:55Z
  Error: Request error, retrying
  POST http://localhost:9200/klart/locations/102720105 => connect EADDRNOTAVAIL 127.0.0.1:9200 - Local (0.0.0.0:0)
      at Log.error (/Users/karaln/Projects/klart/cron/node_modules/elasticsearch/src/lib/log.js:225:56)
      at checkRespForFailure (/Users/karaln/Projects/klart/cron/node_modules/elasticsearch/src/lib/transport.js:243:18)
      at HttpConnector.<anonymous> (/Users/karaln/Projects/klart/cron/node_modules/elasticsearch/src/lib/connectors/http.js:153:7)
      at ClientRequest.wrapper (/Users/karaln/Projects/klart/cron/node_modules/lodash/index.js:3095:19)
      at emitOne (events.js:77:13)
      at ClientRequest.emit (events.js:169:7)
      at Socket.socketErrorListener (_http_client.js:259:9)
      at emitOne (events.js:77:13)
      at Socket.emit (events.js:169:7)
      at emitErrorNT (net.js:1253:8)

Elasticsearch WARNING: 2016-01-20T10:44:55Z
  Unable to revive connection: http://localhost:9200/

Elasticsearch WARNING: 2016-01-20T10:44:55Z
  No living connections

We updated to elasticsearch.js 11.0.0-snapshot and used agentkeepalive with your implementation.

var elasticsearch = require('elasticsearch');
var AgentKeepAlive = require('agentkeepalive');

var client = new elasticsearch.Client({
  apiVersion: '2.1',
  host: 'http://localhost:9200',
  createNodeAgent(connection, config) {
    return new AgentKeepAlive(connection.makeAgentConfig(config));
  }
});

And it works great for us now. :thumbsup:

romablog commented 8 years ago

I have the same problem and I use createNodeAgent() in the same way, but it doesn't work for me. I could really use some help with this.

jarib commented 8 years ago

I had this problem when piping streams from https://github.com/wdavidw/node-csv into https://github.com/hmalphettes/elasticsearch-streams, indexing around 3 million documents.

The agentkeepalive / createNodeAgent() workaround solved the problem for me.

amir-rahnama commented 8 years ago

@spalger the agentkeepalive solution is not working for me, and I am still seeing this with the latest elasticsearch npm package.

What should I do?

It just doesn't get any connection upon client.connect()

spalger commented 8 years ago

@ambodi @romablog can you try using the default node.js Agent?

var http = require('http')
var client = new elasticsearch.Client({
  host: 'http://localhost:9200',
  createNodeAgent(connection, config) {
    return http.globalAgent;
  }
});
amir-rahnama commented 8 years ago

@spalger I'm still getting this:

Elasticsearch ERROR: 2016-03-12T23:45:30Z
  Error: Request error, retrying -- connect EHOSTDOWN 192.168.99.100:9200 - Local (192.168.99.1:52300)
      at Log.error (/Users/ara/dev/iteam/dm-crawler/node_modules/elasticsearch/src/lib/log.js:218:56)
      at checkRespForFailure (/Users/ara/dev/iteam/dm-crawler/node_modules/elasticsearch/src/lib/transport.js:211:18)
      at HttpConnector.<anonymous> (/Users/ara/dev/iteam/dm-crawler/node_modules/elasticsearch/src/lib/connectors/http.js:153:7)
      at ClientRequest.wrapper (/Users/ara/dev/iteam/dm-crawler/node_modules/lodash/index.js:3095:19)
      at emitOne (events.js:77:13)
      at ClientRequest.emit (events.js:169:7)
      at Socket.socketErrorListener (_http_client.js:259:9)
      at emitOne (events.js:77:13)
      at Socket.emit (events.js:169:7)
      at emitErrorNT (net.js:1253:8)
      at doNTCallback2 (node.js:439:9)
      at process._tickCallback (node.js:353:17)

Elasticsearch WARNING: 2016-03-12T23:45:30Z
  Unable to revive connection: http://docker:9200/

Elasticsearch WARNING: 2016-03-12T23:45:30Z
  No living connections

Is it related to a specific version? I keep changing my npm version, but the error is still there.

pickworth commented 8 years ago

I can confirm that I am also having this issue on node v5.5.0, elasticsearch.js 10.1.3, and elasticsearch v2.2, on ubuntu 14.04

I only started having this issue after upgrading ES from 1.7 to 2.2 and elasticsearch.js from 8.2.0 to 10.1.3. I'll try the master branch. When is this going to be pushed to npm?

amir-rahnama commented 8 years ago

@spalger when is this going to be fixed?

@nmors did it get fixed by the master branch?

jasminejeane commented 8 years ago

Be sure that you are running the elasticsearch command in your terminal

mavencode01 commented 8 years ago

I'm having the same issue here; how can I fix it in an AngularJS build?

jasminejeane commented 8 years ago

I wasn't using it with Angular. I just remember that the error was a result of me not running the `elasticsearch` command. It is an extra command that we don't normally have to run, so I think it could easily be overlooked.

jwgoh commented 8 years ago

From Elasticsearch doc:

The Java client must be from the same major version of Elasticsearch as the nodes; otherwise, they may not be able to understand each other.

@ambodi Could it be that you are not using the same version? I had the same 'No living connection' and 'Unable to revive connection' error messages, which weren't very helpful. I knew something fishy was going on when I could get a response back using cURL. I realised that I was using elasticsearch-js v4.0.2 and I had Elasticsearch 2.3.0. Downgrading Elasticsearch to version 1.5.2 solved my problem. Hope this helps!

amir-rahnama commented 8 years ago

@jwgoh Yeah, but I can't find the version compatibility documented anywhere for elasticsearch-js. Elasticsearch is now on version 5 and elasticsearch-js is on 11. I am confused @spalger

jwgoh commented 8 years ago

Select the git branch that corresponds to the elasticsearch-js client version you are using and look under "supported_es_branches" in package.json. So in my case, elasticsearch-js v4.0.2 only supports the API up to version 1.5.

ulion commented 8 years ago

For me, when working with an AWS ES instance (v2.3), sniffing has to be switched off; otherwise it will try to call setHosts([]), which empties the connection pool and finally causes this problem.

DavidTanner commented 8 years ago

I created a small wrapper for the client that will retry when the lib throws a no living connections error. https://www.npmjs.com/package/elasticsearch-client-retry
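
For anyone who prefers not to pull in a dependency, a hand-rolled sketch of the same idea (this is not the linked package; the client, document, and retry values are placeholders):

// Retry an operation when the client reports "No Living connections".
function withRetry(fn, retries, delayMs, callback) {
    fn(function (err, resp) {
        if (err && /no living connections/i.test(err.message) && retries > 0) {
            return setTimeout(function () {
                withRetry(fn, retries - 1, delayMs, callback);
            }, delayMs);
        }
        callback(err, resp);
    });
}

// Usage: retry an index call up to 3 times, waiting one second between attempts.
withRetry(function (done) {
    client.index({ index: 'geoip', type: 'location', body: doc }, done);
}, 3, 1000, function (err, resp) {
    if (err) console.error(err);
});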

nikitabanthiya commented 7 years ago

error creating mapping { [Error: No Living connections] message: 'No Living connections' }
Connected to the database
Mongoose: mpromise (mongoose's default promise library) is deprecated, plug in your own promise library instead: http://mongoosejs.com/docs/promises.html
Indexed 90 documents

Any lead will be helpful.

kmarellapudi commented 7 years ago

During a load test of my application running on Node.js with Elasticsearch (2.2.0) on a Mac (Yosemite), I encountered the following error, and subsequent requests temporarily failed to reach Elasticsearch.

Elasticsearch WARNING: 2016-11-28T02:05:44Z Unable to revive connection: http://x.x.x.x:9200/

Elasticsearch WARNING: 2016-11-28T02:05:44Z No living connections

I was able to mitigate this by increasing the ulimit, that is, the number of open file descriptors. In my case, the higher the load, the larger this ulimit needs to be to stave off the above-mentioned error.

After this change I did not see the error in the logs anymore, and there was no interruption in fulfilling requests to Elasticsearch.

panuhorsmalahti commented 7 years ago

I encountered this error when I had my Node.js process running but Elasticsearch down for about 24 hours. Fixed by restarting Node.js.

EDIT: Also seeing this sometimes during regular development. I'm using Mac OS X.