tessel / t1-runtime

[UNMAINTAINED] Tessel 1 JavaScript runtime.

HTTP Request Handlers throw EventEmitter Leak Warnings #655

Closed johnnyman727 closed 10 years ago

johnnyman727 commented 10 years ago

I'm digging into this forum post about EventEmitter leak warnings. Sure enough, after 5 requests, the warnings start to appear.

I modified colony to print out who the offenders are, and it looks like close, finish, error, unpipe and drain are the events with more than 10 listeners. The number of listeners continues to grow as more requests are made, so I don't think sockets just need a baseline of 11 listeners.

A little more digging uncovered that the code that should be removing all of these listeners is never being called. At what point should the sockets of a request be destroyed?
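
For reference, here is a minimal sketch of the kind of instrumentation described above, using the standard EventEmitter listeners() API to dump per-event counts from inside a handler. The event names are the ones listed in this comment; this is a diagnostic aid, not the actual colony patch.

var http = require('http');
var suspects = ['close', 'finish', 'error', 'unpipe', 'drain'];

http.createServer(function (req, res) {
  // Log how many listeners each suspect event has on this request's socket,
  // so that growth across keep-alive requests becomes visible.
  suspects.forEach(function (name) {
    console.log(name + ':', req.socket.listeners(name).length);
  });
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(80);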

jiahuang commented 10 years ago

https://github.com/tessel/runtime/blob/master/src/colony/modules/net.js#L515 should get called the first time .destroy() gets called

jiahuang commented 10 years ago

Unless we're removing listeners and then more listeners are being attached after socket destroy
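
As a reference point, a hedged sketch of the teardown pattern being discussed, i.e. removing the per-request listeners exactly once on the first destroy. This is illustrative only, not the actual net.js code, and the _alreadyDestroyed flag is a placeholder name.

function destroyOnce(socket) {
  // Guard so the teardown only runs on the first destroy() call.
  if (socket._alreadyDestroyed) return;
  socket._alreadyDestroyed = true;
  // Remove the per-request listeners so they don't accumulate; the event
  // names are the offenders listed earlier in this thread.
  ['close', 'finish', 'error', 'unpipe', 'drain'].forEach(function (name) {
    socket.removeAllListeners(name);
  });
  // If anything re-attaches listeners after this point (the scenario above),
  // the counts would keep climbing despite the cleanup.
}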

johnnyman727 commented 10 years ago

I think the issue is that .destroy isn't being called. When should it get called from a request handler?

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(80);

Once the callback completes?

natevw commented 10 years ago

No, the end of a request is not necessarily the end of the connection. The situation is analogous to the Agent on the client side, where the code that handles keep-alive is deciding when to close the socket. It does so, ± correctly IMO, by simply calling .end() on the socket: https://github.com/tessel/runtime/blob/0e2a8fbdeca54a8f8590b3545c34e6ffa3be6c65/src/colony/modules/http.js#L650
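
A rough sketch of that decision in plain node terms; the shouldKeepAlive flag is a placeholder, not the runtime's actual internals.

function onResponseFinished(res, shouldKeepAlive) {
  if (!shouldKeepAlive) {
    // No keep-alive: end the socket, which sends a FIN and should eventually
    // lead to a 'close' event where the per-request listeners get cleaned up.
    res.socket.end();
  }
  // With keep-alive the socket stays open and is reused by the next request,
  // which is exactly when leftover listeners start to pile up.
}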

This is related to the allowHalfOpen-ish features that we're not totally handling well, and IIRC the network code is not particularly idiomatic in detecting EOF the "traditional" [blocking] way, where a read just hangs if there's no data but the TCP connection is presumably still active, and EOF is signalled in the usual way by a zero-length result. We use this ioctl peeking stuff instead to avoid blocking, and I don't remember offhand how we detect EOF (a global tcp-close event or something, was it?)

Anyway, if I had to hazard a guess without reviewing more thoroughly, it would be that the TCP logic may never end up "finishing the socket off" as an indirect effect of ending it. I.e. something in the normal termination flow isn't happening: either we don't send the FIN, or the other end's FIN doesn't cause the socket object to emit a 'close' event, or ???
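
For context, here is what that normal termination flow looks like with stock node semantics (plain net module behavior, not the colony internals), so it is clearer which step might be missing:

var net = require('net');

var socket = net.connect({ host: 'example.com', port: 80 });
socket.resume(); // keep data flowing so 'end' can fire under streams2

socket.on('end', function () {
  // The peer sent its FIN; EOF surfaces as a zero-length read / 'end' event.
  console.log('remote end closed its side');
});
socket.on('close', function (hadError) {
  // Both directions are done; this is where listener cleanup should happen.
  console.log('socket fully closed, hadError =', hadError);
});

// Ending our side sends a FIN; an HTTP/1.0 server replies and then closes.
socket.end('GET / HTTP/1.0\r\nHost: example.com\r\n\r\n');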

natevw commented 10 years ago

Hmm, actually another culprit could be in this piping here: https://github.com/tessel/runtime/blob/0e2a8fbdeca54a8f8590b3545c34e6ffa3be6c65/src/colony/modules/http.js#L291

Nothing explicitly unpipes that — it's not clear that it would need to, but piping into the socket would likely be what is listening for a 'drain' event and so it's certainly suspect.

UPDATE: I'm guessing https://github.com/tessel/runtime/blob/0e2a8fbdeca54a8f8590b3545c34e6ffa3be6c65/src/colony/modules/http.js#L265 should probably be doing a self._outbox.unpipe(self._socket). I'm also wondering if our initial pipe shouldn't include an {end:false} option too; that's not really this issue, but it may be preventing the server from keeping connections alive [although in that case, see the previous guess: I would expect that to end up simply getting rid of the socket instead of building up listeners on it].
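
A hedged sketch of both guesses, expressed against generic streams since the exact placement in http.js isn't confirmed; outbox stands in for self._outbox and socket for self._socket.

function startResponse(outbox, socket) {
  // Guess 2: pipe with {end: false} so finishing one response does not also
  // end the socket, which keeps the connection usable for keep-alive.
  outbox.pipe(socket, { end: false });
}

function finishResponse(outbox, socket) {
  // Guess 1: explicitly unpipe so the pipe's 'drain'/'unpipe'/'error' listeners
  // on the socket are removed instead of accumulating on every request.
  outbox.unpipe(socket);
}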

natevw commented 10 years ago

[Guessing guessing guessing, what really needs to be done is make a test case, then replace our special snowflake HTTP implementation with a symlink to node.js source, and see if the problem goes away. If not, it's a bug in 'net.js' and we should replace that with node.js source too. If the problem persists, the trouble is in our CC3K bindings ;-]

jiahuang commented 10 years ago

Here's a test case, working off of this branch https://github.com/tessel/runtime/pull/660

var tessel = require('tessel');
var wifi = require('wifi-cc3000');
var http = require('http');

wifi.on('connect', function (err, res) {
  console.log('connected now!');
  http.createServer(function (req, res) {
    console.log("page hit!");
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
    req.resume();
  }).listen(80);
});

curl -v http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80 http://192.168.128.226:80

The events that I'm seeing leak belong to a readable stream:

Is console.trace implemented? That should be logging more details about the leak but I don't see it doing anything.
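
If console.trace turns out not to be implemented, a minimal shim along these lines (standard node behavior is roughly "Trace:" plus a stack on stderr) would at least show where the extra listeners are being added; whether colony exposes a usable .stack on Error here is an assumption.

if (typeof console.trace !== 'function') {
  console.trace = function () {
    // Build an Error just to capture a stack, then log it like node does.
    var err = new Error(Array.prototype.slice.call(arguments).join(' '));
    err.name = 'Trace';
    console.error(err.stack || err.toString());
  };
}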

natevw commented 10 years ago

Was this fixed by https://github.com/tessel/runtime/pull/660 or is there still more I should look at here?

jiahuang commented 10 years ago

@natevw It'd be great if you could take a look at it. #660 only fixed one problem. I don't know where the other readable stream events are getting attached too many times.

natevw commented 10 years ago

POSIX-friendly test setup (or wait for blinkenlights on H/W before pushing):

require('http').createServer(function (req, res) {
  console.log("page hit!");
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
  req.resume();
}).listen(8080);

To reproduce, start the code above (e.g. colony t_leaks.js) and then:

curl localhost:8080/[1-10]

Interestingly, this does not happen if you curl localhost:8080/[1-5] repeatedly instead, which seemingly confirms this is related to connection reuse (keep-alive).

Also: one warning is logged at the 8th request IIRC and the rest a few requests later:

page hit!
page hit!
page hit!
page hit!
page hit!
page hit!
page hit!
page hit!
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
page hit!
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
page hit!
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.

natevw commented 10 years ago

This was fixed by #673.