moll / node-mitm

Intercept and mock outgoing Node.js network TCP connections and HTTP requests for testing. Intercepts and gives you a Net.Socket, Http.IncomingMessage and Http.ServerResponse to test and respond with. Super useful when testing code that hits remote servers.
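
For context, a minimal sketch of that workflow (mirroring the handler and disable calls used in the comments below; the response body here is illustrative):

var Mitm = require("mitm")

var mitm = Mitm() // from here on, outgoing connections are intercepted

mitm.on("request", function(req, res) {
  // req is an Http.IncomingMessage, res an Http.ServerResponse.
  res.end("Hello back!")
})

// ...exercise code that makes an outgoing HTTP request...

mitm.disable() // stop intercepting, e.g. in a test's afterEach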

[ERR_INTERNAL_ASSERTION] with http keepAlive #59

Open · dylantack opened 5 years ago

dylantack commented 5 years ago

Repro: https://github.com/dylantack/mitm_err_internal_assertion

When running some Jest tests, I got this:

    Error [ERR_INTERNAL_ASSERTION]: This is caused by either a bug in Node.js or incorrect usage of Node.js internals.
    Please open an issue with this stack trace at https://github.com/nodejs/node/issues

      at InternalSocket.readData (node_modules/mitm/lib/internal_socket.js:156:8)

This is the entire stack trace. The line generating the error is here: https://github.com/moll/node-mitm/blob/v1.7.0/lib/internal_socket.js#L156

There are also some warnings:

(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap [as on] (_http_server.js:763:54)
    at first (project/node_modules/ee-first/index.js:43:10)
    at onSocket (project/node_modules/on-finished/index.js:114:16)
    at attachFinishedListener (project/node_modules/on-finished/index.js:119:5)
    at attachListener (project/node_modules/on-finished/index.js:146:5)
    at onFinished (project/node_modules/on-finished/index.js:52:3)
    at Application.handleRequest (project/node_modules/koa/lib/application.js:150:5)
(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 end listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap (_http_server.js:763:54)
    at connectionListenerInternal (_http_server.js:401:10)
    at defaultTriggerAsyncIdScope (internal/async_hooks.js:301:12)
    at Mitm.connectionListener (_http_server.js:339:3)
    at Mitm.request (project/node_modules/mitm/index.js:159:28)
    at ClientRequest.onSocket (project/node_modules/underscore/underscore.js:748:26)
    at setRequestSocket (_http_agent.js:363:7)
(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 drain listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap (_http_server.js:763:54)
    at connectionListenerInternal (_http_server.js:403:10)
    at defaultTriggerAsyncIdScope (internal/async_hooks.js:301:12)
    at Mitm.connectionListener (_http_server.js:339:3)
    at Mitm.request (project/node_modules/mitm/index.js:159:28)
    at ClientRequest.onSocket (project/node_modules/underscore/underscore.js:748:26)
    at setRequestSocket (_http_agent.js:363:7)
(node:15577) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [TlsSocket]. Use emitter.setMaxListeners() to increase limit
    at _addListener (events.js:258:17)
    at TlsSocket.addListener (events.js:274:10)
    at TlsSocket.Readable.on (_stream_readable.js:846:35)
    at TlsSocket.socketListenerWrap [as on] (_http_server.js:763:54)
    at first (project/node_modules/ee-first/index.js:43:10)
    at onSocket (project/node_modules/on-finished/index.js:114:16)
    at attachFinishedListener (project/node_modules/on-finished/index.js:119:5)
    at attachListener (project/node_modules/on-finished/index.js:146:5)
    at onFinished (project/node_modules/on-finished/index.js:52:3)
    at Application.handleRequest (project/node_modules/koa/lib/application.js:150:5)

thanks!

moll commented 5 years ago

I don't have Node.js v12.4 at hand yet to test, but on Travis everything passes except a UCS-2 encoding test (on v12.0.0 everything passes). Would you mind pasting at least some snippets of how Mitm.js is used in your tests? I'll skim whether something changed since Node v12.0 that could break Mitm.js.

dylantack commented 5 years ago

Thanks for the reply! I made a reduced repo; unfortunately it doesn't trigger the error, but it at least demonstrates the context: https://github.com/dylantack/mitm_err_internal_assertion

I'll update this issue if I'm able to reproduce it independent of my project. It seems to be something specific to the Stripe SDK; if I use the request module to manually POST to api.stripe.com, the issue doesn't occur.

moll commented 5 years ago

I briefly skimmed https://github.com/stripe/stripe-node/blob/e542902dd8fbe591fe3c3ce07a7e89d1d60e4cf7/lib/StripeResource.js#L320 and nothing particularly jumped out, so I'll await your update when you manage to isolate it a little.

dylantack commented 4 years ago

Update: I was able to reproduce it! It just needed more iterations. Curiously, it fails on the 63rd test (could be a coincidence, but this is 2^6-1, which makes me wonder if there's a buffer somewhere that gets full).

The issue occurs on Node v10.16.0, and also v12.6.0.

dylantack commented 4 years ago

Another interesting observation: the number of test runs needed to induce a failure changes with the size of the HTTP body. If you make the mock body larger, Mitm fails sooner, and vice versa.

One more clue I found: by running git-bisect on the Stripe packages, I was able to locate the start of the problem: https://github.com/stripe/stripe-node/commit/e67dd8ce4e

It appears to be related to keepAlive. If I hack the Stripe code to use new http.Agent({keepAlive: false}), the problem goes away.

dylantack commented 4 years ago

Aha! I finally got a "clean" reproduction with no other dependencies. It's definitely keep-alive: https://github.com/dylantack/mitm_err_internal_assertion

moll commented 4 years ago

Thanks for the research! I bet it has to do with the fact that Mitm.js more or less forces a fresh connection every time (https://github.com/moll/node-mitm/blob/58e0673df96b1d16f0de4fc5adf234cfaab2bb44/index.js#L59-L63) while Keep-Alive keeps old ones open. Though I would have thought I made Mitm.js close connections once the HTTP response is sent. ^_^ I'll have to do some code archeology. :P

moll commented 4 years ago

I can confirm it goes haywire when you pass in an agent with keepAlive set to true. In fact, listeners pile up: every intercepted connection adds one more listener to the reused socket. That doesn't happen with the globalAgent for some reason, which is probably why it hasn't come up so far. Here's an ES5-only test case, which I'll soon add to Mitm.js's tests, that surfaces the problem even in Node v6:

var Http = require("http")
var _ = require("underscore") // any _.times-providing utility (Underscore/Lodash) works here
require("must") // adds the .must assertion getter used below
function noop() {}

// 1 KiB response body: 128 repetitions of "deadbeef".
var data = _.times(128, function() { return "deadbeef" }).join("")
data.length.must.equal(1024)

// this.mitm and done come from the surrounding Mocha test:
// the Mitm instance is created in a beforeEach and done ends the test.
var count = 0
this.mitm.on("request", function(_req, res) {
  ;++count
  res.end(data)
})

// Issue 128 sequential requests through a single keep-alive agent.
var agent = new Http.Agent({keepAlive: true})
var res = _.times(128, noop).reduce(function(res) {
  return res.then(function() {
    return new Promise(function(resolve) {
      var client = Http.request({host: "foo", agent: agent})
      client.end()

      client.on("response", function(res) {
        // Drain the body so the agent can return the socket for reuse.
        res.on("data", noop)
        res.on("end", resolve)
      })
    })
  })
}, Promise.resolve())

res.then(function() {
  count.must.equal(128)
  done()
}, done)

I'll see what I can do. Hopefully I can make Mitm.js work with re-used sockets, but for a quick fix until then I suggest disabling keep-alive in tests. :) Thanks again for your debugging!
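
Until then, a minimal sketch of that workaround (the host and request shape are illustrative): hand the code under test an agent with keep-alive turned off, so every request opens a fresh, independently interceptable socket.

var Http = require("http")

// A non-keep-alive agent: each request opens a fresh socket instead of
// reusing one, so listeners don't accumulate on a shared socket.
var agent = new Http.Agent({keepAlive: false})

var client = Http.request({host: "example.com", agent: agent})
client.end()
client.on("response", function(res) { res.resume() })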

Oloompa commented 4 years ago

I have a similar issue with the same cause.

I use mitm with AVA, which lets me attach context to each test.

const test = require("ava");
const Mitm = require("mitm");

test.beforeEach(t => {
    const mitm = Mitm();

    mitm.on("request", function(request, response) {
        switch(t.context.something) {
            case "behavior A":
                response.end("something");
                break;
            case "behavior B":
                response.end("something else");
                break;
        }
    });

    t.context.mitm = mitm;
});

test.afterEach.always("cleanup", t => {
    t.context.mitm.disable();
});

test.serial("test A", async t => {
    t.context.something = "behavior A";
    // calling http request ...
});

test.serial("test B", async t => {
    t.context.something = "behavior B";
    // calling http request ...
});

When I use the keep-alive feature, disabling Mitm has no effect (even with serial test execution). The first test passes, but the second fails because both request handlers are called (both instances, even the disabled one), and the handler with the A context is called first.

This causes the second test to get the result of the first request.
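
A workaround in line with the advice above, assuming you can control the agent the tests' requests use, is to opt out of keep-alive so the second test's request can't travel over a socket still wired to the first (disabled) Mitm instance:

const http = require("http");

test.serial("test B", async t => {
    t.context.something = "behavior B";
    // Fresh non-keep-alive agent: the request opens a new socket instead
    // of reusing the one intercepted by the previous Mitm instance.
    const agent = new http.Agent({ keepAlive: false });
    // calling http request with { agent } ...
});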

papandreou commented 4 years ago

I guess this might be related to this keep-alive problem I ran into a few years ago: https://github.com/moll/node-mitm/pull/36

I'll just close that one and hope you get to the bottom of it here :)

moll commented 4 years ago

Ah, indeed, @papandreou. Thanks for that! I re-opened it to remember to take a look once I jump into this. :)

marcghorayeb commented 4 years ago

Hey - I stumbled upon the same weird issue with keepAlive. I fixed it on my side by explicitly calling req.destroy() after sending back data in the response.
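
As I read it, in Mitm's request handler that would look roughly like this (a sketch under that assumption; the body is illustrative):

mitm.on("request", function(request, response) {
    response.end("mock body");
    // Destroy the intercepted request's socket so a keep-alive agent
    // cannot hand the same stale socket to the next request.
    request.destroy();
});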