Navigatron opened this issue 3 years ago
@nodejs/streams @joyeecheung
Unfortunately it's close to impossible to help you with this without a script to reproduce this reliably.
I am also facing this issue on version 12.x (we are using AWS Lambda). We are calling multiple AJAX requests in a loop and sometimes this error is thrown. I have also added a comment on the similar issue below, which is for 15.x.
@hnviradiya Are you able to share your Lambda code?
I'm also experiencing this error while downloading large files using https. I tried using node-fetch instead but I'm seeing the same error. It happens randomly, but I've only seen it happen when downloading files from a slow and somewhat unstable server connection. v14.16.0
@terreng can you share your script?
@mcollina
const https = require('https');
const fs = require('fs');
const path = require('path');

var file = fs.createWriteStream(path.resolve(__dirname, "test"));

https.get(URL, function(response) {
  response.pipe(file);
  file.on("finish", function() {
    file.close();
    console.log("Download complete");
  });
}).on("error", function(err) {
  console.error(err);
});
Error: Parse Error: Expected HTTP/
at HTTPParser.execute (<anonymous>)
at TLSSocket.socketOnData (_http_client.js:509:22)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Readable.read (internal/streams/readable.js:519:10)
at TLSSocket.Socket.read (net.js:625:39)
at flow (internal/streams/readable.js:992:34)
at resume_ (internal/streams/readable.js:973:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
bytesParsed: 5381,
code: 'HPE_INVALID_CONSTANT',
reason: 'Expected HTTP/',
rawPacket: <Buffer 76 5e f0 68 d0 0c 11 d7 50 8f 67 2e 18 c8 b3 a6 ac c8 75 70 2b e3 65 26 5d e9 b5 1f 6e 62 2b 88 87 15 0e 2f 14 4b 7d 90 33 6a 5a 18 06 bf 74 86 cf 6c ... 16334 more bytes>
}
internal/assert.js:14
throw new ERR_INTERNAL_ASSERTION(message);
^
Error [ERR_INTERNAL_ASSERTION]: This is caused by either a bug in Node.js or incorrect usage of Node.js internals.
Please open an issue with this stack trace at https://github.com/nodejs/node/issues
at assert (internal/assert.js:14:11)
at TLSSocket.socketOnData (_http_client.js:507:3)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Readable.read (internal/streams/readable.js:519:10)
at TLSSocket.Socket.read (net.js:625:39)
at flow (internal/streams/readable.js:992:34)
at resume_ (internal/streams/readable.js:973:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
code: 'ERR_INTERNAL_ASSERTION'
}
v14.16.0 linux arm
Which URL are you downloading from?
@mcollina I've sent you an email with an example URL.
@terreng that's a different issue from the one at the top. In your case the HTTP server is responding with a response that is not conforming to the HTTP spec. I still think there is a bug there somewhere because the second error should not be there. I've opened a new issue: https://github.com/nodejs/node/issues/40242.
@mcollina Got it. I definitely got just an ERR_INTERNAL_ASSERTION with a different stack trace at one point, but I haven't been able to recreate it so far.
I ran into an assertion with the exact same stack trace the other day:
node:internal/assert:14
throw new ERR_INTERNAL_ASSERTION(message);
^
Error [ERR_INTERNAL_ASSERTION]: This is caused by either a bug in Node.js or incorrect usage of Node.js internals.
Please open an issue with this stack trace at https://github.com/nodejs/node/issues
at new NodeError (node:internal/errors:371:5)
at assert (node:internal/assert:14:11)
at JSStreamSocket.doWrite (node:internal/js_stream_socket:164:5)
at JSStream.onwrite (node:internal/js_stream_socket:33:57)
at MqttStream.ondata (node:internal/js_stream_socket:77:22)
at MqttStream.emit (node:events:526:28)
at MqttStream.emit (node:domain:475:12)
at addChunk (node:internal/streams/readable:315:12)
at readableAddChunk (node:internal/streams/readable:289:9)
at MqttStream.Readable.push (node:internal/streams/readable:228:10) {
code: 'ERR_INTERNAL_ASSERTION'
}
In my case I have a custom Duplex stream wrapped with a TLSSocket, but the symptoms are the same: under moderate to high load, every once in a while this assertion is hit and my app crashes.
I did some debugging and it seems to me that this is caused by a missing "current empty write" check in the native TLSWrap implementation. Debug output for the TLSSocket in question:
TLSWrap client (6653) DoWrite()
TLSWrap client (6653) Empty write
TLSWrap client (6653) Trying to read cleartext output
TLSWrap client (6653) Read -1 bytes of cleartext output
TLSWrap client (6653) No pending encrypted output, writing to underlying stream
TLSWrap client (6653) Read 33 bytes from underlying stream
TLSWrap client (6653) Trying to write cleartext input
TLSWrap client (6653) Returning from ClearIn(), no pending data
TLSWrap client (6653) Trying to read cleartext output
TLSWrap client (6653) SSLInfoCallback(SSL_CB_HANDSHAKE_START);
TLSWrap client (6653) Read -1 bytes of cleartext output
TLSWrap client (6653) Trying to write encrypted output
TLSWrap client (6653) Writing 1 buffers to the underlying stream
node:internal/assert:14
throw new ERR_INTERNAL_ASSERTION(message);
^
Error [ERR_INTERNAL_ASSERTION]: This is caused by either a bug in Node.js or incorrect usage of Node.js internals.
Please open an issue with this stack trace at https://github.com/nodejs/node/issues
The failed assertion is produced as follows:

1. An empty write arrives at TLSWrap: https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L944-L961. It is tracked in the separate current_empty_write_ variable, not the regular current_write_: https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L950. The write is forwarded down through JSStream, JSStreamSocket and then finally to the custom Duplex stream.
2. The _write logic of my custom Duplex stream takes its sweet time writing the data and calls the callback with a considerable delay.
3. Meanwhile, data arrives on the custom Duplex stream. This is forwarded into TLSWrap where it triggers a call to OnStreamRead: https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L1050.
4. OnStreamRead processes the read and then calls Cycle at the end: https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L1093.
5. Cycle in turn calls EncOut, which fails to check if current_empty_write_ is set – it only checks current_write_: https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L574.
6. EncOut writes to the underlying stream even though the previous empty write is still pending: https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L623.
7. JSStreamSocket blows up on this assertion: https://github.com/nodejs/node/blob/v16.14.2/lib/internal/js_stream_socket.js#L164.

To reproduce the bug the following conditions need to be fulfilled:

- The _write logic in the underlying stream needs a large enough delay before calling the callback.
- During that delay, encrypted data needs to land in the enc_out_ buffer so it can be erroneously written out to the underlying stream during the next EncOut pass.

I tried but haven't been able to come up with a minimal implementation that reliably reproduces the issue.
For now I am using the following workaround in JS-land, in the constructor of my TLSSocket subclass:
// Patch `JSStreamSocket` implementation to work around a bug in the native `TLSWrap` class. `TLSWrap` treats
// empty writes in a special way, tracking pending ones in a separate state variable, `current_empty_write_`:
// - https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L934-L961
// Sadly `TLSWrap` forgets to check this variable upon subsequent write requests, only checking `current_write_`:
// - https://github.com/nodejs/node/blob/v16.14.2/src/crypto/crypto_tls.cc#L574
// As a result, if another write is received before the empty write's callback is called, `TLSWrap` forwards it to
// the underlying `JSStreamSocket` straight away where it then blows up with an `ERR_INTERNAL_ASSERTION`:
// - https://github.com/nodejs/node/blob/v16.14.2/lib/internal/js_stream_socket.js#L164
// We work around this problem by patching `doWrite` to buffer any write requests that arrive while another write
// is still in progress, as well as patching `finishWrite` to execute any buffered writes after the pending empty
// write has finished. This *should* prevent the assertion from being hit.
// TODO: re-test with newer Node versions to see if the bug has been fixed
let kCurrentWriteRequest: symbol;
const jsStreamSocket = (this as any).ssl._parentWrap;
// Save original `doWrite` and `finishWrite` methods
const doWrite = jsStreamSocket.doWrite;
const finishWrite = jsStreamSocket.finishWrite;
// List of pending write requests
const pending: [any, Buffer[]][] = [];
// Patch `doWrite` to buffer write requests if another one is already pending
jsStreamSocket.doWrite = (req: any, bufs: Buffer[]) => {
// Fish the private `kCurrentWriteRequest` symbol out of the `JSStreamSocket` and cache it
kCurrentWriteRequest =
kCurrentWriteRequest ||
Object.getOwnPropertySymbols(jsStreamSocket).find(s => s.description === "kCurrentWriteRequest");
// Do we have a pending write request already?
if (kCurrentWriteRequest && jsStreamSocket[kCurrentWriteRequest]) {
// If so, do not do the write now, instead buffer it for later
pending.push([req, bufs]);
return 0;
}
// If not, we can go ahead with the write as usual
return doWrite.call(jsStreamSocket, req, bufs);
};
// Patch `finishWrite` to dispatch any buffered write requests now that the current one has finished
jsStreamSocket.finishWrite = (handle: any, errCode: number) => {
if (handle) {
// Finish the current write
finishWrite.call(jsStreamSocket, handle, errCode);
}
// Take the next request from the buffer, if any
const queued = pending.shift();
if (queued) {
// Call `doWrite` now that the previous write has finished
jsStreamSocket.doWrite(...queued);
}
};
With this in place I haven't been able to hit the assertion even on a ridiculously overloaded server.
I don't feel comfortable enough around C++ code to attempt to fix this myself (nor do I understand the internals of TLSWrap well enough for that), but hopefully this input will help someone more knowledgeable develop a fix.
MongoDB Compass has encountered an unexpected error:

Error [ERR_INTERNAL_ASSERTION]: This is caused by either a bug in Node.js or incorrect usage of Node.js internals.
Please open an issue with this stack trace at https://github.com/nodejs/node/issues
    at new NodeError (node:internal/errors:406:5)
    at assert (node:internal/assert:14:11)
    at internalConnectMultiple (node:net:1118:3)
    at Timeout.internalConnectMultipleTimeout (node:net:1687:3)
    at listOnTimeout (node:internal/timers:575:11)
    at process.processTimers (node:internal/timers:514:7)
My Node version is v20.13.1.
What steps will reproduce the bug?
This occurred while sitting on 2000 concurrent https requests (using a custom agent) across 4 worker threads. ~100 of the requests hit a 2-minute timeout shortly before this happened. In short, this may be difficult to reproduce.
How often does it reproduce? Is there a required condition?
So far, it's only happened once.
Edit: I can reproduce this. 2000 concurrent requests does it, 400 does not.
What is the expected behavior?
No error messages.
What do you see instead?