Almost guaranteed ECONNRESET on piped sockets if connecting to Node's HTTPS server which answers with "connection: 'close'" after setImmediate or setTimeout, on OSX #23169
Open · decadent opened 6 years ago
Here's a log of events on the sockets involved. Legend:
→pr : incoming socket on the proxy
pr→ : outgoing socket from the proxy to the server
→ser : incoming socket on the server from the proxy
[ pr→] lookup
[ pr→] connect
[→pr] pipe
[ pr→] pipe
[ pr→] ready
[ pr→] resume
[→pr] resume
[→pr] data
[ pr→] data
[→pr] data
[ pr→] data
[→pr] data
[ pr→] data
[ pr→] data
[→pr] data
[→ser] close
[ pr→] error
[ pr→] unpipe
proxy→server connection socket error: { Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:111:27) errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read' }
[→pr] error
[→pr] unpipe
proxy←client socket error: { Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:111:27) errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read' }
[→pr] close
[ pr→] close
Extracted with the following code:
const http = require('http');
const https = require('https');
const net = require('net');
const pem = require('pem');

// Wrap emitter.emit so every event is logged with a socket label.
const bedeckAnEventEmitterWithDebuggingOutpour = (emitter, name) => {
  const oldEmit = emitter.emit;
  emitter.emit = (...args) => {
    console.log('[%s] %s', name, args[0]);
    oldEmit.apply(emitter, args);
  };
};

const createHttpsServer = (callback) => {
  pem.createCertificate({
    days: 365,
    selfSigned: true
  }, (error, keys) => {
    if (error) {
      return callback(error);
    }
    const {serviceKey, certificate, csr} = keys;
    const server = https.createServer({
      ca: csr,
      cert: certificate,
      key: serviceKey
    }, (req, res) => {
      setImmediate(() => {
        res.writeHead(200, {
          connection: 'close'
        });
        res.end('OK');
      });
    });
    server.on('connection', (socket) => {
      bedeckAnEventEmitterWithDebuggingOutpour(socket, '→ser');
      socket.on('error', (error) => {
        console.log('server←proxy socket error:', error);
      });
    });
    // net.Server#listen passes no error to its callback; it fires on 'listening'.
    server.listen(() => {
      callback(null, server.address().port);
    });
  });
};

const createProxy = (httpsServerPort) => {
  const proxy = http.createServer();
  proxy.on('connect', (request, requestSocket, head) => {
    console.log('---------------------------------------------------------------------');
    // The host belongs in the options object: net.connect(options[, listener]).
    const serverSocket = net.connect({
      host: 'localhost',
      port: httpsServerPort
    }, () => {
      requestSocket.write(
        'HTTP/1.1 200 Connection established\r\n\r\n'
      );
      serverSocket.write(head);
      serverSocket.pipe(requestSocket);
      requestSocket.pipe(serverSocket);
    });
    bedeckAnEventEmitterWithDebuggingOutpour(serverSocket, ' pr→');
    serverSocket.on('error', (error) => {
      console.log('proxy→server connection socket error:', error);
    });
    bedeckAnEventEmitterWithDebuggingOutpour(requestSocket, '→pr');
    requestSocket.on('error', (error) => {
      console.log('proxy←client socket error:', error);
      throw error; // rethrown deliberately so the repro fails loudly
    });
  });
  proxy.listen(9000);
};

createHttpsServer((error, httpsServerPort) => {
  if (error) {
    console.error(error);
  } else {
    createProxy(httpsServerPort);
  }
});
The question that most vexes me is whether this issue is related to ECONNRESET in the absence of setImmediate/setTimeout, which manifests much more rarely but inescapably enough. The sequence of events is different in that case:
[ pr→] lookup
[ pr→] connect
[→pr] pipe
[ pr→] pipe
[ pr→] ready
[ pr→] resume
[→pr] resume
[→pr] data
[ pr→] data
[→pr] data
[→pr] data
[ pr→] data
[→pr] timeout
[→pr] close
[→pr] data
[→ser] prefinish
[→ser] finish
[ pr→] data
[→ser] close
[ pr→] data
[→pr] data
[ pr→] error
[ pr→] unpipe
proxy→server connection socket error: { Error: read ECONNRESET
at TCP.onread (net.js:660:25) errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read' }
[→pr] error
[→pr] unpipe
proxy←client socket error: { Error: read ECONNRESET
at TCP.onread (net.js:660:25) errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read' }
Most notably, the irritating timeout, which makes an appearance here, doesn't seem to have any basis in reality. I'm not certain whether this case warrants posting a separate issue, though.
We get this error at TCP.onStreamRead (internal/stream_base_commons.js:111:27), errno=ECONNRESET, code=ECONNRESET, syscall=read, with Node 10.11.0, totally at random when making requests with the request library, version 2.88.0.
Confirmed working on Node 8.7.0.
@crazywako that's probably different, sounds like normal network errors.
I'm also getting the unhandled exception
{ Error: read ECONNRESET at TCP.onStreamRead (internal/stream_base_commons.js:111:27) errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read' }
Error occurs on Node v10.15.1, OS Ubuntu 18.04.1, 4.15.0-44-generic. Code was fine on Node 8.x.
It seems to occur around sending an HTTP socket to a forked child process via proc.send() in response to an upgrade request. The child process ends up receiving a null socket at the time the exception occurs on the main sending process, though not every time.
Happens very infrequently, perhaps once per million requests in the wild.
Same problem here. Started happening after upgrading to 10.15.1 coming from node 8 as well. Ubuntu 16 in my case, same code.
Same here. It seems that the behavior is different in Node 8 and Node 10 (or there is a bug): the socket emits an 'error' event, and now the process crashes if there is no event listener for it:
events.js:167
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:111:27)
Emitted 'error' event at:
at emitErrorNT (internal/streams/destroy.js:82:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
I've been struggling with this for quite some time now. The application uses knex, pooling, and mysql. My guess is that (at least in my case) it has something to do with the MySQL server: sometimes the server kills connections that have been idle for some time, and the application crashes with this error:
Error: read ECONNRESET
We have this (or a similar) problem with other MySQL clients (MySQL Workbench, Heidi, ...), and we managed to solve it for each client separately:
- https://www.heidisql.com/forum.php?t=15850
- https://stackoverflow.com/questions/31811517/mysql-workbench-drops-connection-when-idle/34498662
With Node, however, this keeps happening, and only when the code runs on localhost. It does not happen when the code runs on DigitalOcean (https). The MySQL server is on a third machine and is the same in both cases.
So I believe the problem is that MySQL needs a ping fairly often in order to keep connections alive. If someone is able to solve this, it would be great. Full stack trace:
at Protocol._enqueue (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\mysql\lib\protocol\Protocol.js:144:48)
at Connection.query (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\mysql\lib\Connection.js:200:25)
at C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\knex\lib\dialects\mysql\index.js:144:18
at Promise._execute (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\debuggability.js:313:9)
at Promise._resolveFromExecutor (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\promise.js:483:18)
at new Promise (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\promise.js:79:10)
at Client_MySQL._query (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\knex\lib\dialects\mysql\index.js:135:12)
at Client_MySQL.query (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\knex\lib\client.js:192:17)
at Runner.<anonymous> (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\knex\lib\runner.js:138:36)
at Runner.tryCatcher (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\util.js:16:23)
at Runner.query (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\method.js:15:34)
at C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\knex\lib\runner.js:47:21
at tryCatcher (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\util.js:16:23)
at C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\using.js:185:26
at tryCatcher (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\util.js:16:23)
at Promise._settlePromiseFromHandler (C:\Users\dms3\Documents\testnode\bebackend\bebackend_201902\node_modules\bluebird\js\release\promise.js:512:31)
I have a similar problem. When using websockets, the following error appeared:
Error: read ECONNRESET at TCP.onStreamRead (internal/stream_base_commons.js:111:27)
I looked into the source code. The exception is raised here: https://github.com/nodejs/node/blob/v10.x/lib/internal/stream_base_commons.js
if (nread !== UV_EOF) {
return stream.destroy(errnoException(nread, 'read'));
}
The only solution that has worked for me so far is downgrading back to v8.x.
Compiling the libuv/samples/socks5-proxy sample and proxying https through it intermittently fails with ECONNRESET, or successfully gets the UV_EOF. There seems to be a race in whether the last read results in a UV_EOF or an ECONNRESET. It only seems to happen when streaming https, though. This was also on a Mac. I reproduced it with curl and Chrome proxying through the libuv sample.
I wonder if any commenters here realize that this issue is about an incredibly specific case, most importantly an HTTPS server living in the same script as an HTTP(S) proxy. The issue exists in the first place only because of a test. And it certainly doesn't have anything to do with MySQL.
Apparently this issue has become a magnet for everyone with an ECONNRESET. Well, if you all hope that the Node.js devs will fix your problems based on the case in this issue, good luck to you.
There appears to be some underlying bug in the Node.js socket handling causing it to throw an unhandled exception. The main theme I'm seeing in all these reports is that it started with Node v10.
At a glance, seems related to PR #22449 /cc @addaleax
@imcotton This also seems to occur with v10.0.0, which doesn’t include that PR. But thanks for pinging me, this might be something for me to look into.
Bisecting confirms that this is caused by https://github.com/nodejs/node/pull/18868, which seems to make sense (and may even be intentional). /cc @lpinca
Wow, this looks promising! Going to add an error handler to the upgrade socket and see if it resolves my issue (#20477). This change coming from v8.x was definitely unexpected. Thanks for looking into this @addaleax
Yes, the 'error' event on the socket should be explicitly handled by the user after the 'connect' or 'upgrade' event is emitted. #18868 was tagged semver-major for this reason.
Add this:
requestSocket.on('error', (e) => console.log(e));
The reason is that requestSocket may emit this error:
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:183:27) {
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read'
}
Add this
requestSocket.on('error',e=>console.log(e))
Add this where exactly? We're also getting this seemingly spurious error and would like to squelch it in our logs.
I have replicated what I believe is the same issue as described here.
Probably not the same issues as described here. However, here is the solution to my issue.
Hey, any fix for this problem?
@mack1290 I could handle the error with the following code: https://github.com/nodejs/node/issues/23169#issuecomment-513374363
serverSocket.on('error', (error) => {
  log.error({
    error: serializeError(error)
  }, 'server socket error');
  // HTTP requires CRLF line endings
  clientSocket.write([
    'HTTP/1.1 503 Service Unavailable',
    'connection: close'
  ].join('\r\n') + '\r\n\r\n');
  clientSocket.end();
});
However, this is more a way to handle the error than a real solution or an explanation of what the problem is; in fact there is a lot of follow-up in the next thread.
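One detail worth noting about the hand-rolled 503 above: HTTP/1.1 requires CRLF ('\r\n') line endings between the status line, the headers, and the blank line that ends the head. A tiny helper (hypothetical, for illustration) makes the framing explicit and easy to test:

```javascript
// Build a raw HTTP/1.1 response head with correct CRLF framing.
function rawResponse(statusLine, headers) {
  return [statusLine, ...headers].join('\r\n') + '\r\n\r\n';
}

const head = rawResponse('HTTP/1.1 503 Service Unavailable', ['connection: close']);
console.log(JSON.stringify(head));
// "HTTP/1.1 503 Service Unavailable\r\nconnection: close\r\n\r\n"
```

Most clients tolerate bare '\n', but a strict parser on the other end of the tunnel may not.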
Have been experiencing the same issue. @grega913, have you managed to find the root cause yet?
I believe this has got something to do with a timeout setting on the MySQL side, but honestly I'm not sure anymore.
Some have suggested the workaround of making a dummy query (SELECT 1 from ….) every 10 seconds.
Sorry for not being able to provide a better solution.
Regards
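The dummy-query suggestion can be sketched as a small keepalive helper. This assumes a node-mysql-style pool exposing query(sql, callback); the 10-second interval and the SELECT 1 query are just this thread's suggestion, and startKeepAlive is a hypothetical name, not an API of any library.

```javascript
// Periodically run a no-op query so idle connections aren't reaped by the server.
function startKeepAlive(pool, intervalMs = 10000) {
  const timer = setInterval(() => {
    pool.query('SELECT 1', (err) => {
      if (err) console.error('keepalive ping failed:', err.code);
    });
  }, intervalMs);
  timer.unref(); // don't keep the process alive just for pings
  return () => clearInterval(timer); // call to stop pinging
}

// Stubbed usage: verify the ping fires, using a fake pool instead of mysql.
let pings = 0;
const stop = startKeepAlive({ query: (sql, cb) => { pings += 1; cb(null); } }, 10);
setTimeout(() => {
  stop();
  console.log('pings observed:', pings > 0);
}, 100);
```

Note this treats the symptom; raising the server's idle timeout (wait_timeout in MySQL) is the other common approach.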
The reason you get so many ECONNRESETs on OS X is that OS X lingers sockets very aggressively. In fact, curl always closes its socket with an RST instead of a normal FIN/ACK exchange. One thing to try is to curl from another host (Linux, for example) to your OS X host: you will get an EPIPE on the external-facing socket (which will receive a FIN/ACK from Linux), but you will still get an ECONNRESET on your internal socket, between your proxy and your server.
Otherwise, the issue is not limited to OS X. I was able to reproduce it on Linux, but with much lower probability (about 5% on Linux vs maybe 95% on OS X). Try running your curl command in a while /bin/true loop; you will get an ECONNRESET in no more than 20 attempts. In this case the problem is caused by a race condition between making the read syscall from userland and the kernel closing the connection. The faster the connection, the more probable the problem; your best chance of reproducing it is on the loopback interface.
Globally, I don't think there is any problem in Node.js. This is completely normal behavior for a TCP socket. If you implement a proxy in C using low-level sockets, you are going to get the exact same behavior. In this particular case the ECONNRESET event is sent from libuv up the streams layer. I wonder whether it is worth trying to filter this out, and whether that belongs in Node.js streams or in libuv. One thing I noticed while looking at the Node.js streams is that someone implemented a similar filter for Windows. So maybe one could say that Node.js, as a supposedly higher-level platform, should hide those details from the developer. Food for thought.
@ronag, curl on OS X sends a RST because of a very aggressive lingering on the socket, your take?
Should there be an 'error' event, or should the reset be treated as a normal close?
Any fix for this? I am facing the same when running a cron job on the server.
Facing this issue while connecting my Node.js application to a remote database, while Workbench successfully connects to the same remote database.
Error:
read ECONNRESET at TCP.onStreamRead (internal/stream_base_commons.js:205:27) { errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read', fatal: true }
Facing the same issue randomly, can anyone help with it?
[exec] Stack trace:
[exec] Error: read ECONNRESET
[exec] at TCP.onStreamRead (internal/stream_base_commons.js:205:27)
[exec]
[exec] Console trace:
[exec] Error
[exec] at StandardRenderer.error (/root/.nvm/v12.16.1/lib/node_modules/bower/lib/renderers/StandardRenderer.js:88:37)
[exec] at Logger.<anonymous> (/root/.nvm/v12.16.1/lib/node_modules/bower/lib/bin/bower.js:113:30)
[exec] at Logger.emit (events.js:311:20)
[exec] at Logger.emit (/root/.nvm/v12.16.1/lib/node_modules/bower/lib/node_modules/bower-logger/lib/Logger.js:29:39)
[exec] at /root/.nvm/v12.16.1/lib/node_modules/bower/lib/commands/index.js:49:24
[exec] at _rejected (/root/.nvm/v12.16.1/lib/node_modules/bower/lib/node_modules/q/q.js:864:24)
[exec] at /root/.nvm/v12.16.1/lib/node_modules/bower/lib/node_modules/q/q.js:890:30
[exec] at Promise.when (/root/.nvm/v12.16.1/lib/node_modules/bower/lib/node_modules/q/q.js:1142:31)
[exec] at Promise.promise.promiseDispatch (/root/.nvm/v12.16.1/lib/node_modules/bower/lib/node_modules/q/q.js:808:41)
[exec] at /root/.nvm/v12.16.1/lib/node_modules/bower/lib/node_modules/q/q.js:624:44
[exec] System info:
[exec] Bower version: 1.8.8
[exec] Node version: 12.16.1
[exec] OS: Linux 3.10.0-957.12.2.el7.x86_64 x64
[exec] Result: 1
[echo] ExitValue from the task is : 1
Suffering from the same issue, can anyone please help me out?
For me this bug happens when my http-proxy speaks HTTP/1 but the server and client agree on HTTP/2. The bug is almost guaranteed with HTTP/2 servers like Google and default curl options.
It works fine when forcing HTTP/1.1 from the client: curl --http1.1 ...
Presumably the server wants to push some data when the proxy or client has already closed the connection.
uhhh so annyyy fiiixxx?
any update ?
In the following code, pem is a module from NPM, and the rest is an HTTP proxy which serves CONNECT requests and pipes them to an HTTPS server defined in the same script. If you run

curl --proxy http://localhost:9000 https://qweasd/ -k

you'll see curl receive the reply, and meanwhile the script will almost certainly fail with the ECONNRESET described above. The issue depends on several parameters:
- Requests via curl --proxytunnel work alright.
- The connection: 'close' header.
- setImmediate or setTimeout around the reply (with, I guess, any timeout value); process.nextTick doesn't do it. And if the handler instead serves the reply immediately, the server continues to answer perfectly alright until you hit a seemingly different ECONNRESET.

If you adorn the sockets with 'error' event handlers, you'll see that the errors actually don't happen on each request, and when they do, it's primarily on the outgoing socket from the proxy to the HTTPS server; and often, but not necessarily, a second ECONNRESET occurs on the incoming socket to the proxy.