nodejs / node

Node.js JavaScript runtime ✨🐢🚀✨
https://nodejs.org

False successful end of request #8102

Closed RoobinGood closed 5 years ago

RoobinGood commented 8 years ago

A long client request ends successfully even if the data transfer is cut off for some reason, for example if the server loses connectivity or suddenly crashes. For example, I have a client Node.js app that makes a request to a server Node.js app to receive a large number of chunks of data. The transfer takes a while and the server app crashes before the request completes. I expect to get some error on the client side informing me that the transfer did not finish, but the end event is emitted instead of error. So I end up receiving only part of the data without knowing it, and think the request ended successfully.

The example server sends parts of a JSON object for ~2 seconds. For test purposes I run the following shell command:

timeout 1s node server.js & node client.js

Because the data transfer lasts 2 seconds, the client app receives only part of the data, yet the end handler fires without any error. The problem may be on the ClientRequest side of Node.js, because the same trick with curl as the client returns a (18) transfer closed error.

Example of client app:

var http = require('http');

var makeRequest = function() {
    var req = http.request({
        host: '127.0.0.1',
        port: 8080,
        method: 'POST',
    }, function(res) {
        var data = '';
        res.on('data', function(chunk) {
            data += chunk;
        });

        res.on('end', function() {
            console.log('data', data);
            console.log('successful end');
        });

        res.on('error', function(err) {
            console.error(err);
        });
    });

    req.on('error', function(err) {
        console.error(err);
    });

    req.write('');
    req.end();
};

setTimeout(makeRequest, 100);

Example of server app:

var http = require('http');

var server = http.createServer();
server.on('request', function (req, res) {

    res.write('{"result":[{"some": "object"}');

    // send data pieces for ~2 seconds (20 * 100 ms = 2000 ms)
    var times = 20;
    var delay = 100;

    var counter = 0;
    var timerId = setInterval(function() {
        res.write(',{"some": "object"}');

        counter++;
        if (counter >= times) {
            res.write(',{"some": "object"}');
            res.write(']}');
            res.end();
            clearInterval(timerId);
        }
    }, delay);
});

var port = 8080;
server.listen(port, function() {
    console.log('listening on port ' + port);
});
RoobinGood commented 8 years ago

Also tested on v4.4.7 with the same result.

MylesBorins commented 8 years ago

/cc @bnoordhuis I seem to recall you had answered a similar question before

bnoordhuis commented 8 years ago

The client sees an empty read when the server is killed. You can verify that with strace.

curl expects a zero-sized HTTP chunk because it sends Connection: keep-alive. The node.js client on the other hand sends Connection: close, in which case it's proper to end the response by simply closing the connection.

Node.js does not raise an error when keep-alive is enabled and the connection is closed before the final zero chunk, but you can check whether res.complete === true in your 'end' event listener.
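
A minimal sketch of that check, assuming the client example from the original report (only its 'end' listener changes; data is the accumulator declared there):

res.on('end', function() {
    // res.complete is true only when a complete, properly terminated
    // response was received; if the server dies mid-transfer it stays
    // false even though 'end' still fires.
    if (res.complete) {
        console.log('data', data);
        console.log('successful end');
    } else {
        console.error('connection closed before the response was complete');
    }
});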

res.complete apparently was never documented and we don't seem to have test coverage for it either.

It's been around for a long time though; it was added when support for trailing HTTP headers was implemented in node.js v0.3.0. Adding doc and test labels.

okv commented 8 years ago

Hi, there.

I encountered the same problem. I'll try the res.complete flag, thanks for the explanation.

Node.js does not raise an error when keep-alive is enabled and the connection is closed before the final zero chunk

Why does Node.js not raise an error in such a situation? Raising one seems reasonable.

RoobinGood commented 8 years ago

@bnoordhuis thanks for the explanation, res.complete really works.

bnoordhuis commented 8 years ago

Why does Node.js not raise an error in such a situation? Raising one seems reasonable.

I'm not 100% sure but I think it's for compatibility. Not all HTTP endpoints follow the spec as closely as they should.

You could argue, tongue in cheek, that closing the connection without sending the zero chunk is a performance optimization - saves a TCP round-trip!

jasnell commented 8 years ago

Yep... there are some implementations that have been rather pathological about not properly terminating a chunked stream. While throwing would make sense in theory, it ends up being a bit problematic in practice.

okv commented 8 years ago

got it, thanks

Trott commented 7 years ago

@nodejs/documentation @nodejs/testing

Trott commented 7 years ago

@nodejs/http

refack commented 7 years ago

@jasnell does https://github.com/nodejs/node/pull/14315 improve this situation?

Redocram commented 6 years ago

Hi everybody, I have the same problem with my code, or at least I think so.

Could someone share a short example of how to implement the if (res.complete === true) check for a POST request?

Thanks all!
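
For reference, a minimal self-contained sketch of such a check for a POST request; the host, port, and JSON payload below are placeholders based on the earlier examples in this thread, not an official recommendation:

var http = require('http');

var body = JSON.stringify({ hello: 'world' });

var req = http.request({
    host: '127.0.0.1',
    port: 8080,
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(body)
    }
}, function(res) {
    var data = '';
    res.on('data', function(chunk) {
        data += chunk;
    });

    res.on('end', function() {
        if (res.complete === true) {
            // The response was properly terminated; data holds the full body.
            console.log('complete response:', data);
        } else {
            // 'end' fired, but the connection closed before the final chunk.
            console.error('response was truncated, discarding partial data');
        }
    });
});

req.on('error', function(err) {
    console.error('request error:', err);
});

req.write(body);
req.end();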