aickin / react-dom-stream

A streaming server-side rendering library for React.

Gzip will cause react-dom-stream to not stream content #5

Open geekyme opened 9 years ago

geekyme commented 9 years ago

[screenshot: network waterfall from browser dev tools]

Tried this out on a test site. I don't see the content being streamed down properly, because if it were, you would see main-xxx.css start downloading before for-him/ finishes downloading.

Am I missing some kind of proper encoding?

aickin commented 9 years ago

Look at the response headers in your browser dev tools. If there's a header called Content-Length, then it's not being streamed. If there's a header called Transfer-Encoding with a value of chunked, then it is being streamed.

If the response is not being streamed, there are two usual culprits: middleware and proxies.
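A quick way to check the same thing outside the browser is a tiny Node script that only looks at the headers (a minimal sketch; the host, port, and path are placeholders for your page):

    // Minimal sketch: print the headers that show whether the response is
    // buffered (Content-Length) or streamed (Transfer-Encoding: chunked).
    // Host, port, and path are placeholders for the page you're testing.
    var http = require('http');

    http.get({
      host: 'localhost',
      port: 3000,
      path: '/for-him/',
      headers: { 'Accept-Encoding': 'gzip' } // mimic the browser so any gzip middleware kicks in
    }, function (res) {
      console.log('content-length:    ', res.headers['content-length']);
      console.log('transfer-encoding: ', res.headers['transfer-encoding']);
      res.resume(); // discard the body; we only care about the headers
    });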

geekyme commented 9 years ago

This is awkward. I don't see any Content-Length header; there's a Transfer-Encoding header with a value of chunked.

geekyme commented 9 years ago

You are right. It's gzip that is causing streaming to not work.

[screenshot: response headers with the gzip middleware enabled]

Removing the gzip middleware fixes the issue. It's too bad, though; gzip is useful. :(

geekyme commented 9 years ago

Renamed the issue title so others may find this useful.

aickin commented 9 years ago

Did you use compression?

There are a lot of bug reports of people having a hard time getting compression to stream correctly, but their continuous integration tests show that streaming works with it. I've been poking at it for the last hour or so and have been having a hard time getting it to do the right thing; it could be my code's fault. I'll reopen this as a tracking bug to make it work.

geekyme commented 9 years ago

Yes, I use compression.

Regardless, my server is fronted by a load balancer which will also gzip content. If either the load balancer or compression is running gzip, then my streaming will not work.

http://stackoverflow.com/questions/5280633/gzip-compression-of-chunked-encoding-response

You gzip the content, and only then apply the chunked encoding:

"Since "chunked" is the only transfer-coding required to be understood by HTTP/1.1 recipients, it plays a crucial role in delimiting messages on a persistent connection. Whenever a transfer-coding is applied to a payload body in a request, the final transfer-coding applied MUST be "chunked". If a transfer-coding is applied to a response payload body, then either the final transfer-coding applied MUST be "chunked" or the message MUST be terminated by closing the connection. When the "chunked" transfer-coding is used, it MUST be the last transfer-coding applied to form the message-body. The "chunked" transfer-coding MUST NOT be applied more than once in a message-body."

aickin commented 9 years ago

Right, but that quote doesn't mean that chunked encoding and gzip are incompatible; it says the opposite. They are compatible and can be used together, and the folks behind compression intend it to be compatible with chunked encoding. I think I need to add a few calls to flush, and maybe some guidance on how big a buffer to use in compression.

Of course, if you have a load balancer that doesn't support streaming or that doesn't allow its gzip to be tuned, that's a different issue.
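For what it's worth, here's a minimal sketch of the "add a few calls to flush" idea; the route and the simulated slow body are illustrative, and this isn't react-dom-stream's actual API:

    var express = require('express');
    var compression = require('compression');

    var app = express();
    app.use(compression());

    app.get('/', function (req, res) {
      res.setHeader('Content-Type', 'text/html');
      res.write('<html><head><link rel="stylesheet" href="/main.css"></head><body>');
      res.flush(); // compression() adds res.flush(); this pushes the gzipped <head> out now

      // Stand-in for a slow streaming render of the body.
      setTimeout(function () {
        res.write('<div>rendered content</div>');
        res.end('</body></html>');
      }, 1000);
    });

    app.listen(3000);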

th0r commented 9 years ago

@aickin compression is working for me: the response is chunked and gzipped. I think you don't need to add flush calls, because it's the user's responsibility to do so if they really need it.

@geekyme Try setting the content type of the response via res.type('html') before calling any res.write(). From the compression docs:

The default filter function uses the compressible module to determine if res.getHeader('Content-Type') is compressible.

It solved my problem.

P.S. @aickin Thanks a LOT for a GREAT module!
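A minimal sketch of that suggestion (the route and markup are illustrative):

    var express = require('express');
    var compression = require('compression');

    var app = express();
    app.use(compression());

    app.get('/for-him/', function (req, res) {
      // res.type('html') sets the header via setHeader, so compression's default
      // filter can read it with res.getHeader('Content-Type') and decide to gzip.
      res.type('html');
      res.write('<!DOCTYPE html><html><head><link rel="stylesheet" href="/main.css"></head><body>');
      // ... stream the rest of the page here ...
      res.end('</body></html>');
    });

    app.listen(3000);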

geekyme commented 9 years ago

Oh, OK. I was using res.writeHead to set the content type to text/html.

geekyme commented 9 years ago

@th0r I just tried res.type('html') with compression on. Doesn't make a difference.

th0r commented 9 years ago

@geekyme Are your JS or CSS responses gzipped? How do your response headers look in the case of a chunked response?

geekyme commented 9 years ago

All my responses from my server are gzipped.

roblg commented 9 years ago

FWIW, at Redfin we had an issue with the default windowBits setting for the compression module that affected our ability to stream in staging and production environments. Even though we thought we were streaming, the compression was doing some internal buffering of the response before writing it (I think to try to maximize compression?). In any event, we lowered the windowBits setting and started getting content much earlier.

Code snippet:

        // The default value for `windowBits` is 15 (32K).  This is too
        // large for our CSS/script includes to make it through before
        // we start waiting for the body.  We _really_ want to kick off
        // secondary resource requests as early as possible, so we'll
        // decrease the window size to 8K.
        //
        server.use(require('compression')({ windowBits: 13 }));

Not sure if that's contributing here, but these issues sound similar.

geekyme commented 9 years ago

Interesting! That's something I could definitely try!

SOSANA commented 9 years ago

Great thread, guys, thanks! Very helpful.

aickin commented 9 years ago

Another zlib option was mentioned by @jakearchibald in the discussion of issue #2: https://github.com/jakearchibald/offline-wikipedia/blob/master/index.js#L64
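For reference, compression forwards zlib options to zlib.createGzip(), so the default flush flag can be tuned there. A hedged sketch (illustrative only; not necessarily the exact option used in the linked code):

    var express = require('express');
    var zlib = require('zlib');
    var compression = require('compression');

    var app = express();

    // zlib options (like the default flush flag) pass straight through to
    // zlib.createGzip(). Z_SYNC_FLUSH flushes each write through gzip instead
    // of letting it sit in zlib's buffer. (Illustrative; not necessarily what
    // the linked code uses.)
    app.use(compression({ flush: zlib.Z_SYNC_FLUSH }));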

jakearchibald commented 9 years ago

Lowering the window bits will harm compression, as it's the range over which back references can operate.

leebenson commented 9 years ago

May be slightly off-topic, but I'd generally off-load compression to the upstream proxy (e.g. nginx or equivalent). Not only is it generally faster than Node-based compression, but you can get granular with enabling it per route and disabling it based on MIME types or other headers that the Node app might send back.

aickin commented 9 years ago

@leebenson agreed that's often a great setup (as long as your upstream proxy supports streaming compression, which you need to test!).

you can get granular with enabling it per-route and disabling based on MIME types

Worth noting, though, that I think compression can do this. You implement a filter function in the options you pass to compression; it is called for every request, is passed the request and response objects, and returns true iff the response should be compressed.
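Roughly like this (a sketch; the path check is just an example):

    var express = require('express');
    var compression = require('compression');

    var app = express();

    app.use(compression({
      // Called for every request; return false to skip compression for it.
      filter: function (req, res) {
        if (req.path.indexOf('/no-gzip/') === 0) {
          return false;                      // example: never compress this route
        }
        return compression.filter(req, res); // otherwise fall back to the default filter
      }
    }));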

I still think you're right that an upstream proxy is usually a better choice, though.

leebenson commented 9 years ago

@aickin totally. Feature-wise, Node is pretty much on par. Hard to beat the speed of a proxy server written in C, though.