apache / incubator-pagespeed-mod

Apache module for rewriting web pages to reduce latency and bandwidth.
http://modpagespeed.com
Apache License 2.0

CSS not loading when optimized #1607

Open abieri opened 7 years ago

abieri commented 7 years ago

With the following CSS optimisation filters enabled, some users occasionally get pages that load without CSS:

pagespeed EnableFilters rewrite_css;
pagespeed EnableFilters prioritize_critical_css;
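
For context, these filters sit inside an otherwise standard ngx_pagespeed setup, roughly like this (the cache path here is illustrative):

pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
# plus the two EnableFilters lines above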

Website: www.cigars.com

We had to disable all CSS optimisation until we find the cause.

I see various errors in the server logs related to these CSS files:

2017/07/28 17:42:16 [info] 8544#0: [ngx_pagespeed 1.12.34.2-0] Deadline exceeded for rewrite of resource http://cdn.cigars.com/css/main.77.css+vendor.css.pagespeed.cc.w_Pm0Z4Sgn.css with cf.
2017/07/28 18:57:51 [info] 8544#0: [ngx_pagespeed 1.12.34.2-0] Could not rewrite resource in-place because URL is not in cache: http://cdn.cigars.com/css/main.77.css
2017/07/28 19:59:27 [info] 8544#0: *988224 client prematurely closed connection, client: 52.15.127.159, server: cdn.cigars.com, request: "GET /css/A.main.77.css+vendor.css,Mcc.w_Pm0Z4Sgn.css.pagespeed.cf.zjDyj7reFG.css HTTP/1.1", host: "cdn.cigars.com"
2017/07/28 20:16:36 [info] 8544#0: [ngx_pagespeed 1.12.34.2-0] Could not rewrite resource in-place because URL is not in cache: http://cdn.cigars.com/css/vendor.css

oschaaf commented 7 years ago

One thing that may be worth trying instead of disabling CSS optimization entirely:

pagespeed HttpCacheCompressionLevel 0;
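
That is, keep the CSS filters enabled and only turn off compression of entries in PageSpeed's HTTP cache; roughly (a sketch, placed with the other pagespeed directives):

pagespeed on;
pagespeed EnableFilters rewrite_css;
pagespeed EnableFilters prioritize_critical_css;
pagespeed HttpCacheCompressionLevel 0;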

I'd be curious to hear if that fixes the problem as well.

jmarantz commented 7 years ago

It's probably also worth disabling prioritize_critical_css but leaving rewrite_css enabled. rewrite_css is required for prioritize_critical_css but not vice versa, and rewrite_css is much simpler. prioritize_critical_css is inherently riskier. See the Risks section in https://modpagespeed.com/doc/filter-prioritize-critical-css for more details.
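
Concretely, that would look something like this (a sketch: either drop the EnableFilters line for prioritize_critical_css, or disable it explicitly so it cannot be turned on elsewhere):

pagespeed EnableFilters rewrite_css;
pagespeed DisableFilters prioritize_critical_css;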

abieri commented 7 years ago

@jmarantz @oschaaf We added the pagespeed HttpCacheCompressionLevel 0; directive, and after a couple of days we still see users experiencing the issue.

We have temporarily disabled the prioritize_critical_css filter.

Is there anything I should look for in the logs that could give us a hint about what is happening?

jmarantz commented 7 years ago

Abieri -- thanks for all your persistence. Are you still having the CSS issues after disabling prioritize_critical_css?

Otto -- why did you suspect gzip here? I didn't see anything about encoding errors in this bug report.

oschaaf commented 7 years ago

@jmarantz The logs above mention "client prematurely closed connection". I was wondering whether a bad content-length header could be related to that, perhaps introduced by a bug in the header handling of the compression flow. It looks like we can rule that out.

oschaaf commented 7 years ago

Caught the issue in Chrome. Headers of the bad response:

accept-ranges:bytes
access-control-allow-origin:*
cache-control:max-age=300,private
content-encoding:gzip
content-length:42278
content-type:text/css
date:Thu, 10 Aug 2017 12:51:43 GMT
expires:Thu, 10 Aug 2017 12:56:43 GMT
last-modified:Thu, 10 Aug 2017 12:51:43 GMT
server:nginx/1.11.6
status:200
vary:Accept-Encoding
via:1.1 04e581aa5852d3f5018b5cbab537a248.cloudfront.net (CloudFront)
x-amz-cf-id:26AdacfMCiYhuWP8QEktUpL8Sz0l9RmT2y0FadUrbjK9-l0uzXwqAw==
x-cache:Miss from cloudfront
x-content-type-options:nosniff
x-original-content-length:344493
x-page-speed:1.12.34.2-0

Good response:

accept-ranges:bytes
access-control-allow-origin:*
cache-control:max-age=300,private
content-encoding:gzip
content-length:42360
content-type:text/css
date:Thu, 10 Aug 2017 14:18:00 GMT
expires:Thu, 10 Aug 2017 14:23:00 GMT
last-modified:Thu, 10 Aug 2017 14:18:00 GMT
server:nginx/1.11.6
status:200
vary:Accept-Encoding
via:1.1 b163f71436b4720ab1d0eafa590498ec.cloudfront.net (CloudFront)
x-amz-cf-id:lu8r5n4EcQ6BtPTaxu155q9ybkDRgDbAn-95HMA9xjKHngc-Xy6T9w==
x-cache:Miss from cloudfront
x-content-type-options:nosniff
x-original-content-length:344493
x-page-speed:1.12.34.2-0

(unfortunately I didn't have a net-internals network capture running, so I have no information about the bad response body)

oschaaf commented 7 years ago

One observation is that the failing response has a slightly smaller content-length. Another thought: there is also https://github.com/pagespeed/ngx_pagespeed/issues/1402. Could that be related?

oschaaf commented 7 years ago

@jmarantz do you remember any similar reports for mod_pagespeed? Could this be ngx_pagespeed specific? (That would narrow down the search a lot)

oschaaf commented 7 years ago

Diffing the two responses, here is what I think may be relevant to the issue. The bad response had:

content-length:42278
via:1.1 04e581aa5852d3f5018b5cbab537a248.cloudfront.net (CloudFront)

The good response was 82 bytes larger (and was also routed through a different CloudFront edge):

content-length:42360
via:1.1 b163f71436b4720ab1d0eafa590498ec.cloudfront.net (CloudFront)

What is suspicious is that the bad response is the smaller one, and gzip's minimum header plus trailer alone is 18 bytes, so truncated gzip framing could explain a short, broken response. Hmm.

oschaaf commented 7 years ago

@abieri I am not sure whether it is needed, but did you purge CloudFront's and ngx_pagespeed's caches after setting pagespeed HttpCacheCompressionLevel 0;? If not, it may be worth trying that.
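
For the ngx_pagespeed side, a sketch of one way to do that, assuming the standard setup (the flush file lives under whatever FileCachePath points at):

# allows purging without restarting nginx, e.g. from the admin console's cache page if the admin handler is configured
pagespeed EnableCachePurge on;
# legacy alternative: touch <FileCachePath>/cache.flush on the server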