jeffkaufman opened this issue 10 years ago
To test: compile with --with-http_stub_status_module and see which ngx_pagespeed operations increase the "Active connections" count reported at /nginx_status.
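For reference, a minimal stub_status setup might look like the following sketch (the listen port and location path are examples, not the reporter's actual config):

```nginx
# Expose the stub_status counters; requires nginx built with
# --with-http_stub_status_module.
server {
    listen 8050;
    server_name localhost;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;   # keep the counters private
        deny all;
    }
}
```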
On the mailing list: the reporter is not using the native fetcher. Output of http://www.premaman.co.za/ngx_pagespeed_statistics?config:
Version: 13: on
Filters
ah Add Head
cw Collapse Whitespace
cc Combine Css
jc Combine Javascript
gp Convert Gif to Png
jp Convert Jpeg to Progressive
mc Convert Meta Tags
pj Convert Png to Jpeg
dj Defer Javascript
ec Cache Extend Css
ei Cache Extend Images
es Cache Extend Scripts
fc Fallback Rewrite Css
if Flatten CSS Imports
hw Flushes html
gf Inline Google Font CSS
ii Inline Images
il Inline @import to Link
idp Insert DNS Prefetch
id Insert Image Dimensions
js Jpeg Subsampling
cm Move Css To Head
co Outline Css
jo Outline Javascript
pr Prioritize Critical Css
rj Recompress Jpeg
rp Recompress Png
rw Recompress Webp
ri Resize Images
cf Rewrite Css
jm Rewrite Javascript
cs Rewrite Style Attributes
cu Rewrite Style Attributes With Url
is Sprite Images
cp Strip Image Color Profiles
md Strip Image Meta Data
Options
aris True
bu /ngx_pagespeed_beacon
e 1
afcci 3600000
afcl 500000
afcp /var/cache/ngx_pagespeed_cache
afc 512000
iprdm 15
ald /var/log/pagespeed
l Core Filters
ase True
asle True
asli 60000
aslfs 1024
snse False
Domain Lawyer
I ./configured with --with-http_stub_status_module, set up location /nginx_status { stub_status on; }, and watched localhost:8050/nginx_status while running the tests. I didn't see any lasting increase in connections: occasionally they went up to 2 or 3, but they dropped back to 1 most of the time and at the end. I haven't been able to reproduce the growth problem yet.
Active connections: 1
server accepts handled requests
1799 1799 5871
Reading: 0 Writing: 1 Waiting: 0
@jeffkaufman Re: " I'd also expect to see a socket leak and a memory leak." As for memory - if we somehow don't return memory allocated from pools (or cause nginx to not do that somehow), valgrind won't see that.
These patches could help with detection and debugging: https://github.com/openresty/no-pool-nginx Perhaps we can do a system-test run with a no-pool patch compiled in under valgrind?
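A sketch of how such a run might be set up, assuming the no-pool patch series applies cleanly to this nginx version (the patch filename, versions, and module path below are illustrative):

```
# Build nginx 1.6.0 with the openresty no-pool patch, then run it in the
# foreground under valgrind so pool allocations become individually trackable.
git clone https://github.com/openresty/no-pool-nginx.git
wget http://nginx.org/download/nginx-1.6.0.tar.gz
tar xzf nginx-1.6.0.tar.gz
cd nginx-1.6.0
patch -p1 < ../no-pool-nginx/nginx-1.6.0-no_pool.patch
./configure --with-http_stub_status_module --add-module=/path/to/ngx_pagespeed
make
# "daemon off; master_process off;" keeps everything in one process so
# valgrind sees all allocations.
valgrind --leak-check=full ./objs/nginx -g "daemon off; master_process off;"
```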
Not sure what I can contribute but this is happening on my own server so any debugging or info you need, please don't hesitate to ask.
I've now enabled http_stub_status_module on jefftk.com, serving status publicly at http://www.jefftk.com/nginx_status. We can watch there whether active connections grow over time. (Getting this to reproduce with a server I can play with will make debugging a lot easier.)
As of this posting, it says "4".
Now logging this for jefftk.com at http://www.jefftk.com/log.nginx_status.txt every minute.
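A minimal way to scrape the counter for a log like that, assuming stub_status's usual output format (here the status text is inlined as a sample; in a cron job you'd fetch it with something like `status=$(curl -s http://localhost/nginx_status)`):

```shell
# Parse the "Active connections" count out of stub_status output.
status='Active connections: 4
server accepts handled requests
 1799 1799 5871
Reading: 0 Writing: 1 Waiting: 0'

# The count is the third field on the "Active connections:" line.
active=$(printf '%s\n' "$status" | awk '/^Active connections:/ {print $3}')
echo "active=$active"   # prints: active=4
```

Prefixing the echo with a timestamp (e.g. `date -u +%FT%TZ`) and appending to a file from cron gives a minute-by-minute log like the one above.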
I just plotted the last 18 days of data on jefftk.com:
No apparent increase.
Any idea how and what to debug? It's still increasing on my server unless I restart nginx, which as you can see I've done quite often while playing with ngx_pagespeed and different vhosts: http://foo.dj/munin/stratoserver.net/h2118175.stratoserver.net/index.html#nginx
I'll be offline till Monday morning; open to any suggestions...
What does nginx -V give you? (Maybe there's something different about how yours was compiled, and I could try to replicate that.)
I use the nginx-extras version from dotdeb
(1:501)# nginx -V
nginx version: nginx/1.6.0
built by gcc 4.7.2 (Debian 4.7.2-5)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-file-aio --with-http_spdy_module --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_secure_link_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/headers-more-nginx-module --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/naxsi/naxsi_src --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-auth-ldap --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-cache-purge --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-development-kit --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-echo --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/ngx-fancyindex --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-push-stream-module --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-lua --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-upload-progress --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/ngx_http_pinba_module --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/ngx_http_substitutions_filter_module --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/ngx_pagespeed --add-module=/usr/src/nginx/source/nginx-1.6.0/debian/modules/nginx-x-rid-header --with-ld-opt=-lossp-uuid
Did you get around to taking a look?
I've been meaning to do a stock install of nginx-extras from dotdeb but haven't gotten around to it yet.
I've got the same problem: active connections increase "forever" if pagespeed is on. Any solution for that?
Looks like the following setting solved the problem: pagespeed UseNativeFetcher on;
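For anyone trying the same workaround, a sketch of the relevant config (the resolver address is an example; as I understand the ngx_pagespeed docs, the native fetcher needs a resolver directive so it can do its own DNS lookups):

```nginx
# In the http block:
pagespeed on;
pagespeed FileCachePath /var/cache/ngx_pagespeed_cache;
pagespeed UseNativeFetcher on;
resolver 8.8.8.8;   # example resolver; required by the native fetcher
```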
It's interesting that the native fetcher changes what happens here. That's probably a clue, but I'm not sure what to make of it. One possibly relevant difference: the native fetcher uses keepalive by default, and I'm not sure whether Serf (the stock PageSpeed fetcher) does.
Reported on the mailing list: the number of active connections reported by ngx_http_stub_status_module keeps increasing until the server is restarted. This counter is incremented in ngx_event_accept() and decremented in ngx_close_accepted_connection(). What's weird is that ngx_close_accepted_connection() also calls ngx_close_socket() and ngx_destroy_pool(), so if the number of active connections is increasing constantly then I'd also expect to see a socket leak and a memory leak.