In general you want all your ngx_pagespeed servers in one location to share the same memcached server or set of memcached servers. This is because optimized resources created in response to one request from one user will generally also be needed later by other users and on other pages. Sharing your cache across all the servers means less wasteful re-optimization.
As for how many memcached servers: I would start with one, run for a while, and look at the eviction rate. Generally you want low evictions; a high eviction rate often means your cache is too small and you should add another memcached server. (Though high evictions can also be a symptom of other problems, such as accidentally generating cache-busting URLs server-side on your resources.)
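For reference, sharing is configured with the MemcachedServers directive. A minimal sketch, assuming hypothetical hostnames (only the directive names are real; the hosts and ports are placeholders):
# Every ngx_pagespeed server in the location lists the same memcached pool,
# so resources optimized by one server can be reused by the others.
pagespeed MemcachedServers "memcached1.example.com:11211,memcached2.example.com:11211";
pagespeed MemcachedThreads 1; # run memcached I/O on a dedicated thread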
Oh, thanks for your response. I wonder whether "MultiGet" is always better than "Get" for memcached here? Yes, the evictions are too high and the memory here is too small, which leads to wasteful re-optimization and lower CPU idle. I have to switch to a cache based on the filesystem (SSD). And there is another question: in the shared memory location "rname", the files and their size do not seem to increase (I check the size with a shell command like "du -sh *"), while only the whole size of the file cache increases. Why? And it looks like some strange items are written into the file cache location. I suspect that something that should be written into the rname directory is being written out to the file cache location incorrectly. And one more question: it seems that there is no LRU eviction for the shared cache (rname); when the shared cache is full, do I have to restart nginx and lose all the items in the shared cache (rname directory)?
My configuration is listed below:
pagespeed FileCachePath "/home/work/data1/ngx_cachedata/";
pagespeed FileCacheSizeKb 102400000;
pagespeed FileCacheCleanIntervalMs 3600000;
pagespeed FileCacheInodeLimit 5000000;
pagespeed CreateSharedMemoryMetadataCache "/home/work/data1/ngx_cachedata/" 16096000; #KB
pagespeed LRUCacheKbPerProcess 256000;
pagespeed LRUCacheByteLimit 16384;
***************
server {
pagespeed on ;
***********
pagespeed FileCachePath "/home/work/data1/ngx_cachedata/";
************
}
And the strange items in "/home/work/data1/" look like below: ls /home/work/data/
"7SfFjHMWdk, !clean!time! haiwainet.cn linghit.com qizi.cc UQRcZHw-PD, zynews.com 7tD4YmiqQW, clgd9.com haiyunx.com linkedin.com QJ9hMWdl1q, url.cn zyue.com 7wenta.com cli.im halk-a.com linksmart.com qkankan.com urlshare.cn zyz66.com 7wgo.cn cLkmCKIK-5, halo-hwanghak.com linkwithin.com qlogo.cn url.tw zzdadi.com.cn 7xz.com clkmon.com halo-hwanghak.com.tw linlin.com qlwb.com.cn us567.com zzdmz.cn "
"7SfFjHMWdk 7tD4YmiqQW QJ9hMWdl1q UQRcZHw-PD " ,are thoes items here is OK ?
I wonder whether "MultiGet" is always better than "Get" for memcached here?
MultiGet should always be better than Get if there are multiple keys you want to look up at once, and pagespeed automatically uses MultiGet when appropriate.
Yes, the evictions are too high and the memory here is too small, which leads to wasteful re-optimization and lower CPU idle. I have to switch to a cache based on the filesystem (SSD).
This is good, as long as you only have one server. For multiple servers having a local SSD filesystem cache on each is ok, but can lead to wasteful reoptimization as each server needs to optimize all the resources on its own. In the multiple server situation a shared memcache, large enough to avoid evictions, would be best.
In the shared memory location "rname", the files and their size do not seem to increase (I check the size with a shell command like "du -sh *"), while only the whole size of the file cache increases. Why?
The rname directory holds metadata, which is generally pretty small compared to the rest of the cache, which mostly holds resources.
it looks like some strange items are written into the file cache location. I suspect that something that should be written into the rname directory is being written out to the file cache location incorrectly:
- !clean!time!
This is the cache cleaning timestamp, and does belong there.
- haiwainet.cn
- linghit.com
- qizi.cc
- zynews.com
- clgd9.com
- haiyunx.com
- linkedin.com
- url.cn
- zyue.com
- 7wenta.com
- cli.im
- halk-a.com
- linksmart.com
- qkankan.com
- urlshare.cn
- zyz66.com
- 7wgo.cn
- halo-hwanghak.com
- linkwithin.com
- qlogo.cn
- url.tw
- zzdadi.com.cn
- 7xz.com
- clkmon.com
- halo-hwanghak.com.tw
- linlin.com
- qlwb.com.cn
- us567.com
- zzdmz.cn
These look like host spam. See prior discussion. You can block these by adding a catch-all server block to the top of your config that doesn't have pagespeed enabled:
server {
listen 80;
location / {
deny all;
}
}
server {
listen 80;
server_name www.example.com; # your site
pagespeed on;
...
}
- 7SfFjHMWdk
- UQRcZHw-PD
- 7tD4YmiqQW
- QJ9hMWdl1q
- cLkmCKIK-5
I'm not sure what these are. What's inside one of them?
it seems that there is no LRU eviction for the shared cache (rname); when the shared cache is full, do I have to restart nginx and lose all the items in the shared cache (rname directory)?
The shared memory cache should manage its own evictions; what made you think you needed to wipe it manually?
OK, thanks very much for your response! No. 1: I use nginx + pagespeed as a forward proxy here, so there is only one server config, which looks like this:
server {
listen 80;
location / {
proxy_pass http://$host$request_uri;
proxy_set_header Host $http_host;
}
}
7SfFjHMWdk
UQRcZHw-PD
7tD4YmiqQW
QJ9hMWdl1q
cLkmCKIK-5
So, maybe some strange hosts lead to those strange files? I opened some of those files using vim; they look like this:
"h ^A^@^@^HÈ^A^X^@ ^A(Ø<9f><8b>Ì·)0ø÷øË·)8^A@^AJ^T
^DETag^R^L"3886009901"J^V
^MAccept-Ranges^R^EbytesJ.
^MLast-Modified^R^]Fri, 19 Dec 2014 05:30:38 GMTJ^S
^Hpg_sever^R^Gbw-fe08J^Y
^LContent-Type^R image/gifJ%
^DDate^R^]Wed, 11 Feb 2015 16:34:51 GMTJ(
^GExpires^R^]Wed, 11 Feb 2015 16:39:51 GMTJ^\
^MCache-Control^R^Kmax-age=300Xà§^R`^Ah^@p^@
"
And they are all 293 bytes in size.
work@*****:~/data1/ngx_cachedata$ ls -al ZyMSlV20O7,
-rw------- 1 work work 293 Feb 11 15:32 ZyMSlV20O7,
I wonder whether those strange files will be cleared automatically?
No.2 "The shared memory cache should manage it's own evictions; ",so i do not need take care about the overflow of the shared memory cache here ? What is the relationship between the shared memory cache and the rname dictionary?
No. 3: My rname directory never grows here; all the files under this directory look like the listing below. Is there any error here? And if I disable all the rewrite filters (everything related to the HTML), will there be nothing more under rname?
work@******:~/data1/ngx_cachedata/rname$ tree
.
├── aj_E8H8_YbQ5UdoFhgXFfNc
│ └── http,3A
│ ├── ,2Fcdn00.baidu-img.cn
│ │ └── timg,3Fwapbaike,26quality=80,26size=w240,26sec=1349839550,26di=cb5275664758c5b7c6726e4f36e8239b,26imgtype=,26src=http,3A
│ │ └── ,2Fimgsrc.baidu.com
│ │ └── baike
│ │ └── pic
│ │ └── item
│ │ └── a5c27d1ed21b0ef4a49dbfcbdec451da81cb3e29.jpg,40,40wss_,
│ ├── ,2Fcount.tbcdn.cn
│ │ └── counter3,3Fkeys=DPM_445_subjectpv1354198,26inc=DPM_445_subjectpv1354198,26sign=41600bc2edac5b9a12d0bcb90e8d75c689c20,26callbac,-
│ │ └── k=jsonp,40,40wss_,
│ ├── ,2Fh5.sinaimg.cn
│ │ └── weibocn
│ │ └── v6
│ │ └── img
│ │ └── face
│ │ └── face-2_2x.da6893b5.png,40,40_,
│ ├── ,2Fimg0.imgtn.bdimg.com
│ │ └── it
│ │ ├── u=1177563606,2C932820210,26fm=21,26gp=0.jpg,40,40wss_,
│ │ ├── u=1432920690,2C45683490,26fm=21,26gp=0.jpg,40,40wss_,
│ │ ├── u=2020880917,2C2011793753,26fm=23,26gp=0.jpg,40,40wss_,
│ │ └── u=4267044746,2C4127117030,26fm=21,26gp=0.jpg,40,40wss_,
│ ├── ,2Fimg1.imgtn.bdimg.com
│ │ └── it
│ │ └── u=2018604563,2C292198004,26fm=21,26gp=0.jpg,40,40wss_,
│ ├── ,2Fimg2.imgtn.bdimg.com
│ │ └── it
│ │ └── u=2431876580,2C3930485050,26fm=21,26gp=0.jpg,40,40wss_,
│ ├── ,2Fimg3.imgtn.bdimg.com
│ │ └── it
│ │ └── u=858084172,2C4044012933,26fm=23,26gp=0.jpg,40,40wss_,
│ ├── ,2Fimg4.imgtn.bdimg.com
│ │ └── it
│ │ ├── u=1507920679,2C387393321,26fm=21,26gp=0.jpg,40,40wss_,
│ │ └── u=582311584,2C4265687950,26fm=21,26gp=0.jpg,40,40wss_,
│ ├── ,2Fimg5.imgtn.bdimg.com
│ │ └── it
│ │ ├── u=1340689624,2C589568863,26fm=21,26gp=0.jpg,40,40wss_,
│ │ ├── u=1418917262,2C1611336717,26fm=23,26gp=0.jpg,40,40wss_,
│ │ └── u=3157748319,2C2711285310,26fm=23,26gp=0.jpg,40,40wss_,
│ ├── ,2Fimgsrc.baidu.com
│ │ ├── baike
│ │ │ └── pic
│ │ │ └── item
│ │ │ ├── 1ad5ad6eddc451daf8ab62c6b2fd5266d01632b9.jpg,40,40wss_,
│ │ │ └── c2fdfc039245d68837d46080a0c27d1ed31b24c4.png,40,40wss_,
│ │ └── forum
│ │ ├── pic
│ │ │ └── item
│ │ │ ├── 67c8e346eb9d7f778b82a1e7.jpg,40,40wss_,
│ │ │ ├── 6d00a2b38d9f5a117af055d6.jpg,40,40wss_,
│ │ │ ├── 7eea2256e7966a7b6960fbce.jpg,40,40wss_,
│ │ │ ├── 9680a86b6e46b7e8d2a2d322.jpg,40,40wss_,
│ │ │ └── e650e66ff704b98da96457db.jpg,40,40wss_,
│ │ └── w=580
│ │ ├── sign=f94d0669c93d70cf4cfaaa05c8ddd1ba
│ │ │ └── 76e8a8de9c82d1584acc0de1800a19d8bd3e4271.jpg,40,40wss_,
│ │ └── sign=ff60ac7001e9390156028d364bed54f9
│ │ └── f4a2e0f2b211931333c7ebe065380cd790238d3f.jpg,40,40wss_,
│ ├── ,2Fm.baidu.com
│ │ └── static
│ │ └── ala
│ │ └── ui
│ │ └── foot
│ │ └── ala_icon.gif,40,40wss_,
│ ├── ,2Fn.sinaimg.cn
│ │ └── crawl
│ │ └── 20150210
│ │ ├── OtiO-avxeafs1023999.jpg,40,40wss_,
│ │ └── U8EZ-avxeafs1030917.jpg,40,40wss_,
│ ├── ,2Fretype.wenku.bdimg.com
│ │ └── retype
│ │ └── zoom
│ │ └── b5c276c208a1284ac850431e,3Fpn=1,26x=0,26y=0,26raww=500,26rawh=170,26aimh=95,26o=png_6_0_0_108_414_562_191_1263.375_893.25,26,-
│ │ └── ,26md5sum=7733b561ce69da7d4b94aa2f64fb932d,26sign=4ab2784140,26png=0-30260,26jpg=0-0,26type=pic,40,40wss_,
│ ├── ,2Fs9.rr.itc.cn
│ │ └── org
│ │ └── wapChange
│ │ ├── 20152_10_10
│ │ │ └── b25qc24147118217305.jpg,40,40wss_,
│ │ ├── 20152_10_15
│ │ │ └── a4904d9842990575520.jpg,40,40wss_,
│ │ └── 20152_10_8
│ │ └── a5vz98169515725385.jpg,40,40wss_,
│ └── ,2Fu1.sinaimg.cn
│ └── upload
│ └── 2015
│ └── 0210
│ └── 09
│ └── 64e3cdd5.jpg,40,40wss_,
├── ce_E8H8_YbQ5UdoFhgXFfNc
│ └── http,3A
│ ├── ,2Fm.4008823823.com.cn
│ │ └── kfcmwos
│ │ └── googleapp
│ │ └── images
│ │ ├── xandroid.jpg.pagespeed.ic.4bm6WefCi_.webp,40,40_,
│ │ ├── xbg_02.jpg.pagespeed.ic.m7cZIyuImY.webp,40,40_,
│ │ ├── xbg_03.jpg.pagespeed.ic.gkBTpHA_jx.webp,40,40_,
│ │ ├── xbg_04.jpg.pagespeed.ic.NKmUMl3nYL.webp,40,40_,
│ │ ├── xbg_05.jpg.pagespeed.ic.Lml1l_SLPT.webp,40,40_,
│ │ ├── xiphone.jpg.pagespeed.ic.CuAwjH8Hjv.webp,40,40_,
│ │ └── xshu.jpg.pagespeed.ic.cYPO2HeBJ2.webp,40,40_,
│ ├── ,2Fm.58.com
│ │ └── ga
│ │ ├── ,3Futmac=MO-35618414-1,26utmn=1331291015,26utmr=http,3A
│ │ │ └── ,2Fm.58.com
│ │ │ └── wf
│ │ │ └── ershouche
│ │ │ └── ,3Fminprice=0_3,26from=index_car,26utmp=
│ │ │ └── wf
│ │ │ └── car
│ │ │ └── ershouche
│ │ │ └── detail
│ │ │ └── ,26guid=ON,40,40_,
│ │ ├── ,3Futmac=MO-35618414-1,26utmn=139750019,26utmr=http,3A
│ │ │ └── ,2Fm.58.com
│ │ │ └── yt
│ │ │ └── ,26utmp=
│ │ │ └── yt
│ │ │ └── car
│ │ │ └── ershouche
│ │ │ └── list
│ │ │ └── ,3Fpn=1,26guid=ON,40,40_,
│ │ ├── ,3Futmac=MO-35618414-1,26utmn=160580307,26utmr=http,3A
│ │ │ └── ,2Fm.58.com
│ │ │ └── bj
│ │ │ └── ,3Futm_source=xiaomi_gg,26utmp=
│ │ │ └── bj
│ │ │ └── city
│ │ │ └── ,26guid=ON,40,40_,
│ │ └── ,3Futmac=MO-35618414-1,26utmn=2017858660,26utmr=http,3A
│ │ └── ,2Fm.58.com
│ │ └── wf
│ │ └── ,26utmp=
│ │ └── wf
│ │ └── car
│ │ └── ershouche
│ │ └── list
│ │ └── ,3Ffrom=index_car,26amp,3Bpn=1,26guid=ON,40,40_,
│ ├── ,2Fmarket.cmbchina.com
│ │ └── ccard
│ │ └── wap
│ │ └── wapmljtb
│ │ └── images
│ │ ├── x1.jpg.pagespeed.ic.sTaTqFwRXI.webp,40,40_,
│ │ ├── x2.jpg.pagespeed.ic.d6trweFESZ.webp,40,40_,
│ │ ├── x3.jpg.pagespeed.ic.vE3FHiq81t.webp,40,40_,
│ │ ├── x4.jpg.pagespeed.ic.qYEXuReVxK.webp,40,40_,
│ │ ├── x5.jpg.pagespeed.ic.lsrz4Yeb2X.webp,40,40_,
│ │ ├── xbanner.jpg.pagespeed.ic.PmhXSS6JWy.webp,40,40_,
│ │ ├── xchickimgapp.png.pagespeed.ic.VbHn3sVG-9.png,40,40_,
│ │ └── xchickimg.png.pagespeed.ic.Au0dqs9YGT.png,40,40_,
│ ├── ,2Fm.baidu.com
│ │ └── static
│ │ ├── ala
│ │ │ └── ui
│ │ │ └── foot
│ │ │ └── ala_icon.gif,40,40_,
│ │ ├── index
│ │ │ └── favicon-57.png,40,40_,
│ │ └── search
│ │ ├── appAla
│ │ │ └── weizhan
│ │ │ ├── cc-ala-logo.png,40,40_,
│ │ │ └── cc-go.png,40,40_,
│ │ └── other
│ │ ├── xregion-icon-big.png.pagespeed.ic.ZGZUlMx7G-.png,40,40_,
│ │ └── xtime_icon_03.png.pagespeed.ic.4z2ygu1N1C.png,40,40_,
│ ├── ,2Fv.youmi.cn
│ │ └── static
│ │ └── images
│ │ └── xnewicon120wangye.png.pagespeed.ic.Mz9D-p4kS-.png,40,40_,
│ ├── ,2Fwww.nhl.com
│ │ └── nhl
│ │ └── images
│ │ └── apple_touch_icons
│ │ ├── xapple-icon-114x114.png,2Cqv=8.17.pagespeed.ic.DV4ezRpTVm.png,40,40_,
│ │ ├── xapple-icon-144x144.png,2Cqv=8.17.pagespeed.ic.f_k1XPBmGq.png,40,40_,
│ │ ├── xapple-icon-57x57.png,2Cqv=8.17.pagespeed.ic.D-bV6qAL1H.png,40,40_,
│ │ └── xapple-icon-72x72.png,2Cqv=8.17.pagespeed.ic.u_a2BJQCtJ.png,40,40_,
│ └── ,2Fzhibo.m.sohu.com
│ └── images
│ └── logo-icon-zhibo.png,40,40_,
└── ic_E8H8_YbQ5UdoFhgXFfNc
└── http,3A
├── ,2F3g.zwxgb999.com
│ └── images
│ ├── ttsj_02.jpg,40x,40vss_,
│ ├── ttsj_23.jpg,40x,40vss_,
│ └── ttsj_24.jpg,40x,40vss_,
├── ,2Fcdn00.baidu-img.cn
│ └── timg,3Fwapbaike,26quality=80,26size=w240,26sec=1349839550,26di=cb5275664758c5b7c6726e4f36e8239b,26imgtype=,26src=http,3A
│ └── ,2Fimgsrc.baidu.com
│ └── baike
│ └── pic
│ └── item
│ └── a5c27d1ed21b0ef4a49dbfcbdec451da81cb3e29.jpg,40x,40wss_,
├── ,2Fh5.sinaimg.cn
│ └── weibocn
│ └── v6
│ └── img
│ └── face
│ └── face-2_2x.da6893b5.png,40x,40_,
├── ,2Fimg0.imgtn.bdimg.com
│ └── it
│ ├── u=1177563606,2C932820210,26fm=21,26gp=0.jpg,40x,40wss_,
│ ├── u=1432920690,2C45683490,26fm=21,26gp=0.jpg,40x,40wss_,
│ ├── u=2020880917,2C2011793753,26fm=23,26gp=0.jpg,40x,40wss_,
│ └── u=4267044746,2C4127117030,26fm=21,26gp=0.jpg,40x,40wss_,
├── ,2Fimg1.imgtn.bdimg.com
│ └── it
│ └── u=2018604563,2C292198004,26fm=21,26gp=0.jpg,40x,40wss_,
├── ,2Fimg2.imgtn.bdimg.com
│ └── it
│ └── u=2431876580,2C3930485050,26fm=21,26gp=0.jpg,40x,40wss_,
├── ,2Fimg3.imgtn.bdimg.com
│ └── it
│ └── u=858084172,2C4044012933,26fm=23,26gp=0.jpg,40x,40wss_,
├── ,2Fimg4.imgtn.bdimg.com
│ └── it
│ ├── u=1507920679,2C387393321,26fm=21,26gp=0.jpg,40x,40wss_,
│ └── u=582311584,2C4265687950,26fm=21,26gp=0.jpg,40x,40wss_,
├── ,2Fimg5.imgtn.bdimg.com
│ └── it
│ ├── u=1340689624,2C589568863,26fm=21,26gp=0.jpg,40x,40wss_,
│ ├── u=1418917262,2C1611336717,26fm=23,26gp=0.jpg,40x,40wss_,
│ └── u=3157748319,2C2711285310,26fm=23,26gp=0.jpg,40x,40wss_,
├── ,2Fimgsrc.baidu.com
│ ├── baike
│ │ └── pic
│ │ └── item
│ │ ├── 1ad5ad6eddc451daf8ab62c6b2fd5266d01632b9.jpg,40x,40wss_,
│ │ └── c2fdfc039245d68837d46080a0c27d1ed31b24c4.png,40x,40wss_,
│ └── forum
│ ├── pic
│ │ └── item
│ │ ├── 67c8e346eb9d7f778b82a1e7.jpg,40x,40wss_,
│ │ ├── 6d00a2b38d9f5a117af055d6.jpg,40x,40wss_,
│ │ ├── 7eea2256e7966a7b6960fbce.jpg,40x,40wss_,
│ │ ├── 9680a86b6e46b7e8d2a2d322.jpg,40x,40wss_,
│ │ └── e650e66ff704b98da96457db.jpg,40x,40wss_,
│ └── w=580
│ ├── sign=f94d0669c93d70cf4cfaaa05c8ddd1ba
│ │ └── 76e8a8de9c82d1584acc0de1800a19d8bd3e4271.jpg,40x,40wss_,
│ └── sign=ff60ac7001e9390156028d364bed54f9
│ └── f4a2e0f2b211931333c7ebe065380cd790238d3f.jpg,40x,40wss_,
├── ,2Fm.4008823823.com.cn
│ └── kfcmwos
│ └── googleapp
│ └── images
│ ├── android.jpg,40x,40w_,
│ ├── bg_02.jpg,40x,40w_,
│ ├── bg_03.jpg,40x,40w_,
│ ├── bg_04.jpg,40x,40w_,
│ ├── bg_05.jpg,40x,40w_,
│ ├── iphone.jpg,40x,40w_,
│ └── shu.jpg,40x,40w_,
├── ,2Fm.58.com
│ └── ga
│ ├── ,3Futmac=MO-35618414-1,26utmn=1331291015,26utmr=http,3A
│ │ └── ,2Fm.58.com
│ │ └── wf
│ │ └── ershouche
│ │ └── ,3Fminprice=0_3,26from=index_car,26utmp=
│ │ └── wf
│ │ └── car
│ │ └── ershouche
│ │ └── detail
│ │ └── ,26guid=ON,40x,40wss_,
│ ├── ,3Futmac=MO-35618414-1,26utmn=139750019,26utmr=http,3A
│ │ └── ,2Fm.58.com
│ │ └── yt
│ │ └── ,26utmp=
│ │ └── yt
│ │ └── car
│ │ └── ershouche
│ │ └── list
│ │ └── ,3Fpn=1,26guid=ON,40x,40wss_,
│ ├── ,3Futmac=MO-35618414-1,26utmn=160580307,26utmr=http,3A
│ │ └── ,2Fm.58.com
│ │ └── bj
│ │ └── ,3Futm_source=xiaomi_gg,26utmp=
│ │ └── bj
│ │ └── city
│ │ └── ,26guid=ON,40x,40wss_,
│ └── ,3Futmac=MO-35618414-1,26utmn=2017858660,26utmr=http,3A
│ └── ,2Fm.58.com
│ └── wf
│ └── ,26utmp=
│ └── wf
│ └── car
│ └── ershouche
│ └── list
│ └── ,3Ffrom=index_car,26amp,3Bpn=1,26guid=ON,40x,40wss_,
├── ,2Fmarket.cmbchina.com
│ └── ccard
│ └── wap
│ └── wapmljtb
│ └── images
│ ├── 1.jpg,40x,40wss_,
│ ├── 2.jpg,40x,40wss_,
│ ├── 3.jpg,40x,40wss_,
│ ├── 4.jpg,40x,40wss_,
│ ├── 5.jpg,40x,40wss_,
│ ├── banner.jpg,40x,40wss_,
│ ├── chickimgapp.png,40x,40wss_,
│ └── chickimg.png,40x,40wss_,
├── ,2Fm.baidu.com
│ └── static
│ ├── ala
│ │ └── ui
│ │ └── foot
│ │ └── ala_icon.gif,40x,40wss_,
│ ├── index
│ │ ├── favicon-57.png,40x,40vss_,
│ │ └── favicon-57.png,40x,40wss_,
│ └── search
│ ├── appAla
│ │ └── weizhan
│ │ ├── cc-ala-logo.png,40x,40vss_,
│ │ ├── cc-ala-logo.png,40x,40wss_,
│ │ ├── cc-go.png,40x,40vss_,
│ │ └── cc-go.png,40x,40wss_,
│ └── other
│ ├── region-icon-big.png,40x,40wss_,
│ └── time_icon_03.png,40x,40wss_,
├── ,2Fn.sinaimg.cn
│ └── crawl
│ └── 20150210
│ ├── OtiO-avxeafs1023999.jpg,40x,40wss_,
│ └── U8EZ-avxeafs1030917.jpg,40x,40wss_,
├── ,2Fretype.wenku.bdimg.com
│ └── retype
│ └── zoom
│ └── b5c276c208a1284ac850431e,3Fpn=1,26x=0,26y=0,26raww=500,26rawh=170,26aimh=95,26o=png_6_0_0_108_414_562_191_1263.375_893.25,26,-
│ └── ,26md5sum=7733b561ce69da7d4b94aa2f64fb932d,26sign=4ab2784140,26png=0-30260,26jpg=0-0,26type=pic,40x,40wss_,
├── ,2Fs9.rr.itc.cn
│ └── org
│ └── wapChange
│ ├── 20152_10_10
│ │ └── b25qc24147118217305.jpg,40x,40wss_,
│ ├── 20152_10_15
│ │ └── a4904d9842990575520.jpg,40x,40wss_,
│ └── 20152_10_8
│ └── a5vz98169515725385.jpg,40x,40wss_,
├── ,2Fu1.sinaimg.cn
│ └── upload
│ └── 2015
│ └── 0210
│ └── 09
│ └── 64e3cdd5.jpg,40x,40wss_,
├── ,2Fv.youmi.cn
│ └── static
│ └── images
│ └── newicon120wangye.png,40x,40vss_,
├── ,2Fwww.nhl.com
│ └── nhl
│ └── images
│ └── apple_touch_icons
│ ├── apple-icon-114x114.png,3Fv=8.17,40x,40wss_,
│ ├── apple-icon-144x144.png,3Fv=8.17,40x,40wss_,
│ ├── apple-icon-57x57.png,3Fv=8.17,40x,40wss_,
│ └── apple-icon-72x72.png,3Fv=8.17,40x,40wss_,
└── ,2Fzhibo.m.sohu.com
└── images
└── logo-icon-zhibo.png,40x,40wss_,
239 directories, 130 files
No. 4: I use nginx+pagespeed as a forward proxy for the purpose of bandwidth reduction. In order to avoid most strange unknown problems, I disabled most of the pagespeed filters here. Do you know of anyone using pagespeed as a forward proxy? My config is listed below. Is there any advice about it?
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 0;
pagespeed on;
pagespeed FetchHttps disable;
pagespeed MessageBufferSize 100000;
pagespeed StatisticsPath /ngx_pagespeed_statistics;
pagespeed GlobalStatisticsPath /ngx_pagespeed_global_statistics;
pagespeed MessagesPath /ngx_pagespeed_message;
pagespeed ConsolePath /pagespeed_console;
pagespeed AdminPath /pagespeed_admin;
pagespeed GlobalAdminPath /pagespeed_global_admin;
pagespeed FetchWithGzip on;
pagespeed UseNativeFetcher on;
pagespeed FetcherTimeoutMs 10000;
pagespeed NativeFetcherMaxKeepaliveRequests 100;
pagespeed RateLimitBackgroundFetches off; # risk ??
#pagespeed ForceCaching on;
pagespeed Statistics on;
pagespeed StatisticsLogging on;
pagespeed LogDir "/home/work/data1/pagespeeed/";
pagespeed ImageMaxRewritesAtOnce 1000;
pagespeed InPlaceResourceOptimization on;
pagespeed InPlaceRewriteDeadlineMs 2000;
pagespeed RewriteDeadlinePerFlushMs 2000;
pagespeed FileCachePath "/home/work/data1/ngx_cachedata/";
pagespeed FileCacheSizeKb 102400000;
pagespeed FileCacheCleanIntervalMs 3600000;
pagespeed FileCacheInodeLimit 5000000;
pagespeed CreateSharedMemoryMetadataCache "/home/work/data1/ngx_cachedata/" 16096000; #KB
pagespeed LRUCacheKbPerProcess 256000;
pagespeed LRUCacheByteLimit 16384;
pagespeed ImplicitCacheTtlMs 1800000;
pagespeed NumRewriteThreads 4;
pagespeed NumExpensiveRewriteThreads 32;
server {
pagespeed on;
listen 10.101.31.41:8192 ;
resolver 223.5.5.5 ;
resolver_timeout 5s;
#pagespeed RewriteLevel CoreFilters;
pagespeed RewriteLevel PassThrough;
pagespeed EnableFilters extend_cache;
pagespeed DisableFilters extend_cache_scripts;
pagespeed DisableFilters extend_cache_css;
pagespeed DisableFilters rewrite_css;
pagespeed DisableFilters rewrite_javascript;
pagespeed DisableFilters outline_javascript;
pagespeed DisableFilters dedup_inlined_images;
pagespeed Disallow "*.php**";
pagespeed AvoidRenamingIntrospectiveJavascript off;
pagespeed ReportUnloadTime off;
pagespeed DisableFilters add_instrumentation;
pagespeed EnableFilters rewrite_images;
pagespeed DisableFilters inline_images;
pagespeed EnableFilters recompress_images;
pagespeed EnableFilters recompress_jpeg;
pagespeed EnableFilters recompress_png;
pagespeed EnableFilters recompress_webp;
pagespeed EnableFilters strip_image_color_profile;
pagespeed EnableFilters strip_image_meta_data;
pagespeed EnableFilters jpeg_subsampling;
pagespeed DisableFilters resize_images;
pagespeed DisableFilters resize_rendered_image_dimensions;
pagespeed DisableFilters insert_image_dimensions;
pagespeed DisableFilters convert_gif_to_png;
pagespeed EnableFilters convert_png_to_jpeg;
pagespeed EnableFilters convert_jpeg_to_webp;
pagespeed InPlaceResourceOptimization on;
pagespeed EnableFilters in_place_optimize_for_browser;
pagespeed ImageRecompressionQuality 30;
pagespeed ImageLimitResizeAreaPercent 95;
pagespeed ImageLimitOptimizedPercent 90;
pagespeed JpegRecompressionQuality 30;
pagespeed JpegRecompressionQualityForSmallScreens 20;
pagespeed WebpRecompressionQuality 20;
pagespeed WebpRecompressionQualityForSmallScreens 20;
pagespeed MaxCacheableContentLength 2048000;
pagespeed ServeRewrittenWebpUrlsToAnyAgent on;
#pagespeed EnableFilters resize_mobile_images,sprite_images;
pagespeed NoTransformOptimizedImages off;
pagespeed FileCachePath "/home/work/data1/ngx_cachedata/";
location /ngx_pagespeed_statistics { allow all; }
location /ngx_pagespeed_global_statistics { allow all; }
location /ngx_pagespeed_message { allow all; }
location /pagespeed_console { allow all; }
location ~ ^/pagespeed_admin { allow all; }
location ~ ^/pagespeed_global_admin { allow all; }
location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { add_header "" ""; }
location ~ "^/ngx_pagespeed_static/" { }
location ~ "^/ngx_pagespeed_beacon$" { return 304; }
client_header_buffer_size 8k;
large_client_header_buffers 64 8k;
location / {
proxy_pass http://$host$request_uri;
proxy_set_header Host $http_host;
proxy_buffers 256 64k;
proxy_max_temp_file_size 2048000;
proxy_connect_timeout 5;
proxy_send_timeout 6;
proxy_read_timeout 15;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
No. 5: Are there any statistics about the performance of the pagespeed module for nginx?
No. 6: In my config file, the values of CreateSharedMemoryMetadataCache and FileCachePath are the same location. Could they be set differently, for example CreateSharedMemoryMetadataCache located on tmpfs while FileCachePath is located on a SATA or SSD partition? If so, how? I would like to put all the metadata on the memcached server to avoid re-optimizing resources after a restart, but I do not want the file data cached in memcached instead of on the local filesystem. Anyway, memory is scarce.
No. 7: When I clear all the old directories and restart nginx+pagespeed, the rname directory disappears? How can I confirm whether the shared metadata cache is in memory or somewhere else?
Thanks a lot!
No. 1
I'm still not sure where those strange filenames are coming from. I need to look more.
No. 2
The shared memory cache should manage its own evictions;
so I do not need to worry about the shared memory cache overflowing here?
Correct.
What is the relationship between the shared memory cache and the rname directory?
You're currently configured to use a shared memory metadata cache, which should put the equivalent of this rname directory in shared memory. If you hadn't turned on the shared memory metadata cache you would be using the filesystem to store metadata, in this rname directory. Your rname directory on disk isn't changing anymore because it's left behind after switching to the shared memory cache. Eventually the cache cleaner should delete it, when you run low on space.
No. 3: My rname directory never grows here; all the files under this directory look like the listing below. Is there any error here?
This is all fine: the rname directory on disk isn't being used anymore because you turned on the shared memory metadata cache.
And if I disable all the rewrite filters (everything related to the HTML), will there be nothing more under rname?
No, this directory won't go away until the cache cleaner gets to it, which won't happen until your cache gets bigger than the size limit.
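For reference, the cleaner is governed by the limits already in your config; restating them with comments (my paraphrase of what each one does):
pagespeed FileCacheSizeKb 102400000; # cleaning starts once the cache grows past this size
pagespeed FileCacheCleanIntervalMs 3600000; # the cleaner wakes up at most this often
pagespeed FileCacheInodeLimit 5000000; # cleaning also runs if the file count exceeds this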
No. 4: I use nginx+pagespeed as a forward proxy for the purpose of bandwidth reduction.
Ah, I missed that. Ignore what I said before about host-spam. Those were from valid forward proxy requests and not from forged host headers.
In order to avoid most strange unknown problems, I disabled most of the pagespeed filters here. Do you know of anyone using pagespeed as a forward proxy? My config is listed below. Is there any advice about it?
The Chrome Data Compression Proxy is a derivative of pagespeed, working as a forward proxy. One issue you're going to run into is that as a forward proxy you need absolutely huge amounts of cache before the optimizations you make on one request are likely to be useful in serving another request.
Instead of the manual enabling and disabling of filters you're doing, can you just set your RewriteLevel to OptimizeForBandwidth?
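As a rough sketch of what that could look like for your forward-proxy server block, reusing the listen address, resolver, cache path, and proxy settings from your config above (just the shape of it, not a drop-in replacement):
server {
listen 10.101.31.41:8192;
resolver 223.5.5.5;
pagespeed on;
pagespeed RewriteLevel OptimizeForBandwidth;
pagespeed FileCachePath "/home/work/data1/ngx_cachedata/";
location / {
proxy_pass http://$host$request_uri;
proxy_set_header Host $http_host;
}
}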
No. 5: Are there any statistics about the performance of the pagespeed module for nginx?
PageSpeed collects statistics about itself, and if you turn on the console you can see them.
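You already have the relevant directives; with them in place the pages are served at the configured paths, so something like the following is all that's needed (the hostname below is a placeholder):
pagespeed Statistics on;
pagespeed StatisticsLogging on;
pagespeed StatisticsPath /ngx_pagespeed_statistics;
pagespeed ConsolePath /pagespeed_console;
# then request http://<your-server>/pagespeed_console
# and http://<your-server>/ngx_pagespeed_statistics to view them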
No. 6: In my config file, the values of CreateSharedMemoryMetadataCache and FileCachePath are the same location. Could they be set differently, for example CreateSharedMemoryMetadataCache located on tmpfs while FileCachePath is located on a SATA or SSD partition?
The value of CreateSharedMemoryMetadataCache isn't actually where the shared memory cache should be created: it's always created in shared memory, not on disk at all. Instead, that parameter is the path of the file cache that the shared memory metadata cache is supposed to pair with. So they definitely need to be set to the same value, or else the shared memory metadata cache won't be used at all and instead all the metadata will be stored in the filesystem cache.
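In other words, the pairing already in your config is the right one; restating your own directives with the relationship spelled out:
# Both directives must name the same path: the argument to
# CreateSharedMemoryMetadataCache identifies which file cache it fronts,
# not where the shared memory lives.
pagespeed FileCachePath "/home/work/data1/ngx_cachedata/";
pagespeed CreateSharedMemoryMetadataCache "/home/work/data1/ngx_cachedata/" 16096000; #KB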
I would like to put all the metadata on the memcached server to avoid re-optimizing resources after a restart, but I do not want the file data cached in memcached instead of on the local filesystem.
To avoid reoptimization of resources when you restart, you should just remove the CreateSharedMemoryMetadataCache setting. PageSpeed will still use a default shared memory cache, but it will write its changes through to disk. I know, this is weird. We'd like to add checkpointing for the shared memory cache, but we haven't finished that yet.
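Under that approach the change is just dropping the one directive from your http block; a sketch using your paths:
pagespeed FileCachePath "/home/work/data1/ngx_cachedata/";
pagespeed FileCacheSizeKb 102400000;
# pagespeed CreateSharedMemoryMetadataCache "/home/work/data1/ngx_cachedata/" 16096000;
# With the line above removed, metadata is written through to the file cache
# and so survives an nginx restart.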
There's currently no way to put the metadata cache in memcached and the file cache on the filesystem.
No. 7: When I clear all the old directories and restart nginx+pagespeed, the rname directory disappears? How can I confirm whether the shared metadata cache is in memory or somewhere else?
In general, the right way to flush the cache isn't to just delete the files and restart, but instead to follow the documented cache-flush procedure. Instead of deleting the files, this just tells pagespeed not to use cache entries created before the flush.
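Concretely, if I recall the documented procedure correctly, it amounts to touching a cache.flush file inside your FileCachePath instead of deleting anything (treat the exact filename as an assumption and check the docs):
touch /home/work/data1/ngx_cachedata/cache.flush # assumed flush-file name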
The rname directory wasn't recreated when you deleted the files and restarted because you're using a manually configured shared memory metadata cache, so it's all in memory.
Oh, you are such a lovely person, thanks very much for your reply. "Instead of the manual enabling and disabling of filters you're doing, can you just set your RewriteLevel to OptimizeForBandwidth?" OK, I will try it later. However, stability and bandwidth reduction are our first targets right now; I have seen some strange things before that were caused by the other filters. I think it is either a bug or caused by our incorrect configuration.
"We'd like to add checkpointing for the shared memory cache, but we haven't finished that yet." That is cool.
"This allows PageSpeed to batch multiple Get requests into a single MultiGet request to memcached, which improves performance and reduces network round trips."
Hi all, if I add more memcached servers here, is the performance of "MultiGet" always better than "Get"? What do you think about Facebook's "memcached multiget hole"? If I do consistent-hashing load balancing at the front server and leave only one memcached server for each ngx_pagespeed server, would the performance improve or get even worse?