zakkymuha / lusca-cache

Automatically exported from code.google.com/p/lusca-cache

posix_fadvise optimization? #93

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
I did two optimizations on real production servers, and the load average
dropped from over 16 (the number of AUFS threads) to 2.0-2.5.

src/fs/aufs/store_io_aufs.c

     if (aiostate->flags.close_request)
        storeAufsIOCallback(sio, errflag);
+    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
     debug(79, 3) ("storeAufsOpenDone: exiting\n");

and

     debug(79, 9) ("%s:%d\n", __FILE__, __LINE__);
 #if ASYNC_CLOSE
+    fdatasync(fd);
+    posix_fadvise(fd, 0,0,POSIX_FADV_DONTNEED);
     fd_close(fd);
     aioClose(fd);

I didn't have a chance to check which one gave the bigger improvement, since
it is production, after all.
As far as I understand, the first one recommends that the OS perform double
readahead, and the second one drops the file from the page cache after closing.
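
For reference, a minimal standalone sketch of the two hints the hunks above
add. This is my own illustration, assuming Linux and glibc, not the actual
LUSCA code paths:

    /* build with: cc -D_XOPEN_SOURCE=600 ... */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hint 1 (on open): advise sequential access. Linux typically
     * responds by doubling the readahead window for this descriptor.
     * posix_fadvise() returns an errno value directly (it does not set
     * errno), so report it with strerror(). Advisory only. */
    static void hint_sequential(int fd)
    {
        int err = posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
        if (err != 0)
            fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
    }

    /* Hint 2 (before close): flush dirty pages, then ask the kernel to
     * drop the now-clean pages from the page cache. DONTNEED cannot
     * evict pages that are still dirty, hence the fdatasync() first --
     * the blocking call that turns out to be a problem later in this
     * thread. */
    static void flush_and_drop(int fd)
    {
        if (fdatasync(fd) != 0)
            perror("fdatasync");
        int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        if (err != 0)
            fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
    }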

OS: Linux, 64-bit with 32-bit userspace (to eliminate kernel lowmem
limitations)
HDD: 1 drive, SATA, without NCQ

Here are some stats from the proxy:
Store Directory Statistics:
Store Entries          : 1576959
Maximum Swap Size      : 172294144 KB
Current Store Swap Size: 97875056 KB
Current Capacity       : 57% used, 43% free

Store Directory #0 (aufs): /cache1/squid1
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 65536 KB
Current Size: 64404 KB
Percent Used: 98.27%
Current load metric: 190 / 1000
Filemap bits in use: 16104 of 1048576 (2%)
Filesystem Space in use: 107184648/307663800 KB (35%)
Filesystem Inodes in use: 1586796/19537920 (8%)
Flags: SELECTED
Accepted object sizes: 0 - 2048 bytes
Removal policy: heap

Store Directory #1 (aufs): /cache1/squid2
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 8388608 KB
Current Size: 8245324 KB
Percent Used: 98.29%
Current load metric: 190 / 1000
Filemap bits in use: 1305283 of 2097152 (62%)
Filesystem Space in use: 107184648/307663800 KB (35%)
Filesystem Inodes in use: 1586796/19537920 (8%)
Flags:
Accepted object sizes: 2048 - 65536 bytes
Removal policy: heap

Store Directory #2 (aufs): /cache1/squid3
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 256
Maximum Size: 163840000 KB
Current Size: 89565328 KB
Percent Used: 54.67%
Current load metric: 190 / 1000
Filemap bits in use: 252841 of 262144 (96%)
Filesystem Space in use: 107184648/307663800 KB (35%)
Filesystem Inodes in use: 1586796/19537920 (8%)
Flags:
Accepted object sizes: 65536 - (unlimited) bytes
Removal policy: heap

client_http.requests = 223.952174/sec
client_http.hits = 34.264979/sec
client_http.kbytes_in = 214.872537/sec
client_http.kbytes_out = 2060.262149/sec
server.all.kbytes_in = 1742.042176/sec
server.all.kbytes_out = 190.699002/sec

Hardware:
model name      : Intel(R) Pentium(R) Dual  CPU  E2200  @ 2.20GHz
HDD Model=ST3320418AS

Original issue reported on code.google.com by nuclear...@gmail.com on 6 Mar 2010 at 8:18

GoogleCodeExporter commented 9 years ago
Seems fdatasync is harmful, since it is a blocking operation.

Original comment by nuclear...@gmail.com on 6 Mar 2010 at 9:02

GoogleCodeExporter commented 9 years ago
Seems fdatasync is harmful, since it is a blocking operation.
On some heavily loaded proxies this patch blocks squid.

    posix_fadvise(fd, 0,0,POSIX_FADV_DONTNEED);
is almost useless without it; I will have to do more tests.
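
One possible way to keep the eviction without the blocking flush, purely a
suggestion and untested here, is the Linux-specific sync_file_range(), which
starts writeback asynchronously and returns immediately; DONTNEED then drops
whatever pages are already clean:

    /* Sketch only -- sync_file_range() is Linux-specific (_GNU_SOURCE). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void drop_cache_async(int fd)
    {
        /* Queue writeback for the whole file; does not wait for it. */
        if (sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE) != 0)
            perror("sync_file_range");
        /* Pages still dirty at this point simply stay cached. */
        int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        if (err != 0)
            fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
    }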

Original comment by nuclear...@gmail.com on 6 Mar 2010 at 9:05

GoogleCodeExporter commented 9 years ago
Well, you're doing that on open and close - we could just throw that into the 
AIO open/close handler.
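Roughly this shape, though the handler names below are made up for
illustration rather than being the real LUSCA AIO entry points:

    /* build with: cc -D_XOPEN_SOURCE=600 ... */
    #include <fcntl.h>
    #include <unistd.h>

    /* Centralizing the hints in the AIO layer so every file opened or
     * closed through AIO gets them automatically. aio_open_done() and
     * aio_close_begin() are hypothetical names. */
    static void aio_open_done(int fd)
    {
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    }

    static void aio_close_begin(int fd)
    {
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        close(fd);
    }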

I wonder why it's giving you a noticeable improvement in performance. What
made you try it?

Original comment by adrian.c...@gmail.com on 20 Mar 2010 at 8:47

GoogleCodeExporter commented 9 years ago
I am still not sure about the performance improvement, since the real load
keeps changing... I will try to schedule more extensive testing in my cluster
(4 servers with the optimization, and 4 without). Load is distributed over
Linux-VServer; I can enable round-robin to make the load similar.
I probably also need to set up some data collection to draw graphs...

The most significant improvements could be:
1) DONTNEED releases the file from the page cache. As I understand it, if a
file is requested often, it will most probably be in squid's own memory cache
anyway.
2) SEQUENTIAL can turn on more readahead. I'm not sure it will give any
benefit, but if there is plenty of RAM for file caching it will mean fewer
seeks.

Original comment by nuclear...@gmail.com on 20 Mar 2010 at 2:07