google-code-export / lusca-cache

Automatically exported from code.google.com/p/lusca-cache

Memory usage grows unbounded when fetching objects #84

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago

Original post from nuclearcat:

The performance gain from changing volatile seems minor, but it is not crashing,
and maybe there's no reason to keep it if it is not used.

Another interesting thing I found:

If I have null disk storage:
cache_dir null /tmp

cache_mem 4 MB
maximum_object_size 128M

and I don't define "maximum_object_size_in_memory", so the config default applies:
#Default:
# maximum_object_size_in_memory 8 KB

It seems that when Lusca fetches a large object, it keeps the whole object in
memory (even though the object can't be used for caching in any way after it is
retrieved). So after a while, "slow" clients who are fetching big objects kill
the whole proxy's performance (because the memory that the stmem ref-walking has
to traverse becomes extremely large), and the VSZ (process size) reaches 2 GB.

As soon as I change to "maximum_object_size 1 MB", the situation changes
significantly: the process size is now 351 MB, and performance is MUCH better.

Original issue reported on code.google.com by adrian.c...@gmail.com on 10 Feb 2010 at 12:16
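
For context on the "stmem ref walking" cost described above: in the Squid-2-era
store code that Lusca inherits, the in-memory window of an object (the bytes
between inmem_lo and inmem_hi) is held as a linked list of fixed-size pages, and
locating a client's read offset means walking that list from the head. A rough
sketch, with approximate names (stmem_node_at is illustrative, not an actual
Lusca function):

#include <stddef.h>
#include <sys/types.h>

typedef off_t squid_off_t;      /* stand-in for the Squid-2 typedef */

#define SM_PAGE_SIZE 4096       /* stmem page size */

typedef struct _mem_node {
    char data[SM_PAGE_SIZE];
    int len;                    /* bytes used in this page */
    struct _mem_node *next;
} mem_node;

typedef struct {
    mem_node *head;             /* page containing offset inmem_lo */
    mem_node *tail;
    squid_off_t origin_offset;  /* object offset of head->data[0] */
} mem_hdr;

/* Find the page holding 'offset' by walking from the head.  While
 * inmem_lo is pinned at 0 the list spans the whole object, so every
 * read by a slow client near the end of a large object walks the
 * entire list -- which is why performance collapses as objects grow. */
static mem_node *
stmem_node_at(const mem_hdr *hdr, squid_off_t offset)
{
    squid_off_t o = hdr->origin_offset;
    mem_node *p;
    for (p = hdr->head; p != NULL; p = p->next) {
        if (offset < o + p->len)
            return p;
        o += p->len;
    }
    return NULL;                /* offset not held in memory */
}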

GoogleCodeExporter commented 9 years ago
Aha! Reproduced!

KEY 8AD0F1F875004EE3313322F7E0302E5F
        GET http://192.168.11.2/test.64m
        STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_DONE   
        CACHABLE,DISPATCHED,VALIDATED
        LV:1265803922 LU:1265803947 LM:1246070610 EX:-1       
        4 locks, 1 clients, 2 refs
        Swap Dir -1, File 0XFFFFFFFF
        inmem_lo: 0
        inmem_hi: 11600487
        swapout: 0 bytes queued
        Client #0, 0x0
                copy_offset: 11583488
                seen_offset: 11583488
                copy_size: 4096
                flags:

Using wget --limit-rate=200k <64mb test file> with the above storage config,
through a localhost proxy, sees Lusca slowly grow its memory usage; note in the
dump that inmem_lo stays at 0 while inmem_hi keeps climbing well past cache_mem.
I wonder why it's doing that.

Original comment by adrian.c...@gmail.com on 10 Feb 2010 at 12:18

GoogleCodeExporter commented 9 years ago
The object memory in use is still very high. I wonder what is going on.

OK, storeSwapOutMaintainMemObject() is the key here. The object is considered
swapout-able, but it obviously won't be swapped out just yet:
storeSwapOutObjectBytesOnDisk() returns 0. Since the code thinks the object may
make it to disk at some point, but there are 0 bytes on disk so far, new_mem_lo
is set to 0 and none of the in-memory data is released.
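
A condensed sketch of that logic, assuming the usual Squid-2/Lusca declarations
(StoreEntry, MemObject, storeLowestMemReaderOffset(),
storeSwapOutObjectBytesOnDisk(), stmemFreeDataUpto()); swapout_eligible() is a
placeholder name for the real eligibility checks, not actual Lusca code:

/* Condensed sketch, not the literal Lusca function.  Assumes the
 * usual Squid-2/Lusca declarations from squid.h. */
static void
storeSwapOutMaintainMemObject(StoreEntry *e)
{
    MemObject *mem = e->mem_obj;
    /* Lowest offset any attached client still needs to read. */
    squid_off_t lowest = storeLowestMemReaderOffset(e);
    squid_off_t new_mem_lo;

    if (swapout_eligible(e)) {  /* placeholder for the real checks */
        /* The object may still make it to disk, so only bytes already
         * written out can be released.  Early in the transfer
         * storeSwapOutObjectBytesOnDisk() returns 0, so new_mem_lo is
         * clamped to 0 and the whole object stays pinned in memory. */
        squid_off_t on_disk = storeSwapOutObjectBytesOnDisk(mem);
        new_mem_lo = on_disk < lowest ? on_disk : lowest;
    } else {
        /* Never going to disk: free everything every client has seen. */
        new_mem_lo = lowest;
    }
    stmemFreeDataUpto(&mem->data_hdr, new_mem_lo);
    mem->inmem_lo = new_mem_lo;
}

With a null cache_dir and maximum_object_size 128M, a 64 MB fetch stays in the
first branch with on_disk == 0 for its whole lifetime, which matches the
inmem_lo: 0 in the dump above.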

I wonder what the correct behaviour should be.

Original comment by adrian.c...@gmail.com on 10 Feb 2010 at 12:24

GoogleCodeExporter commented 9 years ago
I've been thinking about it again briefly. The problem is that there's currently
no way to determine _when_ an object _may_ be delayed-swapout-able. It may fail
the swapout because of load (does it retry in this case?); it may fail the
swapout because of the size limitations, and that behaviour delays the swapout
until the object meets the minimum object size.

What about defining a "maximum object size across all cache_dirs" which then
becomes the watermark for determining whether an in-memory object is ever going
to be swapped out? That'll fix -this- issue, but it won't fix the "disk is too
slow to swap the object out fast enough" issue. I'm under the impression that
the latter is addressed by having the object read from the network only as fast
as the slowest client, including the swapout "client".
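
For what it's worth, a sketch of that watermark test, assuming a Squid-2-style
store_maxobjsize global holding the largest max-size of any configured
cache_dir; the function name entryWillNeverSwapOut is illustrative:

/* Illustrative sketch; entryWillNeverSwapOut() is not existing Lusca
 * code.  Assumes store_maxobjsize is the largest max-size across all
 * configured cache_dirs. */
static int
entryWillNeverSwapOut(const StoreEntry *e)
{
    squid_off_t clen = e->mem_obj->reply->content_length;
    /* A reply already known to be larger than any cache_dir will
     * accept can never be swapped out, so memory behind the slowest
     * reader could be freed immediately instead of pinning the whole
     * object at inmem_lo == 0. */
    return clen > -1 && clen > store_maxobjsize;
}

storeSwapOutMaintainMemObject() could then treat such an entry like the
non-swapout case and advance new_mem_lo to the lowest reader offset.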

Hm! More food for thought!

Original comment by adrian.c...@gmail.com on 26 Mar 2010 at 2:03