Hm, how big are your objects in memory? That is the key thing.
Lusca and Squid-2 have poor scalability when large objects are cached inside Lusca/Squid-2 itself: the in-memory data is kept as a linked list of pages. Squid-3 uses a tree instead, which is slower per node than a list but makes lookups quicker.
I know roughly what I have to do to make Lusca perform well in that case, but it is a question of time. :) (Not much time would be needed; just -some- time.)
Original comment by adrian.c...@gmail.com on 2 Nov 2009 at 12:23
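
As an illustration of the list-versus-tree point above, here is a minimal sketch (not the actual Lusca source; the names are made up, and the 4 KB page mirrors Squid's SM_PAGE_SIZE) of why lookups in a page list get slow for large objects:

#include <stddef.h>

typedef struct mem_node_ {
    struct mem_node_ *next;
    size_t len;               /* bytes actually stored in this page */
    char data[4096];          /* fixed-size page, a la SM_PAGE_SIZE */
} mem_node_t;

/* Find the page containing 'offset': one hop per page, so the cost grows
   linearly with object size. A balanced tree, as in Squid-3, pays more
   per node but needs only O(log n) hops. */
static mem_node_t *
find_page(mem_node_t *head, size_t offset)
{
    size_t pos = 0;
    mem_node_t *p;
    for (p = head; p != NULL; p = p->next) {
        if (offset < pos + p->len)
            return p;
        pos += p->len;
    }
    return NULL;              /* offset past the end of the object */
}
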
Well, it will sound strange, but I didn't expect things to be cached on this server. I didn't block caching intentionally, but I certainly will if that helps.
It's the default, I guess: "maximum_object_size_in_memory 8 KB". But probably a lot of objects are cached, since 512 MB is a large value. Would decreasing cache_mem to a much lower value help?
Original comment by nuclear...@gmail.com on 2 Nov 2009 at 12:33
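
For reference, these are the directives under discussion, in standard Squid-2.x syntax (which Lusca inherits); the values shown are the ones mentioned in this thread, not recommendations:

# memory pool for hot and in-transit objects
cache_mem 512 MB
# objects larger than this are not kept in the memory cache
maximum_object_size_in_memory 8 KB
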
No, the size of cache_mem won't make a difference, only the object size.
Very interesting, though; let me have a think about it and I'll get back to you.
Original comment by adrian.c...@gmail.com on 2 Nov 2009 at 1:31
If required, I can give you access to this host if you need more info. I can provide anything else you want; for example, I can try to generate a core dump.
One note: the box is "semi-embedded", busybox+glibc, with almost everything in initramfs just to run things from a USB flash drive. So not many tools are available, but I can compile most of them if needed.
I can solve my personal case by restarting Lusca (I'll lose almost nothing; I can run a redundant host/port for the moment), but I guess it would be nice to solve this issue properly, since not everybody can do the same with their proxy :-)
Original comment by nuclear...@gmail.com on 2 Nov 2009 at 1:50
int
stmemRef(const mem_hdr * mem, squid_off_t offset, mem_node_ref * r)
{
    mem_node *p = mem->head;
    volatile squid_off_t t_off = mem->origin_offset;

Is volatile really required there? It is probably destroying all optimizations in this part of the code.
Original comment by nuclear...@gmail.com on 11 Nov 2009 at 9:25
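
As a standalone illustration (not Lusca code) of what that qualifier costs: a volatile object must be re-read from memory at every use, so the compiler can neither keep it in a register nor hoist it out of a loop.

#include <stddef.h>

long
sum_relative(const long *offsets, size_t n)   /* assumes n >= 1 */
{
    /* With 'volatile', 'base' is loaded from memory on every iteration;
       without it, the compiler may keep 'base' in a register. */
    volatile long base = offsets[0];
    long sum = 0;
    size_t i;
    for (i = 1; i < n; i++)
        sum += offsets[i] - base;
    return sum;
}
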
Who knows; I've committed something to disable it. Try just removing volatile, recompiling, and seeing what it does for performance.
Original comment by adrian.c...@gmail.com on 12 Nov 2009 at 4:18
It seems the performance gain from removing volatile is minor, but it is not crashing, and maybe there's no reason to keep it if it is not needed.
Another interesting thing I found: if I have null disk storage,

cache_dir null /tmp
cache_mem 4 MB
maximum_object_size 128M

and I don't define "maximum_object_size_in_memory", the config defines it as:

#Default:
# maximum_object_size_in_memory 8 KB

It seems that when Lusca fetches a large object, it keeps the whole object in memory (even though the object can't be used for caching in any way after it is retrieved). So after a while, "slow" clients fetching big objects kill the whole proxy's performance (because the memory that stmemRef walks becomes extremely large), and VSZ (the process size) reaches 2 GB.
As soon as I change to "maximum_object_size 1 MB", the situation changes significantly: the process size is now 351 MB, and performance is MUCH better.
Original comment by nuclear...@gmail.com on 12 Nov 2009 at 6:46
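
Rough numbers behind that observation, assuming Squid's usual 4 KB page size (SM_PAGE_SIZE) and the linear page walk sketched earlier; the figures are illustrative:

#include <stdio.h>

int
main(void)
{
    const long object_size = 64L * 1024 * 1024;   /* one 64 MB in-memory object */
    const long page_size = 4096;                  /* SM_PAGE_SIZE */
    /* Every read near the tail of the object walks roughly this many
       list nodes, on every stmemRef()-style call. */
    printf("~%ld nodes per tail read\n", object_size / page_size);
    return 0;
}

With many slow clients each pinning a large object, both the walk cost and the resident size multiply, which matches the 2 GB VSZ reported above.
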
That is a much, much more likely candidate for the performance issues with stmemRef().
I'll look into it. Thanks!
Original comment by adrian.c...@gmail.com on 13 Nov 2009 at 1:59
Aha! Reproduced!
KEY 8AD0F1F875004EE3313322F7E0302E5F
GET http://192.168.11.2/test.64m
STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
CACHABLE,DISPATCHED,VALIDATED
LV:1265803922 LU:1265803947 LM:1246070610 EX:-1
4 locks, 1 clients, 2 refs
Swap Dir -1, File 0XFFFFFFFF
inmem_lo: 0
inmem_hi: 11600487
swapout: 0 bytes queued
Client #0, 0x0
copy_offset: 11583488
seen_offset: 11583488
copy_size: 4096
flags:
Using wget --limit-rate=200k <64mb test file> with the above storage config, through a localhost proxy, I see Lusca slowly grow its memory usage (the dump above already shows about 11 MB pinned between inmem_lo and inmem_hi, with 0 bytes queued for swapout); I wonder why it's doing that.
Original comment by adrian.c...@gmail.com on 10 Feb 2010 at 12:14
Closing this ticket; the initial issue was sorted out.
Original comment by adrian.c...@gmail.com on 10 Feb 2010 at 12:18
Original issue reported on code.google.com by nuclear...@gmail.com on 1 Nov 2009 at 4:16