agusneos / lusca-cache

Automatically exported from code.google.com/p/lusca-cache

Transfer choked while downloading large cached file #9

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?

1. Set maximum_object_size to 1024000 KB
2. Download a large cacheable file, for example, 700M avi file
3. Clear browser cache and download that file again

What is the expected output? What do you see instead?

In step 3, the file should be sent to the client without any problem, but instead lusca chokes the transfer after sending some bytes, and the download cannot continue.

"Objects being sent to clients" in cachemgr reports "disk_io_pending":
KEY C8B598FB4374BFEDD88E75C89B9C1DA0
    GET http://192.168.1.6:8081/share/test001.avi
    Store lookup URL: http://192.168.1.6:8081/share/test001.avi
    STORE_OK      NOT_IN_MEMORY SWAPOUT_DONE PING_DONE   
    CACHABLE,DISPATCHED,VALIDATED
    LV:1235031029 LU:1235031081 LM:1232465896 EX:-1       
    2 locks, 1 clients, 2 refs
    Swap Dir 0, File 0X000BE9
    inmem_lo: 0
    inmem_hi: 0
    swapout: 0 bytes queued
    Client #0, 0xf326b18
        copy_offset: 22655278
        seen_offset: 22655278
        copy_size: 4096
        flags: disk_io_pending

When the download is aborted manually, a TCP_HIT appears in access.log.

What version of the product are you using? On what operating system?

# /usr/local/squid/sbin/squid -v
Squid Cache: Version LUSCA_1.0
configure options:  '--prefix=/usr/local/squid' '--with-pthreads'
'--with-aio' '--with-dl' '--with-large-files'
'--enable-storeio=ufs,aufs,diskd,coss,null'
'--enable-removal-policies=lru,heap' '--enable-htcp'
'--enable-kill-parent-hack' '--enable-snmp' '--enable-carp'
'--disable-poll' '--disable-select' '--enable-kqueue' '--disable-epoll'
'--disable-ident-lookups' '--enable-stacktraces' '--enable-cache-digests'
'--enable-err-languages=English'

# uname -r
7.1-RELEASE

squid-2.7.STABLE6 runs well with the same configuration.

Please provide any additional information below.

cache_mem 1024 MB
memory_pools_limit 0
cache_replacement_policy heap LFUDA
memory_replacement_policy heap LRU

maximum_object_size 1024000 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 512 KB

cache_swap_low 92
cache_swap_high 95

store_avg_object_size 256 KB
store_objects_per_bucket 128
cache_dir aufs /cache01 128000 128 256
cache_dir aufs /cache02 128000 128 256
cache_dir aufs /cache03 128000 128 256
cache_dir aufs /cache04 128000 128 256

Original issue reported on code.google.com by binliu.l...@gmail.com on 19 Feb 2009 at 9:41

GoogleCodeExporter commented 9 years ago
Could you please tell me what the headers are on the HTTP reply for that object? And exactly how big it is?

I'll use this to build a local testing environment.

Original comment by adrian.c...@gmail.com on 19 Feb 2009 at 2:51

GoogleCodeExporter commented 9 years ago
Here is my testing environment:

    A ----------- B -------- C
HTTP Server     Squid      Client

A: FreeBSD 7.1 + apache-1.3.41 (default configure)
B: FreeBSD 7.1 + lusca-1.0-r13795
C: Windows XP  + Internet Explorer 7

All network interfaces are Intel Gigabit adapters.

The size of the cached file does not seem to matter; some files no larger than 50 MB can still choke the transfer. The tricky part is that the choke point is random: it can occur after sending several hundred megabytes, or after just a few bytes.

Original comment by binliu.l...@gmail.com on 19 Feb 2009 at 3:24

GoogleCodeExporter commented 9 years ago
Using Firefox 3.1 Beta 2, this problem is still there. Here are the HTTP request and reply headers:

GET /share/test001.avi HTTP/1.1
Host: 192.168.1.6
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1b2)
Gecko/20081201 Firefox/3.1b2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://192.168.1.6/share/

HTTP/1.x 200 OK
Date: Thu, 19 Feb 2009 11:46:32 GMT
Server: Apache/1.3.37 (Unix)
Last-Modified: Tue, 20 Jan 2009 15:38:16 GMT
Etag: "3d7442-13bdb81c-4975efe8"
Accept-Ranges: bytes
Content-Length: 331200540
Content-Type: video/x-msvideo
Age: 84891
X-Cache: HIT from Lusca-Cache
X-Cache-Lookup: HIT from Lusca-Cache:3128
Via: 1.1 Lusca-Cache:3128 (Lusca)
Connection: close

Original comment by binliu.l...@gmail.com on 20 Feb 2009 at 11:25

GoogleCodeExporter commented 9 years ago
Just FYI, I've been unable to reproduce this. I'd really appreciate it if you could reproduce it locally and then tell me exactly how I can do so.

Original comment by adrian.c...@gmail.com on 8 Jul 2009 at 4:57

GoogleCodeExporter commented 9 years ago
Same happens here. The transfer gets stuck for a while and sometimes won't continue.
lusca-head r14148, FreeBSD 7.2

Original comment by chudy.fernandez on 12 Jul 2009 at 3:15

GoogleCodeExporter commented 9 years ago
Reproducing is very simple here: just download a large cached file. The only thing changed in the configuration file is "maximum_object_size 1024000 KB".

lusca-head r14148, FreeBSD 7.2

Original comment by binliu.l...@gmail.com on 12 Jul 2009 at 5:52

GoogleCodeExporter commented 9 years ago
And I'm trying to say that I've not seen the problem here.

So please tell me more about the exact setup: 32- or 64-bit lusca environment; what the testing client is (OS, architecture, setup); what the testing HTTP server is (OS, architecture, setup).

Original comment by adrian.c...@gmail.com on 12 Jul 2009 at 3:32

GoogleCodeExporter commented 9 years ago
> 32 or 64 bit lusca environment, 
64 bit

> what the testing client is (os, architecture, setup); 
Windows XP Professional (32 bit), with Internet Explorer 6, Internet Explorer 7, and Internet Explorer 8; using the lusca box either as a configured proxy or just as a gateway (transparent proxy)

> what the testing http server is (os, architecture, setup.)
FreeBSD 7.1 + apache-1.3.41 (default configure)

Original comment by binliu.l...@gmail.com on 13 Jul 2009 at 5:45

GoogleCodeExporter commented 9 years ago
.. I've seen this in another environment, but only when there was -one- request being handled with no further disk IO going on.

Was this how you were testing it? I.e., a single fetch, rather than multiple concurrent requests?

Original comment by adrian.c...@gmail.com on 14 Feb 2010 at 12:53

GoogleCodeExporter commented 9 years ago
yes.

Original comment by binliu.l...@gmail.com on 14 Feb 2010 at 3:37

GoogleCodeExporter commented 9 years ago
Right. That's why, then. There's a bug in the way the AUFS notification occurs, which means the main process is never guaranteed to wake up to handle pending IO.

If there's enough disk IO going on, this never occurs.

I may just revert the behaviour for now back to the way it was, so that single fetches occur at a decent speed, but the inter-thread signalling needs to be fixed.

Original comment by adrian.c...@gmail.com on 14 Feb 2010 at 4:43

GoogleCodeExporter commented 9 years ago
I've got the same problem: the transfer either chokes or stops on the client side. I tested this by downloading two 150 MB files using IDM (result: choked) and the Fedora 13 ISO, 675 MB (stopped). But according to the log, lusca is still downloading; it seems lusca downloads the file completely and only then sends it to the client?

Original comment by unexplai...@gmail.com on 3 Nov 2010 at 1:41