lancachenet / monolithic

A monolithic lancache service capable of caching all CDNs in a single instance
https://hub.docker.com/r/lancachenet/monolithic

Confusing issue slow Blizzard vs fast Steam #17

Closed Peon-SouthAfrica closed 4 years ago

Peon-SouthAfrica commented 5 years ago

Describe the issue you are having

Steam downloads of cached content max out the LAN connection. When downloading un-cached Blizzard games like Hearthstone I get maybe 1.5 MB/s; after Hearthstone is cached, it maxes out at 30 MB/s.

I don't fully understand how Steam works, but I can't get Blizzard updates and games to run at full speed.

I have tailed the error log and no errors were produced. The access log reports only HITs on the cache.

I'm hoping it's something stupid I'm doing.

How are you running the container(s)?

docker run --restart unless-stopped -d --name steamcache-multipleIPs -p 192.168.0.201:53:53/udp -v /etc/localtime:/etc/localtime -e STEAMCACHE_IP="192.168.0.202 192.168.0.203 192.168.0.204 192.168.0.205" -e BLIZZARDCACHE_IP="192.168.0.206 192.168.0.207 192.168.0.208 192.168.0.209 192.168.0.210" steamcache/steamcache-dns:latest

docker run --restart unless-stopped --name steamcache --detach -e UPSTREAM_DNS=192.168.0.9 -v /dload/steam/cache/data:/data/cache -v /dload/steam/cache/logs:/data/logs -p 192.168.0.202:80:80 -p 192.168.0.203:80:80 -p 192.168.0.204:80:80 -p 192.168.0.205:80:80 steamcache/monolithic:latest

docker run --restart unless-stopped --name blizzardcache --detach -e UPSTREAM_DNS=192.168.0.9 -v /etc/localtime:/etc/localtime -v /dload/blizzard/cache/data:/data/cache -v /dload/blizzard/cache/logs:/data/logs -p 192.168.0.206:80:80 -p 192.168.0.207:80:80 -p 192.168.0.208:80:80 -p 192.168.0.209:80:80 -p 192.168.0.210:80:80 steamcache/monolithic:latest

docker run --restart unless-stopped --name sniproxy --detach -p 443:443 steamcache/sniproxy:latest

DNS Configuration

Ethernet connection properties: DNS -> Docker IP of .201

Output of container(s)

No Errors

daveplsno commented 5 years ago

I don't have many logs handy, but I wanted to chime in to confirm I have the same issue and run the containers in basically the same fashion. It has been an issue for several months; I didn't care much about it, considering there's only a small number of games affected during warm-up. It only came back to mind recently when I moved to monolithic.

I had the same problem across generic, generic with multiple IPs, and now the monolithic container. All services besides Blizzard get max speed (11 MB/s) during cache warm-up.

Note that I did see an improvement when I moved to multiple IPs for Steam, and in testing I added multiple IPs for Blizzard as well (different from my Steam IPs). Originally, with one IP serving Blizzard requests, speeds were around 0.5 to 1 MB/s; with multiple IPs (about 20, lol) I'm seeing between 3-5 MB/s.

I'm also noticing my access logs have an unusual number of HIT entries, even for games I haven't downloaded. Cached games download at around 50 MB/s. I can't give any helpful stats on this now, as my cache is mostly primed and reused, so I have far more hits than misses, as expected. I don't have graphs demonstrating the hit/miss ratio at the moment, but for example Steam was clearly 100% misses for new games, while Blizzard would show 90% hits for new games. I don't really understand how the cache hashing works; maybe that is why Blizzard shows this.

No obvious errors across the SNI, DNS or monolithic containers.

Being in Sydney, I get the response times below. I can't see why this would cause issues, but it's the only thing that sticks out to me, and something I can't really test any other way.

run from the dns container

bash-4.4# ping us.cdn.blizzard.com
PING us.cdn.blizzard.com (137.221.64.8): 56 data bytes
64 bytes from 137.221.64.8: seq=0 ttl=51 time=147.042 ms
64 bytes from 137.221.64.8: seq=1 ttl=51 time=147.384 ms
64 bytes from 137.221.64.8: seq=2 ttl=51 time=148.264 ms
64 bytes from 137.221.64.8: seq=3 ttl=51 time=147.549 ms
64 bytes from 137.221.64.8: seq=4 ttl=51 time=147.183 ms
64 bytes from 137.221.64.8: seq=5 ttl=51 time=147.425 ms
^C
--- us.cdn.blizzard.com ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 147.042/147.474/148.264 ms

run from the monolithic container

root@31c8c4c658f8:/scripts# curl -w "@curl-format.txt" -o /dev/null -s "us.cdn.blizzard.com"
    time_namelookup:  0.060664
       time_connect:  0.060868
    time_appconnect:  0.000000
   time_pretransfer:  0.060991
      time_redirect:  0.000000
 time_starttransfer:  3.424826
         time_total:  3.424940

root@31c8c4c658f8:/scripts# curl -w "@curl-format.txt" -o /dev/null -s "level3.blizzard.com"
    time_namelookup:  0.060512
       time_connect:  0.060738
    time_appconnect:  0.000000
   time_pretransfer:  0.060838
      time_redirect:  0.000000
 time_starttransfer:  0.291258
         time_total:  0.291371

and one for comparison against one of the Steam URLs:

root@31c8c4c658f8:/scripts# curl -w "@curl-format.txt" -o /dev/null -s "valve2004.steamcontent.com"
    time_namelookup:  0.126986
       time_connect:  0.127358
    time_appconnect:  0.000000
   time_pretransfer:  0.127473
      time_redirect:  0.000000
 time_starttransfer:  0.134200
         time_total:  0.134310
Peon-SouthAfrica commented 5 years ago

@blenderpls Wow, you dug a lot deeper than me. Considering what you're saying, I'm starting to think it's not our setups but rather the code.

VibroAxe, any comment from you?

daveplsno commented 5 years ago

I don't know; if something were wrong with the containers themselves, it would probably be reported by more people. Much of the cache warm-up trouble seems to be resolved by the multiple-IP tuning.

Having tried running the containers many ways on a couple of different machines, I can't see any change in Blizzard's cache warm-up speeds. Considering I'm not seeing issues with the other cached services, I suspect the problem is somewhere between the depot and the CDNs. I'm not sure how to test that, and it doesn't really matter once content is cached; Blizzard games are probably among the fastest to come off the cache once it's primed, lol.

Peon-SouthAfrica commented 5 years ago

Hey mods, please could you test Battle.net? Asking kindly.

Perhaps if we all test and troubleshoot we can get to the bottom of this. Steamcache is crucial for large LANs and internet cafés.

@blenderpls Do your cached Battle.net games download at max LAN speed?

Peon-SouthAfrica commented 5 years ago

@blenderpls Please can you post your blizzard config. I would be most grateful.

Peon-SouthAfrica commented 5 years ago

Anyone?

gleesnipeshot commented 5 years ago

I'm seeing the same issue.

Peon-SouthAfrica commented 5 years ago

Nobody is going to help us it seems.

Oh well, it was a cool idea.

Peon-SouthAfrica commented 5 years ago

Strangely enough, I quickly installed Squid on a clean VM.

Uncached downloads go at full speed, but the cache doesn't work; maybe that means something.

@gleesnipeshot: Do you want to check as well and report back if you get the same results?

wofnull commented 5 years ago

I'm seeing the same issue in my logs: fresh/uncached content is cached at an abnormally low speed compared to other services like Uplay or Steam.

On a 100 Mbit line, uncached files:

  Steam = 100% saturation
  Uplay = 100% saturation
  Wargaming launcher = 100% saturation
  Blizzard = ~50% saturation (~30-49 Mbit)

Cached files (re-download):

  Steam: maxed out ~900 Mbit
  Uplay: maxed out ~900 Mbit
  WG: maxed out ~900 Mbit
  Blizzard: maxed out ~900 Mbit

I can confirm the logging behaviour already mentioned: few misses and an extremely long list of hits for Blizzard content. The cause becomes visible with further monitoring: first a miss on a file not yet in the cache, which triggers its download; all subsequent hits are for the same file, but only for parts of it, to the point where the download from the cache is slower than the launcher's requests (causing an additional miss on the same file). It seems the cached files for Blizzard games are really large and not broken down into smaller chunks as with Steam games, which leads to this logging behaviour.

However, this does not explain why the initial download via the cache is so slow (50% of 100 Mbit) instead of running at uncached speed (full load). It looks a lot like the launcher limits the speed while going through the cache...

Peon-SouthAfrica commented 5 years ago

I don't think the authors are active much anymore, sadly.

wofnull commented 5 years ago

Hi @Peon-SouthAfrica,

unfortunately, the issue predates the monolithic container; the same issue was already present in the old steamcache/generic image (https://github.com/steamcache/generic/issues/61).

That issue was closed because the monolithic image was supposed to solve the main problem here (which it obviously did not).

However, the authors, and especially @VibroAxe, are still pretty active across the steamcache / uk-lans repositories.

The problem is more that the project has become too widespread to keep an overview of open issues: several (more or less duplicate) error reports are spread across the repositories (steamcache/steamcache, steamcache/generic, steamcache/monolithic, steamcache/steamcache-dns), and to make things worse the same is happening over at the uk-lans git.

It would be nice to have a common bug tracker for all the repositories instead of searching several of them for one error...

However, the problem will most likely not be fixable, because I found that the download behaviour during cache priming seems to depend on the launcher in the first place: it appears to pull the large files on a request basis, chunk by chunk, and the problem is that the cache seems to download the chunks one after another, not several at once as the launcher does. I could be wrong, but if that is the case, neither an update to the cache nor to the DNS / DNS list can do anything about this.

This behaviour would also answer the question of why the cache runs at full speed once it is primed, after this really slow initial download.

Peon-SouthAfrica commented 5 years ago

@wofnull I hear you buddy.

I tried this from top to bottom and literally broke the containers trying.

I've gone so far as to think this is something outside the authors' control.

I wish VibroAxe would comment.

VibroAxe commented 5 years ago

Sorry guys, the rest of the steamcache team and I are pretty busy with real life; we do our best to keep track of all the issues.

Fundamentally, although this issue is inconvenient, it's not a major one in the intended use case at a LAN. Our primary goal is saving bandwidth versus every user downloading. I can't think of a single LAN that would have ~50 Mbit of free bandwidth for a single user during the event, so this issue primarily affects cache priming. Since priming isn't time-bound, it's of less concern to us than some of the other big hitters (Steam enabling SSL is taking up a huge proportion of our resources at the moment).

Obviously we do value the issues and will try to help where possible!

So with that in mind

  1. are you all using multiple IPs for the Blizzard CDNs
  2. do you see an improvement vs a single IP
  3. which CDNs are you upstreaming to
  4. where in the world are you (based on @wofnull's diagnostics I'm wondering if this is a US CDN issue, as I'm not aware we've seen it in Europe)

wofnull commented 5 years ago

Hi @VibroAxe, thanks for showing up here 👍 As I said before, I already suspected something like this was going on ;)

so to your questions:

  1. tested on single IP and multi IP -> as far as I can see, the issue remains
  2. no improvement, the internet connection stays at around 50% saturation
  3. upstream CDNs found: level3.blizzard.com and eu.cdn.blizzard.com, monitored for WoW / HS / OW games in hits and misses
  4. the problem occurs for me in Europe / Germany / Berlin area

Furthermore, the logging behaviour mentioned earlier (the file on the CDN is split into multiple chunks, where only the initial part is a miss but the later requests are hits):

[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "MISS" "level3.blizzard.com" "bytes=0-266239"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "level3.blizzard.com" "bytes=532480-798719"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "eu.cdn.blizzard.com" "bytes=798720-1064959"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "eu.cdn.blizzard.com" "bytes=1064960-1331199"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "eu.cdn.blizzard.com" "bytes=1331200-1597439"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "level3.blizzard.com" "bytes=1597440-1863679"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "eu.cdn.blizzard.com" "bytes=1863680-2129919"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "level3.blizzard.com" "bytes=2129920-2396159"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "level3.blizzard.com" "bytes=2396160-2662399"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "eu.cdn.blizzard.com" "bytes=2662400-2928639"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "level3.blizzard.com" "bytes=266240-532479"
[blizzard] 192.168.178.114 / - - - [15/May/2019:11:34:34 +0200] "GET /tpr/ovw/data/a5/44/a544d76aec0baa4775599336e67ee8ce HTTP/1.1" 206 266240 "-" "-" "HIT" "eu.cdn.blizzard.com" "bytes=2928640-3194879"
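The log shows the object being fetched as consecutive 266240-byte ranges (with one range arriving out of order). As a rough illustration only, not lancache's actual code, here is a small shell sketch of how fixed-size slicing partitions an object into exactly those Range values; the 3194880-byte total size is an assumption inferred from the last range in the log:

```shell
# print_slices TOTAL SLICE: print the Range value of each fixed-size
# slice of a TOTAL-byte object, in the order a slicing layer would
# request them.
print_slices() {
    total=$1; slice=$2; start=0
    while [ "$start" -lt "$total" ]; do
        end=$((start + slice - 1))
        # the final slice is truncated to the end of the object
        [ "$end" -ge "$total" ] && end=$((total - 1))
        echo "bytes=${start}-${end}"
        start=$((start + slice))
    done
}

# 266240-byte slices over a hypothetical 3194880-byte object:
# prints bytes=0-266239 through bytes=2928640-3194879 (12 ranges).
print_slices 3194880 266240
```

Whether it is the launcher or the cache's slicing layer issuing these ranges, each one appears as a separate entry in the access log, which would explain the long runs of HITs against a single file.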

maxxoverclocker commented 5 years ago

Replying to @VibroAxe's questions above:

  1. are you all using multiple ips for the blizzard cdns
  2. do you see an improvement Vs single IP
  3. which cdns are you upstreaming to
  4. where in the world are you (based on @wofnuls diagnostics I'm wondering if this is a us cdn issue as I'm not aware we've seen it in Europe)

I am no longer experiencing this issue after moving from a single shared IP address using Docker's bridge networking mode to 8 dedicated IP addresses using Docker's macvlan network mode. However, at the same time as this switch I also moved away from the steamcache-dns method for DNS to pfSense for DNS (very complicated), so I'm not sure which change fixed the problem for me. But I'm able to max out my 300 Mbit/s connection during cache priming with Blizzard and Steam games.
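For readers unfamiliar with the two modes: a minimal sketch of the macvlan approach, which gives each container its own address directly on the LAN instead of host-port bindings. The interface name (eth0), network name, subnet and addresses below are placeholders for illustration, not a confirmed working config:

```shell
# Create a macvlan network bound to the host's physical NIC so containers
# receive their own LAN addresses (eth0 and the subnet are assumptions).
docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=eth0 lan-macvlan

# Run the cache with a dedicated LAN IP instead of -p port bindings.
docker run --restart unless-stopped --detach --name steamcache \
  --network lan-macvlan --ip 192.168.0.202 \
  steamcache/monolithic:latest
```

One macvlan caveat worth knowing: by default the host itself cannot reach containers on the macvlan network directly, which can complicate local testing.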

RobertJamesMichael commented 5 years ago

Quoting @maxxoverclocker:

I am not experiencing this issue any longer after moving from a single shared ip address using docker's bridge networking mode, to 8 dedicated ip addresses using docker's macvlan network mode. However at the same time of this switch I also moved away from the steamcache-dns method for dns to pfsense for dns (very complicated). I'm not sure which change fixed the problem for me. But I'm able to max out my 300MBit/s connection during cache priming with blizzard and steam games.

@maxxoverclocker Could you explain how to do this (Docker's bridge networking mode vs. Docker's macvlan network mode)? Have you found out what solved the problem?

carroarmato0 commented 5 years ago

I've converted the setup to run in an LXC container (I use Open vSwitch, and its integration with Docker is too much of a pain in the behind), and I'm seeing the same slow speeds. The only thing that seems to have any effect is adding more IP addresses to the cache container and referencing them in DNS.

manafoo commented 5 years ago

I have the same issue; adding more IPs didn't help. I use monolithic.

maxxoverclocker commented 5 years ago

Quoting @RobertJamesMichael's question:

@maxxoverclocker is it possible for you to explain how to do this: docker's bridge networking mode and docker's macvlan network mode. Have you find out what solved the problem?

@RobertJamesMichael Sorry if I led you or anyone astray on this. I've been working on a number of Docker projects recently and must have mis-remembered my configuration. I am not using the Docker macvlan driver; it's just the default bridge network. My current configuration (which does not have the slow cache-loading issue) is:

VMware ESXi 6.7U2
VM assigned 1 vCPU and 2 GB RAM (Ryzen 1700 @ 3.0 GHz)
Ubuntu 18.04.3 LTS
9 IPs assigned to host
Docker (CE) version 18.09.7, build 2d0083d

IMAGE                   COMMAND                  CREATED     STATUS       PORTS                                                                                                                                                                            NAMES
steamcache/monolithic   "/bin/bash -e /init/…"   3 days ago  Up 33 hours  10.60.1.61:80->80/tcp, 10.60.1.62:80->80/tcp, 10.60.1.63:80->80/tcp, 10.60.1.64:80->80/tcp, 10.60.1.65:80->80/tcp, 10.60.1.66:80->80/tcp, 10.60.1.67:80->80/tcp, 10.60.1.68:80->80/tcp, 443/tcp   netcache-monolithic
steamcache/sniproxy     "/scripts/bootstrap.…"   3 days ago  Up 33 hours  0.0.0.0:443->443/tcp                                                                                                                                                             netcache-sniproxy

The only part of my setup that is quite different is that I didn't want to use lancache-dns, because I already have a load-balanced pfSense DNS server. I think it is possibly my DNS setup that causes the different (good) behavior: I have all 8 IPs set up with poor-man's DNS load balancing.

Example (note that the IP order changes after every query):

nslookup content1.steampowered.com

Name:    content1.steampowered.com
Addresses:  10.60.1.61
            10.60.1.64
            10.60.1.63
            10.60.1.68
            10.60.1.62
            10.60.1.65
            10.60.1.67
            10.60.1.66

nslookup content1.steampowered.com

Name:    content1.steampowered.com
Addresses:  10.60.1.63
            10.60.1.68
            10.60.1.62
            10.60.1.65
            10.60.1.67
            10.60.1.66
            10.60.1.61
            10.60.1.64

nslookup content1.steampowered.com

Name:    content1.steampowered.com
Addresses:  10.60.1.68
            10.60.1.62
            10.60.1.65
            10.60.1.67
            10.60.1.66
            10.60.1.61
            10.60.1.64
            10.60.1.63


I accomplished this by using:

This was from before they added support for multiple IPs. I cannot confirm whether the current version of the script works well with multiple IPs, as I'm not using theirs; I've since forked mine off, but I can post it if anyone would like.
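The round-robin answers shown above come from the author's pfSense setup, which isn't posted here. As a hedged illustration of the same "poor-man's load balancing" idea, a dnsmasq-style config fragment can attach several A records to one cache hostname (IPs taken from the example above; whether the answer order rotates per query depends on the resolver and client):

```shell
# dnsmasq.conf fragment (illustrative only, not the author's config).
# Each address= line for the same name adds another A record to the answer,
# so clients see all cache IPs and spread their connections across them.
address=/content1.steampowered.com/10.60.1.61
address=/content1.steampowered.com/10.60.1.62
address=/content1.steampowered.com/10.60.1.63
address=/content1.steampowered.com/10.60.1.64
```

The effect is the same as the nslookup output above: one hostname resolving to the whole pool of cache IPs.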

Here is a screenshot from Steam when downloading a game I've never primed before (I have a ~375 Mbit download connection, so it's almost maxed out):

And here are the nginx access logs from the above, showing they were all cache misses:

root@netcache:~# docker exec -it netcache-monolithic tail -f /data/logs/access.log
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:01 +0000] "GET /depot/21101/chunk/7ca3e342ab4ccc0a68737801aa56b15d6e0785f3 HTTP/1.1" 200 753232 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache8-lax1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:01 +0000] "GET /depot/21101/chunk/ce37ccc419a4a4f7363885bd92b006c2019ef409 HTTP/1.1" 200 490736 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache6-lax1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/085ac83e8348310453f81e9823d42ac54693c6ac HTTP/1.1" 200 760000 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache6-lax1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/8e76e4218b4172a96cc6c2d847d627d2146b8bc4 HTTP/1.1" 200 978848 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache11-sea1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/aa03f4226987b3ef01009da190e3766ddff1234f HTTP/1.1" 200 908112 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache15-lax1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/0485118c399c29c009cd07d302e537f28e19ecb1 HTTP/1.1" 200 970192 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache11-sea1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/8484626f1c0d31d9c6f6672dafb25a45eaff6723 HTTP/1.1" 200 1000528 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache11-sea1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/cfdd9ddf1adbc193c3a634c164ae0c02b5582be9 HTTP/1.1" 200 986800 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache13-lax1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/1598ae5b673ba0ecae5970dbe28235c4ae1735a3 HTTP/1.1" 200 997248 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache7-lax1.steamcontent.com" "-"
[steam] 10.0.1.105 / - - - [21/Aug/2019:16:02:02 +0000] "GET /depot/21101/chunk/56fd63ec3215af34f06371317108d65426c86ddb HTTP/1.1" 200 961008 "-" "Valve/Steam HTTP Client 1.0" "MISS" "cache8-lax1.steamcontent.com" "-"

amittamari commented 4 years ago

Hey, I'm having the same issue. Currently testing with Ubuntu Server 18.04 LTS on VirtualBox (bridged networking) and with 3 IP addresses assigned to BLIZZARDCACHE. When I start the download I get almost 100 MB/s, but after a moment it drops to a very slow speed. Not a problem with Steam, though.

Update: Moved to a dedicated Ubuntu server, the issue persists.

ptepartz commented 4 years ago

I've had the same issue with multiple caches, just with Blizzard. I started with the first lancache and am now using monolithic on Ubuntu. Everything else is fine (Steam/Epic/Riot); Blizzard doesn't go above 2 MB/s.

amittamari commented 4 years ago

I've had the same issue with multiple caches, just with Blizzard. I started with the first lancache and am now using monolithic on ubuntu. Everything else is fine (steam/epic/riot). Blizzard doesn't go above 2mb.

How did you get Riot to work? I get 0 download speed.

ptepartz commented 4 years ago

How did you get Riot to work? I get 0 download speed.

It worked straight out of the box with the normal setup for me; I literally haven't tweaked anything.

unspec commented 4 years ago

Regarding slow initial blizzard downloads:

The latest version of monolithic/generic now supports changing the slice size used by nginx. We've found that increasing it from 1m to 8m offers a small performance boost in specific use cases (single-user initial downloads of Blizzard games in particular). See http://lancache.net/docs/advanced/tuning-cache/#tweaking-slice-size for information on how to make use of this.

Please note that it comes with some potential downsides (discussed in the link above), and that changing the value will invalidate any data already in your cache.
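As a sketch only: per the tuning page linked above, the slice size is exposed as a container environment variable (shown here as CACHE_SLICE_SIZE, the name used in those docs; verify against your image version). A hypothetical monolithic invocation with an 8 MB slice might look like:

```shell
# Hypothetical example: run monolithic with an 8 MB nginx slice size.
# CACHE_SLICE_SIZE is the knob described in the linked tuning docs;
# note that changing it invalidates any already-cached data.
docker run --restart unless-stopped --name lancache --detach \
  -e UPSTREAM_DNS=192.168.0.9 \
  -e CACHE_SLICE_SIZE=8m \
  -v /dload/cache/data:/data/cache \
  -v /dload/cache/logs:/data/logs \
  -p 80:80 \
  lancachenet/monolithic:latest
```

Larger slices mean fewer upstream range requests per file during priming, at the cost of more over-fetch when a client needs only a small part of a slice.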

To tidy up the issues, if you choose to test this please post any feedback on this issue: https://github.com/lancachenet/generic/issues/100

If you need any other support please see http://lancache.net/support/ or open a new issue.

I've added a