lancachenet / monolithic

A monolithic lancache service capable of caching all CDNs in a single instance
https://hub.docker.com/r/lancachenet/monolithic

Hardware Requirements Documentation Suggestion #47

Closed: duffyjp closed this issue 4 years ago

duffyjp commented 5 years ago

Describe the issue you are having

The readme correctly states that on typical commodity hardware with spinning disks you'll achieve up to 30 MB/s. That's exactly what I've experienced.

That's a little underwhelming, especially with typical home internet speeds now exceeding that value in many cases. My suggestion is to list the hardware requirements necessary to achieve 1 Gbit and 10 Gbit throughput. Gigabit is perfect for a home LAN party, 10 Gbit for commercial events.
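For context, converting those link speeds into the sustained disk reads needed to saturate them (divide bits by 8):

```
 1 Gbit/s ÷ 8 ≈  125 MB/s
10 Gbit/s ÷ 8 ≈ 1250 MB/s
```

So ~30 MB/s from a spinning disk won't even keep gigabit busy.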

If a cheap SSD can give me 100 MB/s, what the heck. If it takes a whole new server, I'll just throw in the towel. Some guidance would be greatly appreciated.

Lepidopterist commented 5 years ago

https://github.com/lancachenet/monolithic/blob/master/hardware.md lists examples of real-world hardware that the maintainers have personally either used, or seen in action.

Because of the wide range of hardware this could be run on, it's difficult to give any recommendations beyond what we have actually seen. That said, in normal (multi-user) operation your constraint is generally disk I/O, so a single SSD would make the most difference.
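One quick way to check whether disk I/O is your ceiling is a synthetic read benchmark. A minimal fio sketch, assuming `/cache` is your lancache data mount (the file path, sizes, and job count are illustrative, adjust to your setup):

```bash
# Measure sustained reads from the cache disk with 4 parallel readers,
# bypassing the page cache so the disk itself is what's being tested.
fio --name=cacheread \
    --filename=/cache/fio-testfile \
    --rw=read --bs=1M --size=4G \
    --numjobs=4 --ioengine=libaio --iodepth=16 --direct=1 \
    --group_reporting
```

If the reported bandwidth is well below your LAN link speed, the disk is the bottleneck.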

mintopia commented 5 years ago

The one major change I'd suggest to anyone is to use an SSD. Even a small-ish SSD configured as an lvmcache in front of mechanical disks will give a huge performance boost. If you can go entirely SSD, you shouldn't bottleneck on disk at all.
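A rough sketch of that lvmcache layout, assuming two mechanical disks at `/dev/sda` and `/dev/sdb` plus an SSD at `/dev/nvme0n1` (device names and sizes are illustrative):

```bash
# Put all three devices in one volume group.
pvcreate /dev/sda /dev/sdb /dev/nvme0n1
vgcreate lancache /dev/sda /dev/sdb /dev/nvme0n1

# Data LV on the spinning disks only.
lvcreate -n data -L 3.5T lancache /dev/sda /dev/sdb

# Fast LV on the SSD, then attach it as the cache volume.
lvcreate -n fast -L 400G lancache /dev/nvme0n1
lvconvert -y --type cache --cachevol fast lancache/data

mkfs.ext4 /dev/lancache/data
```

Hot cache content ends up served from the SSD while the bulk of the data lives on the cheaper mechanical disks.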

As an example, with two SSDs in an LVM setup we had sustained rsync reads at about 5-6 Gbps before being bottlenecked by the receiving system, which itself had an SSD/mechanical lvmcache setup.

Getting a 2TB SSD and using that as your main cache drive will be the single best performance gain you can get.

dark-swordsman commented 4 years ago

I think a little section with real-world test results would make a nice guide.

We currently have an old Dell R620 server with eight 300 GB 10k drives in RAID 0. It was able to achieve just barely over 10 Gbps, maxing out our server's connection to the aggregate switch, which makes sense: those drives are rated for about 190-210 MB/s each, but 10 Gbps split across eight drives works out to something like 155-160 MB/s per drive, so we're limited by the connection rather than the disks. Of course, though, I still haven't gotten monolithic working, so our old solution doesn't purge and we hit our storage cap within a few days.
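For what it's worth, monolithic bounds its cache with the `CACHE_DISK_SIZE` environment variable, and nginx evicts least-recently-used content once that limit is reached, so a working deployment shouldn't hit a hard storage cap. A minimal run sketch (the host paths, ports, and sizes here are illustrative; see the project README for the full option list):

```bash
# Cap the cache at 2 TB; nginx purges LRU content beyond that.
docker run --restart unless-stopped --name lancache \
  -v /cache/data:/data/cache \
  -v /cache/logs:/data/logs \
  -e CACHE_DISK_SIZE=2000g \
  -p 80:80 -p 443:443 \
  lancachenet/monolithic:latest
```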

I've considered a build with just 860 Evos and a decent RAID controller, but the downside is that in an environment with a lot of constant reads and writes, where people may be downloading 5-10 different games at the same time or more, 860 Evos or Pros may wear out pretty quickly.

Apparently Seagate makes pretty cheap "enterprise" SSDs, but I have yet to look into it. https://www.seagate.com/enterprise-storage/nytro-drives/nytro-sas-ssd/

unspec commented 4 years ago

Closing as inactive for > 6 months.

PRs against https://github.com/lancachenet/monolithic/blob/master/hardware.md with any further real-world examples or additional guidance are welcome.