Closed: durzo closed this issue 4 years ago
Hi @durzo, we have not done a head-to-head with ElastiCache, although based on their published numbers KeyDB should still be significantly faster.
Our benchmarks vs Redis are done using memtier with 32 threads on an m5.8xlarge. Memtier is quite inefficient CPU-wise, but we like it because it's made by Redis and therefore not biased in our favor.
Thanks John. I'll report my benchmarks back here if that's OK with you
I'll be comparing r5.xlarge, as it's cheaper than both the m5 and m6g; the downside is only 4 cores instead of 8, but that shouldn't be a problem for KeyDB :)
I ended up using r5.large because that's what we currently have in production.
Results are here: https://gist.githubusercontent.com/durzo/3511aaa274e35187a0e0584a32d60b72/raw/3966b6643f70babc5082bb836f2fa01cfa59aeb9/gistfile1.txt
It seems ElastiCache outperformed KeyDB. Is there anything you can recommend to get more performance out of KeyDB on these instances?
Thanks @durzo. On an r5.large we were able to achieve 175,000 ops/sec, a bit higher than your result.
A few issues I saw with your benchmark:

- You used the public IP of your machine and the internal IP for ElastiCache (via DNS). This adds latency to the KeyDB test; use the internal IP for both.
- Not enough iterations (benchmarking on AWS is very noisy). We usually use `-x 10` to average over 10 test runs.

That said, based on your ElastiCache tests it looks like we'll be about tied at 2 cores. We expect the gap to widen on larger machines and are planning those tests for a future blog post.
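For reference, a rerun along those lines might look like the following; the IP is a placeholder for the instance's private address, and the other flags simply mirror the memtier invocations used elsewhere in this thread:

```shell
# Hypothetical rerun: target the private IP and average over 10 passes (-x 10)
memtier_benchmark -s <internal-ip> --hide-histogram --threads=32 -x 10
```

With `-x 10`, memtier repeats the test and reports aggregated results, which smooths out the run-to-run noise typical on shared AWS infrastructure.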
10.1.0.0/16 is the private VPC subnet; there are no public IPs on these instances.
Do you mind sharing the KeyDB server configuration you used to achieve 175,000 ops/sec?
Hi @durzo, I tested with your exact setup: ElastiCache, KeyDB, and memtier all in the us-east-2a AZ. I get 150k ops/sec (all hits) with the c5n.4xlarge using `keydb-server --protected-mode no --server-threads 2 --server-thread-affinity true`. I got the 175k with the test method below.
Most of my testing runs memtier on an m5.8xlarge, as it seems to produce fairly consistent results, again staying in the same AZ and on private IPs. I did some follow-up testing with the r5s, using the m5.8xlarge for benchmarking, and got these results:
Server: `keydb-server --protected-mode no --server-threads <x> --server-thread-affinity true`

Test 1 (load): `memtier_benchmark -s <ip> --hide-histogram --threads=32 --ratio=1:0`

Test 2 (after the database is loaded from test 1): `memtier_benchmark -s <ip> --hide-histogram --threads=32`
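The two-phase procedure above can be sketched as a script. The IP, thread count, and the assumption that memtier and KeyDB are already installed are all placeholders, not part of the original test description:

```shell
#!/bin/sh
# Two-phase benchmark sketch: populate the keyspace, then measure a mixed workload.
SERVER_IP="10.1.0.10"   # placeholder: private IP of the KeyDB/ElastiCache endpoint
THREADS=32

# Phase 1: write-only load (--ratio=1:0 means all sets, no gets)
memtier_benchmark -s "$SERVER_IP" --hide-histogram --threads="$THREADS" --ratio=1:0

# Phase 2: default mixed workload against the now-populated database
memtier_benchmark -s "$SERVER_IP" --hide-histogram --threads="$THREADS"
```

Running the load phase first matters: the mixed-workload numbers are only meaningful once the keyspace is populated, otherwise most gets are misses.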
The table below shows ops/sec.
I tested a few different times, and each time I noticed lower ElastiCache performance on the r5.xlarge than on the r5.large (as low as 175k ops/sec). Not sure why, but it seemed consistent across my tests.
I am in the process of doing a more in-depth comparison, including testing with YCSB and memtier. We will publish the blog soon, with instructions for reproducing all numbers.
@benschermel wow, thank you for taking the time to do this - can't wait to read the blog!
Blog is here: https://docs.keydb.dev/blog/2020/04/15/blog-post/
@durzo I'm marking this issue as closed, but let me know if there are any open questions remaining.
I am looking to replace AWS ElastiCache (Redis) with KeyDB on the a1 or m6g Graviton EC2 instance families as our cache fleet.
Unfortunately, ElastiCache is Amazon's own tweaked implementation of Redis, and I'm wondering if anyone has benchmarks like the ones presented at https://docs.keydb.dev/blog/2020/03/02/blog-post/
Amazon claims that ElastiCache can "deliver up to an 83 percent better throughput per node and up to a 47 percent reduction in latency" - https://aws.amazon.com/blogs/database/boosting-application-performance-and-reducing-costs-with-amazon-elasticache-for-redis/
So before I go ahead and spin up my own test instances I wondered if anyone has already done this comparison?