
Comments for Raw Caching Performance in Ruby/Rails #476

Open phinjensen opened 7 years ago

phinjensen commented 7 years ago

Comments for https://www.endpointdev.com/blog/2011/07/raw-caching-performance-in-rubyrails/ By Steph Skardal

To enter a comment:

  1. Log in to GitHub
  2. Leave a comment on this issue.
phinjensen commented 7 years ago
original author: Ethan Rowe
date: 2011-07-12T17:47:08-04:00

It's worth noting that memcache is about scalability, not raw speed. This is something the docs are pretty explicit about. The time for a simple cache lookup may be roughly analogous to that of a simple SELECT on MySQL. With that in mind, one expects small, local file lookups to do better than small, individual memcache lookups.

You'll get greater benefit from memcache by adapting your access patterns to play to its strengths. For instance, refactor your loops so that, rather than performing iterative cache reads, you do a single multiget to retrieve all the items in one blocking call, then iterate over the result. That brings down the wait-state overhead, while at the same time encouraging the use of small cached items rather than big, aggregated caches -- an approach that allows for better cache reuse and hopefully better value overall.
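A minimal sketch of what that multiget refactor could look like with `Rails.cache` backed by memcached; the `ids` list and `product/...` keys here are purely illustrative, not from the original post:

```ruby
# Iterative reads: one blocking network round trip per item.
products = ids.map { |id| Rails.cache.read("product/#{id}") }

# Multiget: one blocking call retrieves all items at once,
# then we iterate over the returned hash locally.
keys   = ids.map { |id| "product/#{id}" }
cached = Rails.cache.read_multi(*keys)
products = keys.map { |key| cached[key] }
```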

And of course, you'll see additional benefits over files when running at scale, with multiple servers. The shared cache will make for a happier database, since caches need only be rebuilt once, rather than once per server as would be necessary for local files. You could of course use NFS for shared files, but if you're not already tied to a file-oriented solution, you'll probably find memcache easier to deploy and more flexible in its use.
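For the shared-cache setup, the Rails side is just pointing every app server at the same memcached pool; a sketch, with made-up hostnames:

```ruby
# config/environments/production.rb (hostnames are hypothetical)
config.cache_store = :mem_cache_store,
                     "cache1.example.com:11211",
                     "cache2.example.com:11211"
```

With that in place, a cache entry built on one app server is immediately visible to the others, which is what saves the database from rebuilding the same cache once per machine.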

And of course, we don't want to overlook redis.

Anyway, thanks for the data points, it was interesting reading.

phinjensen commented 7 years ago
original author: Steph Skardal
date: 2011-07-12T20:23:34-04:00

Ethan - Thanks for the comments. I was expecting & hoping that you'd opine.

Jon and I discussed the benefit of using memcache here across multiple servers. I'm not sure we'll be making the final call on this, but it will certainly be our recommendation.

Redis came up as an option also. This work was a small part of optimizing the entire app, and as I mentioned in the article, the improvement here from caching is negligible compared to other bits of optimization I've done on the app to eliminate database lookups & Ruby object instantiation.