Closed fabn closed 9 years ago
@fabn: I've been thinking about this and have reached a mixed conclusion. Overall I think this is a great suggestion, and it is something I plan on pursuing. I'd like to explore the exact impact on memory usage and what the migration path would look like.
I'll have a PR ready this week.
No PR this week, but a bit of progress. There is a lot of performance-sensitive refactoring required in order to enable this. I'm looking at tagging string values with the marshal module and a compression flag rather than using hashes. Not only is it smaller to store in Redis, it initializes fewer objects in Ruby, and most importantly it is backward compatible.
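A minimal sketch of the flag-tagging idea, assuming a single prefix byte marks whether the payload is compressed (the `FlagCodec` name and flag bytes are hypothetical, not the library's actual encoding):

```ruby
require 'zlib'

# Hypothetical sketch: prefix each value with a one-byte flag so a single
# Redis string carries both the payload and its metadata, instead of
# splitting them across hash fields. One string per entry is smaller in
# Redis and creates fewer Ruby objects on read.
module FlagCodec
  COMPRESSED = "\x01".freeze
  PLAIN      = "\x00".freeze

  def self.dump(value, compress: false)
    if compress
      COMPRESSED + Zlib::Deflate.deflate(value)
    else
      PLAIN + value
    end
  end

  def self.load(payload)
    flag, body = payload[0], payload[1..-1]
    flag == COMPRESSED ? Zlib::Inflate.inflate(body) : body
  end
end
```

Round-tripping works either way: `FlagCodec.load(FlagCodec.dump('hello', compress: true))` returns `'hello'`.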
Thanks for the status update. I'll follow your progress.
@fabn Addressed with #17, please take a look.
This is fixed; it can be closed now.
Following #13, I was wondering whether you might implement per-call configuration, i.e. instead of defining marshal and compression at the cache instance level, allow the user to override them for a single call.
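A sketch of what such a per-call override could look like, assuming keyword options merged over instance defaults (the `Cache` class and its in-memory store are stand-ins, not the library's real API):

```ruby
require 'json'

# Hypothetical API sketch: options passed to a single call win over the
# defaults configured on the cache instance. A plain Hash stands in for
# the Redis connection; only the option merging is the point here.
class Cache
  def initialize(marshal: Marshal, compress: false)
    @defaults = { marshal: marshal, compress: compress }
    @store = {} # stand-in for Redis
  end

  def write(key, value, **options)
    opts = @defaults.merge(options)
    @store[key] = opts[:marshal].dump(value)
  end

  def read(key, **options)
    opts = @defaults.merge(options)
    opts[:marshal].load(@store[key])
  end
end

cache = Cache.new(marshal: Marshal)
cache.write('user', { 'name' => 'fabn' }, marshal: JSON) # JSON just for this call
```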
This will likely introduce some overhead in your code, but honestly I'd prefer a (slightly slower) feature-complete library to a faster but less configurable cache object.
The main reason for me to switch is to get rid of memcache, not to improve performance, and I was looking for a feature-complete library.
I tried to compare plain-string vs. structured writes and reads in Redis, and the performance impact seems negligible according to these benchmarks, except for `hmset` + `expire`. But even there we're still talking about more than 1500 calls/sec; put network latency into the equation (almost no application has Redis available on localhost) and it won't matter anymore. I didn't tune my Redis installation with your suggestions for the benchmarks, so some fine tuning might improve things.
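The latency argument can be made concrete with a little arithmetic; the 1500 calls/sec figure is from the benchmarks above, while the 1 ms network round trip and the plain-string cost are assumptions for illustration:

```ruby
structured_ms = 1000.0 / 1500  # ~0.67 ms per hmset + expire, from 1500 calls/sec
plain_ms      = 0.1            # assumed near-negligible plain-string cost
network_ms    = 1.0            # assumed round trip to a non-local Redis

# Once the network round trip dominates, the relative difference shrinks:
structured_total = structured_ms + network_ms
plain_total      = plain_ms + network_ms
printf("structured: %.2f ms, plain: %.2f ms, ratio: %.2f\n",
       structured_total, plain_total, structured_total / plain_total)
```

With these assumed numbers the structured path is only about 1.5x the plain path per call, rather than several times slower as the raw server-side figures suggest.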
So for 1.0 you could switch to structured saving instead of plain strings, allowing metadata to be saved within the stored value and thus enabling per-call options.
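One way structured saving could work is a Redis hash per entry, with the value and the options it was written with stored side by side, so reads never need the instance configuration (a sketch only; `StructuredStore` and its field names are hypothetical, and a Hash stands in for `HMSET`/`HGETALL`):

```ruby
require 'json'

# Hypothetical sketch: each entry is a hash of value + write-time options,
# so the metadata travels with the saved value.
class StructuredStore
  def initialize
    @redis = {} # stand-in for HMSET/HGETALL against real Redis
  end

  def write(key, value, marshal: JSON, compress: false)
    @redis[key] = {
      'value'    => marshal.dump(value),
      'marshal'  => marshal.name,
      'compress' => compress ? '1' : '0'
    }
  end

  def read(key)
    fields  = @redis[key]
    marshal = Object.const_get(fields['marshal'])
    marshal.load(fields['value']) # options come from the entry itself
  end
end
```

The trade-off is exactly the one benchmarked above: a hash write plus `expire` is slower than a plain `set`, in exchange for per-call options.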
What do you think?