A header-only C++11 LRU cache template class that lets you define the key type, the value type, and optionally the map type. It uses a doubly linked list and a std::unordered_map-style container to provide fast insert, delete, and update. No dependencies other than the C++ standard library. This is a C++11 remake of my earlier LRUCache project (https://github.com/mohaps/lrucache). The goal was to create a fast, header-only LRU cache library with no external dependencies such as Boost.
Hi,
I am reviewing the code of this header to evaluate whether I can use it in my project. I like that the cache makes a design choice in favor of internal thread safety, and that it offers an optional scheme for enabling locks around get/set operations on the cache. However, I have found a bug that makes it not thread safe.
Both get() and getCopy() are not thread safe, even if you instantiate the template with a real mutex.
While both guard access to the cache with a lock guard, the lock is released on return: get() returns a reference to internal cache data after the lock has been released, and getCopy() is just a wrapper around get() that copy-constructs from the const Value& that get() returns. (This construction happens with the lock already released, while still pointing to internal data that can be deleted at any time.)
In a multithreaded environment with many threads accessing the cache, this can leave the const Value& a dangling reference if another thread triggers a prune() before the first thread has a chance to construct a real Value object from it.
I believe the real solution to this problem is to either:
1. use a recursive lock, and have getCopy() also hold the recursive lock as it calls get(), or
2. refactor get() into an internal get_nolock() and have both get() and getCopy() call into it while holding the lock. getCopy() could read like this:
Value getCopy(const Key& k) {
    Guard g(lock_);
    return get_nolock(k);  // copy-construct happens while the lock is held
}

// note: we renamed get() -> getRef() to make it clear that it returns a
// reference to internal data, and to make callers explicitly choose
// between getCopy() and getRef()
const Value& getRef(const Key& k) {
    Guard g(lock_);
    return get_nolock(k);
}

protected:
const Value& get_nolock(const Key& k) {
    const auto iter = cache_.find(k);
    if (iter == cache_.end()) {
        throw KeyNotFound();
    }
    keys_.splice(keys_.begin(), keys_, iter->second);
    return iter->second->value;
}
For reference, the problematic get() is here:
https://github.com/mohaps/lrucache11/blob/fc0a9423e83443c8887fede6349c6fa2d3e145f6/LRUCache11.hpp#L142
The refactor above should fix this issue.