The `load_from_db` method currently calls `to_cache` individually for each entry not already present in the cache. This MR optimizes the existing feature by using the `set_many` cache method to perform a batch update.

The performance impact depends on the cache backend in use: for `LocMemCache` there is no impact at all, because `set_many` is simply the `set` method called in a `for` loop. For backends such as Redis or Memcache, however, the update is performed in a single call regardless of the number of preferences to save, instead of one call per preference.
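To illustrate the difference, here is a minimal sketch of the before/after pattern. The cache class and method bodies are hypothetical stand-ins (modeled on Django's cache API), not the actual MR diff; the round-trip counter stands in for network calls to a remote backend such as Redis or Memcache.

```python
class FakeBackend:
    """Stand-in for a remote cache backend; counts simulated round trips."""

    def __init__(self):
        self.store = {}
        self.round_trips = 0

    def set(self, key, value):
        # One round trip per entry.
        self.round_trips += 1
        self.store[key] = value

    def set_many(self, mapping):
        # One round trip regardless of the number of entries.
        self.round_trips += 1
        self.store.update(mapping)


def load_from_db_before(cache, preferences):
    # Before: one cache call per preference not already cached.
    for key, value in preferences.items():
        if key not in cache.store:
            cache.set(key, value)


def load_from_db_after(cache, preferences):
    # After: collect the missing entries, then write them in one batch call.
    missing = {k: v for k, v in preferences.items() if k not in cache.store}
    if missing:
        cache.set_many(missing)


prefs = {"theme": "dark", "lang": "en", "tz": "UTC"}

before = FakeBackend()
load_from_db_before(before, prefs)
after = FakeBackend()
load_from_db_after(after, prefs)
print(before.round_trips, after.round_trips)  # → 3 1
```

With `LocMemCache` both paths cost the same, since its `set_many` loops over `set` internally; the batch call only pays off on backends that support a true multi-set operation.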