ckrintz / appscale

Automatically exported from code.google.com/p/appscale

Datastore Write-through caching #177

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Given that we know the Datastore API, it should be fairly simple to add
transparent write-through caching to it to boost performance for simple
lookups. This should significantly improve the performance of applications
that use the simple API calls and are read-heavy.

Provided below is a high-level sketch of what the simple API calls would
look like with write-through caching.

def get(key):
    value = cache.get(key)
    # Compare against None so falsy cached values (0, '', etc.) still count as hits
    if value is not None:
        return value
    return datastore.get(key)

def put(key, value):
    # Delete first to ensure the stale entry is gone, even if the datastore put fails
    cache.delete(key)
    datastore.put(key, value)
    cache.put(key, value)

def delete(key):
    cache.delete(key)
    datastore.delete(key)

The main issue I see is that memcached has a 1MB limit on value size, which
could be a problem for large binary objects. It is important to check how
the cache API behaves for such large objects (some clients fail silently,
others raise exceptions). When doing a put, you can check that the object
is smaller than 1MB, for example with sys.getsizeof.
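
A minimal sketch of how the put path might guard against that limit, using
the same abstract cache/datastore clients as above; the 1MB constant and the
pickle-based size estimate are illustrative assumptions, not the actual
AppScale implementation:

import pickle

# Assumed limit matching memcached's default maximum value size.
MAX_CACHE_VALUE_BYTES = 1024 * 1024

def put(key, value):
    # Delete first so a stale entry is gone even if the datastore put fails
    cache.delete(key)
    datastore.put(key, value)
    # sys.getsizeof only reports the shallow in-memory size of the object;
    # serializing the value gives a closer estimate of what memcached stores.
    if len(pickle.dumps(value)) < MAX_CACHE_VALUE_BYTES:
        cache.put(key, value)
    # Oversized values simply skip the cache and are served from the datastore.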

Original issue reported on code.google.com by jmkupfer...@gmail.com on 23 Feb 2010 at 6:49

GoogleCodeExporter commented 9 years ago
A great idea! It would also be nice to control whether this is on via a
flag, so that we can turn it off for datastore testing (and so that we can
easily measure the performance differences).
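
One possible shape for such a flag is sketched below; the environment
variable name is hypothetical, and cache/datastore are the same abstract
clients from the sketch in the issue description:

import os

# Hypothetical toggle; the name and mechanism are illustrative only.
CACHING_ENABLED = os.environ.get('APPSCALE_DATASTORE_CACHE', 'on') == 'on'

def get(key):
    # With caching disabled, every read goes straight to the datastore,
    # which makes it easy to benchmark the difference.
    if CACHING_ENABLED:
        value = cache.get(key)
        if value is not None:
            return value
    return datastore.get(key)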

Original comment by shattere...@gmail.com on 26 Feb 2010 at 4:42

GoogleCodeExporter commented 9 years ago

Original comment by nlak...@gmail.com on 6 Sep 2011 at 9:26