Suor / django-cacheops

A slick ORM cache with automatic granular event-driven invalidation.
BSD 3-Clause "New" or "Revised" License

Question: Support for read-only nodes #438

Closed studiojms closed 1 year ago

studiojms commented 2 years ago

I have a Redis setup with 2 nodes (1 main and 1 read-only replica) and I'd like to better understand how I can configure cacheops to make use of this read-only node. Going through the docs I wasn't able to find anything explicit about it, so could anyone help me configure the lib for this infrastructure, please?

Suor commented 2 years ago

This should be possible to do with a custom Redis class. There is a setting for that.
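
A rough sketch of how that hooks up (assuming the CACHEOPS_CLIENT_CLASS setting, which takes a dotted path to a redis.Redis subclass; the module path and class name below are just placeholders):

# myproject/redis_client.py
from redis import Redis

class ReadReplicaRedis(Redis):
    """Override read commands (e.g. get) to route them to a replica."""

# settings.py
CACHEOPS_CLIENT_CLASS = 'myproject.redis_client.ReadReplicaRedis'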

studiojms commented 2 years ago

@Suor thanks for answering me. Do you have any samples you could share, please?

Suor commented 2 years ago

I don't have anything ready to use. There were some discussions about that here some time ago, though.

nicwolff commented 1 year ago

We do something like

import logging
import random
import socket
from copy import copy

from django.conf import settings
from redis import Redis
from redis.exceptions import ConnectionError, ResponseError

logger = logging.getLogger(__name__)

RoverRedisClient = Redis

if settings.FEATURE_REDIS_REPLICAS and hasattr(settings, 'CACHEOPS_REDIS'):

    def ip(hostname):
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror as err:
            logger.warning('Hostname %s did not resolve because %s' % (hostname, err))
            raise

    # Keep a handle on the plain client class for building read connections,
    # before RoverRedisClient is rebound to the proxy class below
    read_client_class = copy(RoverRedisClient)

    class RedisReplicaProxyClient(RoverRedisClient):

        read_clients = []

        @classmethod
        def set_read_clients(cls):
            primary = settings.CACHEOPS_REDIS
            primary_ip = ip(primary['host'])
            replicas = settings.CACHEOPS_REDIS_REPLICAS
            replica_weight = settings.CACHEOPS_REPLICA_WEIGHT

            # If there are no replicas, just use the primary
            if not replicas:
                cls.read_clients = [read_client_class(**primary)]
                return

            # Make Redis clients from all the replicas except the primary
            new_clients = [read_client_class(**r) for r in replicas if ip(r['host']) != primary_ip]

            # Make a list with one client for the primary, if it was removed from the replicas
            primary_client = [read_client_class(**primary)] * (len(new_clients) < len(replicas))

            # Duplicate each client a few times if desired
            if replica_weight > 1:
                new_clients = [c for c in new_clients for _ in range(replica_weight)]

            # Add back the Redis client for the primary, if it was removed from the replicas
            new_clients += primary_client

            cls.read_clients = new_clients

        def get(self, *args, **kwargs):
            """Proxy `get` calls to redis replica."""
            if not self.read_clients:
                self.set_read_clients()
            try:
                client = random.choice(self.read_clients)
                return client.get(*args, **kwargs)
            except ConnectionError:
                return super().get(*args, **kwargs)

        def execute_command(self, *args, **options):
            """Handle failover of AWS elasticache."""
            try:
                return super().execute_command(*args, **options)
            except ResponseError as e:
                if 'READONLY' not in e.args[0]:
                    raise
                connection = self.connection_pool.get_connection(args[0], **options)
                connection.disconnect()
                self.read_clients = []
                logger.warning('Primary probably failed over, reconnecting')
                return super().execute_command(*args, **options)

    RoverRedisClient = RedisReplicaProxyClient
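
Then it's wired up in Django settings roughly like this (a sketch: FEATURE_REDIS_REPLICAS, CACHEOPS_REDIS_REPLICAS and CACHEOPS_REPLICA_WEIGHT are our own project settings, only CACHEOPS_REDIS and CACHEOPS_CLIENT_CLASS come from cacheops, and the hostnames and module path are placeholders):

FEATURE_REDIS_REPLICAS = True

CACHEOPS_REDIS = {'host': 'redis-primary.internal', 'port': 6379, 'db': 1}
CACHEOPS_REDIS_REPLICAS = [
    {'host': 'redis-replica-1.internal', 'port': 6379, 'db': 1},
    {'host': 'redis-replica-2.internal', 'port': 6379, 'db': 1},
]
CACHEOPS_REPLICA_WEIGHT = 2

# Dotted path to the proxy class defined above
CACHEOPS_CLIENT_CLASS = 'myproject.redis_client.RoverRedisClient'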