swarthy / redis-semaphore

Distributed mutex and semaphore based on Redis
MIT License

Redlock-mutex is timing out with local redis cluster #208

Open akshaydeo opened 4 months ago

akshaydeo commented 4 months ago

Setup

docker-compose.yml

```yaml
  redis-node-1:
    container_name: redis-node-1
    image: bitnami/redis-cluster:7.2
    volumes:
      - redis-node-1:/data
    environment:
      - REDIS_PASSWORD=test
      - "REDIS_NODES=redis-node-1 redis-node-2 redis-node-3"
    ports:
      - "6379:6379"
    networks:
      - network-v2

  redis-node-2:
    container_name: redis-node-2
    image: bitnami/redis-cluster:7.2
    volumes:
      - redis-node-2:/data
    environment:
      - REDIS_PASSWORD=test
      - "REDIS_NODES=redis-node-1 redis-node-2 redis-node-3"
    ports:
      - "6380:6379"
    networks:
      - network-v2

  redis-node-3:
    image: bitnami/redis-cluster:7.2
    volumes:
      - redis-node-3:/data
    ports:
      - "6381:6379"
    environment:
      - "REDIS_PASSWORD=test"
      - "REDISCLI_AUTH=test"
      - "REDIS_CLUSTER_REPLICAS=0"
      - "REDIS_NODES=redis-node-1 redis-node-2 redis-node-3"
      - "REDIS_CLUSTER_CREATOR=yes"
    depends_on:
      - redis-node-1
      - redis-node-2
    networks:
      - network-v2
```
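For reference, with the port mappings above the cluster is reachable from the host at localhost:6379–6381, so the `Config.Redis.ClusterNodes` value used in the next snippet presumably looks like this (a sketch inferred from the compose file, not taken from the original config):

```ts
// Hypothetical shape of Config.Redis.ClusterNodes, inferred from the
// host port mappings in the compose file above (6379, 6380, 6381).
const clusterNodes = [
  { host: "localhost", port: 6379 },
  { host: "localhost", port: 6380 },
  { host: "localhost", port: 6381 },
]
```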

My Redis connection using ioredis is able to connect and write data to the cluster:

redis.ts

```ts
import { scribe } from "@/logger/scribe";
import * as IORedis from "ioredis";
import { LockOptions, RedlockMutex, RedlockSemaphore } from "redis-semaphore";
import Config from "../../config";

// This class also maintains references to all the locks and semaphores.
// While going down, it will release all of them, which lets us run
// multiple replicas of the same service.
class Redis {
    private static _instance: Redis;
    private readonly cluster: IORedis.Cluster;
    private locks: Map<string, RedlockMutex> = new Map();
    private semaphores: Map<string, RedlockSemaphore> = new Map();

    constructor() {
        // Config.Redis.ClusterNodes contains a host/port array: [{ host: "", port: 6789 }, ...]
        this.cluster = new IORedis.Cluster(Config.Redis.ClusterNodes, {
            redisOptions: {
                password: Config.Redis.Password,
            },
        });
        this.cluster.on("ready", () => {
            scribe.info("[ ✅ ] Redis ready");
        });
        this.cluster.on("error", (err) => {
            scribe.error("[ 💥 ] Redis error", err);
        });
    }

    public async ping(): Promise<void> {
        await this.cluster.ping();
        await this.cluster.set("lastPing", new Date().toISOString());
    }
}
```

When I run ping(), I can see the corresponding key being set

[screenshot: the `lastPing` key is visible in the cluster]
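(For completeness, a minimal sketch of reading that key back with the same client; `cluster` here stands for the `IORedis.Cluster` instance created in redis.ts above, inside an async function.)

```ts
// Quick check that the write from ping() landed, using the same cluster client.
const lastPing = await cluster.get("lastPing")
console.log("lastPing =", lastPing) // ISO timestamp written by ping()
```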

I initialize a mutex like this:

```ts
new RedlockMutex(cluster.nodes("master"), key, options)
```

It always results in

```
Acquire redlock-semaphore semaphore:dev:xxxx:xxxxxx:xxxxx:0 timeout',
    stack: 'Error: Acquire redlock-semaphore semaphore:dev:xxxx:xxxxxx:xxxxx:0 timeout\n' +
      '    at MaximSemaphore.acquire (/xxx/xxx/xxx/xxx/node_modules/redis-semaphore/src/Lock.ts:140:13)\n' +
      '    at RetryOperation.operation.attempt.timeout [as _fn] (webpack-internal:///(rsc)/./src/lib/services/xxx/xxx.ts:352:21)
```

Test setup

```ts
// Assumed imports for this snippet (not shown in the original report):
// chai is used with chai-as-promised for `.to.become`, and client1/client2/client3
// are standalone clients for the three nodes, defined elsewhere in the test setup.
import Redis from 'ioredis'
import { expect } from 'chai'
import { RedlockMutex, TimeoutOptions } from 'redis-semaphore'

function createCluster() {
  const nodes = [
    { host: 'localhost', port: 6379 },
    { host: 'localhost', port: 6380 },
    { host: 'localhost', port: 6381 }
  ]

  console.log('-----', nodes)
  const client = new Redis.Cluster(nodes, {
    redisOptions: {
      password: 'test',
      lazyConnect: true,
      autoResendUnfulfilledCommands: false, // don't queue commands while the server is offline (don't break test logic)
      maxRetriesPerRequest: 0 // don't retry, fail faster (default is 20)
      // https://github.com/luin/ioredis#auto-reconnect
      // retryStrategy is a function that will be called when the connection is lost.
      // The argument `times` means this is the nth reconnection being made and the
      // return value represents how long (in ms) to wait before reconnecting.
    },
    lazyConnect: true,
    enableOfflineQueue: false,
    clusterRetryStrategy: () => {
      return 100 // for tests we disable the increasing timeout
    }
  })
  client.on('error', err => {
    console.log('Redis client error:', err.message)
  })
  return client
}

export const cluster = createCluster()

const timeoutOptions: TimeoutOptions = {
  lockTimeout: 300,    // TTL of the lock key (ms)
  acquireTimeout: 100, // how long acquire() keeps retrying before throwing (ms)
  refreshInterval: 80, // automatic refresh period (ms)
  retryInterval: 10    // delay between acquire attempts (ms)
}

async function expectGetAll(key: string, value: string | null) {
  await expect(
    Promise.all([client1.get(key), client2.get(key), client3.get(key)])
  ).to.become([value, value, value])
}

describe('RedlockMutex', () => {
  it('should acquire and release lock using cluster', async () => {
    const mutex = new RedlockMutex(cluster.nodes('master'), 'key')
    expect(mutex.isAcquired).to.be.false

    await mutex.acquire()
    expect(mutex.isAcquired).to.be.true
    await expectGetAll('mutex:key', mutex.identifier)

    await mutex.release()
    expect(mutex.isAcquired).to.be.false
    await expectGetAll('mutex:key', null)
  })
})
```

### Result

```
RedlockMutex
  1) should acquire and release lock using cluster

0 passing (10s)
1 failing

1) RedlockMutex
     should acquire and release lock using cluster:
   Error: Acquire redlock-mutex mutex:key timeout
    at RedlockMutex.acquire (src/Lock.ts:140:13)
    at async Context.<anonymous> (test/src/RedlockMutex.test.ts:59:5)

➜ redis-semaphore git:(master) ✗
```

swarthy commented 2 months ago

Hi! Sorry for the confusing README example. The Redlock algorithm requires independent nodes, so you need 3+ independent single-node Redis instances, not one Redis Cluster. Please see https://redis.io/docs/latest/develop/use/patterns/distributed-locks/#the-redlock-algorithm
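For anyone landing here with the same error, below is a minimal sketch of the intended usage with three independent standalone Redis servers instead of the masters of one cluster (hosts, ports, and the password are placeholders, not taken from the setup above). Passing `cluster.nodes('master')` presumably fails because the lock key hashes to a single slot, so only one master accepts the write and the Redlock quorum is never reached.

```ts
import Redis from 'ioredis'
import { RedlockMutex } from 'redis-semaphore'

// Three *independent* single-node Redis servers (NOT one Redis Cluster).
// Hosts/ports/password are placeholders for whatever standalone instances you run.
const nodes = [
  new Redis({ host: 'localhost', port: 6379, password: 'test' }),
  new Redis({ host: 'localhost', port: 6380, password: 'test' }),
  new Redis({ host: 'localhost', port: 6381, password: 'test' }),
]

async function main() {
  // RedlockMutex takes the array of independent clients, per the README.
  const mutex = new RedlockMutex(nodes, 'key', {
    lockTimeout: 10_000,   // how long the lock is held before it expires (ms)
    acquireTimeout: 5_000, // how long acquire() keeps retrying before throwing (ms)
    retryInterval: 50,     // delay between acquire attempts (ms)
  })

  await mutex.acquire()
  try {
    // ... critical section ...
  } finally {
    await mutex.release()
  }
}

main().catch(console.error)
```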