redis / ioredis

🚀 A robust, performance-focused, and full-featured Redis client for Node.js.
MIT License

fix: Improve cluster connection pool logic when disconnecting #1864

Open martinslota opened 4 months ago

martinslota commented 4 months ago

Motivation and Background

This is an attempt to fix errors that occur when a connect() call is made shortly after a disconnect(), a pattern the Bull library uses when pausing a queue.

Here's a relatively minimal way to reproduce an error:

import IORedis from "ioredis";

const cluster = new IORedis.Cluster([{ host: "localhost", port: 6380 }]);

await cluster.set("foo", "bar");

// Fully tear the cluster down and wait for the "end" event.
const endPromise = new Promise((resolve) => cluster.once("end", resolve));
await cluster.quit();
cluster.disconnect();
await endPromise;

// Reconnect shortly after disconnecting — this is the sequence that fails.
cluster.connect();
console.log(await cluster.get("foo"));
cluster.disconnect();

Running that script in a loop using

#!/bin/bash

set -euo pipefail

while true
do
    DEBUG=ioredis:cluster node cluster-error.mjs
done

against the main branch of ioredis quickly results in this output:

/Code/ioredis/built/cluster/index.js:124
                    reject(new redis_errors_1.RedisError("Connection is aborted"));
                           ^

RedisError: Connection is aborted
    at /Code/ioredis/built/cluster/index.js:124:28

Node.js v20.11.0

My debugging led me to believe that the existing node cleanup logic in the ConnectionPool class is subject to a race condition: upon disconnect(), the this.connectionPool.reset() call removes nodes from the pool without cleaning up their event listeners, which may subsequently cause more than one drain event to be issued. Depending on timing, one of the extra drain events may fire after connect() and change the status to close, interfering with the connection attempt and leading to the error above.

Changes

martinslota commented 3 weeks ago

I have now created a separate repository that (hopefully) makes the bug easy to reproduce.

We have been using the fix in this branch in production for roughly the last three months, and it has considerably reduced the error rates we see when shutting down Bull queue clients.