redis / ioredis

🚀 A robust, performance-focused, and full-featured Redis client for Node.js.

Multiple set commands are sent in a single packet to the server, with no pipeline in the code and auto-pipelining not enabled. #1866

Open vgnanasekaran opened 6 months ago

vgnanasekaran commented 6 months ago

Please find below the code that sends 100 set commands in a for loop. On some executions, each set command is sent to the Redis server as an individual packet, and each response ("OK") is received as an individual packet.

(screenshot: one-set-one-packet)

On other executions, multiple set commands are sent to the Redis server in one packet, and the responses are grouped into one packet as well.

(screenshot: multiple-set-one-packet)

```js
const Redis = require('ioredis');

const endpoints = [{ host: 'ip', port: 26379 }];

const redis = new Redis({
  sentinels: endpoints,
  password: '',
  name: 'mymaster',
  role: 'master',
});
console.log(redis);

// Note: the 'connect' event does not pass an error; connection failures
// are reported through the 'error' event instead.
redis.on('connect', function () {
  console.log('[Redis] up and running!');
});

redis.on('error', function (err) {
  console.log('error: ' + JSON.stringify(err));
});

for (let i = 0; i < 100; i++) {
  redis.set('hello' + i, 'test' + i, 'EX', 60, function (err, result) {
    console.log('set command error response: ' + err);
    console.log('set command result: ' + result);
  });
}
```
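A minimal sketch of what I suspect is happening at the socket level (using a throwaway TCP server instead of Redis, purely to observe write coalescing; the payloads are made up for the demo):

```js
const net = require('net');

// A throwaway TCP server that logs how the client's writes arrive.
const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    // Each 'data' event is one read from the kernel; writes the client
    // issued back-to-back often arrive combined into a single chunk.
    console.log('received chunk of ' + chunk.length + ' bytes');
  });
});

server.listen(0, () => {
  const client = net.connect(server.address().port, () => {
    // 100 synchronous writes in the same event-loop tick: Node queues
    // them on the socket, and the kernel may pack several of them into
    // one TCP segment, which looks like "grouping" in a packet capture.
    for (let i = 0; i < 100; i++) {
      client.write('SET hello' + i + ' test' + i + '\r\n');
    }
  });
});
```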

When the volume of requests to the Node.js microservice is high, fewer packets, each carrying more set commands, are sent to the Redis sentinel host. The microservice's memory keeps growing until it crashes with an OOM error.

I am trying to understand how this grouping of commands happens when no pipeline is used in the code and auto-pipelining is not enabled.
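For contrast, my understanding is that explicit pipelining in ioredis looks like the sketch below (using the documented `pipeline()` API and the `redis` instance from the snippet above); there the grouping is requested deliberately rather than arising from write timing:

```js
// Explicit pipelining: commands are buffered in the client and written
// to the socket as one batch when exec() is called.
const pipeline = redis.pipeline();
for (let i = 0; i < 100; i++) {
  pipeline.set('hello' + i, 'test' + i, 'EX', 60);
}
pipeline.exec(function (err, results) {
  // results is an array of [err, reply] pairs, one per queued command.
  console.log('pipeline results: ' + JSON.stringify(results));
});
```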

Can this grouping behavior cause the memory growth observed in my case? Is there a way to force one command per packet?
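One way I could try to force a single command per round trip (a sketch, relying on the promise that ioredis returns when no callback is passed) is to await each reply before issuing the next command, which also bounds how much can pile up in the socket's write buffer:

```js
// Serialized writes: each SET is only sent after the previous reply
// arrives, so at most one command is in flight at a time.
async function setAll() {
  for (let i = 0; i < 100; i++) {
    await redis.set('hello' + i, 'test' + i, 'EX', 60);
  }
}

setAll().catch(function (err) {
  console.log('set command error response: ' + err);
});
```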

ioredis version - 4.11.2

Please advise. Any pointers would be of great help. Thank you.

vgnanasekaran commented 6 months ago

@TysonAndre You mentioned protocol pipelining (RESP) in https://github.com/redis/ioredis/issues/1377. Is this related?
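If I understand that issue, protocol-level pipelining just means writing several commands before reading any reply. A minimal sketch against a raw socket (assuming, for the demo only, a Redis server reachable at 127.0.0.1:6379 with no auth):

```js
const net = require('net');

// Two inline RESP commands written back-to-back, replies read in order.
// No client library involved; this is pipelining at the protocol level.
const socket = net.connect(6379, '127.0.0.1', () => {
  socket.write('PING\r\nPING\r\n');
});

socket.on('data', (chunk) => {
  console.log(chunk.toString()); // expected "+PONG\r\n+PONG\r\n" (possibly split across chunks)
  socket.end();
});
```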

vgnanasekaran commented 6 months ago

@luin Could you please look into this issue and share your thoughts?