dragonflydb/dragonfly

A modern replacement for Redis and Memcached
https://www.dragonflydb.io/

Huge lists do not migrate fully between cluster nodes #4143

chakaz closed this issue 1 day ago

chakaz commented 3 days ago

(this may apply to other data types as well)

To reproduce:

$ ./cluster_mgr.py --action=config_single_remote --target_port=7001
$ ./cluster_mgr.py --action=attach --target_port=7001 --attach_port=7002
$ redis-cli -p 7001
localhost:7001> debug populate 1 l: 1000 RAND TYPE list ELEMENTS 500000
OK
localhost:7001> llen l::0
(integer) 500000
$ ./cluster_mgr.py --action=migrate --slot_start=0 --slot_end=16383 --target_port=7002
localhost:7002> llen l::0
(integer) 4096
localhost:7002> memory usage l::0
(integer) 86064
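One way to picture how this class of bug yields exactly 4096 elements: if the migration streams a large value in fixed-size chunks but the receiving side only applies the first chunk, the list is silently truncated to the chunk size. The sketch below is purely illustrative and is not Dragonfly's actual serialization code; the 4096 chunk size is an assumption chosen to match the truncated LLEN observed above.

```python
# Illustrative sketch only (NOT Dragonfly's implementation): a large
# list is streamed in fixed-size chunks, but the buggy restore path
# stops after the first chunk, truncating the value on the target.

CHUNK_SIZE = 4096  # assumed; matches the truncated LLEN in the repro


def serialize_in_chunks(values, chunk_size=CHUNK_SIZE):
    """Yield the list as consecutive chunks, as a migration stream might."""
    for i in range(0, len(values), chunk_size):
        yield values[i:i + chunk_size]


def buggy_restore(chunks):
    """Simulated bug: only the first chunk of the stream is applied."""
    return next(iter(chunks), [])


def correct_restore(chunks):
    """Correct behavior: every chunk is appended on the target node."""
    out = []
    for chunk in chunks:
        out.extend(chunk)
    return out


source = list(range(500_000))  # like `debug populate ... ELEMENTS 500000`
print(len(buggy_restore(serialize_in_chunks(source))))    # 4096
print(len(correct_restore(serialize_in_chunks(source))))  # 500000
```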
anadion commented 2 days ago

We have a similar problem with the RENAME command and big zset keys.

redis_version:6.2.11
dragonfly_version:df-v1.24.0
redis_mode:standalone
127.0.0.1:6379[14]> type ipv4
zset
127.0.0.1:6379[14]> ZCARD ipv6_tmp
(integer) 7391471
127.0.0.1:6379[14]> RENAME ipv6_tmp ipv6
OK
(6.75s)
127.0.0.1:6379[14]> ZCARD ipv6
(integer) 4092
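Until a fix lands, truncation like this can be detected by comparing cardinalities across the two sides of the operation (source vs. target node after a migration, or a snapshot taken before vs. after a RENAME). The helper below is a hypothetical sketch, not part of Dragonfly; `src` and `dst` can be any objects exposing redis-py-style methods such as `zcard` or `llen` (e.g. `redis.Redis` instances).

```python
# Hypothetical helper (not part of Dragonfly or redis-py): report keys
# whose cardinality differs between two clients, which would indicate
# a silently truncated value.

def find_truncated_keys(src, dst, keys, card_fn="zcard"):
    """Return (key, src_len, dst_len) tuples where the lengths disagree.

    `card_fn` names the cardinality method to call on both clients,
    e.g. "zcard" for sorted sets or "llen" for lists.
    """
    mismatches = []
    for key in keys:
        src_len = getattr(src, card_fn)(key)
        dst_len = getattr(dst, card_fn)(key)
        if src_len != dst_len:
            mismatches.append((key, src_len, dst_len))
    return mismatches
```

For the migration repro above this might be invoked as `find_truncated_keys(redis.Redis(port=7001), redis.Redis(port=7002), ["l::0"], card_fn="llen")`, assuming redis-py and both nodes running locally.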
chakaz commented 2 days ago

Yes @anadion, that is due to the same root cause. Thanks for reporting!