I think there is a big key in this RDB, so you should use the following method to migrate the RDB to the target Redis.
step 1: convert the RDB to an AOF file
rct -f resp -s /data/dump.rdb -o /data/dump.aof -r
step 2: pipe the AOF file into the target
cat /data/dump.aof | /path/to/redis/src/redis-cli -p 6379 --pipe
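As a quick sanity check after the pipe finishes (assuming the target listens on 6379), the key count can be compared with the source:
redis-cli -p 6379 dbsize   # should match the number of keys in the source RDB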
This tool is designed for migrating small keys (less than 512MB), so if you have a big key in the RDB, there are two ways to handle it.
First way:
step 1: use rdt to generate an RDB without the big keys (use the --key option to filter them out), then use rmt to migrate the filtered RDB to the target (see the sketch after these steps).
step 2: convert that big key to an AOF file: rct -f resp -s /path/to/dump.rdb --key ${big_key} -o /path/to/bigkey.aof
step 3: migrate that big-key AOF file separately: cat /data/bigkey.aof | /path/to/redis/src/redis-cli -p 6379 --pipe
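A rough end-to-end sketch of this first way; the exact rdt/rmt option names follow the description above and should be treated as assumptions to verify with rdt --help and rmt --help before running:
rdt -b /data/dump.rdb -o /data/filtered.rdb --key '<regex excluding the big key>'   # step 1: filtered RDB (regex is hypothetical)
rmt -s /data/filtered.rdb -m redis://localhost:6379                                 # step 1: migrate the filtered RDB
rct -f resp -s /data/dump.rdb --key ${big_key} -o /data/bigkey.aof                  # step 2: big key only, as AOF
cat /data/bigkey.aof | /path/to/redis/src/redis-cli -p 6379 --pipe                  # step 3: pipe the big key separately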
Second way:
step 1: convert the RDB to an AOF file
rct -f resp -s /data/dump.rdb -o /data/dump.aof -r
step 2: cat /data/dump.aof | /path/to/redis/src/redis-cli -p 6379 --pipe
Tip: avoiding big keys is one of the best practices for Redis.
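To locate the big keys in the first place, rct's memory report or redis-cli's built-in key sampling can help; a sketch (the -l "largest" flag is an assumption, verify with rct --help):
rct -f mem -s /data/dump.rdb -o /data/mem.csv -l 10   # top 10 keys by memory from the RDB
redis-cli --bigkeys                                   # sample a live instance for its biggest keys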
Following your comment:
export JAVA_TOOL_OPTIONS="-Xms4g -Xmx4g"
/opt/redis-rdb-cli/bin/rmt \
-s /data/dump.rdb \
-m redis://localhost:6379?authPassword=password
Picked up JAVA_TOOL_OPTIONS: -Xms4g -Xmx4g
/[ 20.5GB| 95.0MB/s]
# Source
db0:keys=1994658,expires=0,avg_ttl=0
# Destination
db0:keys=1994658,expires=0,avg_ttl=0
So, 4 GB was enough in our case to work correctly (still OOM with 2 GB), plus the option proto-max-bulk-len 1024mb on the Redis side to increase the import limit.
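For reference, that limit can be raised in redis.conf or at runtime; a minimal sketch (1073741824 bytes = 1024mb):
proto-max-bulk-len 1024mb                                      # redis.conf on the target (default 512mb)
redis-cli -p 6379 config set proto-max-bulk-len 1073741824     # or at runtime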
Thank you for your support!
Thread count affects OOM: by default, redis-rdb-cli uses 4 threads to migrate data to the target, so you need to set Xms and Xmx to at least 4 * {max key size} of memory.
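A back-of-the-envelope helper for that rule (numbers taken from this thread; the 4-thread default is from the comment above):
MAX_KEY_MB=793   # largest key reported below: 792.1MB, rounded up
THREADS=4        # redis-rdb-cli default migration threads
echo "heap >= $((MAX_KEY_MB * THREADS)) MB"   # prints: heap >= 3172 MB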
Let's try to tune this. Based on our max key size it would be: 792.1MB * 4 = 3168.4MB.
I did some tests from 3000m down to 1500m, in 100m steps - just a single run each - and noted the following:
1600m - 3 x OK in a row:
\[ 20.4GB| 96.5MB/s]
/[ 20.5GB| 94.8MB/s]
/[ 20.4GB| 94.1MB/s]
1500m - 3 x OOM in a row:
\[ 7.2GB| 99.4MB/s]Terminating due to java.lang.OutOfMemoryError: Java heap space
/[ 7.4GB| 97.0MB/s]Terminating due to java.lang.OutOfMemoryError: Java heap space
\[ 7.2GB|133.1MB/s]Terminating due to java.lang.OutOfMemoryError: Java heap space
But with all that, here is my previous experiment:
Picked up JAVA_TOOL_OPTIONS: -Xms2g -Xmx2g
\[ 11.2GB| 92.3MB/s]Terminating due to java.lang.OutOfMemoryError: Java heap space
And without any memory setting, as in my initial comment:
\[ 18.4GB| 91.9MB/s]Terminating due to java.lang.OutOfMemoryError: Java heap space
On my node, MaxHeapSize = 8417968128 ≈ 8 GB, which is higher than both what I set and the limit I found during the latest tests.
java -XX:+PrintFlagsFinal -version | grep HeapSize
uintx ErgoHeapSizeLimit = 0 {product}
uintx HeapSizePerGCThread = 87241520 {product}
uintx InitialHeapSize := 526385152 {product}
uintx LargePageHeapSizeThreshold = 134217728 {product}
uintx MaxHeapSize := 8417968128 {product}
openjdk version "1.8.0_352"
OpenJDK Runtime Environment (build 1.8.0_352-8u352-ga-1~22.04-b08)
OpenJDK 64-Bit Server VM (build 25.352-b08, mixed mode)
Hi, about MaxHeapSize = 8417968128, please refer to this doc.
792.1MB * 4 = 3168.4MB means 4 threads migrating 4 big keys at the same time; you can take that as the required MaxHeapSize, but actual usage will be less than this value.
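To see how much heap a run actually uses (which supports the point that real usage stays below the theoretical maximum), the standard JDK tools work; a sketch, assuming jps/jstat from a JDK are on the PATH:
jps -l                      # find the PID of the rmt JVM
jstat -gcutil <PID> 5000    # print heap/GC utilization every 5 seconds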
Describe the bug
Migration failed due to out of memory.

To Reproduce
1. Start to sync the RDB (~21 GB) to the remote Redis
2. Get the error

Information about free memory before and after OOM

Expected behavior
If the application requires more memory, we should be able to specify it.

Version (run rct --version or rct -V and paste the information):

Additional context
We have tried to set export JAVA_OPTS="-Xmx30G", but nothing changed.
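A likely reason JAVA_OPTS had no effect: the JVM itself only honors JAVA_TOOL_OPTIONS (note the "Picked up JAVA_TOOL_OPTIONS" line earlier in this thread), while JAVA_OPTS is just a convention that a launcher script may or may not read. The invocation that worked above:
export JAVA_TOOL_OPTIONS="-Xms4g -Xmx4g"
/opt/redis-rdb-cli/bin/rmt -s /data/dump.rdb -m redis://localhost:6379?authPassword=password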