Migrating a Standalone Redis Instance to Redis Cluster


1 Introduction

A standalone Redis 5.x instance runs in production. To get high availability and room for future horizontal scaling, we ruled out Redis master-replica and Redis Sentinel and decided to migrate the standalone instance to Redis Cluster.

This approach applies to Redis 5.x and 6.x.

 

Migration options:

  • RDB/AOF migration:
    • more steps, relatively complex;
    • low demands on the network between the standalone Redis and the Redis Cluster;
    • long downtime.
  • redis-shake migration:
    • relatively simple;
    • requires network connectivity between the standalone Redis and the Redis Cluster;
    • short downtime.

2 Migrating with RDB and AOF

Redis does not enable AOF by default; enable it as your needs dictate (AOF has some performance impact).
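
If you do enable it, a minimal sketch using standard Redis commands (the CONFIG REWRITE step assumes the instance was started from a writable redis.conf):

[redis]# redis-cli -p 6379 CONFIG SET appendonly yes   # start writing appendonly.aof
[redis]# redis-cli -p 6379 CONFIG REWRITE              # persist the setting into redis.conf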

2.1 Moving the hash slots

Redis Cluster has 16384 hash slots in total, distributed evenly across the master nodes; my Redis Cluster has 6 nodes (3 masters and 3 replicas, each replica paired with one master). Before loading the standalone data, all slots must be concentrated on a single node: a cluster node only accepts keys whose slots it owns, so with all 16384 slots on 10.150.57.13:6381 every key in the standalone RDB/AOF can load there.
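
As background, a key maps to slot CRC16(key) mod 16384 (when the key contains a {...} hash tag, only the tag part is hashed), and any node can tell you a key's slot:

[redis]# redis-cli -p 6381 -a Passwd@123 CLUSTER KEYSLOT foo
(integer) 12182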

 

Check the Redis Cluster's node information and hash-slot distribution

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster check 127.0.0.1:6381
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6381 (0939488f...) -> 0 keys | 5461 slots | 1 slaves.
10.150.57.13:6383 (cab3f65e...) -> 0 keys | 5461 slots | 1 slaves.
10.150.57.13:6382 (3b02d1d4...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 0939488f471b96bca4feffff01a9f179e10041a5 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: cab3f65e28f8f42dd78e1bb2ea6bb417960db600 10.150.57.13:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 6c18fdfd69e67df4340437169738e8e00e67e4dd 10.150.57.13:6384
   slots: (0 slots) slave
   replicates 3b02d1d43f219759e76df2804d4ed160f602a7ea
S: bf2fe8ffbbb22772c600af19ce7ce15a89d26ba5 10.150.57.13:6385
   slots: (0 slots) slave
   replicates cab3f65e28f8f42dd78e1bb2ea6bb417960db600
S: d83e5f8dc57cb6c8fbc4ed2dd667835cbf6b7542 10.150.57.13:6386
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 3b02d1d43f219759e76df2804d4ed160f602a7ea 10.150.57.13:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

Move the hash slots on node 10.150.57.13:6382 to 10.150.57.13:6381

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster reshard 10.150.57.13:6382
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing Cluster Check (using node 10.150.57.13:6383)
M: cab3f65e28f8f42dd78e1bb2ea6bb417960db600 10.150.57.13:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: d83e5f8dc57cb6c8fbc4ed2dd667835cbf6b7542 10.150.57.13:6386
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 3b02d1d43f219759e76df2804d4ed160f602a7ea 10.150.57.13:6382
   slots: (0 slots) master
S: bf2fe8ffbbb22772c600af19ce7ce15a89d26ba5 10.150.57.13:6385
   slots: (0 slots) slave
   replicates cab3f65e28f8f42dd78e1bb2ea6bb417960db600
S: 6c18fdfd69e67df4340437169738e8e00e67e4dd 10.150.57.13:6384
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 0939488f471b96bca4feffff01a9f179e10041a5 10.150.57.13:6381
   slots:[0-10922] (10923 slots) master
   2 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 5461	# number of hash slots to move
What is the receiving node ID? 0939488f471b96bca4feffff01a9f179e10041a5 # ID of the receiving node
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 3b02d1d43f219759e76df2804d4ed160f602a7ea	# ID of the source node
Source node #2: done
	(Moving output omitted)
Do you want to proceed with the proposed reshard plan (yes/no)? yes	# type "yes" to run the reshard
	(Moving output omitted)
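
The same reshard can also be scripted non-interactively with redis-cli's batch flags, reusing the node IDs from the check output above; a sketch:

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster reshard 10.150.57.13:6382 \
          --cluster-from 3b02d1d43f219759e76df2804d4ed160f602a7ea \
          --cluster-to 0939488f471b96bca4feffff01a9f179e10041a5 \
          --cluster-slots 5461 --cluster-yes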

 

Move the hash slots on node 10.150.57.13:6383 to 10.150.57.13:6381

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster reshard 10.150.57.13:6383
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing Cluster Check (using node 10.150.57.13:6383)
M: cab3f65e28f8f42dd78e1bb2ea6bb417960db600 10.150.57.13:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: d83e5f8dc57cb6c8fbc4ed2dd667835cbf6b7542 10.150.57.13:6386
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 3b02d1d43f219759e76df2804d4ed160f602a7ea 10.150.57.13:6382
   slots: (0 slots) master
S: bf2fe8ffbbb22772c600af19ce7ce15a89d26ba5 10.150.57.13:6385
   slots: (0 slots) slave
   replicates cab3f65e28f8f42dd78e1bb2ea6bb417960db600
S: 6c18fdfd69e67df4340437169738e8e00e67e4dd 10.150.57.13:6384
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 0939488f471b96bca4feffff01a9f179e10041a5 10.150.57.13:6381
   slots:[0-10922] (10923 slots) master
   2 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 5461	# number of hash slots to move
What is the receiving node ID? 0939488f471b96bca4feffff01a9f179e10041a5	# ID of the receiving node
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: cab3f65e28f8f42dd78e1bb2ea6bb417960db600	# ID of the source node
Source node #2: done
	(Moving output omitted)
Do you want to proceed with the proposed reshard plan (yes/no)? yes	# type "yes" to run the reshard
	(Moving output omitted)

 

Check the hash-slot distribution

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster check 127.0.0.1:6383
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6383 (cab3f65e...) -> 0 keys | 0 slots | 0 slaves.
10.150.57.13:6382 (3b02d1d4...) -> 0 keys | 0 slots | 0 slaves.
10.150.57.13:6381 (0939488f...) -> 0 keys | 16384 slots | 3 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6383)
M: cab3f65e28f8f42dd78e1bb2ea6bb417960db600 127.0.0.1:6383
   slots: (0 slots) master
S: d83e5f8dc57cb6c8fbc4ed2dd667835cbf6b7542 10.150.57.13:6386
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 3b02d1d43f219759e76df2804d4ed160f602a7ea 10.150.57.13:6382
   slots: (0 slots) master
S: bf2fe8ffbbb22772c600af19ce7ce15a89d26ba5 10.150.57.13:6385
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
S: 6c18fdfd69e67df4340437169738e8e00e67e4dd 10.150.57.13:6384
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 0939488f471b96bca4feffff01a9f179e10041a5 10.150.57.13:6381
   slots:[0-16383] (16384 slots) master
   3 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

At this point all 16384 hash slots are on node 10.150.57.13:6381.

2.2 Transferring the RDB and AOF files

Copy the standalone instance's RDB snapshot and AOF file into the corresponding RDB/AOF paths on node 10.150.57.13:6381.

Restart the Redis Cluster (node 10.150.57.13:6381 automatically loads the standalone instance's RDB and AOF on startup).
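
A minimal sketch of this step, assuming application writes have already stopped, the standalone instance is on 10.150.57.9:6379 with default file names (dump.rdb, appendonly.aof), and /data/redis/<port> as each instance's data directory (adjust to your dir/dbfilename/appendfilename settings):

[redis]# redis-cli -p 6379 BGSAVE                          # standalone host: write a final RDB snapshot
[redis]# redis-cli -p 6381 -a Passwd@123 SHUTDOWN NOSAVE   # stop the cluster node before swapping files
[redis]# scp /data/redis/6379/dump.rdb 10.150.57.13:/data/redis/6381/
[redis]# scp /data/redis/6379/appendonly.aof 10.150.57.13:/data/redis/6381/
[redis]# redis-server /etc/redis/6381.conf                 # restart; loads the AOF if enabled, else the RDB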

2.3 Redistributing the hash slots

Check the hash-slot distribution

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster check 127.0.0.1:6383
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6383 (cab3f65e...) -> 0 keys | 0 slots | 0 slaves.
10.150.57.13:6381 (0939488f...) -> 0 keys | 16384 slots | 3 slaves.
10.150.57.13:6382 (3b02d1d4...) -> 0 keys | 0 slots | 0 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6383)
M: cab3f65e28f8f42dd78e1bb2ea6bb417960db600 127.0.0.1:6383
   slots: (0 slots) master
S: d83e5f8dc57cb6c8fbc4ed2dd667835cbf6b7542 10.150.57.13:6386
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
M: 0939488f471b96bca4feffff01a9f179e10041a5 10.150.57.13:6381
   slots:[0-16383] (16384 slots) master
   3 additional replica(s)
M: 3b02d1d43f219759e76df2804d4ed160f602a7ea 10.150.57.13:6382
   slots: (0 slots) master
S: bf2fe8ffbbb22772c600af19ce7ce15a89d26ba5 10.150.57.13:6385
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
S: 6c18fdfd69e67df4340437169738e8e00e67e4dd 10.150.57.13:6384
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

At this point all 16384 hash slots are still on node 10.150.57.13:6381.

 

Distribute the 16384 hash slots evenly across the 3 master nodes (--cluster-use-empty-masters makes the rebalance include the masters that currently hold no slots)

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster rebalance --cluster-use-empty-masters 10.150.57.13:6381
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing Cluster Check (using node 10.150.57.13:6381)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 3 nodes. Total weight = 3.00
Moving 5462 slots from 10.150.57.13:6381 to 10.150.57.13:6382
#############################################################################################
Moving 5461 slots from 10.150.57.13:6381 to 10.150.57.13:6383
#############################################################################################

 

Check the hash-slot distribution again

[redis]# redis-cli -p 6381 -a Passwd@123 --cluster check 127.0.0.1:6381
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6381 (0939488f...) -> 0 keys | 5461 slots | 1 slaves.
10.150.57.13:6382 (3b02d1d4...) -> 0 keys | 5462 slots | 1 slaves.
10.150.57.13:6383 (cab3f65e...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 0939488f471b96bca4feffff01a9f179e10041a5 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 3b02d1d43f219759e76df2804d4ed160f602a7ea 10.150.57.13:6382
   slots:[0-5461] (5462 slots) master
   1 additional replica(s)
S: bf2fe8ffbbb22772c600af19ce7ce15a89d26ba5 10.150.57.13:6385
   slots: (0 slots) slave
   replicates cab3f65e28f8f42dd78e1bb2ea6bb417960db600
M: cab3f65e28f8f42dd78e1bb2ea6bb417960db600 10.150.57.13:6383
   slots:[5462-10922] (5461 slots) master
   1 additional replica(s)
S: d83e5f8dc57cb6c8fbc4ed2dd667835cbf6b7542 10.150.57.13:6386
   slots: (0 slots) slave
   replicates 0939488f471b96bca4feffff01a9f179e10041a5
S: 6c18fdfd69e67df4340437169738e8e00e67e4dd 10.150.57.13:6384
   slots: (0 slots) slave
   replicates 3b02d1d43f219759e76df2804d4ed160f602a7ea
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

This completes the migration from standalone Redis to Redis Cluster.
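
As a quick smoke test, connect with -c so redis-cli follows MOVED redirections across the masters (foo/bar is a hypothetical key written before the migration; foo's slot 12182 falls in 6381's range after the rebalance):

[redis]# redis-cli -c -h 10.150.57.13 -p 6382 -a Passwd@123
10.150.57.13:6382> get foo
-> Redirected to slot [12182] located at 10.150.57.13:6381
"bar"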

3 Migrating with redis-shake

3.1 Introduction

3.1.1 Overview

GitHub: https://github.com/alibaba/RedisShake

redis-shake is an open-source tool from the Alibaba Cloud Redis & MongoDB team for synchronizing redis data.

redis-shake is a tool for synchronizing data between two redis databases, flexible enough to cover a wide range of sync and migration needs.

3.1.2 Features

redis-shake is Alibaba's evolution of redis-port. It provides the decode, restore, dump, sync, and rump modes; the focus below is sync.

  • restore: restore an RDB file into the target redis database.
  • dump: back up the source redis's full dataset into an RDB file.
  • decode: read an RDB file and parse it into JSON for storage.
  • sync: synchronize data from the source redis to the target redis, with both full and incremental migration. Works from on-premises to Alibaba Cloud, between off-cloud environments, and between standalone, master-replica, and cluster deployments in any combination. Note: if the source is a cluster, a single RedisShake instance can pull from its individual db nodes, but slot moves must not run on the source during the sync; if the target is a cluster, writes can go to one or more db nodes.
  • rump: synchronize data from the source redis to the target redis, full migration only. Uses the scan and restore commands, and supports migration across cloud vendors and redis versions.

3.1.3 How it works

redis-shake works by impersonating a replica joining the source redis: it first pulls and replays a full snapshot, then switches to incremental pulling (via the psync command), as illustrated below:

(Figure: redis-shake joins the source as a replica, replays the full snapshot, then pulls increments via psync.)

If the source is in cluster mode, just start a single redis-shake to pull from it, and make sure no slot moves run on the source during the sync. If the target is in cluster mode, you can write to a single node and migrate slots afterwards, or write to several nodes at once.

Currently redis-shake writes to the target over a single link. Under normal load this is not a bottleneck, but in extreme, high-qps cases it can become one; optimizations are planned. Writes to the target are asynchronous, with reads and writes split across 2 threads, which limits the sync-throughput loss caused by network latency.

3.2 Migration

3.2.1 Installing redis-shake

[root]# wget https://github.com/alibaba/RedisShake/releases/download/release-v2.1.1-20210903/release-v2.1.1-20210903.tar.gz
[root]# tar -zxvf release-v2.1.1-20210903.tar.gz

 

3.2.2 Configuring the parameter file

Edit the redis-shake.conf parameter file (only the changed settings are shown)

[root]# cd release-v2.1.1-20210903
[root]# vi redis-shake.conf
source.type = standalone		# source deployment type
source.address = 127.0.0.1:6379		# source IP:PORT
source.password_raw = Passwd@123	# source password
target.type = cluster			# target deployment type
target.address = 10.150.57.13:6381;10.150.57.13:6382;10.150.57.13:6383	# target IP:PORT list (cluster masters or replicas)
target.password_raw = Passwd@123	# target password
key_exists = rewrite	# overwrite keys that already exist on the target

[root]# cat redis-shake.conf

# This file is the configuration of redis-shake.

# If you have any problem, please visit: https://github.com/alibaba/RedisShake/wiki/FAQ

# current configuration version, do not modify.
conf.version = 1

# id
id = redis-shake

# The log file name, if left blank, it will be printed to stdout,
# otherwise it will be printed to the specified file.
# for example:
#   log.file =
#   log.file = /var/log/redis-shake.log
log.file = /var/log/redis-shake.log

# log level: "none", "error", "warn", "info", "debug".
# default is "info".
log.level = info

# Directory for the pid file; leave blank to use the current directory.
# Note this is a directory; the pid file actually written is {pid_path}/{id}.pid
# for example:
#   pid_path = ./
#   pid_path = /var/run/
pid_path = 

# pprof port.
system_profile = 9310

# restful port, set -1 means disable, in `restore` mode RedisShake will exit once finish restoring RDB only if this value
# is -1, otherwise, it'll wait forever.
#   http://127.0.0.1:9320/conf   shows the configuration redis-shake is running with
#   http://127.0.0.1:9320/metric shows redis-shake's current sync status
http_profile = 9320

# parallel routines number used in RDB file syncing. default is 64.
parallel = 32

# source redis configuration.
# used in `dump`, `sync` and `rump`.
# source redis type, e.g. "standalone" (default), "sentinel" or "cluster".
#   1. "standalone": standalone db mode.
#   2. "sentinel": the redis address is read from sentinel.
#   3. "cluster": the source redis has several db.
#   4. "proxy": the proxy address, currently, only used in "rump" mode.
# used in `dump`, `sync` and `rump`.
source.type = standalone

# ip:port
# the source address can be the following:
#   1. single db address. for "standalone" type.
#   2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
#   3. cluster that has several db nodes split by semicolon(;). for "cluster" type. e.g., 10.1.1.1:20331;10.1.1.2:20441.
#   4. proxy address(used in "rump" mode only). for "proxy" type.
# For other cluster-like architectures such as codis, twemproxy, or aliyun proxy,
# configure the db addresses of all masters or slaves.
#   1. standalone: ip:port, e.g., 10.1.1.1:20331
#   2. cluster: all nodes' ip:port, e.g., source.address = 10.1.1.1:20331;10.1.1.2:20441
source.address = 127.0.0.1:6379

# source password, left blank means no password.
source.password_raw =

# auth type, don't modify it
source.auth_type = auth
# tls enable, true or false. Currently, only support standalone.
# open source redis does NOT support tls so far, but some cloud versions do.
source.tls_enable = false
# input RDB file.
# used in `decode` and `restore`.
# if the input is list split by semicolon(;), redis-shake will restore the list one by one.
source.rdb.input =
# the concurrence of RDB syncing, default is len(source.address) or len(source.rdb.input).
# used in `dump`, `sync` and `restore`. 0 means default.
# This is useless when source.type isn't cluster or only input is only one RDB.
# Pull concurrency: with e.g. 5 db nodes / input RDBs but rdb.parallel = 3, only 3 RDBs
# are pulled concurrently; the 4th starts once one of them finishes its full stage and
# enters incremental sync, and so on, until len(source.address) or len(rdb.input)
# incremental threads are running.
source.rdb.parallel = 0
# for special cloud vendor: ucloud
# used in `decode` and `restore`.
# RDB files from the ucloud cluster edition prefix entries with a slot that must be stripped: set to ucloud_cluster.
source.rdb.special_cloud = 

# target redis configuration. used in `restore`, `sync` and `rump`.
# the type of target redis can be "standalone", "proxy" or "cluster".
#   1. "standalone": standalone db mode.
#   2. "sentinel": the redis address is read from sentinel.
#   3. "cluster": open source cluster (not supported currently).
#   4. "proxy": proxy layer ahead redis. Data will be inserted in a round-robin way if more than 1 proxy given.
target.type = cluster
# ip:port
# the target address can be the following:
#   1. single db address. for "standalone" type.
#   2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
#   3. cluster that has several db nodes split by semicolon(;). for "cluster" type.
#   4. proxy address. for "proxy" type.
target.address = 10.150.57.13:6381;10.150.57.13:6382;10.150.57.13:6383

# target password, left blank means no password.
target.password_raw = Gaoyu@029

# auth type, don't modify it
target.auth_type = auth
# all the data will be written into this db. < 0 means disable.
target.db = 0

# Format: 0-5;1-3 ,Indicates that the data of the source db0 is written to the target db5, and 
# the data of the source db1 is all written to the target db3. 
# Note: When target.db is specified, target.dbmap will not take effect.
target.dbmap =

# tls enable, true or false. Currently, only support standalone.
# open source redis does NOT support tls so far, but some cloud versions do.
target.tls_enable = false
# output RDB file prefix.
# used in `decode` and `dump`.
# e.g., with 3 source dbs, dump writes ${output_rdb}.0, ${output_rdb}.1, ${output_rdb}.2
target.rdb.output = local_dump
# some redis proxy like twemproxy doesn't support to fetch version, so please set it here.
# e.g., target.version = 4.0
target.version =

# use for expire key, set the time gap when source and target timestamp are not the same.
fake_time =

# how to solve when destination restore has the same key.
# rewrite: overwrite. 
# none: panic directly.
# ignore: skip this key. not used in rump mode.
# used in `restore`, `sync` and `rump`.
key_exists = rewrite

# filter db, key, slot, lua.
# filter db.
# used in `restore`, `sync` and `rump`.
# e.g., "0;5;10" means match db0, db5 and db10.
# at most one of `filter.db.whitelist` and `filter.db.blacklist` parameters can be given.
# if the filter.db.whitelist is not empty, the given db list will be passed while others filtered.
# if the filter.db.blacklist is not empty, the given db list will be filtered while others passed.
# all dbs will be passed if no condition given.
filter.db.whitelist =
filter.db.blacklist =
# filter key with prefix string. multiple keys are separated by ';'.
# e.g., "abc;bzz" match let "abc", "abc1", "abcxxx", "bzz" and "bzzwww".
# used in `restore`, `sync` and `rump`.
# at most one of `filter.key.whitelist` and `filter.key.blacklist` parameters can be given.
# if the filter.key.whitelist is not empty, the given keys will be passed while others filtered.
# if the filter.key.blacklist is not empty, the given keys will be filtered while others passed.
# all the namespace will be passed if no condition given.
filter.key.whitelist =
filter.key.blacklist =
# filter given slot, multiple slots are separated by ';'.
# e.g., 1;2;3
# used in `sync`.
filter.slot =
# filter lua script. true means not pass. However, in redis 5.0, the lua 
# converts to transaction(multi+{commands}+exec) which will be passed.
filter.lua = false

# big key threshold, the default is 500 * 1024 * 1024 bytes. If the value is bigger than
# this given value, all the field will be spilt and write into the target in order. If
# the target Redis type is Codis, this should be set to 1, please checkout FAQ to find 
# the reason.
# If the target's major version is lower than the source's, setting this to 1 is also recommended.
big_key_threshold = 524288000

# enable metric
# used in `sync`.
metric = true
# print in log
metric.print_log = false

# sender information.
# sender flush buffer size of byte.
# used in `sync`.
sender.size = 104857600
# sender flush buffer size of oplog number.
# used in `sync`. flush sender buffer when bigger than this threshold.
# For cluster targets, increasing this value consumes additional memory.
sender.count = 4095
# delay channel size. once one oplog is sent to target redis, the oplog id and timestamp will also
# stored in this delay queue. this timestamp will be used to calculate the time delay when receiving
# ack from target redis.
# used in `sync`.
sender.delay_channel_size = 65535

# enable keep_alive option in TCP when connecting redis.
# the unit is second.
# 0 means disable.
keep_alive = 0

# used in `rump`.
# number of keys captured each time. default is 100.
scan.key_number = 50
# used in `rump`.
# we support some special redis types that don't use default `scan` command like alibaba cloud and tencent cloud.
# Some cloud editions use a non-default scan format; currently supported values are
# "tencent_cluster" and "aliyun_cluster" (cluster editions only; master-replica editions don't need this).
scan.special_cloud =
# used in `rump`.
# we support to fetching data from given file which marks the key list.
# Some cloud editions support neither sync/psync nor scan; we can instead read the full key list from a file, one key per line.
scan.key_file =

# limit the rate of transmission. Only used in `rump` currently.
# e.g., qps = 1000 means pass 1000 keys per second. default is 500,000(0 means default)
qps = 200000

# enable resume from break point, please visit xxx to see more details.
resume_from_break_point = false

# ----------------splitter----------------
# below variables are useless for current open source version so don't set.

# replace hash tag.
# used in `sync`.
replace_hash_tag = false

3.2.3 Running the migration

Start redis-shake

[root]# ./redis-shake.linux -type=sync -conf=redis-shake.conf

Log output:

2021/12/02 23:53:58 [WARN] source.auth_type[auth] != auth
2021/12/02 23:53:58 [WARN] target.auth_type[auth] != auth
2021/12/02 23:53:58 [INFO] the target redis type is cluster, all db syncing to db0
2021/12/02 23:53:58 [INFO] input password is empty, skip auth address[127.0.0.1:6379] with type[auth].
2021/12/02 23:53:58 [INFO] input password is empty, skip auth address[127.0.0.1:6379] with type[auth].
2021/12/02 23:53:58 [INFO] source rdb[127.0.0.1:6379] checksum[yes]
2021/12/02 23:53:58 [WARN] 
______________________________
\                             \           _         ______ |
 \                             \        /   \___-=O'/|O'/__|
  \   RedisShake, here we go !! \_______\          / | /    )
  /                             /        '/-==__ _/__|/__=-|  -GM
 /        Alibaba Cloud        /         *             \ | |
/                             /                        (o)
------------------------------
if you have any problem, please visit https://github.com/alibaba/RedisShake/wiki/FAQ

2021/12/02 23:53:58 [INFO] redis-shake configuration: {"ConfVersion":1,"Id":"redis-shake","LogFile":"","LogLevel":"info","SystemProfile":9310,"HttpProfile":9320,"Parallel":32,"SourceType":"standalone","SourceAddress":"127.0.0.1:6379","SourcePasswordRaw":"***","SourcePasswordEncoding":"***","SourceAuthType":"auth","SourceTLSEnable":false,"SourceRdbInput":[],"SourceRdbParallel":1,"SourceRdbSpecialCloud":"","TargetAddress":"10.150.57.13:6381;10.150.57.13:6382;10.150.57.13:6383","TargetPasswordRaw":"***","TargetPasswordEncoding":"***","TargetDBString":"0","TargetDBMapString":"","TargetAuthType":"auth","TargetType":"cluster","TargetTLSEnable":false,"TargetRdbOutput":"local_dump","TargetVersion":"6.2.1","FakeTime":"","KeyExists":"rewrite","FilterDBWhitelist":[],"FilterDBBlacklist":[],"FilterKeyWhitelist":[],"FilterKeyBlacklist":[],"FilterSlot":[],"FilterLua":false,"BigKeyThreshold":524288000,"Metric":true,"MetricPrintLog":false,"SenderSize":104857600,"SenderCount":4095,"SenderDelayChannelSize":65535,"KeepAlive":0,"PidPath":"","ScanKeyNumber":50,"ScanSpecialCloud":"","ScanKeyFile":"","Qps":200000,"ResumeFromBreakPoint":false,"Psync":true,"NCpu":0,"HeartbeatUrl":"","HeartbeatInterval":10,"HeartbeatExternal":"","HeartbeatNetworkInterface":"","ReplaceHashTag":false,"ExtraInfo":false,"SockFileName":"","SockFileSize":0,"FilterKey":null,"FilterDB":"","Rewrite":false,"SourceAddressList":["127.0.0.1:6379"],"TargetAddressList":["10.150.57.13:6381","10.150.57.13:6382","10.150.57.13:6383"],"SourceVersion":"5.0.12","HeartbeatIp":"127.0.0.1","ShiftTime":0,"TargetReplace":false,"TargetDB":0,"Version":"develop,cc226f841e2e244c48246ebfcfd5a50396b59710,go1.15.7,2021-09-03_10:06:55","Type":"sync","TargetDBMap":null}
2021/12/02 23:53:58 [INFO] DbSyncer[0] starts syncing data from 127.0.0.1:6379 to [10.150.57.13:6381 10.150.57.13:6382 10.150.57.13:6383] with http[9321], enableResumeFromBreakPoint[false], slot boundary[-1, -1]
2021/12/02 23:53:58 [INFO] input password is empty, skip auth address[127.0.0.1:6379] with type[auth].
2021/12/02 23:53:58 [INFO] DbSyncer[0] psync connect '127.0.0.1:6379' with auth type[auth] OK!
2021/12/02 23:53:58 [INFO] DbSyncer[0] psync send listening port[9320] OK!
2021/12/02 23:53:58 [INFO] DbSyncer[0] try to send 'psync' command: run-id[?], offset[-1]
2021/12/02 23:53:58 [INFO] Event:FullSyncStart	Id:redis-shake	
2021/12/02 23:53:58 [INFO] DbSyncer[0] psync runid = a579ea824ae5845c709d093d75467cfee86753e0, offset = 849589400, fullsync
2021/12/02 23:53:58 [INFO] DbSyncer[0] rdb file size = 276
2021/12/02 23:53:58 [INFO] Aux information key:redis-ver value:5.0.12
2021/12/02 23:53:58 [INFO] Aux information key:redis-bits value:64
2021/12/02 23:53:58 [INFO] Aux information key:ctime value:1638460438
2021/12/02 23:53:58 [INFO] Aux information key:used-mem value:1902536
2021/12/02 23:53:58 [INFO] Aux information key:repl-stream-db value:0
2021/12/02 23:53:58 [INFO] Aux information key:repl-id value:a579ea824ae5845c709d093d75467cfee86753e0
2021/12/02 23:53:58 [INFO] Aux information key:repl-offset value:849589400
2021/12/02 23:53:58 [INFO] Aux information key:aof-preamble value:0
2021/12/02 23:53:58 [INFO] db_size:8 expire_size:0
2021/12/02 23:53:58 [INFO] DbSyncer[0] total = 276B -         276B [100%]  entry=8           
2021/12/02 23:53:58 [INFO] DbSyncer[0] sync rdb done
2021/12/02 23:53:58 [INFO] DbSyncer[0] FlushEvent:IncrSyncStart	Id:redis-shake	
2021/12/02 23:53:58 [INFO] input password is empty, skip auth address[127.0.0.1:6379] with type[auth].
2021/12/02 23:53:59 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:00 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:46 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:47 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:48 [INFO] DbSyncer[0] sync:  +forwardCommands=1      +filterCommands=0      +writeBytes=4
2021/12/02 23:54:49 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:50 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:51 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:52 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:53 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:54 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:55 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:56 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:57 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:54:58 [INFO] DbSyncer[0] sync:  +forwardCommands=1      +filterCommands=0      +writeBytes=4
2021/12/02 23:54:59 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:55:00 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:55:01 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:55:02 [INFO] DbSyncer[0] sync:  +forwardCommands=2      +filterCommands=0      +writeBytes=23
2021/12/02 23:55:03 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:55:04 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:55:05 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
2021/12/02 23:55:06 [INFO] DbSyncer[0] sync:  +forwardCommands=0      +filterCommands=0      +writeBytes=0
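
While the sync runs you can also poll redis-shake's restful port (http_profile = 9320 in the config above) instead of tailing the log; both endpoints are listed in the config comments:

[root]# curl -s http://127.0.0.1:9320/conf      # the configuration redis-shake is running with
[root]# curl -s http://127.0.0.1:9320/metric    # current sync status

Once the log settles into the incremental stage with no backlog, you can switch the application over to the cluster and stop redis-shake; that brief window is what keeps the downtime short.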

4 The redis-full-check verification tool

4.1 Introduction

4.1.1 Overview

redis-full-check is an open-source tool from the Alibaba Cloud Redis & MongoDB team for verifying that the data in two redis instances is consistent; it is typically used to validate correctness after a redis migration (e.g., with redis-shake).

It supports homogeneous and heterogeneous comparisons between standalone, master-replica, cluster, and proxy-fronted cloud cluster (Alibaba Cloud) deployments, for redis versions 2.x through 5.x.

4.1.2 How it works

redis-full-check verifies the data by fully comparing the contents of the source and the target redis over multiple rounds: each round fetches data from both ends and compares it, recording any inconsistent entries for the next round. Repeated rounds steadily converge the diff set, filtering out mismatches caused by incremental sync still in flight. Whatever remains in sqlite at the end is the final set of differences.

redis-full-check compares in one direction only: it fetches the data of source A and checks whether each entry exists in target B; the reverse direction is not checked. In other words, it verifies that the source is a subset of the target. To compare both directions, run it twice: first with A as source and B as target, then with B as source and A as target.
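
A sketch of such a two-pass check, reusing the flags from section 4.2.2 below (A_ip:port and B_ip:port are placeholders for the two endpoints; give each pass its own sqlite file):

[root]# ./redis-full-check --source=A_ip:port --target=B_ip:port --db=a_to_b.db    # pass 1: is A a subset of B?
[root]# ./redis-full-check --source=B_ip:port --target=A_ip:port --db=b_to_a.db    # pass 2: is B a subset of A?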

Internally redis-full-check runs multiple comparison rounds. Each round first fetches the keys to compare (from the source library in the first round, from the sqlite3 db in later rounds), then fetches each key's fields and values from both ends, compares them, and stores whatever differs into the sqlite3 db for the next round.

4.1.3 Inconsistency types

redis-full-check classifies inconsistencies into two kinds: key inconsistencies and value inconsistencies.

 

Key inconsistencies

Key inconsistencies fall into the following cases:

  • lack_target: the key exists in the source but not in the target.
  • type: the key exists in both the source and the target, but with different types.
  • value: the key exists in both with the same type, but the values differ.

 

Value inconsistencies

Different data types have different comparison criteria:

  • string: the values differ.
  • hash: some field satisfies one of these 3 conditions:
    • the field exists in the source but not in the target.
    • the field exists in the target but not in the source.
    • the field exists in both, but the values differ.
  • set/zset: similar to hash.
  • list: similar to hash.

Field conflicts come in the following cases (they only occur for hash, set, zset, and list keys):

  • lack_source: the field exists in the source key but not in the target key.
  • lack_target: the field exists in the target key but not in the source key.
  • value: the field exists in both keys, but its value differs.

4.1.4 Comparison process

Three compare modes (comparemode) are available:

  • KeyOutline: only compare whether the keys match (--comparemode=3).
  • ValueOutline: only compare whether the value lengths match (--comparemode=2).
  • FullValue: compare keys, value lengths, and value contents (--comparemode=1; mode 4 does a full-value compare but falls back to length-only for big keys).

 

The comparison runs for comparetimes rounds (default comparetimes=3):

  • Round 1 enumerates all keys on the source, then fetches each from both the source and the target and compares them.
  • From round 2 on, the comparison iterates: only keys and fields still inconsistent after the previous round are re-compared.
    • For key inconsistencies (lack_source, lack_target, and type), the key and value are re-fetched from both ends and compared.
    • For strings with inconsistent values, the key is re-compared: key and value are fetched from source and target.
    • For hashes, sets, and zsets with inconsistent values, only the inconsistent fields are re-compared; fields already found equal are skipped. This prevents the check from never converging on large, frequently updated keys.
    • For lists with inconsistent values, the key is re-compared: key and value are fetched from source and target.
  • The tool pauses for a configurable interval (Interval) between rounds.

 

Large hash, set, zset, and list keys are handled as follows:

  • len <= 5192: fetch all fields and values at once, using hgetall, smembers, zrange 0 -1 withscores, and lrange 0 -1.
  • len > 5192: fetch fields and values in batches, using hscan, sscan, zscan, and lrange.

4.2 Verification

4.2.1 Installation

GitHub: https://github.com/alibaba/RedisFullCheck

 

Install redis-full-check

[root]# wget https://github.com/alibaba/RedisFullCheck/releases/download/release-v1.4.8-20200212/redis-full-check-1.4.8.tar.gz
[root]# tar -zxvf redis-full-check-1.4.8.tar.gz
redis-full-check-1.4.8/
redis-full-check-1.4.8/redis-full-check
redis-full-check-1.4.8/ChangeLog
[root]# cd redis-full-check-1.4.8
[root]# ./redis-full-check --help
Usage:
  redis-full-check [OPTIONS]

Application Options:
  -s, --source=SOURCE                   Set host:port of source redis. If db type is cluster, split by semicolon(;'), e.g.,
                                        10.1.1.1:1000;10.2.2.2:2000;10.3.3.3:3000. We also support auto-detection, so "master@10.1.1.1:1000" or
                                        "slave@10.1.1.1:1000" means choose master or slave. Only need to give a role in the master or slave.
  -p, --sourcepassword=Password         Set source redis password
      --sourceauthtype=AUTH-TYPE        useless for opensource redis, valid value:auth/adminauth (default: auth)
      --sourcedbtype=                   0: db, 1: cluster 2: aliyun proxy, 3: tencent proxy (default: 0)
      --sourcedbfilterlist=             db white list that need to be compared, -1 means fetch all, "0;5;15" means fetch db 0, 5, and 15 (default: -1)
  -t, --target=TARGET                   Set host:port of target redis. If db type is cluster, split by semicolon(;'), e.g.,
                                        10.1.1.1:1000;10.2.2.2:2000;10.3.3.3:3000. We also support auto-detection, so "master@10.1.1.1:1000" or
                                        "slave@10.1.1.1:1000" means choose master or slave. Only need to give a role in the master or slave.
  -a, --targetpassword=Password         Set target redis password
      --targetauthtype=AUTH-TYPE        useless for opensource redis, valid value:auth/adminauth (default: auth)
      --targetdbtype=                   0: db, 1: cluster 2: aliyun proxy 3: tencent proxy (default: 0)
      --targetdbfilterlist=             db white list that need to be compared, -1 means fetch all, "0;5;15" means fetch db 0, 5, and 15 (default: -1)
  -d, --db=Sqlite3-DB-FILE              sqlite3 db file for store result. If exist, it will be removed and a new file is created. (default: result.db)
      --result=FILE                     store all diff result into the file, format is 'db	diff-type	key	field'
      --comparetimes=COUNT              Total compare count, at least 1. In the first round, all keys will be compared. The subsequent rounds of the
                                        comparison will be done on the previous results. (default: 3)
  -m, --comparemode=                    compare mode, 1: compare full value, 2: only compare value length, 3: only compare keys outline, 4: compare full
                                        value, but only compare value length when meets big key (default: 2)
      --id=                             used in metric, run id, useless for open source (default: unknown)
      --jobid=                          used in metric, job id, useless for open source (default: unknown)
      --taskid=                         used in metric, task id, useless for open source (default: unknown)
  -q, --qps=                            max batch qps limit: e.g., if qps is 10, full-check fetches 10 * $batch keys every second (default: 15000)
      --interval=Second                 The time interval for each round of comparison(Second) (default: 5)
      --batchcount=COUNT                the count of key/field per batch compare, valid value [1, 10000] (default: 256)
      --parallel=COUNT                  concurrent goroutine number for comparison, valid value [1, 100] (default: 5)
      --log=FILE                        log file, if not specified, log is put to console
      --loglevel=LEVEL                  log level: 'debug', 'info', 'warn', 'error', default is 'info'
      --metric                          print metric in log
      --bigkeythreshold=COUNT
  -f, --filterlist=FILTER               if the filter list isn't empty, all elements in list will be synced. The input should be split by '|'. The end of
                                        the string is followed by a * to indicate a prefix match, otherwise it is a full match. e.g.: 'abc*|efg|m*'
                                        matches 'abc', 'abc1', 'efg', 'm', 'mxyz', but 'efgh', 'p' aren't'
      --systemprofile=SYSTEM-PROFILE    port that used to print golang inner head and stack message (default: 20445)
  -v, --version

Help Options:
  -h, --help                            Show this help message

 

Option reference:

-s, --source=SOURCE               source redis address (ip:port); for a cluster, separate the db nodes with semicolons(;) and give either the master or the replica of each shard, e.g., 10.1.1.1:1000;10.2.2.2:2000;10.3.3.3:3000.
-p, --sourcepassword=Password     source redis password
    --sourceauthtype=AUTH-TYPE    source auth type; useless for open-source redis.
    --sourcedbtype=               source deployment type: 0: db (standalone or master-replica), 1: cluster, 2: aliyun proxy, 3: tencent proxy
    --sourcedbfilterlist=         whitelist of logical dbs to fetch from the source, separated by semicolons(;), e.g., 0;5;15 fetches db0, db5, and db15
-t, --target=TARGET               target redis address (ip:port)
-a, --targetpassword=Password     target redis password
    --targetauthtype=AUTH-TYPE    target auth type; useless for open-source redis.
    --targetdbtype=               see sourcedbtype
    --targetdbfilterlist=         see sourcedbfilterlist
-d, --db=Sqlite3-DB-FILE          sqlite3 db file where diff results are stored, default result.db
    --comparetimes=COUNT          number of comparison rounds
-m, --comparemode=                compare mode: 1 full value, 2 value length only, 3 key existence only, 4 full value but length-only for big keys
    --id=                         used for metrics
    --jobid=                      used for metrics
    --taskid=                     used for metrics
-q, --qps=                        qps rate limit
    --interval=Second             interval between rounds, in seconds
    --batchcount=COUNT            number of keys/fields compared per batch
    --parallel=COUNT              number of concurrent comparison goroutines, default 5
    --log=FILE                    log file
    --result=FILE                 write inconsistencies to this file, format: 'db    diff-type    key    field'
    --metric=FILE                 metric file
    --bigkeythreshold=COUNT       threshold above which a key counts as big, used with comparemode=4
-f, --filterlist=FILTER           list of key patterns to compare, separated by '|'; a trailing * means prefix match. e.g., "abc*|efg|m*" matches 'abc', 'abc1', 'efg', 'm', 'mxyz' but not 'efgh' or 'p'.
-v, --version

4.2.2 Running the check

Compare the key-value pairs on the source against the target

[root]# ./redis-full-check --source=10.150.57.9:6379 --sourcepassword= --sourcedbtype=0 --target="10.150.57.13:6381;10.150.57.13:6382;10.150.57.13:6383" --targetpassword=Gaoyu@029 --targetdbtype=1 --comparemode=1 --qps=10 --batchcount=1000 --parallel=10
[INFO 2021-12-03-10:47:25 main.go:65]: init log success
[INFO 2021-12-03-10:47:25 main.go:168]: configuration: {10.150.57.9:6379  auth 0 -1 10.150.57.13:6381;10.150.57.13:6382;10.150.57.13:6383 Gaoyu@029 auth 1 -1 result.db  3 1 unknown unknown unknown 10 5 1000 10   false 16384  20445 false}
[INFO 2021-12-03-10:47:25 main.go:170]: ---------
[INFO 2021-12-03-10:47:25 full_check.go:238]: sourceDbType=0, p.sourcePhysicalDBList=[meaningless]
[INFO 2021-12-03-10:47:25 full_check.go:243]: db=0:keys=9
[INFO 2021-12-03-10:47:25 full_check.go:253]: ---------------- start 1th time compare
[INFO 2021-12-03-10:47:25 full_check.go:278]: start compare db 0
[INFO 2021-12-03-10:47:25 scan.go:20]: build connection[source redis addr: [10.150.57.9:6379]]
[INFO 2021-12-03-10:47:26 full_check.go:203]: stat:
times:1, db:0, dbkeys:9, finish:33%, finished:true
KeyScan:{9 9 0}
KeyEqualInProcess|string|equal|{9 9 0}

[INFO 2021-12-03-10:47:26 full_check.go:250]: wait 5 seconds before start
[INFO 2021-12-03-10:47:31 full_check.go:253]: ---------------- start 2th time compare
[INFO 2021-12-03-10:47:31 full_check.go:278]: start compare db 0
[INFO 2021-12-03-10:47:31 full_check.go:203]: stat:
times:2, db:0, finished:true
KeyScan:{0 0 0}

[INFO 2021-12-03-10:47:31 full_check.go:250]: wait 5 seconds before start
[INFO 2021-12-03-10:47:36 full_check.go:253]: ---------------- start 3th time compare
[INFO 2021-12-03-10:47:36 full_check.go:278]: start compare db 0
[INFO 2021-12-03-10:47:36 full_check.go:203]: stat:
times:3, db:0, finished:true
KeyScan:{0 0 0}

[INFO 2021-12-03-10:47:36 full_check.go:328]: --------------- finished! ----------------
all finish successfully, totally 0 key(s) and 0 field(s) conflict

The check finished with no key conflicts.
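
Had the run reported conflicts, they would be recorded in the sqlite3 result file. A sketch of inspecting it, assuming the key and field table names used by redis-full-check releases of this vintage (the file may carry a round suffix, e.g. result.db.3 for the final round):

[root]# sqlite3 result.db.3
sqlite> select * from key;      # conflicting keys and their conflict types
sqlite> select * from field;    # conflicting fields inside hash/set/zset/list keys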

 

