I. Redis master-slave replication (single-machine test)
1. Install Redis
tar -zxvf redis-2.8.4.tar.gz
cd redis-2.8.4
make && make install
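A quick check that the build succeeded (prints the version that was just built):
./src/redis-server -v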
2. Configure the master-slave relationship
On the slave server, set the following in redis.conf:
slaveof 192.168.1.1 6379 # IP and port of the master
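The same relationship can also be established at runtime from redis-cli, without editing the config file (a runtime SLAVEOF is not persisted across restarts; port 6389 matches the slave used below):
./src/redis-cli -p 6389 slaveof 192.168.1.1 6379
OK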
The actual configuration used for this test is as follows:
cp redis.conf redis-master-6379.conf
vi redis-master-6379.conf
logfile "/appcom/Redis/redis-2.8.4/redis-master-6379.log"
cp redis.conf redis-slave-6389.conf
vi redis-slave-6389.conf
port 6389
slaveof localhost 6379
logfile "/appcom/Redis/redis-2.8.4/redis-slave-6389.log"
3. Start the master server and the slave server
./src/redis-server redis-master-6379.conf &
[19810] 28 Jan 14:18:55.825 * The server is now ready to accept connections on port 6379
[19810] 28 Jan 14:23:19.918 * Slave asks for synchronization
[19810] 28 Jan 14:23:19.919 * Full resync requested by slave.
[19810] 28 Jan 14:23:19.919 * Starting BGSAVE for SYNC
[19810] 28 Jan 14:23:19.928 * Background saving started by pid 22336
[22336] 28 Jan 14:23:19.947 * DB saved on disk
[22336] 28 Jan 14:23:19.948 * RDB: 6 MB of memory used by copy-on-write
[19810] 28 Jan 14:23:19.985 * Background saving terminated with success
[19810] 28 Jan 14:23:19.986 * Synchronization with slave succeeded
[19810] 28 Jan 14:23:21.038 # Connection with slave ::1:6389 lost.
[19810] 28 Jan 14:23:25.159 * Slave asks for synchronization
[19810] 28 Jan 14:23:25.159 * Full resync requested by slave.
[19810] 28 Jan 14:23:25.159 * Starting BGSAVE for SYNC
[19810] 28 Jan 14:23:25.163 * Background saving started by pid 22399
[22399] 28 Jan 14:23:25.177 * DB saved on disk
[22399] 28 Jan 14:23:25.178 * RDB: 6 MB of memory used by copy-on-write
[19810] 28 Jan 14:23:25.210 * Background saving terminated with success
[19810] 28 Jan 14:23:25.210 * Synchronization with slave succeeded
./src/redis-server redis-slave-6389.conf &
[22327] 28 Jan 14:23:18.915 * The server is now ready to accept connections on port 6389
[22327] 28 Jan 14:23:19.913 * Connecting to MASTER localhost:6379
[22327] 28 Jan 14:23:19.915 * MASTER <-> SLAVE sync started
[22327] 28 Jan 14:23:19.915 * Non blocking connect for SYNC fired the event.
[22327] 28 Jan 14:23:19.916 * Master replied to PING, replication can continue...
[22327] 28 Jan 14:23:19.917 * Partial resynchronization not possible (no cached master)
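At this point replication can be sanity-checked by writing on the master and reading on the slave (the key foo is just an example):
# ./src/redis-cli -p 6379 set foo bar
OK
# ./src/redis-cli -p 6389 get foo
"bar"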
After the master is shut down, the data is still readable on the slave, but the messages below keep appearing in the slave's log; the slave is not promoted to master automatically:
[7084] 28 Jan 14:04:59.940 * Connecting to MASTER localhost:6379
[7084] 28 Jan 14:04:59.941 * MASTER <-> SLAVE sync started
[7084] 28 Jan 14:04:59.941 # Error condition on socket for SYNC: Connection refused
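If the slave is supposed to take over at this point, it has to be promoted by hand; SLAVEOF NO ONE turns a slave back into a standalone master:
# ./src/redis-cli -p 6389 slaveof no one
OK
Automating exactly this promotion is what Redis Sentinel, used in the next section, is for.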
II. Failover for Redis with Redis Sentinel (single-machine test)
1. Redis instances
master localhost 6379
slave1 localhost 6389
slave2 localhost 6399
master-sentinel: localhost 26379
slave1-sentinel: localhost 26389
slave2-sentinel: localhost 26399
2. Redis configuration
Master configuration:
cp redis.conf redis-master-6379.conf
vi redis-master-6379.conf
port 6379
requirepass rd123
masterauth rd123
#rename-command
appendonly yes // enable AOF persistence
save "" // remove all RDB save points (no snapshotting)
slave-read-only yes
logfile "/appcom/Redis/redis-2.8.4/redis-master-6379.log"
cp sentinel.conf sentinel-6379.conf
vi sentinel-6379.conf
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2 // master for the sentinels to watch: <mastername> <masterIP> <masterPort> <quorum>; <quorum> is the number of sentinels that must agree the master is unreachable before it is marked ODOWN ("objectively down")
sentinel auth-pass mymaster rd123
sentinel down-after-milliseconds mymaster 30000 // time without a valid reply after which this sentinel marks the master SDOWN ("subjectively down")
sentinel parallel-syncs mymaster 1 // number of slaves that may be re-pointed (slaveof) to the new master and resynced at the same time after a failover
sentinel failover-timeout mymaster 180000 // failover timeout: if the failover in progress makes no progress within this window, this sentinel considers the attempt failed
Slave1 configuration:
cp redis-master-6379.conf redis-slave-6389.conf
vi redis-slave-6389.conf
port 6389
slaveof localhost 6379
logfile "/appcom/Redis/redis-2.8.4/redis-slave-6389.log"
cp sentinel-6379.conf sentinel-6389.conf
vi sentinel-6389.conf
port 26389
Slave2 configuration:
cp redis-master-6379.conf redis-slave-6399.conf
vi redis-slave-6399.conf
port 6399
slaveof localhost 6379
logfile "/appcom/Redis/redis-2.8.4/redis-slave-6399.log"
cp sentinel-6379.conf sentinel-6399.conf
vi sentinel-6399.conf
port 26399
3. Startup
First start the master server and the master sentinel:
./src/redis-server --include redis-master-6379.conf &
./src/redis-sentinel sentinel-6379.conf > sentinel-6379.log &
Start the slave1 server and its sentinel:
./src/redis-server --include redis-slave-6389.conf &
./src/redis-sentinel sentinel-6389.conf > sentinel-6389.log &
Start the slave2 server and its sentinel:
./src/redis-server --include redis-slave-6399.conf &
./src/redis-sentinel sentinel-6399.conf > sentinel-6399.log &
[45564] 28 Jan 15:03:37.444 * +slave slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
[45564] 28 Jan 15:03:37.444 * +slave slave 127.0.0.1:6399 127.0.0.1 6399 @ mymaster 127.0.0.1 6379
[45564] 28 Jan 15:04:02.364 * +sentinel sentinel 127.0.0.1:26389 127.0.0.1 26389 @ mymaster 127.0.0.1 6379
[45564] 28 Jan 15:04:19.711 * +sentinel sentinel 127.0.0.1:26399 127.0.0.1 26399 @ mymaster 127.0.0.1 6379
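Any of the three sentinels can be asked which master it is currently tracking (the output shown is what should appear at this stage):
# ./src/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6379"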
Check the master's status:
# ./src/redis-cli -h 127.0.0.1 -p 6379 -a rd123
localhost:6379> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6389,state=online,offset=54505,lag=0
slave1:ip=127.0.0.1,port=6399,state=online,offset=54505,lag=1
master_repl_offset:54505
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:54504
Check slave1's status:
# ./src/redis-cli -h localhost -p 6389 -a rd123
localhost:6389> info Replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:59720
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
Check slave2's status:
# ./src/redis-cli -h localhost -p 6399 -a rd123
localhost:6399> info Replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:68701
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
4. Tests
(1) Scenario 1: slave1 goes down
localhost:6389> shutdown
The sentinel log shows:
[45794] 28 Jan 15:12:10.335 # +sdown slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
# ./src/redis-cli -h localhost -p 6379 -a rd123
localhost:6379> info Replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6399,state=online,offset=120536,lag=1
master_repl_offset:120669
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:120668
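The sentinels register the failure as well: SENTINEL slaves lists every known slave together with its flags, and the 6389 entry should now carry the s_down flag (full output omitted here):
# ./src/redis-cli -p 26379 sentinel slaves mymaster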
(2) Scenario 2: the slave recovers
Restart slave1:
./src/redis-server --include redis-slave-6389.conf &
[3] 52287
[45794] 28 Jan 15:15:19.726 * +reboot slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
[45794] 28 Jan 15:15:19.874 # -sdown slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
localhost:6379> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6399,state=online,offset=197860,lag=1
slave1:ip=127.0.0.1,port=6389,state=online,offset=197727,lag=1
master_repl_offset:198126
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:198125
(3) Scenario 3: the master goes down
localhost:6379> shutdown
[45564] 28 Jan 15:36:37.710 # +sdown master mymaster 127.0.0.1 6379
[45564] 28 Jan 15:36:37.967 # +new-epoch 1
[45564] 28 Jan 15:36:37.968 # +vote-for-leader 1f6f588c7c28a2176c2886e540a638ce92033e65 1
[45564] 28 Jan 15:36:38.892 # +odown master mymaster 127.0.0.1 6379 #quorum 3/2
[45564] 28 Jan 15:36:39.178 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6399
[45564] 28 Jan 15:36:39.178 * +slave slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6399
[45564] 28 Jan 15:36:39.180 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
[45564] 28 Jan 15:37:09.193 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
The master role has moved to slave2 (port 6399):
localhost:6399> info Replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6389,state=online,offset=21724,lag=1
master_repl_offset:21990
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:21989
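The sentinels now report 6399 as the master (expected output):
# ./src/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6399"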
(4) Scenario 4: the old master recovers
./src/redis-server --include redis-master-6379.conf &
[1] 67400
[45564] 28 Jan 15:41:47.608 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
[45564] 28 Jan 15:41:57.513 * +reboot slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
The original master comes back as a slave and does not automatically reclaim the master role:
localhost:6379> info Replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6399
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:70642
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
localhost:6399> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6389,state=online,offset=93539,lag=0
slave1:ip=127.0.0.1,port=6379,state=online,offset=93539,lag=0
master_repl_offset:93553
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:93552
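If 6379 really has to become master again, a failover can be forced from any sentinel; note that SENTINEL failover lets the sentinels pick the new master by slave priority and replication offset, so it is not guaranteed to choose 6379:
# ./src/redis-cli -p 26379 sentinel failover mymaster
OK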
III. Building a Redis Cluster
1. Download the latest development version of Redis from GitHub: https://codeload.github.com/antirez/redis/zip/unstable
2. Install Redis on three nodes:
node1 10.25.22.185 6379
node2 10.25.22.186 6379
node3 10.25.22.187 6379
3. Modify the configuration on each node:
cluster-enabled yes // run this instance in cluster mode
cluster-config-file nodes-6379.conf // cluster state file, written and maintained by Redis itself
cluster-node-timeout 15000 // how long (ms) a node may be unreachable before it is considered failing
logfile "/appcom/Redis/redis-unstable/redis.log"
Then start the server on each of the three nodes:
./src/redis-server redis.conf &
[1] 6856
./src/redis-server redis.conf &
[1] 43951
./src/redis-server redis.conf &
[1] 80642
Check the cluster state on node1:
# ./src/redis-cli
127.0.0.1:6379> cluster nodes
af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
Link the servers into one cluster with the CLUSTER MEET command:
127.0.0.1:6379> cluster meet 10.25.22.186 6379
OK
127.0.0.1:6379> cluster meet 10.25.22.187 6379
OK
127.0.0.1:6379> cluster nodes
ed85b32aa566511bf917e8ecdc6150df7449dcf2 10.25.22.187:6379 master - 0 1390897200350 0 connected
af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected
918fc015490599a93e680893c7e387336dac35bc 10.25.22.186:6379 master - 0 1390897199347 0 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:0
cluster_current_epoch:0
cluster_stats_messages_sent:23
cluster_stats_messages_received:23
Assign hash slots to the servers in the cluster
Redis Cluster partitions data by key using hash slots: every key-value pair is automatically mapped to a hash slot by its key, but which Redis node serves a given slot is not automatic; the cluster administrator has to assign the slots. From the source code there are 16384 hash slots in total (the slot is CRC16(key) mod 16384).
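A node can report the slot for any key directly via CLUSTER KEYSLOT (available in the cluster-enabled unstable builds); for example, the key name used in the test at the end of this section maps to slot 5798:
127.0.0.1:6379> cluster keyslot name
(integer) 5798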
Edit each node's cluster config file (nodes-6379.conf): keep only the line marked myself, delete the other entries, and append the node's slot range.
node1 becomes: af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected 0-5000
node2 becomes: 918fc015490599a93e680893c7e387336dac35bc :0 myself,master - 0 0 0 connected 5001-10000
node3 becomes: ed85b32aa566511bf917e8ecdc6150df7449dcf2 :0 myself,master - 0 0 0 connected 10001-16383
Then restart the servers and link the nodes again with CLUSTER MEET:
127.0.0.1:6379> cluster meet 10.25.22.186 6379
OK
127.0.0.1:6379> cluster meet 10.25.22.187 6379
OK
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:0
cluster_stats_messages_sent:29
cluster_stats_messages_received:29
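Hand-editing nodes-6379.conf works, but the slots can also be assigned at runtime with CLUSTER ADDSLOTS, which avoids the restart (a sketch; the shell's brace expansion generates the slot lists):
./src/redis-cli -h 10.25.22.185 cluster addslots {0..5000}
./src/redis-cli -h 10.25.22.186 cluster addslots {5001..10000}
./src/redis-cli -h 10.25.22.187 cluster addslots {10001..16383}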
Writing a key through the wrong node returns a MOVED redirection: the key name hashes to slot 5798, which was assigned to node2 (10.25.22.186), so node1 refuses the write and redirects the client to node2:
[root@CNSZ141195 redis-unstable]# ./src/redis-cli
127.0.0.1:6379> set name "Make"
(error) MOVED 5798 10.25.22.186:6379
The same command issued on node2 succeeds:
[root@CNSZ141196 redis-unstable]# ./src/redis-cli
127.0.0.1:6379> set name "Make"
OK
127.0.0.1:6379> get name
"Make"