1. Master-slave: a king and a prime minister; the king has the greater power (read and write), the prime minister has less (read only)
2. Sentinel: a king and a crown prince; when the king dies (the master goes down), the prince takes the throne (a slave is promoted to master)
3. Cluster: several kings; if one king dies (a node goes down), the other kings are still alive and the world does not end
Master-Slave Configuration
Steps:
- After compiling Redis (make), copy the build output into several directories named e.g. xxx-6379, xxx-6380, xxx-6381, ...
- Start the 6379 and 6380 instances
  Option 1: in the 6380 client, run slaveof 127.0.0.1 6379 (one-off; lost when the replica restarts)
  Option 2: configure it in 6380's redis.conf (permanent)
  Use info replication to check the replication status
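A minimal sketch of option 2, assuming the replica lives in a directory named redis-6380 (following the naming scheme above) and the master has no password set; on Redis 5+ the directive can also be written replicaof:

# redis-6380/redis.conf (replica side)
port 6380
slaveof 127.0.0.1 6379
# masterauth <password>   # only needed if the master sets requirepass

# check the replication state from either side
$ redis-cli -p 6379 info replication   # role:master, connected_slaves:1
$ redis-cli -p 6380 info replication   # role:slave, master_host:127.0.0.1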
Sentinel Configuration
Steps:
1. With master-slave replication in place, edit sentinel.conf:
# sentinel port
port 26379
# working directory; make sure it does not clash with the master's
dir "/usr/local/redis-6379"
# run as a daemon
daemonize yes
# disable protected mode
protected-mode no
# log file name
logfile "./sentinel.log"
# the master to monitor; master and slaves share the same setup, so only the master's ip/port and the quorum are given here
sentinel monitor mymaster 192.168.125.128 6379 1
# how long (in ms, default 30000) the master or a slave may be unreachable before it is flagged s_down
sentinel down-after-milliseconds mymaster 3000
# if the failover (the automatic master/slave switch on failure) does not finish within this time, it is considered failed
sentinel failover-timeout mymaster 18000
# password used to authenticate against the master and slaves
sentinel auth-pass mymaster 123456
# how many slaves may resynchronize with the new master at the same time during a failover
sentinel parallel-syncs mymaster 1
2. Start the master and the slaves, then start the sentinel process
   Option 1: redis-sentinel /path/to/sentinel.conf (recommended; started this way the sentinel is completely independent of the Redis instances)
   Option 2: redis-server /path/to/sentinel.conf --sentinel
3. When the master goes down, one of the slaves is automatically promoted to master
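Once the sentinel is running, its view of the topology can be checked directly; a quick sketch, assuming the sentinel configured above listens on 26379:

$ redis-cli -p 26379 sentinel master mymaster                    # state of the monitored master
$ redis-cli -p 26379 sentinel slaves mymaster                    # replicas known to this sentinel
$ redis-cli -p 26379 sentinel get-master-addr-by-name mymaster   # current master ip/port; changes after a failover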
Cluster Configuration
Steps:
1. Create a directory cluster-test with subdirectories 7000/7001/7002/7003/7004/7005
2. Put a redis.conf in each subdirectory, using ports 7000/7001/7002/7003/7004/7005 respectively
Minimal redis.conf:
port 7000
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
3. Copy redis-cli, redis-server and redis-trib.rb from the compiled src directory into cluster-test
4. Enter each subdirectory and start the 6 Redis instances one by one: ../redis-server ./redis.conf (steps 1, 2 and 4 can also be scripted, see the sketch below)
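A rough sketch of that loop, assuming it is run from inside cluster-test and that redis-server has already been copied there (step 3):

for port in 7000 7001 7002 7003 7004 7005; do
  mkdir -p $port
  cat > $port/redis.conf <<EOF
port $port
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
  (cd $port && ../redis-server ./redis.conf)   # start each instance from its own directory
done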
5. Create the cluster with the cluster tool redis-trib (written in Ruby), which needs a Ruby environment:
$ yum install ruby
$ yum install rubygems
$ gem install redis
6. Install Ruby 2.4.0 (the Ruby that yum installs is too old for the redis gem)
1) Install rvm
$ curl -L get.rvm.io | bash -s stable
If this fails, run the gpg2 --recv-keys xxxxxx command shown in the error message, then retry
2) Load rvm
$ source /usr/local/rvm/scripts/rvm
3) List the Ruby versions known to rvm
$ rvm list known
4) Upgrade Ruby
# install ruby
rvm install 2.4.0
# switch to the new version
rvm use 2.4.0
# remove the old version
rvm remove 2.0.0
# check the current version
ruby --version
7. Install the redis gem (again, now under the new Ruby)
$ gem install redis
8. Run redis-trib.rb to create the cluster (--replicas 1 gives each master one replica):
$ ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
[root@root cluster-test]# ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \
> 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7004 to 127.0.0.1:7000
Adding replica 127.0.0.1:7005 to 127.0.0.1:7001
Adding replica 127.0.0.1:7003 to 127.0.0.1:7002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 033d0dbea959fb15a3a27552d18dbb623985a180 127.0.0.1:7000
slots:0-5460 (5461 slots) master
M: 77812723f46f25191eeed04a42303ae83bec66be 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
M: 17fccfc10108c81301a501ec8eaccfb57541fa87 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
S: 89b452f2d5553bef131152932c5725429b0c4aa1 127.0.0.1:7003
replicates 77812723f46f25191eeed04a42303ae83bec66be
S: 095f3d47fa5788ddd94a4c962f7e02fe79a6e8b1 127.0.0.1:7004
replicates 17fccfc10108c81301a501ec8eaccfb57541fa87
S: ba4ca2e1e87ec1a38ad891f2aec5d44f300a52bd 127.0.0.1:7005
replicates 033d0dbea959fb15a3a27552d18dbb623985a180
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: 033d0dbea959fb15a3a27552d18dbb623985a180 127.0.0.1:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 095f3d47fa5788ddd94a4c962f7e02fe79a6e8b1 127.0.0.1:7004
slots: (0 slots) slave
replicates 17fccfc10108c81301a501ec8eaccfb57541fa87
S: ba4ca2e1e87ec1a38ad891f2aec5d44f300a52bd 127.0.0.1:7005
slots: (0 slots) slave
replicates 033d0dbea959fb15a3a27552d18dbb623985a180
S: 89b452f2d5553bef131152932c5725429b0c4aa1 127.0.0.1:7003
slots: (0 slots) slave
replicates 77812723f46f25191eeed04a42303ae83bec66be
M: 17fccfc10108c81301a501ec8eaccfb57541fa87 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: 77812723f46f25191eeed04a42303ae83bec66be 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
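On Redis 5.0 and later, redis-trib.rb has been merged into redis-cli, so the Ruby setup above can be skipped and the same cluster created with:
$ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1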
9. Verify from a client
$ ./redis-cli -c -p 7000
127.0.0.1:7000> set name lin
-> Redirected to slot [5798] located at 127.0.0.1:7001
OK
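Reading the key back through another node shows the redirection in the other direction; a quick check, assuming the same nodes:
$ ./redis-cli -c -p 7002
127.0.0.1:7002> get name    # -c follows the MOVED redirect back to 127.0.0.1:7001 (slot 5798)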
10. Check the cluster state
127.0.0.1:7001> cluster nodes
17fccfc10108c81301a501ec8eaccfb57541fa87 127.0.0.1:7002@17002 master - 0 1535691791595 3 connected 10923-16383
89b452f2d5553bef131152932c5725429b0c4aa1 127.0.0.1:7003@17003 slave 77812723f46f25191eeed04a42303ae83bec66be 0 1535691791595 4 connected
77812723f46f25191eeed04a42303ae83bec66be 127.0.0.1:7001@17001 myself,master - 0 1535691790000 2 connected 5461-10922
ba4ca2e1e87ec1a38ad891f2aec5d44f300a52bd 127.0.0.1:7005@17005 slave 033d0dbea959fb15a3a27552d18dbb623985a180 0 1535691790000 6 connected
033d0dbea959fb15a3a27552d18dbb623985a180 127.0.0.1:7000@17000 master - 0 1535691791393 1 connected 0-5460
095f3d47fa5788ddd94a4c962f7e02fe79a6e8b1 127.0.0.1:7004@17004 slave 17fccfc10108c81301a501ec8eaccfb57541fa87 0 1535691790390 5 connected
127.0.0.1:7001> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:2
cluster_stats_messages_ping_sent:429
cluster_stats_messages_pong_sent:440
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:870
cluster_stats_messages_ping_received:436
cluster_stats_messages_pong_received:430
cluster_stats_messages_meet_received:4
cluster_stats_messages_received:870
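To exercise the failover path, a manual failover can be triggered from a replica; a sketch, assuming 7003 is still a replica as in the cluster nodes output above:
$ redis-cli -p 7003 cluster failover   # promotes 7003 to master; its old master becomes a replica
$ redis-cli -p 7003 cluster nodes      # confirm the new roles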
