1. Theory
1. Redis Cluster design essentials
Redis Cluster was designed to be decentralized, with no middleware: every node in the cluster is a peer of every other, and each node holds both its own data and the state of the whole cluster. Every node keeps an active connection to every other node, so connecting to any single node is enough to reach data held anywhere in the cluster.
So how does Redis distribute nodes and data sensibly?
Redis Cluster does not use traditional consistent hashing to distribute data; instead it uses a scheme called hash slots. The cluster defines 16384 slots in total. When we set a key, its slot is computed with the CRC16 algorithm as CRC16(key) % 16384, and the key is stored on the node whose slot range contains that slot. Note that at least 3 master nodes are required, otherwise cluster creation fails. So suppose three nodes A, B and C already form a cluster; they can be three ports on one machine or three separate servers. Splitting the 16384 slots across them under the hash-slot scheme, the three nodes own these slot ranges:
- Node A covers 0-5460;
- Node B covers 5461-10922;
- Node C covers 10923-16383.
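The slot arithmetic above can be sketched in a few lines of Python. This is a simplified sketch: Redis uses the CRC16-CCITT (XMODEM) variant for key hashing, and hash tags in `{...}` are ignored here; `split_slots` mirrors the even contiguous split redis-cli performs at create time, not its exact code.

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0,
    # no bit reflection -- the variant Redis Cluster hashes keys with.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    # CRC16(key) % 16384, as described above (hash tags omitted).
    return crc16(key.encode()) % 16384

def split_slots(n_masters: int):
    # Even split of the 16384 slots into contiguous inclusive ranges.
    bounds = [round(i * 16384 / n_masters) for i in range(n_masters + 1)]
    return [(bounds[i], bounds[i + 1] - 1) for i in range(n_masters)]
```

For three masters, `split_slots(3)` reproduces exactly the ranges listed above, and `keyslot("zjl")` evaluates to 5634, matching the redirect shown in the verification section of this walkthrough.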
Now suppose we want to set a key, say my_name:
set my_name yangyi
By Redis Cluster's hash-slot algorithm, CRC16('my_name') % 16384 = 2412, so this key is stored on node A. Likewise, when we connect to any of the nodes (A, B or C) to read my_name, the same calculation is performed and the request is internally redirected to node A to fetch the data.
This slot-based scheme has both strengths and weaknesses. Its strength is clarity: for example, to add a new node D, Redis Cluster takes a portion of slots from the front of each existing node's range and moves them to D. The layout roughly becomes:
- Node A covers 1365-5460
- Node B covers 6827-10922
- Node C covers 12288-16383
- Node D covers 0-1364, 5461-6826, 10923-12287
Removing a node works similarly, in reverse: once its slots have been migrated to the remaining nodes, the node itself can be removed.
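A quick way to sanity-check a reshard like the one above is to verify that the new layout still covers all 16384 slots exactly once. A sketch using the ranges just listed (node names A-D are the hypothetical ones from the example):

```python
# Inclusive slot ranges after adding node D, per the list above.
layout = {
    "A": [(1365, 5460)],
    "B": [(6827, 10922)],
    "C": [(12288, 16383)],
    "D": [(0, 1364), (5461, 6826), (10923, 12287)],
}

# Flatten every owned slot; sorted, this must be exactly 0..16383
# with no gaps and no duplicates.
owned = sorted(
    slot
    for ranges in layout.values()
    for lo, hi in ranges
    for slot in range(lo, hi + 1)
)
assert owned == list(range(16384))

# Each node ends up with an equal 16384 / 4 = 4096 slots.
sizes = {node: sum(hi - lo + 1 for lo, hi in ranges)
         for node, ranges in layout.items()}
assert all(count == 4096 for count in sizes.values())
```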
That, then, is the overall shape of a Redis cluster.
2. Redis Cluster master-replica mode
To keep data highly available, Redis Cluster adds a master-replica mode: each master node has one or more replica nodes. The master serves reads and writes, while its replicas pull data from it as backups; when the master fails, one of its replicas is chosen to take over as master, so the cluster as a whole stays up.
上面那個例子里, 集群有ABC三個主節點, 如果這3個節點都沒有加入從節點,如果B掛掉了,我們就無法訪問整個集群了。A和C的slot也無法訪問。
So when building the cluster, make sure every master gets a replica. With masters A, B and C plus replicas A1, B1 and C1, the system keeps working even if B fails: the cluster elects B1 as the new master in B's place and continues to serve correctly. When B comes back online, it becomes a replica of B1.
Note, however, that if B and B1 fail at the same time, the cluster can no longer serve correctly.
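The failover behaviour described above can be modelled as a toy sketch. This is purely illustrative (real Redis failover involves an actual election among the nodes); the node names are the hypothetical A/B/C and A1/B1/C1 from the example:

```python
# master -> its replica (None once a master has no replica left)
masters = {"A": "A1", "B": "B1", "C": "C1"}

def master_fails(node: str) -> str:
    # When a master dies, its replica is promoted in its place.
    # With no replica left to promote, part of the slot space is
    # uncovered and the cluster as a whole stops serving.
    replica = masters.pop(node)
    if replica is None:
        return "cluster down"
    masters[replica] = node  # the old master rejoins later as the replica
    return f"{replica} promoted to master"
```

With this model, `master_fails("B")` promotes B1 and records B as its future replica; if B1 then also fails before B returns, the whole cluster goes down, matching the caveat above.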
2. Setup
1. Redis instances:
192.168.244.128:6379 master
192.168.244.128:6380 replica
192.168.244.130:6379 master
192.168.244.130:6380 replica
192.168.244.131:6379 master
192.168.244.131:6380 replica
2. Run the command to create the cluster
Uncomment cluster-enabled yes in redis.conf on every instance.
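For reference, the cluster-related part of redis.conf for one instance looks roughly like this (the paths and timeout value are examples; nodes-6379.conf is written and maintained by Redis itself and must be unique per instance):

```
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
```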
Run:
./redis-trib.rb create --replicas 1 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380
Newer versions have replaced this command with:
./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1 -a zjl123
Note that when one server runs multiple instances, the following settings must differ per instance:
port 6380
pidfile /var/run/redis/redis_6380.pid
logfile /var/log/redis/redis_6380.log
dbfilename dump_6380.rdb
3. Troubleshooting
1. Error: /usr/bin/env: ruby: No such file or directory
Install the ruby and rubygems dependencies:
yum -y install ruby rubygems
2. Error:
./redis-trib.rb:6: odd number list for Hash
white: 29,
^
./redis-trib.rb:6: syntax error, unexpected ':', expecting '}'
white: 29,
^
./redis-trib.rb:7: syntax error, unexpected ',', expecting kEND
The system Ruby is too old for redis-trib.rb's hash syntax; remove it and install a newer Ruby:
yum remove -y ruby
yum remove -y rubygems
Download ruby-2.6.5.tar.gz, then:
tar -zxvf ruby-2.6.5.tar.gz
cd ruby-2.6.5
./configure
make
make install
3. Running the cluster-create command again prints:
You should use redis-cli instead. All commands and features belonging to redis-trib.rb have been moved to redis-cli. In order to use them you should call redis-cli with the --cluster option followed by the subcommand name, arguments and options. Use the following syntax:
redis-cli --cluster SUBCOMMAND [ARGUMENTS] [OPTIONS]
Example:
redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1
To get help about all subcommands, type:
redis-cli --cluster help
[root@zjltest3 src]# redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1
-bash: redis-cli: command not found
redis-cli is not on the PATH here; run it from the Redis src directory as ./redis-cli (or install it system-wide).
4. Running ./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1 fails with:
[ERR] Node 192.168.244.128:6379 NOAUTH Authentication required.
The instances are password-protected, so the password must be supplied with -a.
5. Running:
./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
[ERR] Node 192.168.244.128:6379 is not configured as a cluster node.
6. After uncommenting cluster-enabled yes (and restarting the instances), creation succeeds:
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.244.130:6380 to 192.168.244.128:6379
Adding replica 192.168.244.131:6380 to 192.168.244.130:6379
Adding replica 192.168.244.128:6380 to 192.168.244.131:6379
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460],[5634],[8157] (5461 slots) master
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   replicates d34845ed63f35645e820946cc0dc24460621a386
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
......
>>> Performing Cluster Check (using node 192.168.244.128:6379)
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   slots: (0 slots) slave
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   slots: (0 slots) slave
   replicates d34845ed63f35645e820946cc0dc24460621a386
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   slots: (0 slots) slave
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
3. Verification
1. Check the cluster status:
./redis-cli --cluster check 192.168.244.128:6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379 (0f6f4aab...) -> 0 keys | 5461 slots | 1 slaves.
192.168.244.130:6379 (d34845ed...) -> 0 keys | 5462 slots | 1 slaves.
192.168.244.131:6379 (0f30ac78...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.244.128:6379)
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   slots: (0 slots) slave
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   slots: (0 slots) slave
   replicates d34845ed63f35645e820946cc0dc24460621a386
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   slots: (0 slots) slave
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
2. Log in to a server, then set and get a value
[root@zjltest3 src]# ./redis-cli -c -h 192.168.244.128 -p 6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379> set zjl 123
-> Redirected to slot [5634] located at 192.168.244.130:6379
OK
192.168.244.130:6379>
As shown above, the value is stored on the 130 server: key zjl hashes to slot 5634, which belongs to 192.168.244.130:6379. (The -c flag starts redis-cli in cluster mode, so it follows MOVED redirects automatically.)
[root@zjltest2 redis]# src/redis-cli -c -h 192.168.244.128 -p 6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379> get zjl
-> Redirected to slot [5634] located at 192.168.244.130:6379
"123"
As shown above, the value is likewise fetched from the 130 server.
This confirms that the cluster is configured and working correctly.

