Deploying a Redis Cluster on Kubernetes with StatefulSet
Choosing how to deploy the Redis cluster
- StatefulSet
- Service & Deployment
For stateful services such as Redis and MySQL, StatefulSet is the preferred approach, and it is the one this article covers.
PS: the design model behind StatefulSet:
Topology state: the instances of an application are not fully interchangeable; they must start in a defined order, e.g. master node A before replica node B. If Pods A and B are deleted, they must be recreated in exactly that order, and each new Pod must keep the network identity of the old one, so existing clients can reach the new Pod the same way as before.
Storage state: each instance is bound to its own storage. From an instance's point of view, the data Pod A reads now and the data it reads ten minutes later must be the same, even if Pod A was recreated in between — think of the storage instances behind a database.
Deployment
Install NFS (shared storage)
Pods in Kubernetes can move between nodes, so we need shared storage that the Pods can reach from whichever node they land on. Here we use NFS; it can be replaced with another backend later.
yum -y install nfs-utils rpcbind
vim /etc/exports
/usr/local/kubernetes/redis/pv1 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv2 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv3 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv4 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv5 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv6 0.0.0.0/0(rw,all_squash)

mkdir -p /usr/local/kubernetes/redis/pv{1..6}
chmod 777 /usr/local/kubernetes/redis/pv{1..6}
These entries can later be rewritten with a domain wildcard; see the sketch after the next step.
Start the services:
systemctl enable nfs
systemctl enable rpcbind
systemctl start nfs
systemctl start rpcbind
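For the domain-wildcard form mentioned above, /etc/exports accepts host wildcards. A minimal sketch — the domain here is a placeholder, and exportfs/showmount are used to re-export and verify:

# /etc/exports — hypothetical wildcard entry replacing 0.0.0.0/0
/usr/local/kubernetes/redis/pv1 *.k8s.example.com(rw,all_squash)

# re-export without restarting, then confirm what the server exposes
exportfs -ra
showmount -e localhost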
Create the PVs
Create six PVs for the PVCs to bind to later:
vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  capacity:
    storage: 200M        # volume size: 200M
  accessModes:
  - ReadWriteMany        # many clients may read and write
  nfs:
    server: NFS server address
    path: "/usr/local/kubernetes/redis/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: NFS server address
    path: "/usr/local/kubernetes/redis/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: NFS server address
    path: "/usr/local/kubernetes/redis/pv3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: NFS server address
    path: "/usr/local/kubernetes/redis/pv4"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: NFS server address
    path: "/usr/local/kubernetes/redis/pv5"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv6
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: NFS server address
    path: "/usr/local/kubernetes/redis/pv6"
Field notes:
apiVersion: API version
kind: this manifest creates a PV
metadata: metadata
spec.capacity: the volume's capacity
spec.accessModes: access mode (read/write semantics)
spec.nfs: this PV is backed by NFS
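The six PV blocks differ only in name and path, so instead of writing them by hand, a short shell loop can generate pv.yaml. A sketch, assuming NFS_SERVER holds your real server address:

# generate six near-identical PV definitions into pv.yaml
NFS_SERVER="192.168.1.253"   # placeholder: your NFS server address
for i in $(seq 1 6); do
cat >> pv.yaml <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv$i
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: $NFS_SERVER
    path: "/usr/local/kubernetes/redis/pv$i"
EOF
done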
Create the PVs:
kubectl create -f pv.yaml
kubectl get pv    # list the PVs just created
Create a ConfigMap to hold the Redis configuration file
The Redis configuration may change over time; keeping it in a ConfigMap means we do not have to rebuild the Docker image every time we edit the file. The configuration itself:
appendonly yes                                 # enable AOF persistence
cluster-enabled yes                            # enable cluster mode
cluster-config-file /var/lib/redis/nodes.conf  # explained below
cluster-node-timeout 5000                      # node timeout in ms
dir /var/lib/redis                             # where the AOF files live
port 6379                                      # listening port
cluster-config-file sets the path of the node configuration file. If the file does not exist, each node generates a new ID for itself on startup and saves it there; the instance then keeps using that ID, giving it a unique name within the cluster. Nodes record each other by ID rather than by IP or port, because in Kubernetes IP addresses are not stable, while this unique identifier stays the same for the node's entire lifetime. This file stores the node IDs.
Create a ConfigMap named redis-conf:
kubectl create configmap redis-conf --from-file=redis.conf
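Equivalently, if you prefer keeping the configuration declarative and under version control, the generated ConfigMap corresponds roughly to this manifest (a sketch with the same content as redis.conf above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-conf
data:
  redis.conf: |
    appendonly yes
    cluster-enabled yes
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-node-timeout 5000
    dir /var/lib/redis
    port 6379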
Verify:
[root@rke ~]# kubectl get cm
NAME         DATA   AGE
redis-conf   1      22h
[root@rke ~]# kubectl describe cm redis-conf
Name:         redis-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>
Create the headless Service
A headless Service is what gives a StatefulSet its stable network identities, so it must be created first. Prepare headless-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster
Create it:
kubectl create -f headless-service.yml
Verify:
[root@k8s-node1 redis]# kubectl get svc redis-service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-service   ClusterIP   None         <none>        6379/TCP   53s
The Service is named redis-service and its CLUSTER-IP is None, confirming it is a headless Service.
Create the Redis cluster nodes
This is the core of this article. Create the file redis.yml:
[root@rke ~]# cat /home/docker/redis/redis.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: "redis-service"
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: "redis"
        command:
        - "redis-server"            # Redis startup command
        args:
        - "/etc/redis/redis.conf"   # arguments to redis-server; each list item is one token
        - "--protected-mode"        # with the next arg: allow access from outside
        - "no"
        # equivalent to: redis-server /etc/redis/redis.conf --protected-mode no
        resources:                  # resources
          requests:                 # requested resources
            cpu: "100m"             # m = milli-cores; 100m equals 0.1 CPU
            memory: "100Mi"         # 100Mi of memory
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-conf"        # mount the file generated from the ConfigMap
          mountPath: "/etc/redis"   # target path for the mount
        - name: "redis-data"        # mount the persistent volume
          mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"          # reference the ConfigMap volume
        configMap:
          name: "redis-conf"
          items:
          - key: "redis.conf"       # the key created by --from-file (the file name)
            path: "redis.conf"      # file name inside the mount
  volumeClaimTemplates:             # PVC template; one claim is created per Pod
  - metadata:
      name: redis-data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 200M
podAntiAffinity expresses anti-affinity: it determines which Pods this Pod may not share a topology domain with, and is used to spread a service's Pods across different hosts or domains, improving the service's resilience. The matchExpressions rule asks the scheduler to avoid placing a Redis Pod on a node that already runs a Pod labeled app=redis; since the rule is preferred rather than required, it is best effort.
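A note on the apiVersion: apps/v1beta1 was removed in Kubernetes 1.16, so on newer clusters the same StatefulSet should use apps/v1, which additionally requires an explicit selector matching the template labels. A sketch of just the fields that change:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  selector:              # mandatory under apps/v1
    matchLabels:
      app: redis
      appCluster: redis-cluster
  # serviceName, replicas, template, volumeClaimTemplates as above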
According to the StatefulSet naming rules, the six Redis Pods are named $(statefulset name)-$(ordinal), as shown below:
[root@rke ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE
redis-app-0   1/1     Running   0          40m   10.42.2.17   192.168.1.21    <none>
redis-app-1   1/1     Running   0          40m   10.42.0.15   192.168.1.114   <none>
redis-app-2   1/1     Running   0          40m   10.42.1.13   192.168.1.20    <none>
redis-app-3   1/1     Running   0          40m   10.42.2.18   192.168.1.21    <none>
redis-app-4   1/1     Running   0          40m   10.42.0.16   192.168.1.114   <none>
redis-app-5   1/1     Running   0          40m   10.42.1.14   192.168.1.20    <none>
The Pods are created strictly in order {0..N-1}: note that redis-app-1 does not start until redis-app-0 has reached the Running state.
Each Pod also gets a DNS name inside the cluster, in the form $(podname).$(service name).$(namespace).svc.cluster.local, i.e.:
redis-app-0.redis-service.default.svc.cluster.local
redis-app-1.redis-service.default.svc.cluster.local
...and so on...
Inside the Kubernetes cluster the Pods can reach each other through these names. We can verify them with nslookup from a busybox image:
[root@k8s-node1 ~]# kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup redis-app-1.redis-service.default.svc.cluster.local
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:      redis-app-1.redis-service.default.svc.cluster.local
Address:   10.42.0.15

*** Can't find redis-app-1.redis-service.default.svc.cluster.local: No answer

/ # nslookup redis-app-0.redis-service.default.svc.cluster.local
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:      redis-app-0.redis-service.default.svc.cluster.local
Address:   10.42.2.17
As shown, redis-app-0 resolves to 10.42.2.17. If a Redis Pod is moved or restarted (try deleting one by hand), its IP changes, but the Pod's DNS name, SRV records and A record all stay the same.
We can also see that the PVs created earlier have all been bound:
[root@k8s-node1 ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound    default/redis-data-redis-app-2                           1h
nfs-pv2   200M       RWX            Retain           Bound    default/redis-data-redis-app-3                           1h
nfs-pv3   200M       RWX            Retain           Bound    default/redis-data-redis-app-4                           1h
nfs-pv4   200M       RWX            Retain           Bound    default/redis-data-redis-app-5                           1h
nfs-pv5   200M       RWX            Retain           Bound    default/redis-data-redis-app-0                           1h
nfs-pv6   200M       RWX            Retain           Bound    default/redis-data-redis-app-1                           1h
Initialize the Redis cluster
With all six Redis Pods up, we still need to initialize the cluster, using the common redis-trib tool.
Create a CentOS container
The cluster can only be initialized after all nodes are running, and baking that logic into the StatefulSet would be complex and inefficient. Credit to the original project author for the idea used here: run one extra container inside Kubernetes, dedicated to managing and controlling certain services in the cluster.
So we launch a CentOS container, install redis-trib in it, and initialize the Redis cluster from there:
kubectl run -i --tty centos --image=centos --restart=Never /bin/bash
Once inside the centos container, the original project sets up the base software environment as follows:
cat >> /etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
EOF
Initialize the cluster
First install redis-trib (the Redis cluster command-line tool):
yum -y install redis-trib.noarch bind-utils-9.9.4-72.el7.x86_64
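Aside: redis-trib was later folded into redis-cli, so on Redis 5 and newer the same initialization works without the extra tool — a hedged equivalent of the create command below:

# Redis 5+ equivalent of `redis-trib create --replicas 1 ...`
redis-cli --cluster create <ip0>:6379 <ip1>:6379 <ip2>:6379 \
  <ip3>:6379 <ip4>:6379 <ip5>:6379 --cluster-replicas 1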
Then create the cluster with one replica per master:
redis-trib create --replicas 1 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-3.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-4.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-5.redis-service.default.svc.cluster.local`:6379
# create: build a new cluster
# --replicas 1: give every master one replica, i.e. 3 masters + 3 replicas
# the remaining arguments are the addresses of the Redis instances
The command dig +short redis-app-0.redis-service.default.svc.cluster.local turns a Pod's DNS name into an IP, because redis-trib does not accept domain names when creating a cluster. When it finishes planning, redis-trib prints the proposed configuration for review; type yes if it looks right and redis-trib applies that configuration to the cluster.
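Because an empty dig result silently produces a bare :6379 argument and breaks the create call, it is worth confirming that every name resolves first. A small check loop using the same domains:

# each line must print a non-empty IP before running redis-trib create
for i in $(seq 0 5); do
  dig +short redis-app-$i.redis-service.default.svc.cluster.local
done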
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.42.2.17:6379
10.42.0.15:6379
10.42.1.13:6379
Adding replica 10.42.2.18:6379 to 10.42.2.17:6379
Adding replica 10.42.0.16:6379 to 10.42.0.15:6379
Adding replica 10.42.1.14:6379 to 10.42.1.13:6379
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379
   slots:5461-10922 (5462 slots) master
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379
   slots:10923-16383 (5461 slots) master
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
Can I set the above configuration? (type 'yes' to accept):
Type yes and cluster creation begins:
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 10.42.2.17:6379)
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379
   slots: (0 slots) slave
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379
   slots: (0 slots) slave
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379
   slots: (0 slots) slave
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The last line means every one of the 16384 slots is served by at least one master, so the cluster is operating normally.
The Redis cluster is now fully created. Connect to any Redis Pod to verify:
root@k8s-node1 ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:186
cluster_stats_messages_pong_sent:199
cluster_stats_messages_sent:385
cluster_stats_messages_ping_received:194
cluster_stats_messages_pong_received:186
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:385
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379 master - 0 1550555011000 3 connected 10923-16383
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 slave 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 0 1550555011512 6 connected
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550555010507 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550555011000 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550555011713 5 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550555010000 1 connected 0-5460
You can also inspect the Redis data mounted on the NFS server:
[root@rke ~]# tree /usr/local/kubernetes/redis/
/usr/local/kubernetes/redis/
├── pv1
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv2
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv3
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv4
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv5
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
└── pv6
    ├── appendonly.aof
    ├── dump.rdb
    └── nodes.conf

6 directories, 18 files
Create a Service for client access
The headless Service created earlier backs the StatefulSet, but it has no cluster IP and cannot serve outside clients. We therefore create a second Service dedicated to access and load balancing for the Redis cluster:
apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    appCluster: redis-cluster
This Service is named redis-access-service; it exposes port 6379 inside the Kubernetes cluster and load-balances across the Pods labeled app: redis and appCluster: redis-cluster.
After creating it, verify:
[root@rke ~]# kubectl get svc redis-access-service -o wide
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE   SELECTOR
redis-access-service   ClusterIP   10.43.40.62   <none>        6379/TCP   47m   app=redis,appCluster=redis-cluster
Any application inside the Kubernetes cluster can now reach Redis at 10.43.40.62:6379. For easier testing we could also give the Service a NodePort mapped onto the hosts, sketched below.
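For that NodePort variant, only the Service spec changes — a sketch, where 30379 is an arbitrary choice from the default NodePort range:

spec:
  type: NodePort
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
    nodePort: 30379    # arbitrary port within the default 30000-32767 range
  selector:
    app: redis
    appCluster: redis-cluster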
Test master/replica failover
With the Redis cluster running on Kubernetes, the main concern is whether its original high-availability mechanism still works. We can pick any master Pod to test the failover, e.g. redis-app-2:
[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          1h    10.42.1.13   192.168.1.20   <none>
Enter redis-app-2 and check its role:
[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> role
1) "master"
2) (integer) 9478
3) 1) 1) "10.42.1.14"
      2) "6379"
      3) "9478"
It is a master, and its slave is 10.42.1.14, i.e. redis-app-5.
Now delete redis-app-2 by hand:
[root@rke ~]# kubectl delete pods redis-app-2
pod "redis-app-2" deleted
[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          19s   10.42.1.15   192.168.1.20   <none>
The IP has changed to 10.42.1.15. Enter redis-app-2 again:
[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> ROLE
1) "slave"
2) "10.42.1.14"
3) (integer) 6379
4) "connected"
5) (integer) 9688
redis-app-2 has become a slave, replicating from its former replica 10.42.1.14, i.e. redis-app-5.
Scale the Redis cluster out
The cluster currently has six nodes, three masters and three replicas; we now add two more Pods to reach four masters and four replicas.
Add NFS shared directories
cat >> /etc/exports <<'EOF'
/usr/local/kubernetes/redis/pv7 192.168.0.0/16(rw,all_squash)
/usr/local/kubernetes/redis/pv8 192.168.0.0/16(rw,all_squash)
EOF
systemctl restart nfs rpcbind

[root@rke ~]# mkdir /usr/local/kubernetes/redis/pv{7..8}
[root@rke ~]# chmod 777 /usr/local/kubernetes/redis/*
Either update the existing PV manifest or write a new one; here we create a new file:
[root@rke redis]# cat pv_add.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv7
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv7"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv8
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv8"
Create and list the PVs:
[root@rke redis]# kubectl create -f pv_add.yml
persistentvolume/nfs-pv7 created
persistentvolume/nfs-pv8 created
[root@rke redis]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound       default/redis-data-redis-app-1                           2h
nfs-pv2   200M       RWX            Retain           Bound       default/redis-data-redis-app-2                           2h
nfs-pv3   200M       RWX            Retain           Bound       default/redis-data-redis-app-4                           2h
nfs-pv4   200M       RWX            Retain           Bound       default/redis-data-redis-app-5                           2h
nfs-pv5   200M       RWX            Retain           Bound       default/redis-data-redis-app-0                           2h
nfs-pv6   200M       RWX            Retain           Bound       default/redis-data-redis-app-3                           2h
nfs-pv7   200M       RWX            Retain           Available                                                            7s
nfs-pv8   200M       RWX            Retain           Available                                                            7s
Add the Redis Pods
Change the replicas: field in redis.yml from 6 to 8 and apply the update:
[root@rke redis]# kubectl apply -f redis.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
statefulset.apps/redis-app configured
[root@rke redis]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
redis-app-0   1/1     Running   0          2h
redis-app-1   1/1     Running   0          2h
redis-app-2   1/1     Running   0          19m
redis-app-3   1/1     Running   0          2h
redis-app-4   1/1     Running   0          2h
redis-app-5   1/1     Running   0          2h
redis-app-6   1/1     Running   0          57s
redis-app-7   1/1     Running   0          30s
Add the nodes to the cluster
[root@rke redis]# kubectl exec -it centos /bin/bash
[root@centos /]# redis-trib add-node \
  `dig +short redis-app-6.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
[root@centos /]# redis-trib add-node \
  `dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
add-node takes the new node's address first, followed by the address of any node already in the cluster. Added this way, both new nodes join as empty masters (visible in the output below); to attach one as a replica instead, see the sketch below.
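If the stated goal of four masters and four replicas is to be met, one of the two new nodes should join as a replica rather than a master; redis-trib supports this with the --slave flag. A sketch — the master-node-id placeholder must be copied from cluster nodes output:

# attach redis-app-7 as a replica of a chosen master (hypothetical ID)
redis-trib add-node --slave --master-id <master-node-id> \
  `dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379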
Check that the new nodes joined correctly:
[root@rke redis]# kubectl exec -it redis-app-0 bash
root@redis-app-0:/data# redis-cli
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550564776000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550564776000 7 connected 10923-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550564777051 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550564776851 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550564775000 5 connected
e4697a7ba460ae2979692116b95fbe1f2c8be018 10.42.0.20:6379@16379 master - 0 1550564776549 0 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550564776548 8 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550564775000 1 connected 0-5460
Reshard the hash slots
redis-trib reshard `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
## prompts for: how many slots to move,
## the ID of the master that should receive them,
## and the source nodes ("all" pulls slots from every existing master)
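reshard is interactive by default; for scripted use, the same tool accepts the answers as flags — a sketch with placeholder slot count and target ID:

# move 4096 slots from all current masters to the new master, no prompts
redis-trib reshard --from all --to <new-master-id> --slots 4096 --yes \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379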
Check the node layout again:
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550566162000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550566162909 7 connected 11377-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550566161600 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550566161902 2 connected 5917-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550566162506 5 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550566161600 8 connected 0-453 5461-5916 10923-11376
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550566162000 1