K8s Stateful Services: StatefulSet
https://blog.51cto.com/newfly/2140004
https://www.cnblogs.com/cocowool/p/kubernetes_statefulset.html
1 Stateful Services
RC, Deployment, and DaemonSet are all designed for stateless services: the IPs, names, and start/stop order of the Pods they manage are essentially random. So what is a StatefulSet? As the name suggests, it is the controller for stateful workloads and manages stateful services such as MySQL or MongoDB clusters. (Whether components like MySQL or a cache belong on Kubernetes at all is a separate question.)
2 StatefulSet
A StatefulSet is essentially a variant of Deployment; it reached GA in Kubernetes v1.9. It was designed to solve the problems of stateful services: the Pods it manages have fixed names and a fixed start/stop order. In a StatefulSet the Pod name acts as the network identity (hostname), and shared (persistent) storage must be used as well.
Where a Deployment is exposed through a regular Service, a StatefulSet is paired with a headless Service. A headless Service differs from a normal Service in that it has no Cluster IP; resolving its name returns the Endpoint list of all Pods backing that headless Service.
On top of the headless Service, the StatefulSet also creates a DNS domain name for every Pod replica it controls, in the following format:
$(podname).$(headless service name)
FQDN: $(podname).$(headless service name).$(namespace).svc.cluster.local
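For the example used later in this article (a StatefulSet named web behind a headless Service named nginx in the default namespace), the Pod DNS names would be:

web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local
web-2.nginx.default.svc.cluster.local

Resolving the headless Service name nginx itself returns the addresses of all three Pods.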
StatefulSet is suitable for applications with the following requirements:
- a stable network identity (hostname)
- persistent storage
- ordered deployment and scaling
- ordered termination and deletion
- ordered rolling updates
3 StatefulSet Example
3.1 Create a StorageClass
See: kubernetes(14): deploying an NFS-backed StorageClass on k8s for automatic PV provisioning (the nfs-client-provisioner and the StorageClass itself are created there).
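A minimal sketch of such a StorageClass, assuming the nfs-client-provisioner from the referenced article is already running and registered with the provisioner name fuseim.pri/ifs (the name that appears in the output below):

# storageclass.yaml (sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs          # must match the PROVISIONER_NAME of the nfs-client-provisioner
parameters:
  archiveOnDelete: "false"           # whether to archive the PV data when the claim is deleted

To verify that dynamic provisioning works, apply a small test PVC: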
# test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
[root@k8s-master storageclass]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          24s
[root@k8s-master storageclass]# kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim unchanged
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   13m
[root@k8s-master storageclass]#
3.2 Create the Nginx StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master statefulset]# kubectl create -f nginx_statefulSet.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-master statefulset]#
3.3 View the sts/pod/pv/pvc/svc
[root@k8s-master v1]# kubectl get sts
NAME   READY   AGE
web    3/3     13m
[root@k8s-master v1]#
[root@k8s-master statefulset]# kubectl get pvc
NAME         STATUS   VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb   1Mi        RWX            managed-nfs-storage   24m
www-web-0    Bound    default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8    1Gi        RWO            managed-nfs-storage   66s
www-web-1    Bound    default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad    1Gi        RWO            managed-nfs-storage   55s
www-web-2    Bound    default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc    1Gi        RWO            managed-nfs-storage   43s
[root@k8s-master statefulset]# kubectl get pv
NAME                                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            19m
default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8    1Gi        RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            68s
default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad    1Gi        RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            57s
default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc    1Gi        RWO            Delete           Bound    default/www-web-2    managed-nfs-storage            45s
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          21m
web-0                                     1/1     Running   0          2m45s
web-1                                     1/1     Running   0          2m34s
web-2                                     1/1     Running   0          2m22s
The Pods are created in order, from web-0 to web-2.
A StatefulSet creates its Pods in order, from 0 to N-1; termination happens in the reverse order. Before a StatefulSet is scaled up, all of the previous N Pods must already exist; before a Pod is terminated, all Pods with higher ordinals must already have been terminated.
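To watch this ordering live, you can follow the Pods with a label selector (a convenience command, not part of the original transcript):

kubectl get pods -w -l app=nginx

The repeated kubectl get pods snapshots below capture the same progression: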
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          18m
web-0                                     1/1     Running             0          18s
web-1                                     0/1     ContainerCreating   0          7s
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          18m
web-0                                     1/1     Running             0          24s
web-1                                     1/1     Running             0          13s
web-2                                     0/1     Pending             0          1s
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          19m
web-0                                     1/1     Running             0          33s
web-1                                     1/1     Running             0          22s
web-2                                     0/1     ContainerCreating   0          10s
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          19m
web-0                                     1/1     Running             0          52s
web-1                                     1/1     Running             0          41s
web-2                                     1/1     Running             0          29s
[root@k8s-master statefulset]#
PVCs created automatically from the volumeClaimTemplates; each PVC is named <volumeClaimTemplate name>-<Pod name>, e.g. www-web-0:
[root@k8s-master statefulset]# kubectl get pvc
NAME         STATUS   VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb   1Mi        RWX            managed-nfs-storage   28m
www-web-0    Bound    default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8    1Gi        RWO            managed-nfs-storage   5m15s
www-web-1    Bound    default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad    1Gi        RWO            managed-nfs-storage   5m4s
www-web-2    Bound    default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc    1Gi        RWO            managed-nfs-storage   4m52s
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# cd /data/volumes/
[root@k8s-master volumes]# ls
v1  v2  v3
[root@k8s-master volumes]# tree
.
├── v1
│   ├── default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb
│   ├── default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8
│   ├── default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad
│   └── default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc
├── v2
└── v3

7 directories, 0 files
3.4 Test StatefulSet persistent storage and self-healing
Write a file into web-0's volume directly on the NFS export:
[root@k8s-master v1]# ll
total 0
drwxrwxrwx 2 root root  6 Sep  5 14:35 default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb
drwxrwxrwx 2 root root 23 Sep  5 15:01 default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8
drwxrwxrwx 2 root root  6 Sep  5 14:53 default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad
drwxrwxrwx 2 root root  6 Sep  5 14:54 default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc
[root@k8s-master v1]# echo "<h1>test Server</h1>" > default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8/index.html
[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     1/1     Running   0          10m   10.254.1.104   k8s-node-1   <none>           <none>
web-1                                     1/1     Running   0          10m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running   0          10m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# curl 10.254.1.104
<h1>test Server</h1>
[root@k8s-master v1]#
Delete the Pod: after the StatefulSet self-heals, the Pod name stays the same; only its IP changes, and the data on its volume is still served.
[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     1/1     Running   0          10m   10.254.1.104   k8s-node-1   <none>           <none>
web-1                                     1/1     Running   0          10m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running   0          10m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# kubectl delete pod web-0
pod "web-0" deleted
[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS              RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     0/1     ContainerCreating   0          7s    <none>         k8s-node-2   <none>           <none>
web-1                                     1/1     Running             0          11m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running             0          10m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     1/1     Running   0          10s   10.254.2.78    k8s-node-2   <none>           <none>
web-1                                     1/1     Running   0          11m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running   0          11m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# curl 10.254.2.78
<h1>test Server</h1>
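As an additional check (not part of the original transcript), the recreated Pod keeps its stable hostname and in-cluster DNS name:

# Should print "web-0" even though the pod was rescheduled onto another node
kubectl exec web-0 -- cat /etc/hostname

# Inside the cluster the pod also stays reachable at the stable name
# web-0.nginx.default.svc.cluster.local (via the headless service "nginx")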
3.5 Scale up in order, scale down in reverse order
[root@k8s-master v1]# kubectl get sts
NAME   READY   AGE
web    3/3     13m
[root@k8s-master v1]# kubectl scale statefulset web --replicas=6
statefulset.apps/web scaled
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          33m
web-0                                     1/1     Running             0          3m30s
web-1                                     1/1     Running             0          14m
web-2                                     1/1     Running             0          14m
web-3                                     0/1     ContainerCreating   0          6s
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          33m
web-0                                     1/1     Running             0          4m
web-1                                     1/1     Running             0          15m
web-2                                     1/1     Running             0          14m
web-3                                     1/1     Running             0          36s
web-4                                     1/1     Running             0          23s
web-5                                     0/1     ContainerCreating   0          7s
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          34m
web-0                                     1/1     Running   0          4m23s
web-1                                     1/1     Running   0          15m
web-2                                     1/1     Running   0          15m
web-3                                     1/1     Running   0          59s
web-4                                     1/1     Running   0          46s
web-5                                     1/1     Running   0          30s
[root@k8s-master v1]#
[root@k8s-master v1]# kubectl scale statefulset web --replicas=2
statefulset.apps/web scaled
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running       0          34m
web-0                                     1/1     Running       0          4m35s
web-1                                     1/1     Running       0          15m
web-2                                     1/1     Running       0          15m
web-3                                     1/1     Running       0          71s
web-4                                     1/1     Running       0          58s
web-5                                     1/1     Terminating   0          42s
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running       0          34m
web-0                                     1/1     Running       0          4m49s
web-1                                     1/1     Running       0          15m
web-2                                     0/1     Terminating   0          15m
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          34m
web-0                                     1/1     Running   0          4m57s
web-1                                     1/1     Running   0          16m
[root@k8s-master v1]#
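Note that scaling down does not delete the PVCs created from the volumeClaimTemplates, so the data is kept and re-attached if the StatefulSet is scaled back up. This is easy to confirm (not shown in the transcript above):

# www-web-0 .. www-web-5 should all still be listed as Bound after scaling down to 2 replicas
kubectl get pvc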
3.6 Rolling update (performed in reverse order)
We are currently running the latest Nginx image; now switch to a 1.15 version. The exact YAML edit is not shown in the transcript, but a likely change is sketched below.
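Presumably the container image in nginx_statefulSet.yaml was changed to a 1.15 tag before re-applying, roughly like this (only the image field changes):

      containers:
      - name: nginx
        image: nginx:1.15      # was: image: nginx (i.e. latest)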
[root@k8s-master statefulset]# kubectl get pods -o wide | grep web
web-0   1/1     Running   0          40s   10.254.1.111   k8s-node-1   <none>   <none>
web-1   1/1     Running   0          56s   10.254.2.81    k8s-node-2   <none>   <none>
web-2   1/1     Running   0          76s   10.254.1.110   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# curl -s -I 10.254.1.111 | grep Server:
Server: nginx/1.17.3
[root@k8s-master statefulset]# curl 10.254.1.111
<h1>test Server</h1>
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# kubectl apply -f nginx_statefulSet.yaml
service/nginx unchanged
statefulset.apps/web configured
[root@k8s-master statefulset]# kubectl get pods -o wide | grep web
web-0   1/1     Running       0          2m22s   10.254.1.111   k8s-node-1   <none>   <none>
web-1   1/1     Running       0          2m38s   10.254.2.81    k8s-node-2   <none>   <none>
web-2   0/1     Terminating   0          2m58s   10.254.1.110   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# kubectl get pods -o wide | grep web
web-0   1/1     Running             0          2m34s   10.254.1.111   k8s-node-1   <none>   <none>
web-1   1/1     Running             0          2m50s   10.254.2.81    k8s-node-2   <none>   <none>
web-2   0/1     ContainerCreating   0          10s     <none>         k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# kubectl get pods -o wide | grep web
web-0   0/1     Terminating   0          2m53s   <none>         k8s-node-1   <none>   <none>
web-1   1/1     Running       0          13s     10.254.2.82    k8s-node-2   <none>   <none>
web-2   1/1     Running       0          29s     10.254.1.112   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# kubectl get pods -o wide | grep web
web-0   1/1     Running   0          23s   10.254.2.83    k8s-node-2   <none>   <none>
web-1   1/1     Running   0          37s   10.254.2.82    k8s-node-2   <none>   <none>
web-2   1/1     Running   0          53s   10.254.1.112   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# curl -s -I 10.254.2.83 | grep Server
Server: nginx/1.15.12
[root@k8s-master statefulset]# curl 10.254.2.83
<h1>test Server</h1>
[root@k8s-master statefulset]#
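The Pods are replaced one at a time in reverse ordinal order (web-2, then web-1, then web-0), which is the behaviour of the default updateStrategy, RollingUpdate. For reference (not part of the original manifest), the strategy can be made explicit in the StatefulSet spec, and a partition can be used to stage the rollout:

  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0       # only Pods with an ordinal >= partition are updated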