1. What is a StorageClass
Kubernetes provides a mechanism that can create PVs automatically: Dynamic Provisioning. At the core of this mechanism is the StorageClass API object. A StorageClass object defines two things:
1. The attributes of the PV, such as the storage type and the volume size.
2. The storage plugin needed to create that kind of PV.
With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass, call the storage plugin that the StorageClass declares, and create the required PV. In practice this is simple to use: you write a YAML file that fits your needs and apply it with kubectl create.
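As an illustrative sketch of those two parts (not the NFS setup used later in this document; kubernetes.io/aws-ebs is one of the built-in provisioners and "fast" is an arbitrary example name):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                         # illustrative name
provisioner: kubernetes.io/aws-ebs   # part 2: the storage plugin that will create the PVs
parameters:
  type: gp2                          # part 1: PV attributes (volume type, performance class, etc.)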
2. Why StorageClass is needed
In a large Kubernetes cluster there may be thousands of PVCs, which means the operations team has to create all the matching PVs in advance. Moreover, as projects evolve, new PVCs keep being submitted, so ops must keep adding new PVs that satisfy them; otherwise new Pods will fail to start because their PVCs cannot bind to a PV. And the storage space obtained through a PVC alone may well not cover all of an application's storage-device requirements.
Different applications may also have different storage-performance requirements, such as read/write speed and concurrency. To solve this, Kubernetes introduces another resource object: the StorageClass. Through StorageClass definitions, an administrator can classify storage resources into types, such as fast storage and slow storage. From the StorageClass description, users can see the concrete characteristics of each kind of storage at a glance and request the storage that fits their application.
3. How StorageClass works and the deployment flow
To use a StorageClass, we have to install the matching automatic provisioning program. Since the storage backend here is NFS, we need the nfs-client provisioning program, also called the Provisioner. It uses the NFS server we have already configured to create persistent volumes automatically, i.e. it creates the PVs for us.
1. Automatically created PVs live in the NFS server's shared data directory, in a directory named ${namespace}-${pvcName}-${pvName}.
2. When such a PV is reclaimed, it is kept on the NFS server under the name archived-${namespace}-${pvcName}-${pvName}.
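For example, using the directory created in the test later in this document (the archived form assumes archiving is enabled; see section 6 on archiveOnDelete):

# while the PV exists:
default-test-claim-pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92
# after the PV is reclaimed, with archiving enabled:
archived-default-test-claim-pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92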
1) The detailed workflow is shown in the diagram below:
2) Setting up StorageClass + NFS roughly takes the following steps:
1. Create a working NFS server.
2. Create a ServiceAccount, which controls the permissions the NFS provisioner runs with in the Kubernetes cluster.
3. Create the StorageClass, which handles PVCs, calls the NFS provisioner to do the provisioning work, and binds the resulting PV to the PVC.
4. Create the NFS provisioner, which does two things: it creates mount points (volumes) under the NFS export, and it creates PVs and associates them with those NFS mount points.
4. Creating the StorageClass
4.1 Create the NFS share
This step is straightforward and is not covered in detail here; any NFS setup guide will do (a minimal sketch follows below). The NFS server and shared directory used in the current environment:
IP: 10.3.104.51
Export PATH: /nfsdata/volumes
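A minimal setup sketch for a CentOS-style host (package and service names vary by distribution, and the export options here are illustrative, not hardened):

# on the NFS server (10.3.104.51)
yum install -y nfs-utils
mkdir -p /nfsdata/volumes
echo '/nfsdata/volumes *(rw,sync)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -rav

# verify the export from any client
showmount -e 10.3.104.51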
4.2 Configure the ServiceAccount and related permissions with the following manifests
1) rbac.yaml (the namespace is the only thing you may need to change, depending on your environment):
[root@k8s-master storageclass]# pwd
/root/k8s_practice/storageclass
[root@k8s-master storageclass]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default        # set the namespace to match your environment; same for the resources below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
2) Apply the RBAC manifests
[root@k8s-master storageclass]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl get role,rolebinding
NAME                                                                     CREATED AT
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner     2021-09-14T03:15:03Z

NAME                                                                            ROLE                                          AGE
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner     Role/leader-locking-nfs-client-provisioner    5m36s
4.3 Create the StorageClass for the NFS resources
1) nfs-StorageClass.yaml
[root@k8s-master storageclass]# pwd
/root/k8s_practice/storageclass
[root@k8s-master storageclass]# cat nfs-StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: wxc-nfs-storage    # must match the PROVISIONER_NAME env var in the provisioner Deployment
parameters:
  archiveOnDelete: "false"
2) Apply nfs-StorageClass.yaml
[root@k8s-master storageclass]# kubectl apply -f nfs-StorageClass.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
[root@k8s-master storageclass]# kubectl get sc
NAME                  PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   wxc-nfs-storage   Delete          Immediate           false                  8s
4.4 Create the NFS provisioner
1) nfs-provisioner.yaml
[root@k8s-master storageclass]# pwd
/root/k8s_practice/storageclass
[root@k8s-master storageclass]# cat nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default            # must match the namespace used in the RBAC manifests
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: wxc-nfs-storage    # provisioner name; must match the provisioner field in nfs-StorageClass.yaml
            - name: NFS_SERVER
              value: 10.3.104.51        # NFS server IP address
            - name: NFS_PATH
              value: /nfsdata/volumes   # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.3.104.51         # NFS server IP address
            path: /nfsdata/volumes      # NFS export path
2) Apply nfs-provisioner.yaml
[root@k8s-master storageclass]# kubectl apply -f nfs-provisioner.yaml
deployment.apps/nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           11s
[root@k8s-master storageclass]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-677fc9c97c-9cj92   1/1     Running   0          18s
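If a PVC created in the next step gets stuck in Pending, the provisioner's log is a good first place to look (a troubleshooting sketch):

# the provisioner logs each provisioning request and any NFS mount errors
kubectl logs deploy/nfs-client-provisioner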
5. Create test Pods to verify the deployment
5.1 Create a Pod and a PVC
1) test-claim.yaml
[root@k8s-master storageclass]# pwd
/root/k8s_practice/storageclass
[root@k8s-master storageclass]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # must match metadata.name in nfs-StorageClass.yaml; ties the PVC to the StorageClass so a PV is created dynamically
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
[root@k8s-master storageclass]# kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
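Note: the volume.beta.kubernetes.io/storage-class annotation used above is the legacy form; on current clusters the same claim is normally written with spec.storageClassName. An equivalent sketch:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # replaces the legacy annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi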
2) Make sure the PVC's status is Bound (a PV is created and bound automatically)
[root@k8s-master storageclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92   1Mi        RWX            managed-nfs-storage   8s
[root@k8s-master storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            18s
3) Create a test Pod and check that the volume mounts correctly
test-pod.yaml
[root@k8s-master storageclass]# pwd
/root/k8s_practice/storageclass
[root@k8s-master storageclass]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"   # create a SUCCESS file, then exit
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim   # must match the PVC name
[root@k8s-master storageclass]# kubectl apply -f test-pod.yaml
pod/test-pod created
4) Check the result
[root@k8s-master storageclass]# ll /nfsdata/volumes/default-test-claim-pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92/   # the directory is named according to ${namespace}-${pvcName}-${pvName}
total 0
-rw-r--r-- 1 nfsnobody nfsnobody 0 Sep 14 11:34 SUCCESS   # the SUCCESS file is there, which proves the test above succeeded
5.2 StatefulSet + volumeClaimTemplates: automatic PV creation
1) Create the headless Service and the StatefulSet
nginx-statefulset.yaml
[root@k8s-master storageclass]# pwd
/root/k8s_practice/storageclass
[root@k8s-master storageclass]# cat nginx-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None          # None makes this a headless Service
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-headless"   # must match the Service name
  replicas: 2                     # two replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: ikubernetes/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # the name of the StorageClass we created earlier
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
2) Check the results
On a cluster node:
[root@k8s-master storageclass]# kubectl get pod -l app=nginx
NAME    READY   STATUS              RESTARTS   AGE
web-0   1/1     Running             0          19s
web-1   0/1     ContainerCreating   0          12s
[root@k8s-master storageclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92   1Mi        RWX            managed-nfs-storage   148m
www-web-0    Bound    pvc-5c8e5883-3d67-43f7-b63c-b7e76f02fb2b   1Gi        RWO            managed-nfs-storage   34s
www-web-1    Bound    pvc-67780cda-dd3b-4f3c-9802-0dedc45d57f3   1Gi        RWO            managed-nfs-storage   27s
[root@k8s-master storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-5c8e5883-3d67-43f7-b63c-b7e76f02fb2b   1Gi        RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            38s
pvc-67780cda-dd3b-4f3c-9802-0dedc45d57f3   1Gi        RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            31s
pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            148m
3) On the NFS server:
[root@k8s-master storageclass]# cd /nfsdata/volumes/
[root@k8s-master volumes]# ll
total 12
drwxrwxrwx 2 nfsnobody nfsnobody 4096 Sep 14 11:34 default-test-claim-pvc-f2aa9a85-dcff-49d0-a0a8-549e2d8c9f92
drwxrwxrwx 2 nfsnobody nfsnobody 4096 Sep 14 13:56 default-www-web-0-pvc-5c8e5883-3d67-43f7-b63c-b7e76f02fb2b
drwxrwxrwx 2 nfsnobody nfsnobody 4096 Sep 14 13:56 default-www-web-1-pvc-67780cda-dd3b-4f3c-9802-0dedc45d57f3
[root@k8s-master volumes]# echo "web-00" > default-www-web-0-pvc-5c8e5883-3d67-43f7-b63c-b7e76f02fb2b/index.html
[root@k8s-master volumes]# echo "web-01" > default-www-web-1-pvc-67780cda-dd3b-4f3c-9802-0dedc45d57f3/index.html
4) On any cluster node:
[root@k8s-master volumes]# kubectl get pod -o wide
NAME                                      READY   STATUS      RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
nfs-client-provisioner-677fc9c97c-9cj92   1/1     Running     0          156m    10.244.1.18   k8s-node1   <none>           <none>
recycler-for-pv-nfs2                      0/1     Completed   0          4d22h   10.244.2.11   k8s-node2   <none>           <none>
recycler-for-pv-nfs6                      0/1     Completed   0          4d22h   10.244.1.16   k8s-node1   <none>           <none>
test-pod                                  0/1     Completed   0          148m    10.244.2.13   k8s-node2   <none>           <none>
web-0                                     1/1     Running     0          6m14s   10.244.2.14   k8s-node2   <none>           <none>
web-1                                     1/1     Running     0          6m7s    10.244.1.19   k8s-node1   <none>           <none>
[root@k8s-master volumes]# kubectl get svc -o wide
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes       ClusterIP   10.96.0.1    <none>        443/TCP   7d2h    <none>
nginx-headless   ClusterIP   None         <none>        80/TCP    6m25s   app=nginx
[root@k8s-master volumes]# kubectl exec -it nfs-client-provisioner-677fc9c97c-9cj92 sh   # exec into any Pod in the cluster and resolve the nginx-headless Service
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # nslookup nginx-headless
nslookup: can't resolve '(null)': Name does not resolve

Name:      nginx-headless
Address 1: 10.244.1.19 web-0.nginx-headless.default.svc.cluster.local   # two addresses are returned, one per replica
Address 2: 10.244.2.14 web-1.nginx-headless.default.svc.cluster.local
[root@k8s-master volumes]# curl 10.244.2.14
web-00
[root@k8s-master volumes]# curl 10.244.1.19
web-01
For a StatefulSet, you can scale the number of Pod replicas up and down and observe how the PVs/PVCs behave, as sketched below.
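For example (a sketch; web is the StatefulSet created above):

# scale up: a new PVC www-web-2 is created and bound to a freshly provisioned PV
kubectl scale statefulset web --replicas=3
kubectl get pvc

# scale back down: the Pod web-2 is removed, but its PVC/PV (and the data) are kept
kubectl scale statefulset web --replicas=2
kubectl get pvc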
6. How the StorageClass reclaim settings affect data
6.1 Configuration 1
archiveOnDelete: "false"
reclaimPolicy: Delete       # not set explicitly in the manifest; Delete is the default
Test results:
1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is removed.
6.2 Configuration 2
archiveOnDelete: "false"
reclaimPolicy: Retain
Test results:
1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is preserved.
4. After the StorageClass is recreated, a newly created PVC binds to a new PV; the old data can be restored by copying it into the new PV.
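As an alternative to copying the data, a Released PV can be made bindable again by clearing its claimRef, so it returns to the Available state (a common manual technique, not specific to this provisioner; <pv-name> is a placeholder for the name shown by kubectl get pv):

kubectl patch pv <pv-name> -p '{"spec":{"claimRef": null}}'
kubectl get pv    # the PV should now show STATUS Available and can be bound by a new PVC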
6.3 Configuration 3
archiveOnDelete: "true"
reclaimPolicy: Retain
1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is preserved.
4. After the StorageClass is recreated, a newly created PVC binds to a new PV; the old data can be restored by copying it into the new PV.
6.4 Configuration 4
archiveOnDelete: "true"
reclaimPolicy: Delete
Test results:
1. After a Pod is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
2. After the StorageClass is deleted and recreated, the data still exists; the old Pod's name and data remain in place for the new Pod to use.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS server is preserved.
4. After the StorageClass is recreated, a newly created PVC binds to a new PV; the old data can be restored by copying it into the new PV.
Summary: in all configurations except the first, the data is preserved after the PV/PVC is deleted.
7. FAQ
7.1 How to set the default StorageClass
1) Update it with kubectl patch:
# check the current StorageClasses
[root@k8s-master storageclass]# kubectl get sc
NAME                  PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   wxc-nfs-storage   Delete          Immediate           false                  171m

# make managed-nfs-storage the default storage backend
[root@k8s-master storageclass]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched

# check again; note the (default) marker
[root@k8s-master storageclass]# kubectl get sc
NAME                            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   wxc-nfs-storage   Delete          Immediate           false                  171m

# unset the default storage backend
[root@k8s-master storageclass]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched
[root@k8s-master storageclass]# kubectl get sc
NAME                  PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   wxc-nfs-storage   Delete          Immediate           false                  172m
2) Update it by editing the YAML
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    "storageclass.kubernetes.io/is-default-class": "true"   # add this annotation
provisioner: wxc-nfs-storage   # or choose another name; must match the Deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
7.2 How to use the default StorageClass
If the cluster has a default StorageClass that meets your needs, all you have to do is create the PersistentVolumeClaim (PVC); default dynamic provisioning handles the rest, and you don't even need to specify storageClassName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: testns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
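To verify which class the claim was given (using the example names above):

# the STORAGECLASS column should show the cluster's default class
kubectl get pvc mypvc -n testns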
7.3 Changing the default reclaim policy (the default is Delete)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: wxc-nfs-storage   # or choose another name; must match the Deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true"      # when "true", the provisioner archives the data directory (renames it with an archived- prefix) instead of deleting it
reclaimPolicy: Retain          # note: only NFS and hostPath volumes support the Recycle reclaim policy
7.4 Can the default StorageClass be deleted or disabled?
The default StorageClass cannot simply be deleted: it is installed as a cluster add-on, and if it is deleted it will be reinstalled. You can, however, disable the default behavior by removing the annotation storageclass.kubernetes.io/is-default-class, or by setting it to false. If no StorageClass object carries the default annotation, PersistentVolumeClaim objects that do not specify a StorageClass will not trigger dynamic provisioning; instead, they fall back to binding to an available PersistentVolume (PV).
7.5 What happens when a PersistentVolumeClaim (PVC) is deleted
If a volume was dynamically provisioned, the default reclaim policy is Delete. By default, then, deleting the PVC also deletes the underlying PV and the storage behind it. If you need to keep the data stored on the volume, change the reclaim policy from Delete to Retain after the PV has been provisioned.
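On an existing PV this is a one-line patch (<pv-name> is a placeholder for the name shown by kubectl get pv):

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'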
Reference: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client