k8s Persistent Volumes: Dynamically Creating PV & PVC with NFS


1. Environment

[root@k8s-node01 dynamic-pv]# kubectl get node -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master   Ready    master   91d   v1.18.8   192.168.1.230   <none>        CentOS Linux 7 (Core)   5.8.2-1.el7.elrepo.x86_64   docker://19.3.12
k8s-node01   Ready    <none>   91d   v1.18.8   192.168.1.231   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12
k8s-node02   Ready    <none>   91d   v1.18.8   192.168.1.232   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12

2. Create the StorageClass (SC)

[root@k8s-node01 dynamic-pv]# more bxy-nfs-sc.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the PROVISIONER_NAME env var in the deployment below
reclaimPolicy: Retain         # the fuseim.pri/ifs provisioner only supports Retain or Delete;
                              # even with Delete, a backup copy of the volume directory is kept
Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f bxy-nfs-sc.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
[root@k8s-node01 dynamic-pv]# kubectl get -f bxy-nfs-sc.yaml
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Retain          Immediate           false                  4s

volumeBindingMode is not specified here, so it defaults to Immediate: the PV is provisioned and bound as soon as the PVC is created.
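If you would rather delay provisioning until a pod actually schedules against the claim, the mode can be set explicitly. A minimal sketch, assuming the same provisioner (the class name here is hypothetical, purely for illustration):

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage-wffc        # hypothetical name, not part of this setup
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer # provision and bind only once a consuming pod is scheduled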

3. Create the RBAC objects

[root@k8s-node01 dynamic-pv]# more rbac.yaml
# The provisioner creates PVs through kube-apiserver, so it needs these permissions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" is omitted here because ClusterRoles are not namespaced.
  name: nfs-client-provisioner
rules:
  - apiGroups: [""]                     # "" is the core API group
    resources: ["persistentvolumes"]    # resource type
    verbs: ["get", "list", "watch", "create", "delete"]   # permissions
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "watch", "create", "update", "patch", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:                                # roleRef decides what the binding actually grants
  kind: ClusterRole                     # kind can be Role or ClusterRole
  name: nfs-client-provisioner          # name of the Role or ClusterRole being referenced
  apiGroup: rbac.authorization.k8s.io
Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
[root@k8s-node01 dynamic-pv]# kubectl get -f rbac.yaml
NAME                                    SECRETS   AGE
serviceaccount/nfs-client-provisioner   1         10s
NAME                                                           CREATED AT
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner   2020-11-19T07:29:36Z
NAME                                                                        ROLE                                 AGE
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner    ClusterRole/nfs-client-provisioner   10s
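To sanity-check that the binding took effect, kubectl can impersonate the service account. This check is my addition, not from the original session; it should print yes if the RBAC objects are correct:

[root@k8s-node01 dynamic-pv]# kubectl auth can-i create persistentvolumes \
    --as=system:serviceaccount:default:nfs-client-provisioner
yes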

4. Create the NFS provisioner Deployment

[root@k8s-node01 dynamic-pv]# more bxy-nfs-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate       # restart strategy: Recreate kills all running pods before starting new ones
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      # imagePullSecrets:
      # - name: registry-pull-secret
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          #image: lizhenliang/nfs-client-provisioner:v2.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME   # must match the provisioner field of the StorageClass
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.231     # real NFS server IP
            - name: NFS_PATH
              value: /bxy/nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.231      # real NFS server IP
            path: /bxy/nfsdata

 

Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f bxy-nfs-deploy.yaml
deployment.apps/nfs-client-provisioner created
[root@k8s-node01 dynamic-pv]# kubectl get -f bxy-nfs-deploy.yaml
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           5s
[root@k8s-node01 dynamic-pv]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
bxy-local-nginx-deploy-59d9f57449-2lbrt   1/1     Running   0          78m
bxy-local-nginx-deploy-59d9f57449-xbsmj   1/1     Running   0          78m
nfs-client-provisioner-6ffd9d54c5-9htxz   1/1     Running   0          13s
tomcat-cb9688cd5-xnwqb                    1/1     Running   17         90d
NFS pod details
[root@k8s-node01 dynamic-pv]# kubectl describe pods nfs-client-provisioner-6ffd9d54c5-9htxz
Name:         nfs-client-provisioner-6ffd9d54c5-9htxz
Namespace:    default
Priority:     0
Node:         k8s-node01/192.168.1.231
Start Time:   Thu, 19 Nov 2020 15:39:21 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=6ffd9d54c5
Annotations:  <none>
Status:       Running
IP:           10.244.1.85
IPs:
  IP:  10.244.1.85
.......
.......
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        192.168.1.231
      NFS_PATH:          /bxy/nfsdata
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-ct675 (ro)
......
......
Events:
  Type    Reason     Age    From                 Message
  ----    ------     ----   ----                 -------
  Normal  Scheduled  2m19s  default-scheduler    Successfully assigned default/nfs-client-provisioner-6ffd9d54c5-9htxz to k8s-node01
  Normal  Pulled     2m18s  kubelet, k8s-node01  Container image "quay.io/external_storage/nfs-client-provisioner:latest" already present on machine
  Normal  Created    2m18s  kubelet, k8s-node01  Created container nfs-client-provisioner
  Normal  Started    2m18s  kubelet, k8s-node01  Started container nfs-client-provisioner
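If claims later hang in Pending, the provisioner's log is the first place to look. Something along these lines should do it (my addition; output omitted since it varies):

# follow the provisioner's log; each successful provision logs the PVC and the PV it created
kubectl logs -f deployment/nfs-client-provisioner
# or target the pod directly
kubectl logs nfs-client-provisioner-6ffd9d54c5-9htxz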

5. Create the Nginx (NG) instances

[root@k8s-node01 dynamic-pv]# more bxy-nfs-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet          # StatefulSet: stateful workload type
metadata:
  name: web
spec:
  serviceName: "nginx"     # declares which headless Service it belongs to, by the Service's metadata.name
  # Once started, pods can reach each other via:
  #   statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local
  # where serviceName is spec.serviceName; the Service and the StatefulSet
  # must be in the same namespace.
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:    # think of this as a PVC template
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "managed-nfs-storage"
        resources:
          requests:
            storage: 1Gi
Apply & status
[root@k8s-node01 dynamic-pv]# kubectl apply -f bxy-nfs-nginx.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-node01 dynamic-pv]# kubectl get -f bxy-nfs-nginx.yaml
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   None         <none>        80/TCP    6s
NAME                   READY   AGE
statefulset.apps/web   1/2     6s

Once Nginx is up, the PV & PVC are created and bound automatically.
PV & PVC status:
[root@k8s-node01 dynamic-pv]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
persistentvolume/bxy-local-pv-volume                        5Gi        RWO            Delete           Bound    default/bxy-local-pvc-volume   bxy-local-sc-volume            109m
persistentvolume/pvc-0517358e-17ca-4dce-9fff-a0e15494c1a1   1Gi        RWO            Retain           Bound    default/www-web-1              managed-nfs-storage            26s
persistentvolume/pvc-a421c94c-fb55-4705-af1a-37dd07537f58   1Gi        RWO            Retain           Bound    default/www-web-0              managed-nfs-storage            31s
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/bxy-local-pvc-volume   Bound    bxy-local-pv-volume                        5Gi        RWO            bxy-local-sc-volume   103m
persistentvolumeclaim/www-web-0              Bound    pvc-a421c94c-fb55-4705-af1a-37dd07537f58   1Gi        RWO            managed-nfs-storage   31s
persistentvolumeclaim/www-web-1              Bound    pvc-0517358e-17ca-4dce-9fff-a0e15494c1a1   1Gi        RWO            managed-nfs-storage   26s

As you can see, a PV and a PVC were created dynamically for each replica and bound to each other.
The Retain reclaim policy comes straight from the StorageClass.
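As a hedged sketch of what Retain buys you (not run in the original article): deleting a claim leaves its volume behind instead of wiping the data.

# illustrative only; the delete blocks until no pod is still using the claim
kubectl delete pvc www-web-1
kubectl get pv     # the matching PV switches to STATUS Released; the data stays on the NFS share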
Pod status & access test
[root@k8s-node01 dynamic-pv]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
bxy-local-nginx-deploy-59d9f57449-2lbrt   1/1     Running   0          91m
bxy-local-nginx-deploy-59d9f57449-xbsmj   1/1     Running   0          91m
nfs-client-provisioner-6ffd9d54c5-9htxz   1/1     Running   0          13m
tomcat-cb9688cd5-xnwqb                    1/1     Running   17         90d
web-0                                     1/1     Running   0          5m31s
web-1                                     1/1     Running   0          5m26s
[root@k8s-node01 dynamic-pv]# kubectl exec -it web-0 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@web-0:/# curl 127.0.0.1
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.4</center>
</body>
</html>


The 403 is actually a good sign: it means the mount succeeded, because the mounted directory contains no files yet. (Had the mount failed, Nginx would serve its built-in default welcome page instead.)
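You can also confirm the mount directly from inside the pod; a quick check along these lines (my addition, not part of the original session) should report the NFS export backing the web root:

# df on the web root should list 192.168.1.231:/bxy/nfsdata/... as the filesystem
kubectl exec web-0 -- df -h /usr/share/nginx/html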

6. Add a test file

Check the mounted directory: two new directories with 777 permissions were created, matching the two Nginx pods.
[root@k8s-node01 dynamic-pv]# cd /bxy/nfsdata/
[root@k8s-node01 nfsdata]# ls
default-www-web-0-pvc-a421c94c-fb55-4705-af1a-37dd07537f58  default-www-web-1-pvc-0517358e-17ca-4dce-9fff-a0e15494c1a1
Add a file under the web-0 directory and test the access again:
[root@k8s-node01 nfsdata]# ll default-www-web-0-pvc-a421c94c-fb55-4705-af1a-37dd07537f58/
total 0
[root@k8s-node01 nfsdata]# echo 'k8s nfs dynamic mount test !!!' > default-www-web-0-pvc-a421c94c-fb55-4705-af1a-37dd07537f58/index.html
Since the Service has no cluster IP, enter the web-0 container again and curl Nginx:
[root@k8s-node01 nfsdata]# kubectl exec -it web-0 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@web-0:/# curl 127.0.0.1
k8s nfs dynamic mount test !!!
Perfect: the dynamic mount works.
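Relatedly, the stable per-pod DNS names noted in the manifest comments can be exercised the same way; for example (hypothetical session, and web-1 should answer with a 403 since its volume is still empty):

# pod-to-pod access: <statefulSetName>-<ordinal>.<serviceName>.<namespace>.svc.cluster.local
kubectl exec -it web-0 -- curl web-1.nginx.default.svc.cluster.local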

Aside:
kind: StatefulSet is a stateful workload type: its pods start in order (0, 1, 2, 3) and are deleted in the reverse order (3, 2, 1, 0).
Let's scale the Nginx instances from 2 up to 5 and watch what happens.
[root@k8s-node01 nfsdata]# kubectl get statefulset
NAME   READY   AGE
web    2/2     21m
[root@k8s-node01 nfsdata]# kubectl edit statefulset web
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
---
spec:
  podManagementPolicy: OrderedReady
  replicas: 2            # change this to 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80

After changing spec.replicas from 2 to 5 and saving, the scale-up is triggered automatically.
Partial output:
[root@k8s-master ~]# kubectl get pod -w -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          21m
web-1   1/1     Running   0          21m
web-2   0/1     Pending   0          0s
web-2   0/1     Pending   0          0s
web-2   0/1     Pending   0          2s
web-2   0/1     ContainerCreating   0          2s
web-2   1/1     Running             0          5s
web-3   0/1     Pending             0          0s
web-3   0/1     Pending             0          0s
web-3   0/1     Pending             0          2s
web-3   0/1     ContainerCreating   0          2s
web-3   1/1     Running             0          11s
web-4   0/1     Pending             0          0s
web-4   0/1     Pending             0          0s
web-4   0/1     Pending             0          2s
web-4   0/1     ContainerCreating   0          2s
web-4   1/1     Running             0          5s
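Incidentally, the same scale-up can be done non-interactively; this one-liner is equivalent to the edit above (my shorthand, not what the article used):

kubectl scale statefulset web --replicas=5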
You can see that web-0 and web-1 are the pods created earlier, while web-2 through web-4 are the new ones.
web-3 only starts once web-2 is up and running, and web-4 in turn waits for web-3. Deletion works the same way in reverse order, which I won't demonstrate here.
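One caveat worth knowing: scaling back down does not remove the PVCs created from volumeClaimTemplates, so www-web-2 through www-web-4 stay Bound and are reattached on the next scale-up. A quick way to see it (hypothetical commands, not from the article):

kubectl scale statefulset web --replicas=2
kubectl get pvc     # www-web-0 through www-web-4 are all still listed as Bound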

 

