Using Alibaba Cloud NAS for Dynamic Persistent Storage in Kubernetes


I. Introduction to Dynamic Provisioning

The core of the Dynamic Provisioning mechanism is the StorageClass API object.
A StorageClass declares a storage plugin (provisioner), which is used to create PVs automatically.
Storage plugins with dynamic-provisioning support in Kubernetes: https://kubernetes.io/docs/concepts/storage/storage-classes/
 
Flow diagram: (figure not reproduced here)
How it works:
The Volume Controller is the controller dedicated to persistent storage. One of its sub-control loops, the PersistentVolumeController, is responsible for binding PVs to PVCs. It watches PVC objects on kube-apiserver: when a new PVC is created, it scans all available PVs and binds one if a match exists; if none matches, it creates a new PV from the StorageClass configuration and the PVC's spec, then binds it.
Characteristics:
Dynamic volume provisioning is a Kubernetes-native feature that allows storage volumes to be created on demand. Without it, a cluster administrator first has to have volumes created outside the cluster by the storage or cloud provider, and only after that create the corresponding PersistentVolume objects before they can be used in Kubernetes. With dynamic provisioning, the administrator no longer pre-creates volumes; they are created as users request them.
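For contrast, the pre-provisioning workflow means hand-writing a PV like the sketch below before any PVC can bind; the server address and path here are placeholders, not values from this article:

```yaml
# A statically pre-created NFS PV (server/path are illustrative placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: example.cn-beijing.nas.aliyuncs.com
    path: /pods-volumes
```

Dynamic provisioning makes this manual step unnecessary: the StorageClass described below generates such PV objects automatically.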

II. Deployment Steps

1. Create the provisioner for the NFS service
# vim nfs-client-provisioner-deploy.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: "*-*.cn-beijing.nas.aliyuncs.com"
            - name: NFS_PATH
              value: /pods-volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: "*-*.cn-beijing.nas.aliyuncs.com"
            path: /pods-volumes

# kubectl apply -f  nfs-client-provisioner-deploy.yaml
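Before moving on, it is worth confirming that the provisioner actually started; a quick check, with the label and deployment name taken from the manifest above:

```shell
# Verify the provisioner pod is Running, then watch its logs for errors
kubectl get pods -l app=nfs-client-provisioner
kubectl logs deployment/nfs-client-provisioner
```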

 

2. Create the ServiceAccount and RBAC authorization

# vim nfs-client-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

# kubectl apply -f nfs-client-rbac.yaml

 

3. Create the StorageClass

storageClassName: when a PVC's requested size and access modes match those of an existing PV, storageClassName is used as an additional criterion for binding. It is commonly used when a PVC needs to bind to a particular PV.
For example: if several PVs have been created with identical size and access modes, and neither the PVs nor the PVC set storageClassName, the PVC will match one of them arbitrarily based on size and access modes alone. With storageClassName set, all three conditions must match.
A PVC can also be pinned to a specific PV by other means, for example with labels.
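As a sketch of the label approach, a PVC can use spec.selector to restrict which PVs it may bind to; the label key and value below are made up for illustration:

```yaml
# PVC that only binds to PVs carrying the (hypothetical) label disk: ssd-01
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pinned-claim
spec:
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      disk: ssd-01
  resources:
    requests:
      storage: 2Gi
```

Note that a claim with spec.selector set is not dynamically provisioned; it only matches pre-existing PVs.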

# vim nfs-storage-class.yaml

apiVersion: storage.k8s.io/v1
#allowVolumeExpansion: true   # would enable volume expansion, but the NFS type does not support it
kind: StorageClass
metadata:
  name: yiruike-nfs-storage
mountOptions:
- vers=4
- minorversion=0
- noresvport
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"

# kubectl apply -f nfs-storage-class.yaml

Set the yiruike-nfs-storage StorageClass as the cluster's default storage class:

# kubectl patch storageclass yiruike-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

[root@master-92 pv-pvc]# kubectl get sc
NAME                            PROVISIONER      AGE
yiruike-nfs-storage (default)   fuseim.pri/ifs   48s

III. Verify the Deployment

1. Create a test PVC

# vim test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "yiruike-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

# kubectl apply -f test-claim.yaml

# kubectl get pv,pvc

NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/pvc-*   2Gi        RWX            Delete           Bound    default/test-claim   yiruike-nfs-storage            1s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound    pvc-2fc935df-62f2-11ea-9e5a-00163e0a8e3e   2Gi        RWX            yiruike-nfs-storage   5s
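To see how the claim was satisfied, the provisioning events and the generated PV can be inspected; the PV name is derived from the PVC's UID, so it will differ per cluster:

```shell
# The events show the dynamic-provisioning steps; the PV carries the StorageClass name
kubectl describe pvc test-claim
kubectl get pv
```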

 

2. Create a test Pod

Start a pod that touches a SUCCESS file on the PV bound to test-claim:

# vim test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

# kubectl apply -f test-pod.yaml

# df -Th | grep aliyun

*-*.cn-beijing.nas.aliyuncs.com:/pods-volumes nfs4  10P  0  10P  0%  /data/k8s/k8s/kubelet/pods/77a4ad8b-62e1-11ea-89e3-00163e301bb2/volumes/kubernetes.io~nfs/nfs-client-root

# ls  /data/k8s/k8s/kubelet/pods/77a4ad8b-62e1-11ea-89e3-00163e301bb2/volumes/kubernetes.io~nfs/nfs-client-root

default-test-claim-pvc-0b1ce53d-62f4-11ea-9e5a-00163e0a8e3e

# ls  /data/k8s/k8s/kubelet/pods/77a4ad8b-62e1-11ea-89e3-00163e301bb2/volumes/kubernetes.io~nfs/nfs-client-root/default-test-claim-pvc-0b1ce53d-62f4-11ea-9e5a-00163e0a8e3e

SUCCESS

This shows the deployment is working and NFS shared volumes can be allocated dynamically.

 

3. Verify data persistence

Now delete the test-pod pod to check whether the file in the data volume disappears.

# kubectl delete pod/test-pod

Inspection shows the data is not lost after the pod is deleted, so dynamic data persistence is achieved.
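One way to confirm this from inside the cluster, rather than on the NFS mount, is to run a second throwaway pod against the same claim; the pod name below is made up for this check:

```yaml
# Pod that exits 0 only if the SUCCESS file written by test-pod still exists
kind: Pod
apiVersion: v1
metadata:
  name: verify-pod
spec:
  containers:
  - name: verify-pod
    image: busybox:1.24
    command: ["/bin/sh"]
    args: ["-c", "test -f /mnt/SUCCESS && exit 0 || exit 1"]
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
```

If this pod reaches the Completed state, the file survived the deletion of test-pod.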

 

IV. volumeClaimTemplates

volumeClaimTemplates: a volume-claim template. You specify the PVC's name and size, the PVC is created automatically, and it must be provisioned by a StorageClass.

Why is volumeClaimTemplate needed?

Stateful replica sets need persistent storage, and the defining property of a distributed system is that each replica's data is different, so the replicas cannot share one storage volume; each node needs its own dedicated storage. A volume defined in a Deployment's Pod template, however, is shared by all replicas with identical data, because every replica is stamped from the same template. A StatefulSet needs a dedicated volume per Pod, so its volumes cannot come from the Pod template either. Instead, a StatefulSet uses volumeClaimTemplates, the claim template: it generates a distinct PVC for each Pod and binds it to a PV, giving every Pod its own dedicated storage.

Example:

spec:
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: yiruike-nfs-storage
      resources:
        requests:
          storage: 10Gi
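For context, a complete StatefulSet using volumeClaimTemplates might look like the sketch below; the service name, image, labels, and claim name are illustrative, not from the original:

```yaml
# Hypothetical StatefulSet: each replica gets its own PVC named data-web-<ordinal>
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "web"            # a matching headless Service is assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: yiruike-nfs-storage
      resources:
        requests:
          storage: 10Gi
```

Deleting a Pod of the StatefulSet leaves its PVC (and data) in place; the replacement Pod reattaches to the same claim.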

 

 

 

