k8s Persistent Storage: PV, PVC, and StorageClass


k8s Persistent Storage

1. The previous approach to data persistence

  Data was persisted by mounting data volumes through the volumes field.

1. web3.yaml content:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web3
  name: web3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web3
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web3
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: varlog
          mountPath: /tmp/log
      volumes:
      - name: varlog
        hostPath:
          path: /tmp/log/web3log
status: {}

2. Create the resource and inspect it

[root@k8smaster1 volumestest]# kubectl get pods | grep web3
web3-6c6557674d-xt7kr                           1/1     Running   0          6m38s
[root@k8smaster1 volumestest]# kubectl describe pods web3-6c6557674d-xt7kr

The mount details appear in the describe output.

3. Create a file inside the container

[root@k8smaster1 volumestest]# kubectl exec -it web3-6c6557674d-xt7kr
error: you must specify at least one command for the container
[root@k8smaster1 volumestest]# kubectl exec -it web3-6c6557674d-xt7kr -- bash
root@web3-6c6557674d-xt7kr:/# echo "123" > /tmp/log/test.txt
root@web3-6c6557674d-xt7kr:/# exit
exit

4. On the node where the pod was scheduled, check whether the host directory was mounted successfully

(1) From the master node, find which node the pod was scheduled to

[root@k8smaster1 volumestest]# kubectl get pods -o wide | grep web3
web3-6c6557674d-xt7kr                           1/1     Running   0          11m     10.244.2.108     k8snode2     <none>           <none>

(2) Check on the k8snode2 node

[root@k8snode2 web3log]# ll
total 4
-rw-r--r-- 1 root root 4 Jan 21 05:49 test.txt
[root@k8snode2 web3log]# cat test.txt 
123

5. Simulate a k8snode2 outage; the pod is automatically rescheduled to k8snode1, then check again

[root@k8smaster1 volumestest]# kubectl get pods -o wide | grep web3
web3-6c6557674d-6wlh4                           1/1     Running       0          4m22s   10.244.1.110     k8snode1     <none>           <none>
web3-6c6557674d-xt7kr                           1/1     Terminating   0          22m     10.244.2.108     k8snode2     <none>           <none>
[root@k8smaster1 volumestest]# kubectl exec -it web3-6c6557674d-6wlh4 -- bash
root@web3-6c6557674d-6wlh4:/# ls /tmp/log/
root@web3-6c6557674d-6wlh4:/# 

   The pod was automatically rescheduled to k8snode1; after entering the container, the file created earlier is gone.

6. The k8snode1 host has no file either

[root@k8snode1 web3log]# pwd
/tmp/log/web3log
[root@k8snode1 web3log]# ls
[root@k8snode1 web3log]# 

  The net effect is that when the node hosting the pod goes down, files persisted through the hostPath volume mount are lost with it, so a different solution is needed.

2. NFS persistent storage

  NFS, the Network File System, is a shared file system: in effect, clients upload files to a server, and the server makes them shared.

1. Installing NFS

1. Pick a server and install NFS on it

(1) Install NFS and check the NFS service status

yum install -y nfs-utils

(2) Configure the export path; note that the directory must be created first

[root@k8smaster2 logs]# cat /etc/exports
/data/nfs *(rw,no_root_squash)

Explanation: rw grants read-write access; no_root_squash means the root user keeps full root privileges on the exported directory.
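For comparison, /etc/exports entries can also restrict which clients may connect and tune write behavior; a sketch (the subnet and the second export below are illustrative, not part of this setup):

```text
# /etc/exports — illustrative variants
/data/nfs    192.168.13.0/24(rw,sync,no_root_squash)   # allow one subnet only; sync commits writes before replying
/data/public *(ro,all_squash)                          # read-only for all clients; map all users to nobody
```

After editing /etc/exports, running exportfs -r reloads the export table without restarting the service.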

2. Install nfs-utils on the k8s cluster nodes

yum install -y nfs-utils

3. Start the NFS service on the NFS server and check its status

[root@k8smaster2 nfs]# systemctl start nfs    # start the nfs service
[root@k8smaster2 nfs]# systemctl status nfs    # check its status
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: active (exited) since Fri 2022-01-21 19:55:38 EST; 5min ago
  Process: 51947 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 51943 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 51941 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 51977 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
  Process: 51960 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 51958 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 51960 (code=exited, status=0/SUCCESS)
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/nfs-server.service

Jan 21 19:55:38 k8smaster2 systemd[1]: Starting NFS server and services...
Jan 21 19:55:38 k8smaster2 systemd[1]: Started NFS server and services.
[root@k8smaster2 nfs]# showmount -e localhost    # view the exported directories
Export list for localhost:
/data/nfs *

You can also check the NFS process list:

[root@k8smaster2 nfs]# ps -ef | grep nfs
root      51962      2  0 19:55 ?        00:00:00 [nfsd4_callbacks]
root      51968      2  0 19:55 ?        00:00:00 [nfsd]
root      51969      2  0 19:55 ?        00:00:00 [nfsd]
root      51970      2  0 19:55 ?        00:00:00 [nfsd]
root      51971      2  0 19:55 ?        00:00:00 [nfsd]
root      51972      2  0 19:55 ?        00:00:00 [nfsd]
root      51973      2  0 19:55 ?        00:00:00 [nfsd]
root      51974      2  0 19:55 ?        00:00:00 [nfsd]
root      51975      2  0 19:55 ?        00:00:00 [nfsd]
root      54774  45013  0 20:02 pts/2    00:00:00 grep --color=auto nfs

2. Client installation

1. Install the client on all k8s nodes, then view the remote NFS exports

yum install -y nfs-utils

2. View the remote exports

[root@k8snode1 ~]# showmount -e 192.168.13.106
Export list for 192.168.13.106:
/data/nfs *

3. Test NFS locally

(1) Create a mount point and mount the export

[root@k8snode1 ~]# mkdir /share
[root@k8snode1 ~]# mount 192.168.13.106:/data/nfs /share
[root@k8snode1 ~]# df -h | grep 13.106
192.168.13.106:/data/nfs   17G   12G  5.4G  69% /share

(2) Create a file from the node

[root@k8snode1 ~]# echo "hello from 104" >> /share/104.txt
[root@k8snode1 ~]# cat /share/104.txt 
hello from 104

(3) Check on the NFS server

[root@k8smaster2 nfs]# cat 104.txt 
hello from 104

(4) Unmount on the client

[root@k8snode1 ~]# umount /share
[root@k8snode1 ~]# df -h | grep 13.106

After unmounting, the file still exists on the NFS server.
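If a client should remount the export automatically after a reboot, an /etc/fstab entry can be used instead of a manual mount; a sketch matching the addresses above:

```text
# /etc/fstab — remount the NFS export at boot
192.168.13.106:/data/nfs  /share  nfs  defaults,_netdev  0  0
```

The _netdev option tells the system to wait for the network before attempting the mount.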

3. Using NFS from the k8s cluster

1. Write nfs-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
        - name: wwwroot
          nfs:
            server: 192.168.13.106
            path: /data/nfs

2. Create the resource

[root@k8smaster1 nfs]# kubectl apply -f nfs-nginx.yaml 
deployment.apps/nginx-dep1 created

Then check the pod's describe output.

3. Enter the container and create a file in /usr/share/nginx/html

root@nginx-dep1-6d7f9c85dc-lqfbf:/# cat /usr/share/nginx/html/index.html 
hello

4. Then check on the NFS server

[root@k8smaster2 nfs]# pwd
/data/nfs
[root@k8smaster2 nfs]# ls
104.txt  index.html
[root@k8smaster2 nfs]# cat index.html 
hello

4. PV and PVC

Using NFS as above has a problem: every workload that needs persistence must know the remote NFS server's address and related access details, which is potentially insecure. Below we look at using PV and PVC instead.

PV and PVC stand for PersistentVolume and PersistentVolumeClaim. A PV essentially declares the NFS address and related details, abstracted into a configuration object; a PVC then references what the PV declares, which is enough to get NFS-backed persistent storage.

PVs can be implemented in many ways; here it is effectively a wrapper over NFS. Since NFS is already installed, we build on it.

Reference: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

1. Creating a PV

1. Create pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/nfs
    server: 192.168.13.106

2. Create and inspect

[root@k8smaster1 nfs]# kubectl apply -f pv.yaml 
persistentvolume/my-pv created
[root@k8smaster1 nfs]# kubectl get pv -o wide
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE    VOLUMEMODE
my-pv   5Gi        RWX            Retain           Available                                   2m4s   Filesystem

Supplement: some core PV concepts

1. test-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name:  pv2
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/nfs
    server: 192.168.13.106

2. Create and inspect

[root@k8smaster1 storageclass]# kubectl apply -f test-pv.yml 
persistentvolume/pv2 created
[root@k8smaster1 storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS         REASON   AGE
pv2                                        1Gi        RWO            Recycle          Available                                                    5s
pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound       default/test-pvc   course-nfs-storage            27m
[root@k8smaster1 storageclass]# kubectl describe pv pv2
Name:            pv2
Labels:          <none>
Annotations:     Finalizers:  [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Recycle
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.13.106
    Path:      /data/nfs
    ReadOnly:  false
Events:        <none>

3. Core concepts

(1) Capacity (storage capacity)

  Generally a PV object must specify a storage capacity, set through the PV's capacity attribute. Currently only storage size is supported, i.e. storage: 1Gi here, though metrics such as IOPS and throughput may become configurable in the future.

(2) AccessModes (access modes)

AccessModes set the access mode on the PV, describing what access the user application has to the storage resource. The possible modes are:

ReadWriteOnce (RWO): read-write, but mountable by only a single node

ReadOnlyMany (ROX): read-only, mountable by multiple nodes

ReadWriteMany (RWX): read-write, mountable by multiple nodes

(3) persistentVolumeReclaimPolicy (reclaim policy)

The PV here specifies the Recycle reclaim policy. PVs currently support three policies:

Retain - keep the data; an administrator must clean it up manually

Recycle - scrub the PV's data, equivalent to running rm -rf /thevolume/*

Delete - the backing storage performs the volume deletion; this is common with cloud-provider storage services such as AWS EBS

(4) Status

Over its lifecycle, a PV may pass through four distinct phases:

Available: free, not yet bound to any PVC

Bound: the PV has been bound by a PVC

Released: the PVC was deleted, but the resource has not yet been reclaimed by the cluster

Failed: automatic reclamation of the PV failed

2. Create a PVC that uses the PV above

1. Create pvc.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

2. Create and inspect

[root@k8smaster1 nfs]# kubectl apply -f pvc.yaml 
deployment.apps/nginx-dep1 created
persistentvolumeclaim/my-pvc created
[root@k8smaster1 nfs]# kubectl get pvc -o wide
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
my-pvc   Bound    my-pv    5Gi        RWX                           60s   Filesystem
[root@k8smaster1 nfs]# kubectl get pods -o wide
NAME                                            READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
nginx-dep1-58b7bf955f-4jhbq                     1/1     Running   0          75s     10.244.2.112     k8snode2     <none>           <none>
nginx-dep1-58b7bf955f-m69dm                     1/1     Running   0          75s     10.244.2.110     k8snode2     <none>           <none>
nginx-dep1-58b7bf955f-qh6pg                     1/1     Running   0          75s     10.244.2.111     k8snode2     <none>           <none>
nginx-f89759699-vkf7d                           1/1     Running   3          4d16h   10.244.1.106     k8snode1     <none>           <none>
tomcat-58767d5b5-f5qwj                          1/1     Running   2          4d15h   10.244.1.103     k8snode1     <none>           <none>
weave-scope-agent-ui-kbq7b                      1/1     Running   2          45h     192.168.13.105   k8snode2     <none>           <none>
weave-scope-agent-ui-tg5q4                      1/1     Running   2          45h     192.168.13.103   k8smaster1   <none>           <none>
weave-scope-agent-ui-xwh2b                      1/1     Running   2          45h     192.168.13.104   k8snode1     <none>           <none>
weave-scope-cluster-agent-ui-7498b8d4f4-zdlk7   1/1     Running   2          45h     10.244.1.104     k8snode1     <none>           <none>
weave-scope-frontend-ui-649c7dcd5d-7gb9s        1/1     Running   2          45h     10.244.1.107     k8snode1     <none>           <none>
web3-6c6557674d-6wlh4                           1/1     Running   0          14h     10.244.1.110     k8snode1     <none>           <none>
[root@k8smaster1 nfs]# 

3. Enter the first container of any of the pods and create a file

[root@k8smaster1 nfs]# kubectl exec -it nginx-dep1-58b7bf955f-4jhbq -- bash
root@nginx-dep1-58b7bf955f-4jhbq:/# echo "111222" >> /usr/share/nginx/html/1.txt
root@nginx-dep1-58b7bf955f-4jhbq:/# exit
exit

4. Check on the NFS server and from other containers

(1) On the NFS server

[root@k8smaster2 nfs]# ls
104.txt  1.txt  index.html
[root@k8smaster2 nfs]# cat 1.txt 
111222

(2) From the first container of another pod

[root@k8smaster1 nfs]# kubectl exec -it nginx-dep1-58b7bf955f-qh6pg -- bash
root@nginx-dep1-58b7bf955f-qh6pg:/# ls /usr/share/nginx/html/
1.txt  104.txt  index.html

  This gives us a simple persistent storage setup based on NFS with PV and PVC.

5. StorageClass

  A PV can be thought of as static: to use a PVC, someone must first create a PV by hand. This often falls short of real needs; for example, one application may demand high storage concurrency while another demands high read/write speed, and for StatefulSet workloads in particular, plain static PVs are a poor fit. In those cases we need dynamically provisioned PVs, which is what StorageClass provides.

1. Create a StorageClass

  To use a StorageClass we must install the matching automatic provisioner. Since the storage backend here is NFS, we need the nfs-client provisioner. This program uses the NFS server we already configured to create persistent volumes automatically, that is, to create PVs on our behalf.

  Automatically provisioned PVs are created in the NFS server's shared data directory under names of the form ${namespace}-${pvcName}-${pvName}; when such a PV is reclaimed, its directory remains on the NFS server, renamed following an archived-${namespace}-${pvcName}-${pvName} pattern.

  Before deploying nfs-client, of course, the NFS server must already be installed and working; the service address is 192.168.13.106 and the shared data directory is /data/nfs/. Then we deploy nfs-client, following its documentation: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client

Step 1: configure the Deployment, replacing the relevant parameters with our own NFS settings (nfs-client.yml):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.13.106
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.13.106
            path: /data/nfs

Step 2: the Deployment above uses a ServiceAccount named nfs-client-provisioner, so we also need to create that SA and bind the required permissions (nfs-client-sa.yml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

  This creates a ServiceAccount named nfs-client-provisioner and binds it to a ClusterRole named nfs-client-provisioner-runner. The ClusterRole declares the permissions it needs, including create, delete, get, and list on persistentvolumes, which is what allows this ServiceAccount to create PVs automatically.

Step 3: with the nfs-client Deployment declared, we can create the StorageClass object (nfs-client-class.yml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the Deployment's env PROVISIONER_NAME

  This declares a StorageClass object named course-nfs-storage; note that the provisioner value must exactly match the PROVISIONER_NAME environment variable in the Deployment above.

Next, create the resources above with kubectl apply -f XXX.yml and check them:

[root@k8smaster1 storageclass]# kubectl get pods,deployments -o wide | grep nfs 
pod/nfs-client-provisioner-6888b56547-7ts79         1/1     Running   0          101m   10.244.2.118     k8snode2     <none>           <none>
deployment.apps/nfs-client-provisioner         1/1     1            1           3h26m   nfs-client-provisioner      quay.io/external_storage/nfs-client-provisioner:latest   app=nfs-client-provisioner
[root@k8smaster1 storageclass]# kubectl get storageclass -o wide
NAME                 PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  44m

You can also mark the StorageClass as the default at creation time:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs

 

2. Creating a PVC

1. First create a PVC object, test-pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Create it:

[root@k8smaster1 storageclass]# kubectl apply -f test-pvc.yml 
persistentvolumeclaim/test-pvc created
[root@k8smaster1 storageclass]# kubectl get pvc -o wide
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
test-pvc   Pending                                                     2s    Filesystem

  This declares a PVC using the ReadWriteMany access mode and requesting 1Mi of space. Notice, however, that the manifest carries no information associating it with a StorageClass, so creating this PVC directly will not automatically bind it to a suitable PV. There are two ways to make the StorageClass object we created automatically provision an appropriate PV for us.

Method 1: set the course-nfs-storage StorageClass as the default storage backend for Kubernetes:

kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

View the default StorageClass, then unset it:

[root@k8smaster1 storageclass]# kubectl get storageclass
NAME                           PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage (default)   fuseim.pri/ifs   Delete          Immediate           false                  122m
[root@k8smaster1 storageclass]# kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
storageclass.storage.k8s.io/course-nfs-storage patched
[root@k8smaster1 storageclass]# kubectl get storageclass
NAME                 PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  123m

Method 2: add an identifier for the StorageClass object to the PVC itself, via an annotation, as follows (recommended):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
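Note: the volume.beta.kubernetes.io/storage-class annotation used above is a legacy mechanism; on current Kubernetes versions the spec.storageClassName field is the preferred way to select a class. An equivalent manifest would look like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: course-nfs-storage  # replaces the legacy annotation
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```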

Create and inspect:

[root@k8smaster1 storageclass]# kubectl apply -f test-pvc.yml 
persistentvolumeclaim/test-pvc created
[root@k8smaster1 storageclass]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
test-pvc   Bound    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            course-nfs-storage   5s
[root@k8smaster1 storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS         REASON   AGE
pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound    default/test-pvc   course-nfs-storage            53s
[root@k8smaster1 storageclass]# kubectl describe pv pvc-97bce597-0788-49a1-be6d-5a938363797b
Name:            pvc-97bce597-0788-49a1-be6d-5a938363797b
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: fuseim.pri/ifs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    course-nfs-storage
Status:          Bound
Claim:           default/test-pvc
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        1Mi
Node Affinity:   <none>
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.13.106
    Path:      /data/nfs/default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b
    ReadOnly:  false
Events:        <none>
[root@k8smaster1 storageclass]# kubectl describe pvc test-pvc
Name:          test-pvc
Namespace:     default
StorageClass:  course-nfs-storage
Status:        Bound
Volume:        pvc-97bce597-0788-49a1-be6d-5a938363797b
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: course-nfs-storage
               volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Mi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                 Age                    From                                                                                         Message
  ----    ------                 ----                   ----                                                                                         -------
  Normal  ExternalProvisioning   5m18s (x2 over 5m18s)  persistentvolume-controller                                                                  waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
  Normal  Provisioning           5m18s                  fuseim.pri/ifs_nfs-client-provisioner-6888b56547-7ts79_6aa1d177-8966-11ec-b368-9e5ccaa198de  External provisioner is provisioning volume for claim "default/test-pvc"
  Normal  ProvisioningSucceeded  5m18s                  fuseim.pri/ifs_nfs-client-provisioner-6888b56547-7ts79_6aa1d177-8966-11ec-b368-9e5ccaa198de  Successfully provisioned volume pvc-97bce597-0788-49a1-be6d-5a938363797b

  We can see that the PVC named test-pvc was created successfully and is now Bound; a corresponding VOLUME was produced, and most importantly the STORAGECLASS column now has a value: the course-nfs-storage StorageClass we just created. A PV object was also created automatically, with access mode RWX and reclaim policy Delete; that PV was created by the StorageClass machinery.

3. Testing

1. Create test-pvc-pod.yml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-pvc

  The Pod above is very simple: a busybox container (an image bundling common Linux commands) creates a file named SUCCESS under /mnt, and /mnt is mounted from the test-pvc resource created above. Verification is easy: just check whether the SUCCESS file shows up in the shared data directory on the NFS server.

2. Create and inspect

[root@k8smaster1 storageclass]# kubectl apply -f test-pvc-pod.yml 
pod/test-pod created
[root@k8smaster1 storageclass]# kubectl get pods -o wide | grep test-
test-pod                                        0/1     Completed   0          3m48s   10.244.2.119     k8snode2     <none>           <none>

3. Check on the NFS server node

[root@k8smaster2 default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b]# pwd
/data/nfs/default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b
[root@k8smaster2 default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b]# ll
total 0
-rw-r--r-- 1 root root 0 Feb  9 02:36 SUCCESS

  We can see that the NFS export directory now contains a folder with a long name; the name follows the pattern ${namespace}-${pvcName}-${pvName}.

4. Common usage

StorageClass is used most often with StatefulSet workloads, which can consume a StorageClass directly through the volumeClaimTemplates attribute.

1. test-statefulset-nfs.yml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: course-nfs-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

  In fact, what sits under volumeClaimTemplates is simply a PVC template, much as the template under a StatefulSet is a Pod template. Rather than creating PVC objects one by one, the template creates them dynamically; this pattern is used very frequently with StatefulSet workloads.
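One detail worth noting: the StatefulSet above sets serviceName: "nginx", and a StatefulSet expects a matching headless Service to give its pods stable network identities. That Service is not shown in the original setup; a minimal sketch would be:

```yaml
# Hypothetical headless Service matching serviceName: "nginx" above
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None      # headless: no virtual IP; DNS resolves to the pod IPs
  selector:
    app: nfs-web       # matches the StatefulSet's pod labels
  ports:
  - port: 80
    name: web
```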

2. Create and inspect

[root@k8smaster1 storageclass]# kubectl apply -f test-statefulset-nfs.yml 
statefulset.apps/nfs-web created
[root@k8smaster1 storageclass]# kubectl get pods | grep nfs-web
nfs-web-0                                       1/1     Running     0          2m42s
nfs-web-1                                       1/1     Running     0          115s
nfs-web-2                                       1/1     Running     0          109s
[root@k8smaster1 storageclass]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
test-pvc        Bound    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            course-nfs-storage   50m
www-nfs-web-0   Bound    pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            course-nfs-storage   2m57s
www-nfs-web-1   Bound    pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            course-nfs-storage   2m10s
www-nfs-web-2   Bound    pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            course-nfs-storage   2m4s
[root@k8smaster1 storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS         REASON   AGE
pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            Delete           Bound    default/www-nfs-web-2   course-nfs-storage            2m16s
pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            Delete           Bound    default/www-nfs-web-1   course-nfs-storage            2m22s
pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound    default/test-pvc        course-nfs-storage            50m
pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            Delete           Bound    default/www-nfs-web-0   course-nfs-storage            3m9s

3. The shared directory on the NFS server now looks like this

[root@k8smaster2 nfs]# pwd
/data/nfs
[root@k8smaster2 nfs]# ll
total 4
drwxrwxrwx 2 root root 21 Feb  9 02:36 default-test-pvc-pvc-97bce597-0788-49a1-be6d-5a938363797b
drwxrwxrwx 2 root root  6 Feb  9 02:49 default-www-nfs-web-0-pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e
drwxrwxrwx 2 root root  6 Feb  9 02:50 default-www-nfs-web-1-pvc-7fdeb85f-481e-48c1-9734-284cce8014fb
drwxrwxrwx 2 root root  6 Feb  9 02:50 default-www-nfs-web-2-pvc-7810f38b-2779-49e3-84f2-4b56e16df419
-rw-r--r-- 1 root root  4 Feb  8 21:22 test.txt
[root@k8smaster2 nfs]# 

Supplement: a StorageClass is effectively a template for creating PVs. A user requests a storage volume through a PVC, the StorageClass creates a PV from the template automatically, and the PV is then bound to the PVC.

  In a k8s cluster where a StorageClass environment is available, a StorageClass together with a PVC and PV can be created as follows:

1. The yml content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc2
provisioner: fuseim.pri/ifs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-sc
  annotations:
    volume.beta.kubernetes.io/storage-class: "sc2"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

2. Create and inspect the resources:

[root@k8smaster1 storageclass]# kubectl get sc
NAME                 PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  169m
sc2                  fuseim.pri/ifs   Delete          Immediate           false                  9m28s
[root@k8smaster1 storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS         REASON   AGE
pvc-1d0c10d4-c7f7-433f-8143-78b11fd8fe58   1Mi        RWX            Delete           Bound    default/test-pvc-sc     sc2                           9m40s
pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            Delete           Bound    default/www-nfs-web-2   course-nfs-storage            65m
pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            Delete           Bound    default/www-nfs-web-1   course-nfs-storage            65m
pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            Delete           Bound    default/test-pvc        course-nfs-storage            113m
pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            Delete           Bound    default/www-nfs-web-0   course-nfs-storage            66m
[root@k8smaster1 storageclass]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
test-pvc        Bound    pvc-97bce597-0788-49a1-be6d-5a938363797b   1Mi        RWX            course-nfs-storage   113m
test-pvc-sc     Bound    pvc-1d0c10d4-c7f7-433f-8143-78b11fd8fe58   1Mi        RWX            sc2                  9m48s
www-nfs-web-0   Bound    pvc-c234c21b-c3c4-4ffb-a14b-aa47cad7183e   1Gi        RWO            course-nfs-storage   66m
www-nfs-web-1   Bound    pvc-7fdeb85f-481e-48c1-9734-284cce8014fb   1Gi        RWO            course-nfs-storage   65m
www-nfs-web-2   Bound    pvc-7810f38b-2779-49e3-84f2-4b56e16df419   1Gi        RWO            course-nfs-storage   65m

 

Supplement: after a reboot, NFS stopped working. The fix:

On the NFS server, start the NFS service and enable it at boot; on the NFS clients, start the client-side NFS services (from nfs-utils) and enable them at boot as well.
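As a sketch, assuming systemd (exact service names vary by distribution and version), that amounts to:

```shell
# On the NFS server: start the service now and on every boot
systemctl enable --now nfs-server

# On each k8s node (NFS client): rpcbind backs NFS client mounts
systemctl enable --now rpcbind
```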

Supplement: once a default StorageClass is set, a PVC that does not specify one uses the default.

1. The yml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

2. Create and check: the default StorageClass is used

[root@k8smaster01 storageclass]# kubectl apply -f test-default-sc.yml 
persistentvolumeclaim/test-claim created
[root@k8smaster01 storageclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
test-claim   Bound    pvc-9f784512-e537-468c-9e3c-2b084776a368   1Mi        RWX            course-nfs-storage   4s

Supplement: core concept definitions

PV stands for PersistentVolume: an abstraction over the underlying shared storage. PVs are created and configured by administrators, and they are tied to the concrete implementation of the underlying shared storage technology, such as Ceph, GlusterFS, or NFS, each integrated through a plugin mechanism.

PVC stands for PersistentVolumeClaim: a user's claim on storage. A PVC is much like a Pod: a Pod consumes node resources, a PVC consumes PV resources; a Pod requests CPU and memory, a PVC requests a particular amount of storage and an access mode.

With StorageClass, an administrator can define storage resources as named types, such as fast storage and slow storage; users can tell each storage resource's characteristics directly from the StorageClass description and request storage that fits their application.
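As an illustration of that idea (the provisioner names below are hypothetical placeholders, not part of this setup), an administrator might define:

```yaml
# Hypothetical "fast" and "slow" classes; provisioner values are placeholders
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: example.com/ssd-provisioner   # assumption: an SSD-backed provisioner
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: example.com/hdd-provisioner   # assumption: an HDD-backed provisioner
```

Applications then request the class that matches their needs via their PVCs.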

 

