In the previous post we talked about adding storage volumes to Pods in k8s; for a review, see: https://www.cnblogs.com/qiuhom-1874/p/14180752.html. Today we will talk about persistent storage volumes.
Basic volume usage requires the user to hand different parameters to each type of storage interface in order to map external storage to a volume object in k8s, so that the pod can mount the volume and the containers inside it can use it. This presupposes that the user understands the storage system in question, its interface type, and the relevant parameters, which makes using storage volumes on k8s rather complicated. To simplify this, k8s uses PV and PVC resources to hide the underlying storage interfaces; users no longer need to care about the backend storage system's interface. Whatever the underlying storage type, the user only has to deal with a single PVC interface.
The relationship between PV, PVC, the k8s cluster, and pods
Hint: when creating a pod, the user only needs to care about the PVC object in the pod's namespace; the PV is defined by the cluster administrator, and the backend storage is managed by a dedicated storage administrator. PV is a standard k8s resource whose full name is PersistentVolume, a persistent storage volume; its main job is to map a logical unit on the backend storage to a PV resource in k8s. A PV is a cluster-level resource, and any namespace can associate itself with a given PV; we call this process binding the PV. A namespace binds a PV by defining a PVC resource; PVC is short for PersistentVolumeClaim, a claim for persistent storage. Creating a PVC in a namespace binds that namespace to one PV in the cluster; once a PV is bound, its status changes from Available to Bound and other namespaces can no longer use it; only a PV in Available state can be bound by another namespace. Simply put, PVC and PV are in a one-to-one relationship: one PV can be bound by only one PVC. Whether multiple pods in the same namespace can use one PVC at the same time depends on whether the PV allows multi-node access, which in turn depends on the backend storage system; different storage types support different access modes. There are three access modes: single-node read-write (ReadWriteOnce, RWO), multi-node read-write (ReadWriteMany, RWX), and multi-node read-only (ReadOnlyMany, ROX).
Example: creating a PV resource
[root@master01 ~]# cat pv-v1-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-v1
  labels:
    storsystem: nfs-v1
    rel: stable
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes: ["ReadWriteOnce","ReadWriteMany","ReadOnlyMany"]
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/v1
    server: 192.168.0.99
[root@master01 ~]#
Hint: PV is a standard k8s resource; its group/version is v1 and its kind is PersistentVolume. The spec.capacity.storage field describes the PV's storage capacity. volumeMode describes the type of volume interface the storage system provides: generally either a filesystem interface or a block-device interface. accessModes describes the PV's access modes. persistentVolumeReclaimPolicy describes the volume reclaim policy, of which there are three: Delete means that when the PVC is deleted, the PV is deleted along with it; Recycle means that when the PVC is deleted, the data on the PV is scrubbed so the PV can be claimed again; Retain means that when the PVC is deleted, the PV is left untouched, i.e. both the PV and its data remain. mountOptions specifies mount options; nfs indicates the backend storage is NFS. Different storage types take different parameters; for NFS we only need to specify the NFS server address and the exported path. The configuration above maps the /data/v1 directory on the NFS server to a PV in k8s named nfs-pv-v1. One thing to note: the backend storage should be prepared in advance before creating the PV.
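If you are unsure what values a given spec field accepts, kubectl explain prints the API documentation built into the cluster; for instance (any field path under pv.spec works the same way):
[root@master01 ~]# kubectl explain pv.spec.persistentVolumeReclaimPolicy
[root@master01 ~]# kubectl explain pv.spec.accessModes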
Apply the configuration manifest
[root@master01 ~]# kubectl apply -f pv-v1-demo.yaml
persistentvolume/nfs-pv-v1 created
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Available                                   4s
[root@master01 ~]# kubectl describe pv nfs-pv-v1
Name:            nfs-pv-v1
Labels:          rel=stable
                 storsystem=nfs-v1
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWO,ROX,RWX
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.99
    Path:      /data/v1
    ReadOnly:  false
Events:          <none>
[root@master01 ~]#
Hint: the PV's details show that its current status is Available, its backend storage is NFS at the address 192.168.0.99, and the backend logical unit behind this PV is /data/v1.
Example: creating a PVC
[root@master01 ~]# cat pvc-v1-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-pv-v1
  namespace: default
  labels:
    storsystem: nfs-v1
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      storsystem: nfs-v1
      rel: stable
[root@master01 ~]#
Hint: PVC is also a standard k8s resource; its group/version is v1 and its kind is PersistentVolumeClaim. The spec.accessModes field specifies the PVC's access modes, which must be contained in the PV's accessModes, i.e. the PVC's access modes must be a subset of (equal to or narrower than) the PV's. resources describes the PVC's storage size constraints: requests describes the PVC's minimum capacity, and limits its maximum. selector defines a label selector whose main purpose is to filter for PVs carrying matching labels; if no selector is defined, the controller picks the best match among all PVs in Available state based on capacity and access mode.
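For illustration, here is a minimal sketch of a PVC that sets both bounds (the name is hypothetical); note that in practice binding is driven by requests, and limits is not enforced by most volume plugins:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-bounded-demo        # hypothetical name, for illustration only
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi            # minimum capacity the matched PV must provide
    limits:
      storage: 2Gi              # declared upper bound (not enforced by most plugins)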
Apply the configuration manifest
[root@master01 ~]# kubectl apply -f pvc-v1-demo.yaml
persistentvolumeclaim/pvc-nfs-pv-v1 created
[root@master01 ~]# kubectl get pvc
NAME            STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Bound    nfs-pv-v1   1Gi        RWO,ROX,RWX                   8s
[root@master01 ~]# kubectl describe pvc pvc-nfs-pv-v1
Name:          pvc-nfs-pv-v1
Namespace:     default
StorageClass:  
Status:        Bound
Volume:        nfs-pv-v1
Labels:        storsystem=nfs-v1
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO,ROX,RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Bound    default/pvc-nfs-pv-v1                           19m
[root@master01 ~]#
Hint: the capacity shown for the PVC is its maximum capacity; when no limit is set, this defaults to the full capacity of the bound PV. The output above also shows that once the PV is bound by the PVC, its status changes to Bound.
Example: creating a pod that references the PVC and mounts it in the pod's container
[root@master01 ~]# cat redis-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: redis:alpine
    volumeMounts:
    - mountPath: /data
      name: redis-data
  volumes:
  - name: redis-data
    persistentVolumeClaim:
      claimName: pvc-nfs-pv-v1
[root@master01 ~]#
Hint: to reference a PVC in a pod, simply set the volume type to persistentVolumeClaim and give the name of the corresponding PVC.
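If a pod should only read from the volume, the claim reference also accepts a readOnly flag; a minimal sketch of just the volumes section (same claim as above):
volumes:
- name: redis-data
  persistentVolumeClaim:
    claimName: pvc-nfs-pv-v1
    readOnly: true    # the claim is mounted read-only for this pod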
Apply the resource manifest
[root@master01 ~]# kubectl apply -f redis-demo.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pod
NAME         READY   STATUS              RESTARTS   AGE
redis-demo   0/1     ContainerCreating   0          7s
[root@master01 ~]# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          27s
[root@master01 ~]# kubectl describe pod redis-demo
Name:         redis-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Fri, 25 Dec 2020 21:55:41 +0800
Labels:       app=redis
Annotations:  <none>
Status:       Running
IP:           10.244.3.105
IPs:
  IP:  10.244.3.105
Containers:
  redis:
    Container ID:   docker://8e8965f52fd0144f8d6ce68185209114163a42f8437d7d845d431614f3d6dd05
    Image:          redis:alpine
    Image ID:       docker-pullable://redis@sha256:68d4030e07912c418332ba6fdab4ac69f0293d9b1daaed4f1f77bdeb0a5eb048
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 25 Dec 2020 21:55:48 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from redis-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  redis-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs-pv-v1
    ReadOnly:   false
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  37s   default-scheduler  Successfully assigned default/redis-demo to node03.k8s.org
  Normal  Pulling    36s   kubelet            Pulling image "redis:alpine"
  Normal  Pulled     30s   kubelet            Successfully pulled image "redis:alpine" in 5.284107704s
  Normal  Created    30s   kubelet            Created container redis
  Normal  Started    30s   kubelet            Started container redis
[root@master01 ~]#
Hint: the pod is up and running normally. Its details show that the volume type it uses is PersistentVolumeClaim, with the name pvc-nfs-pv-v1, and that the container mounts the volume read-write.
Test: generate some data in redis-demo and see whether it is persisted to the NFS server
[root@master01 ~]# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          5m28s
[root@master01 ~]# kubectl exec -it redis-demo -- /bin/sh
/data # redis-cli
127.0.0.1:6379> set mykey "this is test key "
OK
127.0.0.1:6379> get mykey
"this is test key "
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> exit
/data # ls
dump.rdb
/data #
Check whether a dump.rdb file has appeared in the corresponding directory on the NFS server
[root@docker_registry ~]# ll /data/v1
total 4
-rw-r--r-- 1 polkitd qiuhom 122 Dec 25 22:02 dump.rdb
[root@docker_registry ~]#
Hint: the snapshot file generated by redis exists as a corresponding file on the NFS server.
Test: delete the pod; is the file still there?
[root@master01 ~]# kubectl delete -f redis-demo.yaml
pod "redis-demo" deleted
[root@master01 ~]# kubectl get pods
No resources found in default namespace.
[root@master01 ~]# ssh 192.168.0.99
The authenticity of host '192.168.0.99 (192.168.0.99)' can't be established.
ECDSA key fingerprint is SHA256:hQoossQnTJMXB0+DxJdTt6DMHuPFLDd5084tHyJ7920.
ECDSA key fingerprint is MD5:ef:61:b6:ee:76:46:9d:0e:38:b6:b5:dd:11:66:23:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.99' (ECDSA) to the list of known hosts.
root@192.168.0.99's password: 
Last login: Fri Dec 25 20:13:05 2020 from 192.168.0.232
[root@docker_registry ~]# ll /data/v1
total 4
-rw-r--r-- 1 polkitd qiuhom 122 Dec 25 22:05 dump.rdb
[root@docker_registry ~]# exit
logout
Connection to 192.168.0.99 closed.
[root@master01 ~]#
Hint: after deleting the pod, the snapshot file still exists on the NFS server.
綁定節點,重新新建pod,看看對應是否能夠自動應用快照中的數據?
[root@master01 ~]# cat redis-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: redis
spec:
  nodeName: node01.k8s.org
  containers:
  - name: redis
    image: redis:alpine
    volumeMounts:
    - mountPath: /data
      name: redis-data
  volumes:
  - name: redis-data
    persistentVolumeClaim:
      claimName: pvc-nfs-pv-v1
[root@master01 ~]# kubectl apply -f redis-demo.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME         READY   STATUS              RESTARTS   AGE   IP       NODE             NOMINATED NODE   READINESS GATES
redis-demo   0/1     ContainerCreating   0          8s    <none>   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          21s   10.244.1.88   node01.k8s.org   <none>           <none>
[root@master01 ~]#
Hint: the new pod was scheduled onto node01.
Enter the pod and check whether the data from the snapshot file was applied; can the key be loaded back into memory?
[root@master01 ~]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          2m39s
[root@master01 ~]# kubectl exec -it redis-demo -- /bin/sh
/data # redis-cli
127.0.0.1:6379> get mykey
"this is test key "
127.0.0.1:6379> exit
/data # ls
dump.rdb
/data # exit
[root@master01 ~]#
Hint: the new pod reads the snapshot file on the NFS server and loads it into memory as expected.
Delete the PVC and see whether the corresponding PV gets deleted
Hint: as the output below shows, with the pod not yet deleted, the delete operation blocks (it hangs until interrupted with ^C);
Check the PVC status
[root@master01 ~]# kubectl delete pvc pvc-nfs-pv-v1
persistentvolumeclaim "pvc-nfs-pv-v1" deleted
^C
[root@master01 ~]# kubectl get pvc
NAME            STATUS        VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Terminating   nfs-pv-v1   1Gi        RWO,ROX,RWX                   34m
[root@master01 ~]# kubectl get pvc
NAME            STATUS        VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Terminating   nfs-pv-v1   1Gi        RWO,ROX,RWX                   34m
[root@master01 ~]# kubectl get pvc
NAME            STATUS        VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-pv-v1   Terminating   nfs-pv-v1   1Gi        RWO,ROX,RWX                   34m
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Bound    default/pvc-nfs-pv-v1                           52m
[root@master01 ~]#
Hint: the PVC's status has changed to Terminating, but the PVC itself is still there and has not been removed; the corresponding PV remains in Bound state.
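The blocking comes from the kubernetes.io/pvc-protection finalizer we saw in the describe output earlier: while the claim is still in use by a pod, the finalizer keeps it in Terminating. As a quick check, a jsonpath query like the following should list kubernetes.io/pvc-protection:
[root@master01 ~]# kubectl get pvc pvc-nfs-pv-v1 -o jsonpath='{.metadata.finalizers}'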
Delete the pod; will the PVC then be deleted?
[root@master01 ~]# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
redis-demo   1/1     Running   0          14m
[root@master01 ~]# kubectl delete pod redis-demo
pod "redis-demo" deleted
[root@master01 ~]# kubectl get pvc
No resources found in default namespace.
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Released   default/pvc-nfs-pv-v1                           54m
[root@master01 ~]#
Hint: once the pod was deleted, the PVC deletion completed immediately. After the PVC is gone, the PV's status changes from Bound to Released, meaning it is waiting to be reclaimed. Since our manifest uses the Retain reclaim policy, both the PV and the PVC must be reclaimed manually.
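Under Retain, returning a Released PV to service is a manual operation: clean up the leftover data on the backend if you want a fresh volume, then clear the stale claimRef so the PV goes back to Available. One common approach (a sketch; verify on a test cluster first) is:
[root@master01 ~]# kubectl patch pv nfs-pv-v1 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'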
Delete the PV; will the data be deleted along with it?
[root@master01 ~]# kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                   STORAGECLASS   REASON   AGE
nfs-pv-v1   1Gi        RWO,ROX,RWX    Retain           Released   default/pvc-nfs-pv-v1                           57m
[root@master01 ~]# kubectl delete pv nfs-pv-v1
persistentvolume "nfs-pv-v1" deleted
[root@master01 ~]# kubectl get pv
No resources found
[root@master01 ~]# ssh 192.168.0.99
root@192.168.0.99's password: 
Last login: Fri Dec 25 22:05:53 2020 from 192.168.0.41
[root@docker_registry ~]# ll /data/v1
total 4
-rw-r--r-- 1 polkitd qiuhom 122 Dec 25 22:24 dump.rdb
[root@docker_registry ~]# exit
logout
Connection to 192.168.0.99 closed.
[root@master01 ~]#
Hint: the PV was deleted, but the snapshot file was not cleaned up.
That covers the usage of PV and PVC resources; next, let's talk about the SC resource.
SC is short for StorageClass, a storage class. This resource mainly provides an interface for automatic provisioning of PVs. Automatic provisioning means the user no longer creates PVs by hand; instead, when a PVC is created, the corresponding PV is created automatically and bound to the PVC by the persistentvolume-controller. The prerequisite for using an SC resource is that the backend storage exposes a RESTful management interface, and the PVC must reference the SC by its storage class name. Simply put, an SC resource is the interface through which PVs are automatically created on the backend storage and associated with the corresponding PVC, as shown in the figure below.
Hint: when an SC is used to create PVs dynamically, the PVC must itself belong to that SC. The figure above depicts how, when a user creates a PVC referencing an SC, the SC calls the management interface of the underlying storage system to create the corresponding PV and associate it with the PVC.
Example: creating an SC resource
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
Hint: the above is an example from the official documentation. When creating an SC resource, the group/version is storage.k8s.io/v1 and the kind is StorageClass; the provisioner field names the provisioning plugin to use; parameters defines the parameters to pass to the storage management interface.
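The provisioner/parameters pairing differs per backend. For comparison, here is a hedged sketch of a simpler class using the in-tree AWS EBS provisioner (the values are illustrative; reclaimPolicy and volumeBindingMode are optional fields that default to Delete and Immediate respectively):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                     # illustrative class name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                      # EBS volume type passed to the provisioner
reclaimPolicy: Delete            # dynamically created PVs are deleted with their PVC
volumeBindingMode: Immediate     # provision and bind as soon as the PVC is created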
Referencing an SC object in a PVC resource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: "slow"
  volumeName: foo-pv
  ...
Hint: when creating the PVC, simply use the storageClassName field to specify the name of the corresponding SC.
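Note that volumeName in the example above pins the claim to one specific, pre-created PV; for dynamic provisioning you would normally omit volumeName and let the class create the PV. A minimal sketch (the claim name is hypothetical):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc-demo         # hypothetical name
  namespace: default
spec:
  storageClassName: "slow"       # the SC that will provision the PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
After applying it, kubectl get pv should show a newly provisioned volume bound to this claim.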