k8s PV and PVC persistent storage (static and dynamic)
• PersistentVolume (PV): an abstraction over creating and using storage resources, so that storage can be managed as a cluster-level resource.
PVs can be provisioned statically or dynamically; dynamic provisioning creates PVs automatically.
• PersistentVolumeClaim (PVC): lets users request storage without caring about the details of the underlying Volume implementation.
The relationship between containers, PVs and PVCs can be summarized as follows:
In short, the PV is the provider and the PVC is the consumer, and consuming means binding.
(1) PersistentVolume: static binding
Following this model, the setup has three parts:
Volume definition in the Pod (references the PVC)
Volume claim (the PVC)
Backing storage (the PV)
1. Configure the volume and the volume claim (PVC)
[root@master volume]# cat pvc-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        # Mount the volume named wwwroot at nginx's html directory
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      # Define a volume named wwwroot, backed by a PVC
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc
---
# Define the PVC; it is matched to a PV by requested capacity and access mode
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Must match the claimName referenced above
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Create the Deployment and PVC:
kubectl apply -f pvc-pod.yaml
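Until a matching PV exists, the claim should sit in Pending (illustrative output; this assumes no default StorageClass in the cluster):
kubectl get pvc my-pvc
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Pending                                                     5s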
2. Define the volume (PV)
We use NFS as the backend data source.
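This assumes the NFS export already exists on 192.168.1.39. A minimal server-side setup sketch for reference (CentOS-style service name; adjust the network CIDR to your own):
mkdir -p /opt/container_data
# Export the directory to the cluster network
echo '/opt/container_data 192.168.1.0/24(rw,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -r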
[root@master volume]# cat pv-pod.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/container_data
    server: 192.168.1.39
Create the PV:
kubectl apply -f pv-pod.yaml
Now we can see that the PVC has automatically bound to the PV we just created, matched by the requested capacity and access mode:
[root@master volume]# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/my-pvc Bound my-pv 5Gi RWX 11m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/my-pv 5Gi RWX Retain Bound default/my-pvc
Test access:
Find the port exposed by the Service:
kubectl get svc
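Note that the manifests above do not define a Service, so one must already exist for this to work. A minimal NodePort Service sketch that would match the URL used below (the Service name and nodePort 30009 are assumptions inferred from the test URL):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # assumed name, not from the original manifests
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30009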
Add some test content to index.html on the NFS backend:
[root@master container_data]# cat index.html
xianrenqiu qiuqi
Then open 192.168.1.40:30009 in a browser to verify the content is served.
(2) PersistentVolumeClaim: dynamic PV provisioning
The core of the Dynamic Provisioning mechanism is the StorageClass API object.
A StorageClass declares a storage plugin (a provisioner) that is used to create PVs automatically.
Once real workloads arrive there can be a large number of PVCs, and creating a matching PV by hand for each one would be a huge amount of work; the corresponding storage needs to be attached dynamically.
For that we use a StorageClass to connect to the storage backend; it automatically matches PVCs and creates PVs for them.
The storage plugins that support dynamic provisioning in Kubernetes are listed here:
https://kubernetes.io/docs/concepts/storage/storage-classes/
Because NFS has no in-tree dynamic provisioner, we have to rely on an external storage plugin.
For the NFS dynamic-provisioning deployment, see:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy
Deployment steps
1. Define a StorageClass
[root@master storage]# cat storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
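Optionally, this provisioner accepts an archiveOnDelete parameter that controls what happens to the backing directory when its PVC is deleted; a variant sketch, following the class.yaml shipped in the external-storage repo:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  # "false": delete the data directory together with the PVC;
  # "true": keep it, renamed with an "archived-" prefix
  archiveOnDelete: "false"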
2. Deploy the RBAC authorization
Because the provisioner creates PVs through kube-apiserver, it must be authorized to do so:
[root@master storage]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
3. Deploy the service that automatically creates PVs
Here the automatic PV creation is handled by nfs-client-provisioner:
[root@master storage]# cat deployment-nfs.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      # imagePullSecrets:
      # - name: registry-pull-secret
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        #image: quay.io/external_storage/nfs-client-provisioner:latest
        image: lizhenliang/nfs-client-provisioner:v2.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          # Must match the provisioner value defined in the StorageClass
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.1.39
        - name: NFS_PATH
          value: /opt/container_data
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.39
          path: /opt/container_data
Create the objects:
kubectl apply -f storageclass-nfs.yaml
kubectl apply -f rbac.yaml
kubectl apply -f deployment-nfs.yaml
Check the newly created StorageClass:
[root@master storage]# kubectl get sc
NAME PROVISIONER AGE
managed-nfs-storage fuseim.pri/ifs 11h
nfs-client-provisioner runs as a Pod in the cluster:
[root@master storage]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-855887f688-hrdwj 1/1 Running 0 10h
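If a PVC later gets stuck in Pending, this Pod's log is the first place to look:
kubectl logs deployment/nfs-client-provisioner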
4. Deploy a stateful service to test automatic PV creation
Reference YAML: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
We deploy an nginx service whose html directory is automatically backed by a volume:
[root@master ~]# cat nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi
Create:
kubectl apply -f nginx.yaml
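Each replica should get its own PVC, named <claim-template>-<pod-name> by the StatefulSet controller (illustrative output; volume names abbreviated):
kubectl get pvc
NAME        STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-web-0   Bound    pvc-2b7c...   1Gi        RWO            managed-nfs-storage   1m
www-web-1   Bound    pvc-4f1a...   1Gi        RWO            managed-nfs-storage   1m
www-web-2   Bound    pvc-6c3d...   1Gi        RWO            managed-nfs-storage   1m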
Exec into one of the containers and create a file:
kubectl exec -it web-0 sh
# cd /usr/share/nginx/html
# touch 1.txt
Now look at the NFS data directory:
Three distinctly named volume directories have been created automatically, one per replica, named ${namespace}-${pvcName}-${pvName} by the provisioner.
Check web-0's volume for the 1.txt we just created:
[root@master container_data]# ls default-www-web-0-pvc-2b7c8ce1-13b6-11e9-b1a2-0262b716c880/
1.txt
Now delete the web-0 Pod to test whether the files in its volume disappear:
[root@master ~]# kubectl delete pod web-0
pod "web-0" deleted
The test shows that after this Pod is deleted, a new web-0 is quickly brought back up, and the data is not lost; with that we have achieved dynamic data persistence.
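Note that deleting the StatefulSet does not delete its PVCs (deliberately, to protect the data); to clean everything up, remove them explicitly:
kubectl delete -f nginx.yaml
kubectl delete pvc www-web-0 www-web-1 www-web-2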