A pod is stateless by itself, so stateful applications need some way to persist their data.
1: Mount the data onto the host machine (a hostPath volume). But after a restart the pod may be scheduled onto a different node; the data is not lost, but the pod may no longer be able to find it:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - hostPath:
      path: /tmp/data
    name: data
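Because of the rescheduling problem above, hostPath volumes are usually paired with some form of node pinning. A minimal sketch using a nodeSelector, assuming the data-bearing node carries a hypothetical disk=data label (applied with kubectl label node <node> disk=data):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pinned          # hypothetical name for this sketch
spec:
  nodeSelector:
    disk: data                  # assumed label on the node holding /tmp/data
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - name: data
    hostPath:
      path: /tmp/data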
2: Mount external storage, such as NFS:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: web01
  template:
    metadata:
      name: nginx
      labels:
        app: web01
    spec:
      containers:
      - name: nginx
        image: reg.docker.tb/harbor/nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          readOnly: false
          name: nginx-data
      volumes:
      - name: nginx-data
        nfs:
          server: 10.0.10.31
          path: "/data/www-data"
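Note that the kubelet performs the NFS mount on the node itself, so every node that can run these pods needs an NFS client installed. The package name depends on the distribution (the distributions below are assumptions, not from the original setup):

yum install -y nfs-utils        # CentOS/RHEL nodes
apt-get install -y nfs-common   # Debian/Ubuntu nodes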
The above are the simple approaches, where the concrete storage is defined directly in the deployment. This raises several problems:
1: Access control: any pod can mount and modify any path.
2: Size limits: there is no way to cap how much a given storage block can use.
3: If the NFS server address changes, every manifest that references it has to be updated.
To solve these problems, Kubernetes introduced the PV/PVC model.
First create a PersistentVolume (PV). A PV does not belong to any namespace, and it can declare a capacity limit and access modes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    app: "my-nfs"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk1"
    server: 192.168.20.47
    readOnly: false
Then create a PVC in the corresponding namespace:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: "my-nfs"
Then create the PV and PVC with kubectl apply.
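A minimal sketch, assuming the PV and PVC manifests above are saved as pv.yaml and pvc.yaml (hypothetical filenames):

kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
# The PVC binds to a PV whose labels match its selector and whose
# capacity satisfies the request; both should report STATUS "Bound".
kubectl get pv pv0001
kubectl get pvc nfs-pvc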
Finally, reference the PVC in the application:
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pvc
  labels:
    name: test-nfs-pvc
spec:
  containers:
  - name: test-nfs-pvc
    image: registry:5000/back_demon:1.0
    ports:
    - name: backdemon
      containerPort: 80
    command:
    - /run.sh
    volumeMounts:
    - name: nfs-vol
      mountPath: /home/laizy/test/nfs-pvc
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
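One way to sanity-check that the data actually survives a pod restart (assuming the manifest above is saved as pod.yaml, a hypothetical filename):

kubectl apply -f pod.yaml
kubectl exec test-nfs-pvc -- sh -c 'echo hello > /home/laizy/test/nfs-pvc/probe.txt'
kubectl delete pod test-nfs-pvc
kubectl apply -f pod.yaml
# The file written before the delete is still there:
kubectl exec test-nfs-pvc -- cat /home/laizy/test/nfs-pvc/probe.txt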
This makes it easy to confine each PVC to its own subdirectory, and if the NFS server is ever migrated, only the server address in the PV needs to be updated.
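To make the one-subdirectory-per-PV idea concrete: a second export on the same server is simply another PV with its own label, which a different PVC can then select. The name, label, and path below are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002                # hypothetical second volume
  labels:
    app: "other-app"          # different label, selected by a different PVC
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk2"       # a second subdirectory on the same NFS server
    server: 192.168.20.47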
