Kubernetes ---- Storage Volumes (emptyDir, hostPath, NFS)


Storage Volumes

  Pods have a lifecycle: when a Pod fails or is terminated, its data ends together with it.
  In a K8s cluster we should therefore use storage that is detached from individual nodes, i.e. shared storage.
  If data were persisted the way plain Docker does it (a bind mount on one host), a rebuilt Pod could never move to another node, because the mounted directory would no longer be reachable from there.

Available volume types
  1. emptyDir: a temporary directory that is deleted together with the Pod.
  2. hostPath: a node path; a directory on the host is associated with the Pod.
Network storage volumes:
  Storage devices detached from the local node:
  1. SAN: iSCSI, ...
  2. NAS: NFS, CIFS
Distributed storage:
  1. GlusterFS
  2. CephFS: Ceph file system storage
  3. RBD: Ceph block storage
Cloud storage:
  1. EBS (AWS): Elastic Block Store
  2. Azure Disk (Microsoft)

# List the volume types supported by k8s
$ kubectl explain pods.spec.volumes

pods.spec.volumes.emptyDir <Object>
  An emptyDir volume is created when the Pod is assigned to a node: Kubernetes automatically allocates a directory on that node, so no host directory needs to be specified. The directory starts out empty, and when the Pod is removed from the node, the data in the emptyDir is deleted permanently.

# The following demonstrates that containers in the same Pod can share storage: the mount paths differ per container, but data created under either path appears in both, because they are backed by the same volume.

$ vim pod-vol-demo.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-vol-demo
    namespace: default
  spec:
    containers:
    - name: myapp
      image: ikubernetes/myapp:v1
      imagePullPolicy: IfNotPresent
      ports:
      - name: http
        containerPort: 80
      volumeMounts:
      - name: disk1
        mountPath: /usr/share/nginx/html/
    - name: busybox
      image: busybox:latest
      imagePullPolicy: IfNotPresent
      volumeMounts:
      - name: disk1
        mountPath: /data/
      command: ["/bin/sh","-c","while true;do echo $(date) >> /data/index.html;sleep 2;done"]
    volumes:
    - name: disk1
      emptyDir: {}
$ kubectl apply -f pod-vol-demo.yaml
$ kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
pod-vol-demo   2/2     Running   0          43m   10.244.2.124   node2   <none>           <none>

$ curl 10.244.2.124
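To confirm that the two containers really share one volume, you can write a file through one container and read it back through the other (container names as defined in the manifest above):

```shell
# write through the busybox container's mount point /data/ ...
kubectl exec pod-vol-demo -c busybox -- sh -c 'echo shared-test > /data/test.html'
# ... and read the same file through the myapp container's mount point
kubectl exec pod-vol-demo -c myapp -- cat /usr/share/nginx/html/test.html
```

Both paths resolve to the same emptyDir directory on the node, so the second command prints the content written by the first.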

pods.spec.volumes.hostPath <Object>  host path
  Associates a directory in the host's filesystem with the Pod. The directory is not deleted when the Pod is deleted, so if the Pod is later rescheduled onto the same node, the data is still there. Whether a missing host directory is created automatically depends on the type field. If the node itself goes down, however, the data is still lost.

path <string> -required-
type <string>

  "" (empty, default)   For backward compatibility; no checks are performed before mounting the hostPath volume.
  DirectoryOrCreate     If nothing exists at the given path, an empty directory is created as needed, with permissions 0755 and the same group and ownership as the kubelet.
  Directory             A directory must exist at the given path.
  FileOrCreate          If nothing exists at the given path, an empty file is created as needed, with permissions 0644 and the same group and ownership as the kubelet.
  File                  A file must exist at the given path.
  Socket                A UNIX socket must exist at the given path.
  CharDevice            A character device must exist at the given path.
  BlockDevice           A block device must exist at the given path.

$ vim pod-hostpath.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-hostpath
    namespace: default
  spec:
    containers:
    - name: hostpath-container
      image: ikubernetes/myapp:v1
      imagePullPolicy: IfNotPresent
      ports:
      - name: http
        containerPort: 80
      volumeMounts:
      - name: disk2
        mountPath: /usr/share/nginx/html/
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh","-c","echo $(hostname) > /usr/share/nginx/html/index.html"]
    volumes:
    - name: disk2
      hostPath:
        path: /data/pod/volume/pod-hostpath
        type: DirectoryOrCreate

$ kubectl apply -f pod-hostpath.yaml
$ kubectl get pods -o wide 
NAME           READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
pod-hostpath   1/1     Running   0          4m34s   10.244.2.127   node2   <none>           <none>

$ curl 10.244.2.127
pod-hostpath
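Because type: DirectoryOrCreate was specified, the kubelet created the directory on the node when it did not exist. This can be checked directly on node2 (the node the Pod landed on, per the output above):

```shell
# on node2: the directory was auto-created (mode 0755) by the kubelet
ls -ld /data/pod/volume/pod-hostpath
# it contains the index.html written by the postStart hook
cat /data/pod/volume/pod-hostpath/index.html
```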

Configuring NFS shared storage for k8s:

1. Install the NFS service on a designated server to provide shared storage

nfs node:

  # yum -y install nfs-utils
  # mkdir /data/volumes -pv
  # vim /etc/exports
  /data/volumes 192.168.222.0/24(rw,no_root_squash)
  # systemctl start nfs
  # ss -nlt | grep 2049
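Before testing from the k8s nodes, it is worth verifying the export on the NFS server itself with the standard nfs-utils tools:

```shell
# re-export everything listed in /etc/exports and show the result
exportfs -rav
# list the exports as a client would see them
showmount -e localhost
```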


2. Manually test whether the k8s nodes can mount an NFS filesystem; if not, fix the nodes before continuing.
node2:

  # mount -t nfs 192.168.222.103:/data/volumes/ /mnt

  mount: wrong fs type, bad option, bad superblock on 192.168.222.103:/data/volumes,
  missing codepage or helper program, or other error
  (for several filesystems (e.g. nfs, cifs) you might
  need a /sbin/mount.<type> helper program)

  In some cases useful info is found in syslog - try
  dmesg | tail or so.
  # yum -y install nfs-utils
  # mount -t nfs 192.168.222.103:/data/volumes /mnt
  # mount | grep 192.168.222.103
  # umount /mnt

3. Define a Pod to test the NFS volume

$ vim pod-nfs-demo.yaml

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-nfs-vol
    namespace: default
  spec:
    containers:
    - name: nfs-container
      image: ikubernetes/myapp:v1
      imagePullPolicy: IfNotPresent
      ports:
      - name: http
        containerPort: 80
      volumeMounts:
      - name: nfs-disk
        mountPath: /usr/share/nginx/html/
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh","-c","echo $(hostname) > /usr/share/nginx/html/index.html"]
    volumes:
    - name: nfs-disk
      nfs:
        path: /data/volumes/
        server: 192.168.222.103

$ kubectl apply -f pod-nfs-demo.yaml


$ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
pod-nfs-vol   1/1     Running   0          4m41s   10.244.2.129   node2   <none>           <none>

$ curl 10.244.2.129
pod-nfs-vol
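Unlike emptyDir, the data lives on the NFS server, so it survives Pod deletion. A quick sketch of that check (Pod and file names follow the manifest above; keep.html is an illustrative file name):

```shell
# write an extra file into the NFS-backed docroot
kubectl exec pod-nfs-vol -- sh -c 'echo persisted > /usr/share/nginx/html/keep.html'
# delete and recreate the Pod
kubectl delete -f pod-nfs-demo.yaml
kubectl apply -f pod-nfs-demo.yaml
# once the new Pod is Running, the file is still served, because it was
# never stored on the node -- it lives on 192.168.222.103
kubectl get pods -o wide   # note the new Pod IP, then: curl <ip>/keep.html
```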

3.2 Define a Deployment whose multiple Pods all mount the same NFS export
$ vim deploy-nfs-demo.yaml

apiVersion: v1
kind: Service
metadata:
  name: nfs-svc
  namespace: default
spec:
  selector:
    app: nfs-pod
    release: dev
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30180
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nfs-pod
      release: dev
  template:
    metadata:
      name: nfs-pod
      labels:
        app: nfs-pod
        release: dev
    spec:
      containers:
      - name: nfs-container
        image: ikubernetes/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: nfs-disk
          mountPath: /usr/share/nginx/html/
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh","-c","echo $(hostname) > /usr/share/nginx/html/index.html"]
        livenessProbe:
          exec:
            # bracket pattern so grep does not match its own process
            command: ["/bin/sh","-c","ps aux | grep '[n]ginx'"]
          initialDelaySeconds: 2
          periodSeconds: 3
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 2
          periodSeconds: 3
      volumes:
      - name: nfs-disk
        nfs:
          path: /data/volumes/
          server: 192.168.222.103

$ kubectl apply -f deploy-nfs-demo.yaml
$ kubectl get svc -o wide
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
nfs-svc   NodePort   10.101.149.48   <none>        80:30180/TCP   12m   app=nfs-pod,release=dev


From a client, access "http://192.168.222.101:30180"

4. Log in to the nfs node and check that the file exists in the exported directory

# cat /data/volumes/index.html
pod-nfs-vol
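All three replicas mount the same export, so they serve identical content; note that each Pod's postStart hook overwrites the shared index.html, so it ends up holding the hostname of whichever Pod started last. Repeated requests through the NodePort therefore return the same body no matter which replica answers:

```shell
# the Service load-balances across the replicas,
# but the backing file on the NFS server is shared by all of them
for i in 1 2 3; do curl -s http://192.168.222.101:30180; done
```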
