This setup uses Kubernetes' built-in persistent volume storage mechanism: PV (PersistentVolume) and PVC (PersistentVolumeClaim). How the components relate to each other is covered below.
PV Access Modes
The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
In the CLI, the access modes are abbreviated to:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
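As a quick sketch of where these modes live (this example is not part of the original walkthrough; the NFS server and path are placeholders), access modes are declared under spec.accessModes of the PV manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv          # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany          # shown as ROX by kubectl
  nfs:
    server: 10.0.0.1        # placeholder NFS server
    path: /exports/data     # placeholder export path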
Reclaim Policy
Current reclaim policies are:
Retain – manual reclamation
Recycle – basic scrub ("rm -rf /thevolume/*")
Delete – associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
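The policy is set via spec.persistentVolumeReclaimPolicy on the PV. It can also be changed on a live PV with kubectl patch; for example, using the cephfs PV created later in this walkthrough:

# kubectl patch pv cephfs -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'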
A PV is a stateful resource object. It can be in one of the following states:
1. Available: the volume is free and not yet bound to a claim
2. Bound: the volume is bound to a PVC
3. Released: the bound PVC has been deleted, but the resource has not yet been reclaimed
4. Failed: automatic reclamation of the volume failed
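The current state appears in the STATUS column of kubectl get pv; adding -w watches for transitions, which is handy while following the steps below:

# kubectl get pv -w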
1. Create the Secret. In a Secret, every value under the data field must be base64-encoded.
#echo "AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw==" | base64 QVFEY2hYaFlUdGp3SEJBQWsyL0gxWXBhMjNXeEt2NGpBMU5GV3c9PQo= #vim ceph-secret.yaml apiVersion: v1 kind: Secret metadata: name: ceph-secret data: key: QVFEY2hYaFlUdGp3SEJBQWsyL0gxWXBhMjNXeEt2NGpBMU5GV3c9PQo= |
2. Create the PV. A PV must be network storage: it does not belong to any node, but can be accessed from every node. A PV is not defined on a Pod; it is defined independently, outside of any Pod. Currently, storage capacity is the only resource that can be set on a PV.
# vim ceph-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 172.16.100.5:6789
      - 172.16.100.6:6789
      - 172.16.100.7:6789
    path: /opt/eshop_dir/eshop
    user: admin
    secretRef:
      name: ceph-secret
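Apply the manifest and check that the new PV starts out in the Available state:

# kubectl create -f ceph-pv.yml
# kubectl get pv cephfs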
3. Create the PVC.
# vim ceph-pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
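Apply the claim. The controller binds a PVC to any PV that satisfies its requested access modes and capacity, which is why the 8Gi claim above ends up bound to the 10Gi cephfs PV in the next step's output:

# kubectl create -f ceph-pvc.yml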
4. Check the PV and PVC.
# kubectl get pv
cephfs    10Gi    RWX    Retain    Bound    default/cephfs    2d

# kubectl get pvc
cephfs    Bound    cephfs    10Gi    RWX    2d
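For more detail on the binding (including events, which help when a claim stays Pending), describe either object:

# kubectl describe pv cephfs
# kubectl describe pvc cephfs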
5. Create an RC; this is only a test example.
# vim ceph-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cephfstest
  labels:
    name: cephfstest
spec:
  replicas: 4
  selector:
    name: cephfstest
  template:
    metadata:
      labels:
        name: cephfstest
    spec:
      containers:
      - name: cephfstest
        image: 172.60.0.107/pingpw/nginx-php:v4
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 81
        volumeMounts:
        - name: cephfs
          mountPath: "/opt/cephfstest"
      volumes:
      - name: cephfs
        persistentVolumeClaim:
          claimName: cephfs
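Launch the RC and wait for all four replicas to come up:

# kubectl create -f ceph-rc.yml
# kubectl get rc cephfstest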
6. Check the Pods.
# kubectl get pod -o wide
cephfstest-09j37    1/1    Running    0    2d    10.244.5.16    kuber-node03
cephfstest-216r6    1/1    Running    0    2d    10.244.3.25    kuber-node01
cephfstest-4sjgr    1/1    Running    0    2d    10.244.4.26    kuber-node02
cephfstest-p2x7c    1/1    Running    0    2d    10.244.6.22    kuber-node04
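To confirm the CephFS volume really is shared read-write across nodes, write a file from one pod and list it from another (pod names taken from the output above):

# kubectl exec cephfstest-09j37 -- touch /opt/cephfstest/hello.txt
# kubectl exec cephfstest-216r6 -- ls /opt/cephfstest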