Using Ceph Storage with Kubernetes


Overview of PV and PVC

Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, two new API resources are introduced: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins, like Volumes, but they have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
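
For orientation, a statically provisioned PV might look like the following minimal sketch (the NFS server address and export path are placeholders, not part of this article's setup):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-example
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                      # could equally be rbd, cephfs, iscsi, ...
    server: 192.168.0.100   # placeholder NFS server
    path: /exports/data     # placeholder export path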

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources, while PVCs consume storage resources. Pods can request specific levels of resources (CPU and memory); claims can request specific sizes and access modes.
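
A claim that could bind to a PV like the one above, again only a sketch:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi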

While PersistentVolumeClaims allow users to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs there is the StorageClass resource.

A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called a "profile" in other storage systems.

Dynamic Provisioning for Pods

Dynamic provisioning automatically creates PVs for you: however much space is requested, a PV of that size is created. Kubernetes creates the PV itself; when you create a PVC, the API calls the storage class to provision a matching PV.

With static provisioning we have to create PVs by hand, and if there is no suitable PV with enough capacity, the pod stays in the Pending state. Dynamic provisioning is implemented mainly through the StorageClass object: it declares which storage backend to use, takes care of connecting to it, and creates the PV for you automatically.
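
Optionally, one StorageClass can be marked as the cluster default, so that PVCs which omit storageClassName still get a dynamically provisioned PV; a sketch using the class created later in this article:

# kubectl patch storageclass dynamic-ceph-rdb \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'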

Using RBD as a Persistent Data Volume for Pods
Installation and Configuration

RBD supports two access modes: ReadWriteOnce and ReadOnlyMany.

1. Deploy the rbd-provisioner

GitHub repository: https://github.com/kubernetes-incubator/external-storage

# git clone https://github.com/kubernetes-incubator/external-storage.git
# cd external-storage/ceph/rbd/deploy
# NAMESPACE=kube-system
# sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/clusterrolebinding.yaml ./rbac/rolebinding.yaml
# kubectl -n $NAMESPACE apply -f ./rbac

# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-575bd6d498-n995v           1/1     Running   1          13d
kube-flannel-ds-amd64-gplmm        1/1     Running   1          13d
kube-flannel-ds-amd64-jrrb9        1/1     Running   1          13d
kube-flannel-ds-amd64-ttcx4        1/1     Running   1          13d
rbd-provisioner-75b85f85bd-vr7t5   1/1     Running   0          72s

2. Configure the Ceph user for Kubernetes access

1. When creating a pod, kubelet uses the rbd command to map and mount the Ceph image backing the PV, so the Ceph client (ceph-common) must be installed on every worker node.
Copy Ceph's ceph.client.admin.keyring and ceph.conf files to /etc/ceph on the master node.
yum -y install ceph-common
2. Create an OSD pool (on a Ceph mon or admin node):
ceph osd pool create kube 128 128 
ceph osd pool ls
3. Create the Ceph user that Kubernetes will use (on a Ceph mon or admin node):
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
4. View the keys (on a Ceph mon or admin node):
[root@ceph-node01 my-cluster]# ceph auth get-key client.admin
AQCBrJ9eV/U5NBAAoDlM4gV3a+KNQDBOUqVxdw==
[root@ceph-node01 my-cluster]# ceph auth get-key client.kube
AQCZ96BeUgPkDhAAhxbWarZh9kTx2QbFCDM/rA==

5. Create the admin secret:
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQCBrJ9eV/U5NBAAoDlM4gV3a+KNQDBOUqVxdw== \
--namespace=kube-system
# kubectl get secret -n kube-system |grep ceph
ceph-secret                   kubernetes.io/rbd                     1      91s
6. In the default namespace, create the secret that PVCs will use to access Ceph:
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQCZ96BeUgPkDhAAhxbWarZh9kTx2QbFCDM/rA== \
--namespace=default
# kubectl get secret
NAME                  TYPE                                  DATA   AGE
ceph-user-secret      kubernetes.io/rbd                     1      114s
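
If the host running kubectl also has the Ceph admin keyring available (an assumption, not a requirement of the steps above), both secrets can be created without copy-pasting keys:

# kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)" --namespace=kube-system
# kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.kube)" --namespace=default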

3. Configure the StorageClass

# vim storageclass-ceph-rdb.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-ceph-rdb
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.0.246:6789,192.168.0.247:6789,192.168.0.248:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"

[root@k8s-master yaml]# kubectl apply -f storageclass-ceph-rdb.yaml
storageclass.storage.k8s.io/dynamic-ceph-rdb created
[root@k8s-master yaml]# kubectl get sc
NAME               PROVISIONER    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
dynamic-ceph-rdb   ceph.com/rbd   Delete          Immediate           false                  12s
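
To double-check the class parameters and the provisioner itself (assuming the Deployment is named rbd-provisioner, as the pod name above suggests):

# kubectl describe sc dynamic-ceph-rdb
# kubectl logs -n kube-system deploy/rbd-provisioner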

Create a test PVC

# cat >ceph-rdb-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rdb-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-ceph-rdb
  resources:
    requests:
      storage: 2Gi
EOF
# kubectl apply -f ceph-rdb-pvc-test.yaml
# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS       REASON   AGE
persistentvolume/pvc-908ec99d-5029-4c62-952f-016ca11ab08c   2Gi        RWO            Delete           Bound    default/ceph-rdb-claim   dynamic-ceph-rdb            8s

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
persistentvolumeclaim/ceph-rdb-claim   Bound    pvc-908ec99d-5029-4c62-952f-016ca11ab08c   2Gi        RWO            dynamic-ceph-rdb   9s
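
On a Ceph mon or admin node you can confirm that the provisioner created an RBD image in the kube pool to back this PV:

# rbd ls -p kube    # should list one newly created image for the PV above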

Create an nginx pod to test the mount

# cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: ceph-rdb
      mountPath: /usr/share/nginx/html
  volumes:
  - name: ceph-rdb
    persistentVolumeClaim:
      claimName: ceph-rdb-claim
EOF
# kubectl apply -f nginx-pod.yaml

[root@k8s-master yaml]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
nginx-pod1   1/1     Running   0          43s   10.244.2.3   k8s-master1   <none>           <none>
[root@k8s-master yaml]# kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo this is from Ceph RBD!!! > /usr/share/nginx/html/index.html'
[root@k8s-master yaml]# curl 10.244.2.3
this is from Ceph RBD!!!

Cleanup

kubectl delete -f nginx-pod.yaml
kubectl delete -f ceph-rdb-pvc-test.yaml
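
Since the StorageClass reclaim policy is Delete, removing the PVC should also remove the bound PV and the underlying RBD image; as a quick check:

# kubectl get pv          # the pvc-... volume should be gone
# rbd ls -p kube          # on a Ceph node; the backing image should be gone as well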

Using CephFS as a Persistent Data Volume for Pods

CephFS supports all three Kubernetes PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

Create CephFS pools on the Ceph side

1. Run the following on a Ceph mon or admin node. CephFS needs two pools, one for data and one for metadata:
ceph osd pool create fs_data 128
ceph osd pool create fs_metadata 128
ceph osd lspools

2. Create a CephFS filesystem:
ceph fs new cephfs fs_metadata fs_data

3. Verify:
# ceph fs ls
name: cephfs, metadata pool: fs_metadata, data pools: [fs_data ]

Deploy the cephfs-provisioner

Use the community-provided cephfs-provisioner.

# cat >external-storage-cephfs-provisioner.yaml<<EOF
apiVersion: v1
kind: Namespace
metadata:
   name: cephfs
   labels:
     name: cephfs
 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
 
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["policy"]
    resourceNames: ["cephfs-provisioner"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: kube-system
 
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cephfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation=true"
      serviceAccount: cephfs-provisioner
EOF
# kubectl apply -f external-storage-cephfs-provisioner.yaml
# kubectl get pod -n kube-system |grep cephfs
cephfs-provisioner-847468fc-5k8vx   1/1     Running   0          7m39s
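
If the pod does not reach Running, the provisioner logs usually explain why (assuming the Deployment name used above):

# kubectl logs -n kube-system deploy/cephfs-provisioner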

Configure the secret

View the key (on a Ceph mon or admin node):
ceph auth get-key client.admin

Create the admin secret (skip this step if ceph-secret was already created in the RBD section above):
# kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQCBrJ9eV/U5NBAAoDlM4gV3a+KNQDBOUqVxdw== \
--namespace=kube-system

Verify:
# kubectl get secret ceph-secret -n kube-system -o yaml
apiVersion: v1
data:
  key: QVFDQnJKOWVWL1U1TkJBQW9EbE00Z1YzYStLTlFEQk9VcVZ4ZHc9PQ==
kind: Secret
metadata:
  creationTimestamp: "2020-06-08T08:17:09Z"
  name: ceph-secret
  namespace: kube-system
  resourceVersion: "42732"
  selfLink: /api/v1/namespaces/kube-system/secrets/ceph-secret
  uid: efec109a-17de-4f72-afd4-d126f4d4f8d6
type: kubernetes.io/rbd
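
The key stored in the secret is base64-encoded; it can be decoded to confirm it matches the output of ceph auth get-key:

# kubectl get secret ceph-secret -n kube-system -o jsonpath='{.data.key}' | base64 -d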

Configure the StorageClass

# vim storageclass-cephfs.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 192.168.0.246:6789,192.168.0.247:6789,192.168.0.248:6789
    adminId: admin
    adminSecretName: ceph-secret
    adminSecretNamespace: "kube-system"
    claimRoot: /volumes/kubernetes

[root@k8s-master yaml]# kubectl apply -f storageclass-cephfs.yaml
storageclass.storage.k8s.io/dynamic-cephfs created
[root@k8s-master yaml]# kubectl get sc
NAME               PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
dynamic-ceph-rdb   ceph.com/rbd      Delete          Immediate           false                  59m
dynamic-cephfs     ceph.com/cephfs   Delete          Immediate           false                  5s

Create a test PVC

# cat >cephfs-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi
EOF
# kubectl apply -f cephfs-pvc-test.yaml 
persistentvolumeclaim/cephfs-claim created
[root@k8s-master yaml]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS     REASON   AGE
persistentvolume/pvc-df8768d7-5111-4e14-a0cf-dd029b00469b   2Gi        RWO            Delete           Bound    default/cephfs-claim   dynamic-cephfs            12s

NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/cephfs-claim   Bound    pvc-df8768d7-5111-4e14-a0cf-dd029b00469b   2Gi        RWO            dynamic-cephfs   13s
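
The generated PV records the CephFS monitors and the path under claimRoot that was carved out for this claim; it can be inspected with:

# kubectl describe pv pvc-df8768d7-5111-4e14-a0cf-dd029b00469b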

Create an nginx pod to test the mount

# cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod2
  labels:
    name: nginx-pod2
spec:
  containers:
  - name: nginx-pod2
    image: nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: cephfs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cephfs
    persistentVolumeClaim:
      claimName: cephfs-claim
EOF
# kubectl apply -f nginx-pod.yaml
[root@k8s-master yaml]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
nginx-pod2   1/1     Running   0          37s   10.244.2.5   k8s-master1   <none>           <none>
[root@k8s-master yaml]# kubectl exec -ti nginx-pod2 -- /bin/sh -c 'echo This is from CephFS!!! > /usr/share/nginx/html/index.html'
[root@k8s-master yaml]# curl 10.244.2.5
This is from CephFS!!!
### View mount information
[root@k8s-master yaml]# kubectl exec -ti nginx-pod2 -- /bin/sh -c 'df -h'
Filesystem                                                                                                                                           Size  Used Avail Use% Mounted on
overlay                                                                                                                                               76G  4.1G   72G   6% /
tmpfs                                                                                                                                                 64M     0   64M   0% /dev
tmpfs                                                                                                                                                1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/sda3                                                                                                                                             76G  4.1G   72G   6% /etc/hosts
shm                                                                                                                                                   64M     0   64M   0% /dev/shm
192.168.0.246:6789,192.168.0.247:6789,192.168.0.248:6789:/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-e957596e-a969-11ea-8966-cec7bd638523   85G     0   85G   0% /usr/share/nginx/html
tmpfs                                                                                                                                                1.4G   12K  1.4G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                                                                1.4G     0  1.4G   0% /proc/acpi
tmpfs                                                                                                                                                1.4G     0  1.4G   0% /proc/scsi
tmpfs                                                                                                                                                1.4G     0  1.4G   0% /sys/firmware
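
Because CephFS supports ReadWriteMany, the same volume can be mounted by several pods at once. A minimal sketch with hypothetical names (note the PVC requests ReadWriteMany, unlike the RWO claim tested above):

# cat >cephfs-rwx-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-rwx-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-shared
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-shared
  template:
    metadata:
      labels:
        app: nginx-shared
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: cephfs
          mountPath: /usr/share/nginx/html
      volumes:
      - name: cephfs
        persistentVolumeClaim:
          claimName: cephfs-rwx-claim
EOF
# kubectl apply -f cephfs-rwx-test.yaml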

