Integrating Kubernetes with CephFS (the StorageClass approach)


A Kubernetes PV has the following three access modes (Access Mode):

  • ReadWriteOnce: can be mounted by a single node, which has read-write access to the PV
  • ReadOnlyMany: can be mounted by multiple nodes, which have read-only access to the PV
  • ReadWriteMany: can be mounted by multiple nodes, which have read-write access to the PV

Ceph RBD does not support ReadWriteMany, while CephFS does; see the official documentation: Persistent Volumes | Kubernetes.

One more point: when creating a StorageClass for Ceph RBD, Kubernetes ships a built-in provisioner, so specifying provisioner: kubernetes.io/rbd is all that is needed.

CephFS, however, has no built-in provisioner yet, so you must install cephfs-provisioner separately. The steps are as follows:

  1. Create a CephFS filesystem on the Ceph cluster

    ceph-deploy mds create ceph01
    ceph-deploy mds create ceph02
    ceph-deploy mds create ceph03
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph fs ls
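
    Before continuing, it is worth confirming that the filesystem actually
    came up (a quick sanity check; the output varies by cluster):

    # the MDS map should show one active daemon and the standbys
    ceph mds stat
    # overall cluster health
    ceph -s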
    
  2. Get the admin key

    $ ceph auth get-key client.admin | base64
    QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==
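
    The Secret in step 5 expects this base64-encoded value. A quick way to
    double-check it is to decode it again; it should print the raw key
    (shown again in step 6):

    echo 'QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==' | base64 -d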
    
  3. Install ceph-common on the Kubernetes nodes; the version must match the Ceph cluster

    rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
    sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
    yum install epel-release -y
    yum install -y ceph-common
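
    To confirm the versions line up, run the same check on a k8s node and
    on a ceph node; the two should report the same release:

    ceph --version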
    
  4. Install cephfs-provisioner

    git clone https://github.com/kubernetes-retired/external-storage.git
    cd external-storage/ceph/cephfs/deploy
    kubectl create namespace cephfs
    kubectl -n cephfs apply -f ./rbac/
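
    The deploy/rbac directory also contains the provisioner's Deployment,
    so once the apply finishes there should be a cephfs-provisioner pod in
    the namespace (a quick check; the Deployment name below assumes the
    repo's manifest defaults):

    kubectl -n cephfs get pods
    kubectl -n cephfs logs deploy/cephfs-provisioner   # watch for provisioning errors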
    
  5. Write the StorageClass YAML

    $ vi ceph-sc.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: cephfs-storageclass-secret
      namespace: cephfs
    data:
      key: QVFEMjVxVmhiVUNJRHhBQUxwdmVHbUdNTWtXZjB6VXovbWlBY3c9PQ==
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cephfs-leffss
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: ceph.com/cephfs
    parameters:
      monitors: 10.10.10.51:6789,10.10.10.52:6789,10.10.10.53:6789
      # Hostnames cannot be used here, because the cephfs-provisioner pod
      # deployed above cannot resolve them
      #monitors: ceph01:6789,ceph02:6789,ceph03:6789
      adminId: admin
      adminSecretName: cephfs-storageclass-secret
      adminSecretNamespace: cephfs
      claimRoot: /k8s-volumes
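
    Apply the manifest and confirm both objects were created:

    kubectl apply -f ceph-sc.yaml
    kubectl get storageclass cephfs-leffss
    kubectl -n cephfs get secret cephfs-storageclass-secret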
    

    Test manifests:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-pvc-test1
      namespace: default
      annotations:
        volume.beta.kubernetes.io/storage-class: cephfs-leffss
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    ---
    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod-1
    spec:
      containers:
      - name: test-pod-1
        image: hub.leffss.com/library/busybox:v1.29.2
        command:
          - "/bin/sh"
        args:
          - "-c"
          - "touch /mnt/SUCCESS-ceph-pvc-test1 && exit 0 || exit 1"
        volumeMounts:
          - name: pvc
            mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: ceph-pvc-test1
            
    
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-pvc-test2
      namespace: default
    spec:
      storageClassName: cephfs-leffss
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    ---
    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod-2
    spec:
      containers:
      - name: test-pod-2
        image: hub.leffss.com/library/busybox:v1.29.2
        command:
          - "/bin/sh"
        args:
          - "-c"
          - "touch /mnt/SUCCESS-ceph-pvc-test2 && exit 0 || exit 1"
        volumeMounts:
          - name: pvc
            mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: ceph-pvc-test2
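
    Note that the two PVCs request the StorageClass differently: the first
    uses the legacy volume.beta.kubernetes.io/storage-class annotation, the
    second the spec.storageClassName field. Assuming the two manifests were
    saved as test1.yaml and test2.yaml (filenames are mine; the post does
    not name them), apply them and check the results:

    kubectl apply -f test1.yaml -f test2.yaml
    kubectl get pvc        # both PVCs should become Bound
    kubectl get pods       # both test pods should end up Completed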
    
  6. Verify

    $ ceph auth get-key client.admin
    AQD25qVhbUCIDxAALpveGmGMMkWf0zUz/miAcw==
    
    $ mkdir /mycephfs
    $ mount -t ceph ceph01:6789,ceph02:6789,ceph03:6789:/ /mycephfs -o name=admin,secret=AQD25qVhbUCIDxAALpveGmGMMkWf0zUz/miAcw==
    
    $ tree /mycephfs
    /mycephfs
    └── k8s-volumes
        ├── kubernetes
        │   ├── kubernetes-dynamic-pvc-59ea31d4-52a4-11ec-962a-567006a2be7a
        │   │   └── SUCCESS-ceph-pvc-test1
        │   └── kubernetes-dynamic-pvc-6553ea70-52a4-11ec-962a-567006a2be7a
        │       └── SUCCESS-ceph-pvc-test2
        ├── _kubernetes:kubernetes-dynamic-pvc-59ea31d4-52a4-11ec-962a-567006a2be7a.meta
        └── _kubernetes:kubernetes-dynamic-pvc-6553ea70-52a4-11ec-962a-567006a2be7a.meta
    
    4 directories, 4 files
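
    When finished, the test mount and resources can be cleaned up:

    umount /mycephfs
    kubectl delete pod test-pod-1 test-pod-2
    kubectl delete pvc ceph-pvc-test1 ceph-pvc-test2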
    

One last note: CephFS is not particularly stable, and I would advise against using it in production. Even in test environments I have run into cases where using CephFS left the Ceph cluster in an abnormal state.

