Connecting Ceph to Kubernetes StorageClasses


Introduction

Connect Ceph RBD and CephFS to Kubernetes to provide persistent storage.

Environment

Hostname    IP             Role       OS
ceph-01     172.16.31.11   mon, osd   CentOS 7.8
ceph-02     172.16.31.12   osd        CentOS 7.8
ceph-03     172.16.31.13   osd        CentOS 7.8

Architecture

(Architecture diagram from the official Ceph documentation; image not reproduced here.)

Steps

Install Ceph

Set the hostnames

## ceph-01
hostnamectl set-hostname ceph-01
## ceph-02
hostnamectl set-hostname ceph-02
## ceph-03
hostnamectl set-hostname ceph-03

Add host mappings

cat << EOF >> /etc/hosts
172.16.31.11 ceph-01
172.16.31.12 ceph-02
172.16.31.13 ceph-03
EOF

Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
iptables -F && iptables -X && iptables -Z

Time synchronization

yum install -y ntpdate
ntpdate ntp.aliyun.com    ## or any other reachable NTP server

Passwordless SSH access

## run on the ceph-01 node
ssh-keygen 

ssh-copy-id ceph-01

ssh-copy-id ceph-02

ssh-copy-id ceph-03

Prepare the repositories

yum install epel-release -y
cat << EOF > /etc/yum.repos.d/ceph-deploy.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF

Users in mainland China can use the Aliyun mirrors instead:

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat << EOF > /etc/yum.repos.d/ceph-deploy.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=0
EOF

Install ceph-deploy

On the ceph-01 node:

yum install ceph-deploy yum-plugin-priorities python2-pip bash-completion -y

Install on the other nodes:

yum install yum-plugin-priorities python2-pip bash-completion -y

Create a working directory for the cluster configuration

On the ceph-01 node:

mkdir ceph-cluster
cd ceph-cluster

Initialize the Ceph cluster

ceph-deploy new ceph-01

(Optional) Separate the networks

If each node has two NICs, the public (client/management) network can be separated from the cluster (replication) network by adding the following to ceph.conf:

public_network = 172.16.0.0/16
cluster_network = 192.168.31.0/24
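
For reference, a minimal sketch of what the generated ceph.conf in the ceph-cluster directory might look like after the change; the fsid is a placeholder created by ceph-deploy new, and the two subnets are taken from this example environment:

[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = ceph-01
mon_host = 172.16.31.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
## client/management traffic
public_network = 172.16.0.0/16
## replication traffic
cluster_network = 192.168.31.0/24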

Install the Ceph packages

ceph-deploy install ceph-01 ceph-02 ceph-03

For faster downloads in mainland China, point the install at the Aliyun mirror. First add this repository on all nodes:

cat << EOF > /etc/yum.repos.d/ceph-luminous.repo
[ceph]
name=Ceph packages for x86_64
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64
enabled=1
gpgcheck=0
EOF

Then run:

ceph-deploy install ceph-01 ceph-02 ceph-03 --no-adjust-repos

Create the monitor

ceph-deploy mon create-initial

This generates the *.keyring keyrings in the working directory.

Copy the config file and admin keyring to each node:

ceph-deploy admin ceph-01 ceph-02 ceph-03 

Deploy the mgr

ceph-deploy mgr create ceph-01

mgr is a new daemon introduced in Ceph 12.x (Luminous).

Deploy the OSDs

ceph-deploy osd create --data /dev/sdb ceph-01
ceph-deploy osd create --data /dev/sdb ceph-02
ceph-deploy osd create --data /dev/sdb ceph-03

Check the cluster status

ceph health
ceph -s

Test

Create a pool

ceph osd pool create test 8 8
## upload an object to the pool
echo `date` > date.txt
rados put test-object-1 date.txt --pool=test
## list the pool and check the object mapping
rados -p test ls
ceph osd map test test-object-1
## delete the object and the pool
rados rm test-object-1 --pool=test
ceph osd pool rm test test --yes-i-really-really-mean-it

If the pool cannot be deleted, add this setting to the [global] section of ceph.conf:

mon_allow_pool_delete = true

Then push the config to the nodes and restart the mon:

ceph-deploy --overwrite-conf admin ceph-01 ceph-02 ceph-03
systemctl restart ceph-mon@ceph-01.service
## then run the delete again
ceph osd pool rm test test --yes-i-really-really-mean-it

Connecting Ceph RBD to Kubernetes

Reference: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/#ceph-rbd

Create a pool

ceph osd pool create kube-pool 64 64

Import the admin keyring

Get the admin key:

ceph auth get-key client.admin

Replace the key value below with the output of the previous step:

kubectl create secret generic ceph-secret  -n kube-system \
  --type="kubernetes.io/rbd" \
  --from-literal=key='AQDYuPZfdjykCxAAXApI8weHFiZdEPcoc8EaRA=='
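
Alternatively, if kubectl and the ceph CLI are available on the same host, the key can be substituted in directly instead of pasting it by hand (a convenience sketch, not part of the original steps):

kubectl create secret generic ceph-secret -n kube-system \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)"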

Create the user secret

ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube-pool'
ceph auth get-key client.kube
kubectl create secret generic ceph-secret-user -n kube-system  --from-literal=key='AQAH2vZfe8wWIhAA0w81hjSAoqmjayS5SmWuVQ=='  --type=kubernetes.io/rbd

Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.16.31.11:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube-pool
  userId: kube
  userSecretName: ceph-secret-user
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
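
Save the manifest and apply it; the file name ceph-rbd-sc.yaml below is only an example:

kubectl apply -f ceph-rbd-sc.yaml
kubectl get storageclass ceph-rbd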

Install ceph-common on the worker nodes

cat << EOF > /etc/yum.repos.d/ceph-luminous.repo
[ceph]
name=Ceph packages for x86_64
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64
enabled=1
gpgcheck=0
EOF
yum install -y ceph-common

Create a PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-1
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
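
Apply the claim (rbd-pvc.yaml is an example file name) and check that it binds; the in-tree RBD provisioner should create a matching PV automatically:

kubectl apply -f rbd-pvc.yaml
kubectl get pvc rbd-1    ## STATUS should become Bound
kubectl get pv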

Create a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-rbd
  name: test-rbd
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-rbd
  template:
    metadata:
      labels:
        app: test-rbd
    spec:
      containers:
      - image: zerchin/network
        imagePullPolicy: IfNotPresent
        name: test-rbd
        volumeMounts:
        - mountPath: /data
          name: rbd
      volumes:
      - name: rbd
        persistentVolumeClaim:
          claimName: rbd-1
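
To verify the mount end to end (assuming the manifest is saved as test-rbd.yaml), apply the Deployment and write a file into the RBD-backed volume:

kubectl apply -f test-rbd.yaml
kubectl get pods -l app=test-rbd
POD=$(kubectl get pod -l app=test-rbd -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD -- sh -c 'echo hello > /data/hello && cat /data/hello'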

Common problems

Problem 1: error because the rbd kernel module is not loaded
MountVolume.WaitForAttach failed for volume "pvc-8d8a8ed9-bcdb-4de8-a725-9121fcb89c84" : rbd: map failed exit status 2, rbd output: libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep file '/lib/modules/4.4.247-1.el7.elrepo.x86_64/modules.dep.bin' modinfo: ERROR: Module alias rbd not found. modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.247-1.el7.elrepo.x86_64/modules.dep.bin' modprobe: FATAL: Module rbd not found in directory /lib/modules/4.4.247-1.el7.elrepo.x86_64 rbd: failed to load rbd kernel module (1) rbd: sysfs write failed In some cases useful info is found in syslog - try "dmesg | tail". rbd: map failed: (2) No such file or directory

Cause

The rbd kernel module is not loaded; load it on every worker node.

Solution

modprobe rbd
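
To keep the module loaded across reboots, it can also be registered with systemd-modules-load (a common convention, not from the original article):

echo rbd > /etc/modules-load.d/rbd.conf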

Reference: https://forums.cnrancher.com/q_445.html

Problem 2: mount failure
MountVolume.WaitForAttach failed for volume "pvc-aa0d2e46-3df3-4c70-a318-ad95d4d0810a" : rbd: map failed exit status 110, rbd output: rbd: sysfs write failed In some cases useful info is found in syslog - try "dmesg | tail". rbd: map failed: (110) Connection timed out

Solution

ceph osd crush tunables hammer

Reference: https://github.com/rancher/rancher/issues/13198#issuecomment-391920740

Problem 3: ceph HEALTH_WARN
HEALTH_WARN application not enabled on 1 pool(s)

Solution

ceph health detail
ceph osd pool application enable kube-pool rbd

Deploying CephFS

Kubernetes does not ship a CephFS provisioner by default, so an external provisioner has to be deployed to handle dynamic provisioning against CephFS.

Reference (GitHub): https://github.com/kubernetes-retired/external-storage/tree/master/ceph/cephfs

Deploy the MDS (metadata server)

ceph-deploy mds create ceph-01

Create two pools, one for the file data and one for the metadata:

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

Create the CephFS file system

ceph fs new cephfs cephfs_metadata cephfs_data

Check the MDS status

ceph mds stat
ceph -s
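
Optionally, the new file system can be sanity-checked from a Ceph node with a manual kernel mount before wiring it into Kubernetes; the mount point and the use of the admin key here are assumptions, not part of the original steps:

mkdir -p /mnt/cephfs
mount -t ceph 172.16.31.11:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)
df -h /mnt/cephfs
umount /mnt/cephfs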

Deploy the provisioner

There are two ways to deploy the provisioner: run it directly with docker run, or deploy it into Kubernetes as a Deployment.

Option 1: run cephfs-provisioner with docker run
docker run -tid -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host --name ceph-provisioner quay.io/external_storage/cephfs-provisioner   /usr/local/bin/cephfs-provisioner   -master=https://172.16.0.99:6443   -kubeconfig=/kube/config -id=cephfs-provisioner-1 -disable-ceph-namespace-isolation
Option 2: deploy it into Kubernetes as a Deployment

RBAC manifest (cephfs-provisioner-rbac.yaml)

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs

cephfs-provisioner-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation"
      serviceAccount: cephfs-provisioner

Save the two files above and apply them. Both manifests reference the cephfs namespace, so create it first if it does not already exist (kubectl create ns cephfs):

kubectl apply -f cephfs-provisioner-rbac.yaml
kubectl apply -f cephfs-provisioner-deployment.yaml
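
Then check that the provisioner pod comes up:

kubectl -n cephfs get pods -l app=cephfs-provisioner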

Import the admin key

ceph auth get-key client.admin > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
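
A quick check that the secret exists in the expected namespace:

kubectl -n cephfs get secret ceph-secret-admin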

Create the StorageClass

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 172.16.31.11:6789
    adminId: admin
    adminSecretName: ceph-secret-admin
    adminSecretNamespace: "cephfs"
    claimRoot: /pvc-volumes

Create a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Create a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-cephfs
  name: test-cephfs
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-cephfs
  template:
    metadata:
      labels:
        app: test-cephfs
    spec:
      containers:
      - image: zerchin/network
        imagePullPolicy: IfNotPresent
        name: test-cephfs
        volumeMounts:
        - mountPath: /data
          name: cephfs
      volumes:
      - name: cephfs
        persistentVolumeClaim:
          claimName: cephfs
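
As with the RBD test, apply the PVC and Deployment (cephfs-pvc.yaml and test-cephfs.yaml are example file names) and confirm the volume is provisioned and writable:

kubectl apply -f cephfs-pvc.yaml -f test-cephfs.yaml
kubectl get pvc cephfs    ## should become Bound
POD=$(kubectl get pod -l app=test-cephfs -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD -- sh -c 'echo hello > /data/hello && cat /data/hello'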

Common problems

Problem: the CephFS volume cannot be mounted
	MountVolume.SetUp failed for volume "pvc-e4373999-8380-4211-99c5-5d096f234b35" : CephFS: mount failed: mount failed: exit status 5 Mounting command: mount Mounting arguments: -t ceph -o <masked>,<masked> 172.16.29.5:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-b3e72054-4dc7-11eb-abdd-f21be6c36b31 /var/lib/kubelet/pods/5986dc99-b707-4ea9-b6b2-ae7ffd457c99/volumes/kubernetes.io~cephfs/pvc-e4373999-8380-4211-99c5-5d096f234b35 Output: modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.247-1.el7.elrepo.x86_64/modules.dep.bin' modprobe: FATAL: Module ceph not found in directory /lib/modules/4.4.247-1.el7.elrepo.x86_64 failed to load ceph kernel module (1) mount error 5 = Input/output error

At this point, manually mounting the corresponding directory with mount.ceph also fails.

Cause

The cephfs_provisioner.py implementation enables CephFS namespace support by default, so namespace-related permissions are added when a volume is authorized. The Ceph release used here (Luminous) does not support namespaces, so the provisioned volume has no read/write permission once mounted in the pod and reports "input/output error". Inspecting the volume on the CephFS side at this point shows the directory permissions as question marks. The workaround is to drop the namespace-related logic.

Solution

Start cephfs-provisioner with the -disable-ceph-namespace-isolation flag (already included in the docker run command and Deployment manifest above).

Reference: https://www.infoq.cn/article/jqhjzvvl11escvfydruc

Extensions

High availability

Add more monitors

ceph-deploy mon add ceph-02 ceph-03

When a Ceph cluster has multiple monitors, they synchronize and form a quorum. Check the quorum status with:

ceph quorum_status --format json-pretty

Add more mgr daemons

ceph-deploy mgr create ceph-02 ceph-03

Check the cluster status

ceph -s

Clean Ceph off a node

ceph-deploy purge [ceph-node]
ceph-deploy purgedata [ceph-node]
ceph-deploy forgetkeys
rm ceph.*

