1. Create a storage pool for Kubernetes on Ceph
# ceph osd pool create k8s 128
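Here k8s is the pool name and 128 is the pool's placement-group count (pg_num). As a quick sanity check, the value can be read back from the cluster:

# ceph osd pool get k8s pg_num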
2. Create the k8s user
# ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s' -o ceph.client.k8s.keyring
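To double-check that the user was created with the intended capabilities, the auth entry can be printed back out:

# ceph auth get client.k8s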
3. Base64-encode the k8s user's key
This is the key Kubernetes will use to access Ceph; it is stored in a Kubernetes Secret.
# grep key ceph.client.k8s.keyring | awk '{printf "%s", $NF}' | base64
VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==
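If the keyring file is not at hand, the same value can be obtained directly from the cluster; `ceph auth get-key` prints only the key, so the output should match the grep/awk pipeline above:

# ceph auth get-key client.k8s | base64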
4. Create the Secrets for accessing Ceph in Kubernetes
# echo 'apiVersion: v1
kind: Secret
metadata:
  name: ceph-k8s-secret
type: "kubernetes.io/rbd"
data:
  key: VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==' | kubectl create -f -
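A quick way to confirm that both Secrets landed in the right namespaces:

# kubectl get secret ceph-k8s-secret
# kubectl get secret ceph-admin-secret -n kube-system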
5. Copy the Ceph keyring to the Kubernetes worker nodes
When a Pod is created, kubelet invokes the rbd command to detect and map the RBD image backing the PVC, so the rbd command and the Ceph keyring must both be present on every kubelet node. Otherwise kubelet is liable to report all sorts of Ceph-related errors during Pod creation.
If kubelet runs directly on the worker node (in the host's default namespace), it is enough to install the ceph-common package and copy the keyring to /etc/ceph/; if kubelet runs inside a container, both steps must be performed inside that container.
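For the host-run case, a minimal sketch (yum assumes an RPM-based distribution, and ceph-mon is a placeholder for whichever host holds the keyring):

# yum install -y ceph-common                                  # use apt-get on Debian/Ubuntu
# scp ceph-mon:/etc/ceph/ceph.client.k8s.keyring /etc/ceph/   # "ceph-mon" is a placeholder host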
In our environment kubelet runs inside an rkt container, and the official image already includes the Ceph client, so we only need to copy the keyring into the container.
We manage kubelet with systemctl, starting an rkt container as a service to run kubelet. Edit /etc/systemd/system/kubelet.service and add a volume for the keyring:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=load-images.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/cluster-envs
Environment=KUBELET_IMAGE_TAG=v1.7.10
Environment="RKT_RUN_ARGS= \
  --volume ceph-keyring,kind=host,source=/etc/ceph/ceph.client.k8s.keyring \
  --mount volume=ceph-keyring,target=/etc/ceph/ceph.client.k8s.keyring \
  --volume modprobe,kind=host,source=/usr/sbin/modprobe \
  --mount volume=modprobe,target=/usr/sbin/modprobe \
  --volume lib-modules,kind=host,source=/lib/modules \
  --mount volume=lib-modules,target=/lib/modules"
ExecStartPre=/usr/bin/mkdir -p /etc/ceph
ExecStart=/opt/bin/kubelet-wrapper \
  --address=0.0.0.0 \
  --allow-privileged=true \
  --cluster-dns=192.168.192.10 \
  --cluster-domain=cluster.local \
  --cloud-provider='' \
  --port=10250 \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --node-labels=worker=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --kubeconfig=/etc/kubernetes/kubeconfig.yaml \
  --require-kubeconfig=true \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --logtostderr=true
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
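After editing the unit file, reload systemd and restart kubelet so the new mount takes effect:

# systemctl daemon-reload
# systemctl restart kubelet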
6. Create the ceph-rbd StorageClass in Kubernetes
# echo 'apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.32.24.11:6789,10.32.24.12:6789,10.32.24.13:6789
  adminId: k8s
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: k8s
  userId: k8s
  userSecretName: ceph-k8s-secret' | kubectl create -f -
7. Make ceph-rbd the default StorageClass
# kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Note that a cluster can have only one default StorageClass; marking several StorageClasses as default at the same time is effectively the same as setting no default at all. In the StorageClass listing, the default one carries the (default) marker:
# kubectl get storageclass
NAME                 TYPE
ceph-rbd (default)   kubernetes.io/rbd
ceph-sas             kubernetes.io/rbd
ceph-ssd             kubernetes.io/rbd
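If another StorageClass (say ceph-sas) had previously been marked default, its annotation can be flipped off with the same patch mechanism:

# kubectl patch storageclass ceph-sas -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'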
8. Create a PersistentVolumeClaim
# echo 'apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-test-vol1-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 10Gi' | kubectl create -f -
Because a default StorageClass has been set, the storageClassName here could actually be omitted.
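As an illustration, an equivalent claim that relies on the default StorageClass might look like the following (nginx-test-vol2-claim is a hypothetical name introduced here, not used elsewhere in this walkthrough):

# echo 'apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-test-vol2-claim   # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi' | kubectl create -f -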
9. Create a Pod that uses the PVC
# echo 'apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: nginx-test-vol1
          mountPath: /data/
          readOnly: false
  volumes:
    - name: nginx-test-vol1
      persistentVolumeClaim:
        claimName: nginx-test-vol1-claim' | kubectl create -f -
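Before going into the container, it is worth confirming that the claim is Bound and the Pod has reached Running:

# kubectl get pvc nginx-test-vol1-claim
# kubectl get pod nginx-test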
10. Check the container's status
Inside the container, the RBD device is mounted at /data:
# kubectl exec nginx-test -it -- /bin/bash
[root@nginx-test ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        50G   52M   47G   1% /data
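On the Ceph side, the dynamically provisioned image should now be visible in the k8s pool; assuming the keyring from step 2 is in /etc/ceph/, something like this should list it:

# rbd ls -p k8s --id k8s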