Description
ceph-csi extends Kubernetes with volume management for Ceph, wiring the third-party Ceph storage backend into the k8s storage system. It calls Ceph's interfaces or commands to implement the actual create/delete and mount/unmount operations for Ceph volumes. The create/delete and mount/unmount operations performed by the components analyzed earlier all call ceph-csi, which in turn invokes the commands or interfaces provided by Ceph to carry out the final operation.
ceph-csi service composition
ceph-csi comprises three service types: rbdType, cephfsType, and livenessType. rbdType performs rbd operations to interact with Ceph; cephfsType performs cephfs operations to interact with Ceph; livenessType periodically probes the liveness of the csi components at their csi endpoints (sending probe requests to a specified socket address) and records the results as Prometheus metrics. The rbd and cephfs services each consist of three servers:
- ControllerServer: responsible for creating and deleting cephfs/rbd storage.
- NodeServer: deployed on every node in the k8s cluster; responsible for node-side cephfs/rbd operations, such as mounting storage onto a node and unmounting it.
- IdentityServer: returns information about the service itself, such as its identity (name, version, etc.) and the capabilities it supports, and exposes a liveness probe endpoint (so that other components/services can check whether this service is alive).
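In the deployment manifests these service types are selected via the cephcsi binary's command-line flags. A minimal container-spec sketch for an rbdType ControllerServer (flag names follow the upstream ceph-csi example manifests and may differ between versions):

```yaml
# Sketch only: flag names are taken from upstream ceph-csi example
# manifests and should be checked against your ceph-csi version.
- name: csi-rbdplugin
  image: quay.io/cephcsi/cephcsi:v3.5.1   # hypothetical version tag
  args:
    - "--type=rbd"                # rbdType (alternatives: cephfs, liveness)
    - "--controllerserver=true"   # run the ControllerServer alongside IdentityServer
    - "--endpoint=unix:///csi/csi-provisioner.sock"
    - "--nodeid=$(NODE_ID)"
    - "--drivername=rbd.csi.ceph.com"
```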
Deploying ceph-csi and related components
- Deployment steps: refer to my article 《k8s基於csi使用rbd存儲》 (Using rbd storage in k8s via CSI).
- Overview of the deployed components (using rbd csi as an example)
csi-rbdplugin-provisioner.yaml deploys six containers: csi-provisioner, csi-snapshotter, csi-attacher, csi-resizer, csi-rbdplugin, and liveness-prometheus. Their roles are as follows.
- csi-provisioner: actually the external-provisioner component. On pvc creation it participates in creating the storage resource and the pv object: after observing the pvc creation event, it assembles a request and calls the CreateVolume method of the ceph-csi component (the csi-rbdplugin container) to create the storage, and once that succeeds it creates the pv object. On pvc deletion it participates in deleting the storage resource and the pv object: when a pvc is deleted, the pv controller updates the bound pv object's status from Bound to Released; after observing the pv update event, csi-provisioner calls the DeleteVolume method of the ceph-csi component (the csi-rbdplugin container) to delete the storage, and then deletes the pv object.
- csi-snapshotter: actually the external-snapshotter component; handles storage snapshot operations.
- csi-attacher: actually the external-attacher component; it only operates on VolumeAttachment objects and does not touch the storage itself.
- csi-resizer: actually the external-resizer component; handles storage expansion operations.
- csi-rbdplugin: actually the ceph-csi component, running the rbdType ControllerServer/IdentityServer services. On pvc creation, the external-provisioner component (the csi-provisioner container) observes the pvc creation event, assembles a request, and calls the csi-rbdplugin container's CreateVolume method to create the storage; on pvc deletion, the pv object's status changes from Bound to Released, and the external-provisioner component (the csi-provisioner container), after observing the pv update event, assembles a request and calls the csi-rbdplugin container's DeleteVolume method to delete the storage.
- liveness-prometheus: actually the ceph-csi component, running the livenessType service; probes the liveness of the csi-rbdplugin service and reports it.
csi-rbdplugin.yaml deploys three containers: driver-registrar, csi-rbdplugin, and liveness-prometheus. Their roles are as follows.
- driver-registrar: registers the csi-rbdplugin container's service with the kubelet, passing it the socket address of the csi-rbdplugin service, version information, the driver name (e.g. rbd.csi.ceph.com), and so on.
- csi-rbdplugin: actually the ceph-csi component, running the rbdType NodeServer/IdentityServer services. When a pod claiming a pvc is created, the kubelet calls the csi-rbdplugin container to map the already-created storage from the ceph cluster onto the node where the pod runs, and then mount it into the pod's directory; when such a pod is deleted, the kubelet calls the corresponding csi-rbdplugin methods to unmount the storage from the pod directory and then unmap it from the node.
- liveness-prometheus: actually the ceph-csi component, running the livenessType service; probes the liveness of the csi-rbdplugin service and reports it.
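Once these components are deployed, provisioning is driven by a StorageClass whose provisioner field matches the registered driver name. A minimal sketch, reusing the csi-rbd-sc name, clusterID, and pool that appear in the logs later in this article (the secret parameters are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com   # driver name registered by driver-registrar
parameters:
  clusterID: 4a9e463a-4853-4237-a5c5-9ae9d25bacda   # from the CreateVolume logs
  pool: kubernetes
  imageFeatures: layering
  # placeholder secret references; adjust to your cluster
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
```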
How csi creates and deletes a pvc
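The raw-block-pvc.yaml manifest itself is not shown below; a plausible reconstruction, consistent with the kubectl output that follows (2Gi, RWO, csi-rbd-sc, raw block mode), would be:

```yaml
# Hypothetical reconstruction of raw-block-pvc.yaml; the capacity, access
# mode, and StorageClass match the kubectl output below.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block          # raw block volume, as the manifest name suggests
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-rbd-sc
```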
# Environment information
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-rbdplugin-9tfnm 3/3 Running 0 26h
csi-rbdplugin-provisioner-5cc9f558c7-d2stz 7/7 Running 0 26h
$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/raw-block-pvc Bound pvc-4e52c163-a593-4cc1-af59-23367d1e7573 2Gi RWO csi-rbd-sc 9m47s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-4e52c163-a593-4cc1-af59-23367d1e7573 2Gi RWO Delete Bound default/raw-block-pvc csi-rbd-sc 9m47s
# Create operation: logs from creating the pvc
$ kubectl apply -f raw-block-pvc.yaml
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-provisioner
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-rbdplugin
# csi-provisioner log: received the pvc creation event and issued the create request
I0310 09:21:35.323991 1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I0310 09:21:35.390997 1 controller.go:777] create volume rep: {CapacityBytes:2147483648 VolumeId:0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b VolumeContext:map[clusterID:4a9e463a-4853-4237-a5c5-9ae9d25bacda csi.storage.k8s.io/pv/name:pvc-4e52c163-a593-4cc1-af59-23367d1e7573 csi.storage.k8s.io/pvc/name:raw-block-pvc csi.storage.k8s.io/pvc/namespace:default imageFeatures:layering imageName:csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b journalPool:kubernetes pool:kubernetes] ContentSource:<nil> AccessibleTopology:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0310 09:21:35.391046 1 controller.go:861] successfully created PV pvc-4e52c163-a593-4cc1-af59-23367d1e7573 for PVC raw-block-pvc and csi volume name 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b
# csi-rbdplugin log: received the create-storage request and created the rbd image via rbd
I0310 09:21:35.350755 1 rbd_journal.go:482] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 generated Volume ID (0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b) and image name (csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b) for request name (pvc-4e52c163-a593-4cc1-af59-23367d1e7573)
I0310 09:21:35.350822 1 rbd_util.go:352] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 rbd: create kubernetes/csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b size 2048M (features: [layering]) using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789
I0310 09:21:35.374165 1 controllerserver.go:666] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 created image kubernetes/csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b backed for request name pvc-4e52c163-a593-4cc1-af59-23367d1e7573
# Delete operation: logs from deleting the pvc
$ kubectl delete -f raw-block-pvc.yaml
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-provisioner
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-rbdplugin
# csi-provisioner log: received the pvc deletion event and issued the request to delete the pv
I0310 09:11:52.301723 1 controller.go:1413] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": started
I0310 09:11:52.306652 1 connection.go:183] GRPC call: /csi.v1.Controller/DeleteVolume
I0310 09:11:52.306671 1 connection.go:184] GRPC request: {"secrets":"***stripped***","volume_id":"0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b"}
I0310 09:11:53.088151 1 connection.go:186] GRPC response: {}
I0310 09:11:53.088204 1 connection.go:187] GRPC error: <nil>
I0310 09:11:53.088220 1 controller.go:1428] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": volume deleted
I0310 09:11:53.098260 1 controller.go:1478] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": persistentvolume deleted
I0310 09:11:53.098290 1 controller.go:1483] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": succeeded
I0310 09:11:54.915543 1 leaderelection.go:278] successfully renewed lease default/rbd-csi-ceph-com
# csi-rbdplugin log: received the delete-storage request and deleted the rbd image via rbd
I0310 09:11:52.390569 1 rbd_util.go:644] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: delete csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b-temp using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789, pool kubernetes
I0310 09:11:52.394786 1 controllerserver.go:947] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b deleting image csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b
I0310 09:11:52.394815 1 rbd_util.go:644] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: delete csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789, pool kubernetes
ask to remove image "kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b" with id "11cbf1b83337" from trash
I0310 09:11:53.087702 1 omap.go:123] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b removed omap keys (pool="kubernetes", namespace="", name="csi.volumes.default"): [csi.volume.pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee]
How csi creates and deletes a pod's rbd storage
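raw-block-pod.yaml is likewise not reproduced in this article; a plausible sketch matching the pod name seen in the delete step (pod-with-raw-block-volume) would be:

```yaml
# Hypothetical reconstruction of raw-block-pod.yaml; only the pod name and
# claim name are taken from this article, the rest is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: app
      image: busybox                 # placeholder image
      command: ["sleep", "infinity"]
      volumeDevices:                 # expose the raw block device in-container
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
```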
# Create operation: mount rbd storage into a pod and follow the logs [viewed on the node]
$ kubectl create -f raw-block-pod.yaml
$ kubectl logs -f --tail=10 csi-rbdplugin-9tfnm -c csi-rbdplugin
# csi creates the block device via rbd and maps it to /dev/rbd0 on the host
I0310 08:31:22.769929 11099 cephcmds.go:63] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b command succeeded: rbd [--id kubernetes -m 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789 --keyfile=***stripped*** map kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b --device-type krbd --options noudev]
I0310 08:31:22.769972 11099 nodeserver.go:391] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd image: kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b was successfully mapped at /dev/rbd0
# formats the /dev/rbd0 block device as ext4 and mounts it for the pod
I0310 08:31:22.823277 11099 mount_linux.go:376] Checking for issues with fsck on disk: /dev/rbd0
I0310 08:31:22.894904 11099 mount_linux.go:477] Attempting to mount disk /dev/rbd0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b
I0310 08:31:22.894960 11099 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o _netdev,discard,defaults /dev/rbd0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b)
I0310 08:31:22.997909 11099 resizefs_linux.go:124] ResizeFs.needResize - checking mounted volume /dev/rbd0
I0310 08:31:23.000412 11099 resizefs_linux.go:128] Ext size: filesystem size=2147483648, block size=4096
I0310 08:31:23.000433 11099 resizefs_linux.go:140] Volume /dev/rbd0: device size=2147483648, filesystem size=2147483648, block size=4096
I0310 08:31:23.000502 11099 nodeserver.go:351] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: successfully mounted volume 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b to stagingTargetPath /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b
# Delete operation: delete a pod with rbd storage mounted and follow the logs [viewed on the node]
$ kubectl delete pod pod-with-raw-block-volume
$ kubectl logs -f --tail=10 csi-rbdplugin-9tfnm -c csi-rbdplugin
I0310 07:57:38.477003 11099 mount_linux.go:294] Unmounting /var/lib/kubelet/pods/b81968d7-1f46-4076-8f90-36c2b1e2ea86/volumes/kubernetes.io~csi/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/mount
# unmounts the volume from the pod
I0310 07:57:38.485433 11099 nodeserver.go:864] ID: 1862 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: successfully unbound volume 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b from /var/lib/kubelet/pods/b81968d7-1f46-4076-8f90-36c2b1e2ea86/volumes/kubernetes.io~csi/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/mount
# rbd unmaps the block device from the host
I0310 07:57:38.777236 11099 cephcmds.go:63] ID: 1864 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b command succeeded: rbd [unmap kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b --device-type krbd --options noudev]
I0310 07:57:38.777270 11099 nodeserver.go:977] ID: 1864 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b successfully unmapped volume (0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b)