Official documentation: https://rook.io/docs/rook/v1.8/ceph-teardown.html
If you want to tear down the cluster and start a new one, note the following resources that need to be cleaned up:
- rook-ceph namespace: The Rook operator and cluster created by operator.yaml and cluster.yaml (the cluster CRD)
- /var/lib/rook: Path on each host in the cluster where configuration is cached by the ceph mons and osds # this directory must be deleted on every host
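Before deleting anything, it can help to confirm which Rook resources actually exist. A minimal check (a sketch, assuming the default rook-ceph namespace and the resource names used below):
kubectl -n rook-ceph get cephcluster,cephblockpool
kubectl get storageclass | grep -E 'rook-ceph-block|csi-cephfs'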
Delete the existing resources
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
kubectl delete storageclass csi-cephfs
kubectl -n rook-ceph delete cephcluster rook-ceph
kubectl delete -f operator.yaml
kubectl delete -f common.yaml
kubectl delete -f crds.yaml
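If the cephcluster deletion above hangs, the cluster CRD is usually being held by a finalizer; the official teardown docs describe removing it manually. A sketch, assuming the cluster is named rook-ceph:
kubectl -n rook-ceph get cephcluster   # check whether the cluster CRD is still present
kubectl -n rook-ceph patch cephclusters.ceph.rook.io rook-ceph -p '{"metadata":{"finalizers": []}}' --type=merge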
Delete the /var/lib/rook directory on every host (this is the default path)
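A minimal sketch for clearing the directory on all hosts at once, assuming passwordless SSH as root and hypothetical hostnames node1 to node3 (replace with your own):
for host in node1 node2 node3; do
    ssh root@"$host" 'rm -rf /var/lib/rook'
done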
Wipe the disk data
# Install gdisk first (it provides the sgdisk command): yum -y install gdisk
#!/usr/bin/env bash
DISK="/dev/sdb" # change this to the disk you actually want to wipe
# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
# You will have to run this step for all disks.
sgdisk --zap-all $DISK
# Clean hdds with dd (use this command to wipe data on mechanical/spinning disks)
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
# Clean disks such as ssd with blkdiscard instead of dd (on a non-SSD disk this step will report that the operation is not supported)
blkdiscard $DISK
# These steps only have to be run once on each node
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# ceph-volume setup can leave ceph-<UUID> directories in /dev and /dev/mapper (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /dev/mapper/ceph--*
# Inform the OS of partition table changes
partprobe $DISK
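After wiping, a quick sanity check can confirm the disk is clean (a sketch; wipefs without write options only lists any signatures left on the device):
lsblk -f $DISK   # no FSTYPE or leftover ceph partitions should be shown
wipefs $DISK     # empty output means no filesystem/LVM signatures remain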