(1) On the ceph-deploy node, remotely install the ceph client and push its settings:
ceph-deploy install ceph_client01 #hostname of the ceph client (ceph_client01)
ceph-deploy admin ceph_client01 #copies ceph.client.admin.keyring to the client
(2) On the client, run:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
(3) On the client, configure an RBD block device:
Create an rbd image: rbd create disk01 --size 10G --image-feature layering
Delete it: rbd rm disk01
List rbd images: rbd ls -l
Map the image: sudo rbd map disk01
Unmap it: sudo rbd unmap disk01
Show current mappings: rbd showmapped
Format disk01 with xfs: sudo mkfs.xfs /dev/rbd0
Mount the device: sudo mount /dev/rbd0 /mnt
Verify the mount succeeded: df -hT
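The mapping above does not survive a reboot. A minimal sketch of making it persistent with the rbdmap service, using the pool (rbd), image (disk01), and mount point (/mnt) from the steps above; the keyring path is the default admin keyring and is an assumption:

```
# /etc/ceph/rbdmap -- images to map at boot, one "pool/image  options" per line
rbd/disk01	id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- mount the mapped device; noauto lets the rbdmap service
# mount it after mapping rather than at the normal fstab pass
/dev/rbd/rbd/disk01	/mnt	xfs	noauto	0 0
```

Then enable the service with: sudo systemctl enable rbdmap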
(4) File System (CephFS) configuration:
On the deploy node, pick one node to host the MDS:
ceph-deploy mds create node1
Run the following on node1:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
On the MDS node node1, create the cephfs_data and cephfs_metadata pools:
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
Create the file system on those pools:
ceph fs new cephfs cephfs_metadata cephfs_data
Show the file system and MDS status:
ceph fs ls
ceph mds stat
★ Run the following on the client:
1. Install ceph-fuse:
yum -y install ceph-fuse
Fetch the admin key:
ssh cent@node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
chmod 600 admin.key
Mount the CephFS:
mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key
df -hT
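To make the kernel-client mount above persistent across reboots, the usual /etc/fstab form is shown below; the secretfile path assumes the admin.key saved above has been moved to /etc/ceph/:

```
# /etc/fstab -- kernel CephFS mount; _netdev delays it until the network is up
node1:6789:/	/mnt	ceph	name=admin,secretfile=/etc/ceph/admin.key,noatime,_netdev	0 0
```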
Stop the ceph-mds service and remove the file system:
systemctl stop ceph-mds@node1
ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd lspools
Output: 0 rbd,1 cephfs_data,2 cephfs_metadata,
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
★ Tear down the environment:
ceph-deploy purge cephhost cephhost1 cephhost2
ceph-deploy purgedata cephhost cephhost1 cephhost2
ceph-deploy forgetkeys
rm -rf ceph*
List pools: ceph osd pool ls
Show a pool's pg count: ceph osd pool get rbd pg_num
Create a pool with 64 pgs: ceph osd pool create cephhost_pool 64
After creating it, verify:
ceph osd pool ls
ceph osd pool get cephhost_pool pg_num
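The pg_num values used above (128, 64) follow the usual Ceph rule of thumb: total PGs ≈ (OSD count × 100) / replica count, rounded up to the next power of two. A minimal sketch of that calculation; the OSD and replica counts passed in are assumptions for illustration:

```shell
#!/bin/sh
# pg_num OSDS REPLICAS -- round (OSDS * 100 / REPLICAS) up to a power of two
pg_num() {
    target=$(( $1 * 100 / $2 ))
    pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

pg_num 3 3    # 3 OSDs, 3-way replication: 100 -> 128
```

For a small test cluster this lands on the same order of magnitude as the 128 used for cephfs_data/cephfs_metadata above.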