(1) Run on the ceph-deploy node to install and configure the Ceph client remotely:
ceph-deploy install ceph_client01 # ceph_client01 is the Ceph client's hostname
ceph-deploy admin ceph_client01 # push ceph.client.admin.keyring to the client
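A quick check that the keyring actually reached the client (a sketch, reusing the passwordless SSH that ceph-deploy already depends on):
ssh ceph_client01 "ls -l /etc/ceph/ceph.client.admin.keyring" # should list the keyring file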
(2) Run on the client:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
(3) Run on the client: RBD block device configuration:
Create an RBD image: rbd create disk01 --size 10G --image-feature layering
Delete it: rbd rm disk01
List RBD images: rbd ls -l
Map the image to a block device: sudo rbd map disk01
Unmap it: sudo rbd unmap disk01
Show current mappings: rbd showmapped
Format disk01 with an xfs filesystem: sudo mkfs.xfs /dev/rbd0
Mount the device: sudo mount /dev/rbd0 /mnt
Verify the mount succeeded: df -hT
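The commands above, tied together as one end-to-end sketch with teardown in reverse order (assumes the image maps to /dev/rbd0, as it will on a client with no other mapped images; --image-feature layering keeps the image usable by older kernel RBD clients):
rbd create disk01 --size 10G --image-feature layering
sudo rbd map disk01 # prints the device node, typically /dev/rbd0
sudo mkfs.xfs /dev/rbd0
sudo mount /dev/rbd0 /mnt
df -hT # /dev/rbd0 should show as xfs on /mnt
sudo umount /mnt # teardown: unmount, unmap, then delete
sudo rbd unmap /dev/rbd0
rbd rm disk01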
(4) File System (CephFS) configuration:
Run on the deploy node, choosing one node on which to create the MDS:
ceph-deploy mds create node1
Run the following on node1:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
On the MDS node node1, create the cephfs_data and cephfs_metadata pools:
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
Create the filesystem on top of these pools:
ceph fs new cephfs cephfs_metadata cephfs_data
Show the Ceph filesystem:
ceph fs ls
ceph mds stat
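A sanity check once the filesystem exists (a sketch; the exact output wording varies by Ceph release):
ceph osd lspools # cephfs_data and cephfs_metadata should both appear
ceph -s # cluster status should report the MDS as up:active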
★ Run the following on the client:
1. Install ceph-fuse:
yum -y install ceph-fuse
Fetch the admin key:
ssh cent@node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
chmod 600 admin.key
Mount CephFS:
mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key
df -hT
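Although ceph-fuse is installed above, the mount shown uses the kernel client. A FUSE-based alternative (a sketch; assumes /etc/ceph/ceph.conf and the admin keyring are readable on the client):
sudo ceph-fuse -m node1:6789 /mnt
df -hT # the mount now shows a fuse-type filesystem on /mnt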
Stop the ceph-mds service and remove the filesystem:
systemctl stop ceph-mds@node1
ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd lspools
Expected output: 0 rbd, 1 cephfs_data, 2 cephfs_metadata
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
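cephfs_data can be removed the same way. On recent releases the monitors reject pool deletion unless it is explicitly allowed first (a sketch; the injectargs change only lasts until the monitors restart):
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it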
★ Tear down the environment:
ceph-deploy purge cephhost cephhost1 cephhost2
ceph-deploy purgedata cephhost cephhost1 cephhost2
ceph-deploy forgetkeys
rm -rf ceph* # remove leftover configs and keys in the ceph-deploy working directory
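To verify the purge, confirm no ceph packages survive on each node (a sketch; assumes the RPM-based hosts implied by the yum usage above):
ssh cephhost "rpm -qa | grep -i ceph" # should print nothing after the purge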
List pools: ceph osd pool ls
Show a pool's PG count: ceph osd pool get rbd pg_num
Create a pool with 64 PGs: ceph osd pool create cephhost_pool 64
After creating, verify:
ceph osd pool ls
ceph osd pool get cephhost_pool pg_num
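If the placement group count needs to grow later, it can be raised in place (a sketch; pgp_num should be raised to match pg_num, and pre-Nautilus releases only allow increases):
ceph osd pool set cephhost_pool pg_num 128
ceph osd pool set cephhost_pool pgp_num 128
ceph osd pool get cephhost_pool pg_num # confirm the new value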