1. Introduction to the Ceph File System
CephFS provides a POSIX-compliant file system that stores both its data and its metadata as objects in Ceph.
CephFS relies on MDS nodes to coordinate access to the RADOS cluster.
Metadata server (MDS)
The MDS manages metadata (file ownership, timestamps, modes, and so on). It is also responsible for caching metadata, granting metadata access capabilities, and managing client caches to keep them coherent.
A CephFS client first contacts a MON to authenticate; it then queries the active MDS for file metadata, and accesses the objects that make up files or directories by communicating with the OSDs directly.
Ceph supports one active MDS per cluster, with multiple standby MDS daemons. Running several active MDS daemons at the same time is also possible, but that feature is not yet GA.
Ceph currently supports only one active CephFS file system per cluster.
The CephFS snapshot feature is likewise not yet GA.
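Each of the daemons mentioned above can be inspected from any admin node. A minimal sketch (output varies by cluster; run it after the file system exists for the mds line to be meaningful):
[root@ceph2 ~]# ceph mon stat     # the MONs a client authenticates against
[root@ceph2 ~]# ceph mds stat     # active/standby MDS state
[root@ceph2 ~]# ceph osd stat     # the OSDs that serve file data directly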
Below, the MDS is deployed on ceph2.
2. Deployment
2.1 Install the ceph-mds package
[root@ceph2 ~]# yum -y install ceph-mds
Authorize the daemon:
[root@ceph2 ~]# ceph auth get-or-create mds.ceph2 mon 'allow profile mds' osd 'allow rwx' mds 'allow' -o /etc/ceph/ceph.mds.ceph2.keyring
[root@ceph2 ~]# systemctl restart ceph-mds@ceph2
[root@ceph2 ~]# mkdir /var/lib/ceph/mds/ceph-ceph2
[root@ceph2 ~]# mv /etc/ceph/ceph.mds.ceph2.keyring /var/lib/ceph/mds/ceph-ceph2/keyring
[root@ceph2 ~]# chown ceph.ceph /var/lib/ceph/mds/ceph-ceph2/keyring
[root@ceph2 ~]# /usr/bin/ceph-mds -f --cluster ceph --id ceph2 --setuser ceph --setgroup ceph
starting mds.ceph2 at -
[root@ceph2 ~]# systemctl restart ceph-mds@ceph2
[root@ceph2 ~]# ps -ef |grep mds
ceph 991855 1 0 17:06 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph2 --setuser ceph --setgroup ceph
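To have the MDS come back after a reboot, and to confirm the unit is healthy, a sketch (assuming the default cluster name ceph used throughout this lab):
[root@ceph2 ~]# systemctl enable ceph-mds@ceph2
[root@ceph2 ~]# systemctl status ceph-mds@ceph2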
2.2 Create the Ceph file system
A CephFS file system requires two pools: one to store CephFS data, and one to store CephFS metadata.
[root@ceph2 ~]# ceph osd pool create cephfs_metadata 64 64
pool 'cephfs_metadata' created
[root@ceph2 ~]# ceph osd pool create cephfs_data 128 128
pool 'cephfs_data' created
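The PG counts of 64 and 128 follow the common rule of thumb of roughly 100 PGs per OSD divided by the replica count, rounded to a power of two. A rough worked calculation for this 9-OSD cluster, assuming the default 3x replication:
[root@ceph2 ~]# echo $(( 9 * 100 / 3 ))    # 9 OSDs * 100 / 3 replicas = total PG budget
300
That budget is shared across all pools in the cluster, hence the smaller 64/128 shares for the two CephFS pools.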
Create a file system named cephfs:
[root@ceph2 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 14 and data pool 15
[root@ceph2 ~]# ceph -s
  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon:        3 daemons, quorum ceph2,ceph3,ceph4
    mgr:        ceph4(active), standbys: ceph2, ceph3
    mds:        cephfs-1/1/1 up {0=ceph2=up:active}
    osd:        9 osds: 9 up, 9 in
    rbd-mirror: 1 daemon active

  data:
    pools:   12 pools, 472 pgs
    objects: 213 objects, 240 MB
    usage:   1733 MB used, 133 GB / 134 GB avail
    pgs:     472 active+clean

  io:
    client:  409 B/s rd, 614 B/s wr, 0 op/s rd, 3 op/s wr
2.3 View the file system status
[root@ceph2 ~]# ceph fs status
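ceph fs status summarizes the MDS ranks, standbys, and pool usage. If more detail is needed, the raw FSMap can be dumped as well (a sketch):
[root@ceph2 ~]# ceph fs dump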
2.4 Install an MDS on ceph3
[root@ceph3 ~]# yum -y install ceph-mds
[root@ceph3 ~]# mkdir /var/lib/ceph/mds/ceph-ceph3
[root@ceph3 ~]# ceph auth get-or-create mds.ceph3 mon 'allow profile mds' osd 'allow rwx' mds 'allow' -o /var/lib/ceph/mds/ceph-ceph3/keyring
[root@ceph3 ~]# chown ceph.ceph /var/lib/ceph/mds/ceph-ceph3/keyring
[root@ceph3 ~]# systemctl restart ceph-mds@ceph3
[root@ceph3 ~]# ps -ef|grep mds
ceph 87716 1 0 17:21 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph3 --setuser ceph --setgroup ceph
[root@ceph3 ~]# ceph fs status
Note: by default Ceph supports only one active MDS, with the others acting as standbys. Multiple active MDS daemons can currently be enabled on an experimental basis, but data integrity is not guaranteed, so this should not be used in production.
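For reference only, multiple active MDS daemons are enabled per file system through the max_mds setting; on the release used here this may also require switching on allow_multimds first. A sketch, not recommended for production as noted above:
[root@ceph2 ~]# ceph fs set cephfs allow_multimds true --yes-i-really-mean-it
[root@ceph2 ~]# ceph fs set cephfs max_mds 2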
2.5 Create an authorized user for mounting
[root@ceph2 ~]# ceph auth get-or-create client.cephfs mon 'allow r' osd 'allow rw pool=cephfs_metadata,allow rw pool=cephfs_data' -o /etc/ceph/ceph.client.cephfs.keyring
[root@ceph2 ~]# ll /etc/ceph/ceph.client.cephfs.keyring
-rw-r--r-- 1 root root 64 Mar 26 17:30 /etc/ceph/ceph.client.cephfs.keyring
[root@ceph2 ~]# scp /etc/ceph/ceph.client.cephfs.keyring root@ceph1:/etc/ceph/
root@ceph1's password:
ceph.client.cephfs.keyring
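To double-check the capabilities the new user actually received, a sketch:
[root@ceph2 ~]# ceph auth get client.cephfs
As granted above, the user has mon and osd caps but no mds cap; this becomes relevant in section 2.7.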
2.6 Test the keyring on ceph1
[root@ceph1 ~]# ceph -s --id cephfs
  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon:        3 daemons, quorum ceph2,ceph3,ceph4
    mgr:        ceph4(active), standbys: ceph2, ceph3
    mds:        cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby    # one active, one standby
    osd:        9 osds: 9 up, 9 in
    rbd-mirror: 1 daemon active

  data:
    pools:   12 pools, 472 pgs
    objects: 217 objects, 240 MB
    usage:   1733 MB used, 133 GB / 134 GB avail
    pgs:     472 active+clean

  io:
    client:  0 B/s wr, 0 op/s rd, 0 op/s wr
2.7 Mount as the cephfs user
The mount hung because client.cephfs was created without any mds capability; update the user's caps and try again:
[root@ceph2 ~]# cd /etc/ceph/
[root@ceph2 ceph]# rm -rf ceph.client.cephfs.keyring
[root@ceph2 ceph]# ceph auth caps client.cephfs mon 'allow r' mds 'allow' osd 'allow rw pool=cephfs_metadata, allow rw pool=cephfs_data' -o /etc/ceph/ceph.client.cephfs.keyring
updated caps for client.cephfs
[root@ceph2 ceph]# ceph auth get-or-create client.cephfs -o /etc/ceph/ceph.client.cephfs.keyring
[root@ceph2 ceph]# cat /etc/ceph/ceph.client.cephfs.keyring
[client.cephfs]
        key = AQAl8Zlcdbt/DRAA3cHKjt2BFSCY7cmio6mrXw==
[root@ceph2 ceph]# scp /etc/ceph/ceph.client.cephfs.keyring ceph1:/etc/ceph/
root@ceph1's password:
ceph.client.cephfs.keyring
2.8 Mount with ceph-fuse
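ceph-fuse is shipped in its own package, and the mount point must exist. If either is missing on ceph1, a sketch:
[root@ceph1 ~]# yum -y install ceph-fuse
[root@ceph1 ~]# mkdir -p /mnt/cephfs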
[root@ceph1 ~]# ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m ceph2:6789,ceph3:6789,ceph4:6789 /mnt/cephfs/
ceph-fuse[15243]: starting ceph client
2019-03-26 17:57:26.351649 7f4a3242b040 -1 init, newargv = 0x55bc8e6bba40 newargc=9
ceph-fuse[15243]: starting fuse
[root@ceph1 ~]# df -hT
Filesystem     Type            Size  Used Avail Use% Mounted on
/dev/vda1      xfs              40G  1.7G   39G   5% /
devtmpfs       devtmpfs        893M     0  893M   0% /dev
tmpfs          tmpfs           920M     0  920M   0% /dev/shm
tmpfs          tmpfs           920M   25M  896M   3% /run
tmpfs          tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/rbd1      xfs             2.0G   33M  2.0G   2% /mnt/ceph2
/dev/rbd0      xfs             2.0G   33M  2.0G   2% /mnt/ceph
tmpfs          tmpfs           184M     0  184M   0% /run/user/0
ceph-fuse      fuse.ceph-fuse   43G     0   43G   0% /mnt/cephfs    # mount succeeded
[root@ceph1 ~]# cd /mnt/cephfs/
[root@ceph1 cephfs]# touch 111    # try writing data; works
[root@ceph1 cephfs]# echo sucessfull >> 111
[root@ceph1 cephfs]# cat 111
sucessfull
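As described in section 1, the file's contents are stored as RADOS objects in the data pool, which can be confirmed from a cluster node (a sketch):
[root@ceph2 ~]# rados -p cephfs_data ls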
2.9 Add the mount to /etc/fstab
[root@ceph1 cephfs]# cd
[root@ceph1 ~]# umount /mnt/cephfs
[root@ceph1 ~]# echo "id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring /mnt/cephfs fuse.ceph defaults,_netdev 0 0" >> /etc/fstab
[root@ceph1 ~]# mount -a
mount: special device /dev/rbd/rbd/testimg-copy does not exist    # error from an unrelated RBD line in fstab, not the CephFS entry
ceph-fuse[15320]: starting ceph client
2019-03-26 18:03:57.070081 7f25f0c5c040 -1 init, newargv = 0x55aa8aab30a0 newargc=11
ceph-fuse[15320]: starting fuse
[root@ceph1 ~]# df -hT
Filesystem     Type            Size  Used Avail Use% Mounted on
/dev/vda1      xfs              40G  1.7G   39G   5% /
devtmpfs       devtmpfs        893M     0  893M   0% /dev
tmpfs          tmpfs           920M     0  920M   0% /dev/shm
tmpfs          tmpfs           920M   25M  896M   3% /run
tmpfs          tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/rbd1      xfs             2.0G   33M  2.0G   2% /mnt/ceph2
/dev/rbd0      xfs             2.0G   33M  2.0G   2% /mnt/ceph
tmpfs          tmpfs           184M     0  184M   0% /run/user/0
ceph-fuse      fuse.ceph-fuse   43G     0   43G   0% /mnt/cephfs    # success
[root@ceph1 ~]# cd /mnt/cephfs/
[root@ceph1 cephfs]# ll
total 1
-rw-r--r-- 1 root root 11 Mar 26 18:00 111
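Before testing the kernel client in the next section, the fuse mount should be unmounted again (and its fstab line removed or commented out, so the two entries do not both claim /mnt/cephfs). A sketch:
[root@ceph1 ~]# umount /mnt/cephfs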
2.10 Mount with the kernel client
[root@ceph2 ceph]# ceph auth get-key client.cephfs -o /etc/ceph/cephfskey
[root@ceph2 ceph]# ll /etc/ceph/cephfskey
-rw-r--r-- 1 root root 40 Mar 26 18:08 /etc/ceph/cephfskey
[root@ceph2 ceph]# scp /etc/ceph/cephfskey ceph1:/etc/ceph/
root@ceph1's password:
cephfskey
[root@ceph1 ~]# mount -t ceph ceph2:6789,ceph3:6789,ceph4:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfskey
[root@ceph1 ~]# df -hT
Filesystem                                                 Type      Size  Used Avail Use% Mounted on
/dev/vda1                                                  xfs        40G  1.7G   39G   5% /
devtmpfs                                                   devtmpfs  893M     0  893M   0% /dev
tmpfs                                                      tmpfs     920M     0  920M   0% /dev/shm
tmpfs                                                      tmpfs     920M   25M  896M   3% /run
tmpfs                                                      tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/rbd1                                                  xfs       2.0G   33M  2.0G   2% /mnt/ceph2
/dev/rbd0                                                  xfs       2.0G   33M  2.0G   2% /mnt/ceph
tmpfs                                                      tmpfs     184M     0  184M   0% /run/user/0
172.25.250.11:6789,172.25.250.12:6789,172.25.250.13:6789:/ ceph      135G  1.8G  134G   2% /mnt/cephfs
[root@ceph1 ~]# echo "ceph2:6789,ceph3:6789,ceph4:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,noatime,_netdev 0 0" >> /etc/fstab
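The kernel client depends on the ceph kernel module, and the secret file is best kept readable by root only. A quick sanity check, as a sketch:
[root@ceph1 ~]# modprobe ceph
[root@ceph1 ~]# lsmod | grep ceph
[root@ceph1 ~]# chmod 600 /etc/ceph/cephfskey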
Note:
When mounting with the kernel client, the mount may time out if the client kernel is too old to understand the cluster's CRUSH tunables. Workaround: fall back to an older tunables profile and rebalance:
ceph osd crush tunables hammer
ceph osd crush reweight-all
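To verify the tunables profile now in effect, a sketch:
[root@ceph2 ~]# ceph osd crush show-tunables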
The lab is complete!
Author's note: the content of this article comes mainly from Yan Wei of Yutian Education, and I verified all of the operations hands-on. If you wish to repost it, please contact Yutian Education (http://www.yutianedu.com/) or Mr. Yan (https://www.cnblogs.com/breezey/) for permission first. Thank you!