File Storage
The Ceph file system (CephFS) provides a POSIX-compliant distributed file system of any size that stores its data in Ceph RADOS. To run CephFS you need a working Ceph storage cluster and at least one Ceph Metadata Server (MDS) to manage the metadata and keep it separate from the data, which helps reduce complexity and improve reliability.
The libcephfs library plays an important role in supporting CephFS's multiple client implementations. CephFS has native Linux kernel driver support, so clients can mount it as a native file system, for example with the mount command. It integrates tightly with Samba to support CIFS and SMB. CephFS also extends into user space (FUSE, Filesystem in Userspace) through the ceph-fuse module. In addition, applications can interact with the RADOS cluster directly via the libcephfs library.
A Ceph MDS is required only for CephFS; the block and object storage methods do not need an MDS. The MDS runs as a daemon and allows clients to mount a POSIX file system of any size. The MDS never serves file data to clients directly; data is served exclusively by the OSDs.
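As an optional sanity check of the native kernel driver support mentioned above, you can confirm that the ceph module is available on the host that will perform the kernel mount later on. This is a minimal sketch using generic Linux tooling; it is not part of the original walkthrough:
[root@localhost ~]# modprobe ceph        # load the in-kernel CephFS client if it is not already built in
[root@localhost ~]# lsmod | grep ceph    # the module should now appear in the list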
Deploy CephFS
[ceph-admin@ceph-node1 my-cluster]$ ceph-deploy mds create ceph-node2
Create the pools and the file system
[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
[ceph-admin@ceph-node1 my-cluster]$ ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 22 and data pool 21
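If you want to confirm the placement group settings chosen above (128 PGs for the data pool, 32 for the metadata pool), a quick check along these lines should work; these commands are an addition to the original session:
[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool get cephfs_data pg_num
[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool get cephfs_metadata pg_num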
Check the status
[ceph-admin@ceph-node1 my-cluster]$ ceph mds stat
cephfs-1/1/1 up {0=ceph-node2=up:active}
[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool ls
rbd
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
.rgw
.rgw.control
.rgw.gc
.rgw.buckets
.rgw.buckets.index
.rgw.buckets.extra
.log
.intent-log
.usage
.users
.users.email
.users.swift
.users.uid
default.rgw.buckets.index
default.rgw.buckets.data
cephfs_data
cephfs_metadata
[ceph-admin@ceph-node1 my-cluster]$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
Create a client user
[ceph-admin@ceph-node1 my-cluster]$ ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r,allow rw path=/' osd 'allow rw pool=cephfs_data' -o ceph.client.cephfs.keyring
[ceph-admin@ceph-node1 my-cluster]$ cat ceph.client.cephfs.keyring
[client.cephfs]
        key = AQD2EWpcZasXIBAAzcdvbJxrwwgR1eDJHTz1lQ==
[ceph-admin@ceph-node1 my-cluster]$ scp ceph.client.cephfs.keyring root@192.168.0.123:/etc/ceph/    # copy the keyring to the client
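You can also verify the capabilities that were granted to the new client; for example, from a cluster node with admin rights (this check is not part of the original session):
[ceph-admin@ceph-node1 my-cluster]$ ceph auth get client.cephfs    # should list the mon/mds/osd caps defined above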
Mount CephFS with the kernel driver and the FUSE client
[root@localhost ~]# ceph auth get-key client.cephfs
AQD2EWpcZasXIBAAzcdvbJxrwwgR1eDJHTz1lQ==
[root@localhost ~]# mkdir /mnt/cephfs
[root@localhost ~]# mount -t ceph ceph-node2:6789:/ /mnt/cephfs -o name=cephfs,secret=AQD2EWpcZasXIBAAzcdvbJxrwwgR1eDJHTz1lQ==
[root@localhost ~]# df -h /mnt/cephfs/
Filesystem            Size  Used Avail Use% Mounted on
192.168.0.126:6789:/   54G     0   54G   0% /mnt/cephfs
You can also save the key to a file and mount using that secret file
[root@localhost ~]# echo "AQD2EWpcZasXIBAAzcdvbJxrwwgR1eDJHTz1lQ==" > /etc/ceph/cephfskey
[root@localhost ~]# umount /mnt/cephfs/
[root@localhost ~]# mount -t ceph ceph-node2:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfskey
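The secret file holds the raw cephx key, so it is worth tightening its permissions on the client; a simple hardening step not shown in the original session:
[root@localhost ~]# chmod 600 /etc/ceph/cephfskey    # restrict the key file to root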
Mount automatically at boot
[root@localhost ~]# echo "ceph-node2:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0 0" >> /etc/fstab
[root@localhost ~]# umount /mnt/cephfs
[root@localhost ~]# mount /mnt/cephfs
[root@localhost ~]# df -h /mnt/cephfs/
Filesystem            Size  Used Avail Use% Mounted on
192.168.0.126:6789:/   54G     0   54G   0% /mnt/cephfs
Test writing data
[root@localhost ~]# dd if=/dev/zero of=/mnt/cephfs/file1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 8.76348 s, 123 MB/s
[root@localhost ~]# ll -h /mnt/cephfs/
total 1.0G
-rw-r--r-- 1 root root 1.0G Feb 18 11:56 file1
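To confirm from the cluster side that the written data landed in the cephfs_data pool, something like the following can be run on a cluster node (an extra check, not from the original session):
[ceph-admin@ceph-node1 my-cluster]$ ceph df    # per-pool usage; cephfs_data should reflect the 1 GiB test file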
Mount with the FUSE client
The Ceph file system is supported natively by the Linux kernel, but if the host runs an older kernel version, or there are application dependencies that rule out the kernel client, you can use the FUSE client to mount CephFS instead.
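A quick way to decide which client to use is to check the running kernel first; older kernels may lag behind CephFS features, in which case the FUSE client below is the safer choice. This check is an addition to the original walkthrough:
[root@localhost ~]# uname -r    # e.g. a stock CentOS 7 kernel is a candidate for the FUSE client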
Install ceph-fuse
[root@localhost ~]# yum install ceph-fuse -y
Mount
[root@localhost ~]# umount /mnt/cephfs/
[root@localhost ~]# ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m ceph-node2:6789 /mnt/cephfs
ceph-fuse[78183]: starting ceph client
2019-02-18 12:05:41.922959 7ff1e2ce20c0 -1 init, newargv = 0x55fdbb5c4420 newargc=9
ceph-fuse[78183]: starting fuse
[root@localhost ~]# df -h /mnt/cephfs
Filesystem      Size  Used Avail Use% Mounted on
ceph-fuse        54G  1.0G   53G   2% /mnt/cephfs
Add the mount to /etc/fstab
[root@localhost ~]# echo "id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring /mnt/cephfs fuse.ceph defaults 0 0" >> /etc/fstab
[root@localhost ~]# umount /mnt/cephfs
[root@localhost ~]# mount /mnt/cephfs
2019-02-18 14:29:48.123645 7f70a8cb50c0 -1 init, newargv = 0x562911bb0150 newargc=11
ceph-fuse[78451]: starting ceph client
ceph-fuse[78451]: starting fuse
[root@localhost ~]# df -h /mnt/cephfs/
Filesystem      Size  Used Avail Use% Mounted on
ceph-fuse        54G  1.0G   53G   2% /mnt/cephfs
Export CephFS over NFS
The Network File System (NFS) is one of the most popular shareable file system protocols, available on every Unix-based system. Unix-based clients that do not understand CephFS can still access the Ceph file system over NFS. To do this we need an NFS server that re-exports CephFS as an NFS share. NFS-Ganesha is an NFS server that runs in user space and supports CephFS through its File System Abstraction Layer (FSAL), using libcephfs.
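Because the CEPH FSAL talks to the cluster through libcephfs, the gateway host needs a working Ceph client configuration. ceph-node2 already has this as a cluster member; as a quick sanity check (the file names assume the default admin keyring location, and these commands are not part of the original session):
[root@ceph-node2 ~]# ls /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
[root@ceph-node2 ~]# ceph -s    # the gateway should be able to reach the cluster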
Install the packages
[root@ceph-node2 ~]# yum -y install nfs-utils nfs-ganesha
Start the RPC services required by NFS
[root@ceph-node2 ~]# systemctl start rpcbind
[root@ceph-node2 ~]# systemctl enable rpcbind
[root@ceph-node2 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-02-18 14:54:56 CST; 14s ago
 Main PID: 37093 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─37093 /sbin/rpcbind -w

Feb 18 14:54:56 ceph-node2 systemd[1]: Starting RPC bind service...
Feb 18 14:54:56 ceph-node2 systemd[1]: Started RPC bind service.
Edit the configuration file
[root@ceph-node2 ~]# cat /etc/ganesha/ganesha.conf
###################################################
#
# EXPORT
#
# To function, all that is required is an EXPORT
#
# Define the absolute minimal export
#
###################################################

EXPORT
{
    # Export Id (mandatory, each EXPORT must have a unique Export_Id)
    Export_Id = 77;

    # Exported path (mandatory)
    Path = "/";

    # Pseudo Path (required for NFS v4)
    Pseudo = "/";

    # Required for access (default is None)
    # Could use CLIENT blocks instead
    Access_Type = RW;

    SecType = "none";
    NFS_Protocols = "3";
    Squash = No_ROOT_Squash;

    # Exporting FSAL
    FSAL {
        Name = CEPH;
    }
}
Start the NFS Ganesha daemon with the ganesha.conf file
[root@ceph-node2 ~]# ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_DEBUG
[root@ceph-node2 ~]# showmount -e
Export list for ceph-node2:
Mount on the client
[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# mkdir /mnt/cephnfs
[root@localhost ~]# mount -o rw,noatime ceph-node2:/ /mnt/cephnfs
[root@localhost ~]# df -h /mnt/cephnfs
Filesystem      Size  Used Avail Use% Mounted on
ceph-node2:/       0     0     0    - /mnt/cephnfs
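If the NFS mount should also come back after a reboot, a standard fstab entry mirroring the earlier CephFS entries should do. The options below are illustrative rather than taken from the original session:
[root@localhost ~]# echo "ceph-node2:/ /mnt/cephnfs nfs rw,noatime,_netdev 0 0" >> /etc/fstab
[root@localhost ~]# umount /mnt/cephnfs
[root@localhost ~]# mount /mnt/cephnfs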