Ceph File Storage

(1) Deploy CephFS

[ceph-admin@c720181 my-cluster]$ ceph-deploy mds create c720182

Note: check the output to see which commands were run and the keyring that was generated.

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mds create c720182

[ceph_deploy.cli][INFO  ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None

[ceph_deploy.cli][INFO  ]  verbose                       : False

[ceph_deploy.cli][INFO  ]  overwrite_conf                : False

[ceph_deploy.cli][INFO  ]  subcommand                    : create

[ceph_deploy.cli][INFO  ]  quiet                         : False

[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f48fe3b9200>

[ceph_deploy.cli][INFO  ]  cluster                       : ceph

[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f48fe604e60>

[ceph_deploy.cli][INFO  ]  ceph_conf                     : None

[ceph_deploy.cli][INFO  ]  mds                           : [('c720182', 'c720182')]

[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts c720182:c720182

[c720182][DEBUG ] connection detected need for sudo

[c720182][DEBUG ] connected to host: c720182

[c720182][DEBUG ] detect platform information from remote host

[c720182][DEBUG ] detect machine type

[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.6.1810 Core

[ceph_deploy.mds][DEBUG ] remote host will use systemd

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to c720182

[c720182][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[c720182][WARNIN] mds keyring does not exist yet, creating one

[c720182][DEBUG ] create a keyring file

[c720182][DEBUG ] create path if it doesn't exist

[c720182][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.c720182 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-c720182/keyring

[c720182][INFO  ] Running command: sudo systemctl enable ceph-mds@c720182

[c720182][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@c720182.service to /usr/lib/systemd/system/ceph-mds@.service.

[c720182][INFO  ] Running command: sudo systemctl start ceph-mds@c720182

[c720182][INFO  ] Running command: sudo systemctl enable ceph.target
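
For better availability you would normally also add a standby MDS on another node; a minimal sketch, assuming c720183 has been prepared with ceph-deploy like the other hosts (this was not run in the transcript above):

ceph-deploy mds create c720183
ceph mds stat        # the extra daemon should appear as a standby once the filesystem exists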

[ceph-admin@c720181 my-cluster]$

[ceph-admin@c720181 my-cluster]$ ceph -s

  cluster:

    id:     a4088ae8-c818-40d6-ab40-8f40c5bedeee

    health: HEALTH_WARN

            application not enabled on 1 pool(s)

 

 

  services:

    mon: 3 daemons, quorum c720181,c720182,c720183

    mgr: c720181(active), standbys: c720183, c720182

    osd: 3 osds: 3 up, 3 in

    rgw: 3 daemons active

 

  data:

    pools:   18 pools, 156 pgs

    objects: 222 objects, 1.62KiB

    usage:   3.06GiB used, 56.9GiB / 60.0GiB avail

    pgs:     156 active+clean
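
The HEALTH_WARN above only means that one pool has not been tagged with an application yet, and it can be cleared once the pool is identified. A minimal sketch, assuming the untagged pool is the rbd pool (ceph health detail names the actual pool):

ceph health detail                          # shows which pool lacks an application tag
ceph osd pool application enable rbd rbd    # substitute the pool and application reported above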

 

 

(2) Create the cephfs_data and cephfs_metadata pools

[ceph-admin@c720181 my-cluster]$ ceph osd pool create cephfs_data 50

[ceph-admin@c720181 my-cluster]$ ceph osd pool create cephfs_metadate 30

[ceph-admin@c720181 my-cluster]$ ceph fs new cephfs cephfs_metadate cephfs_data

new fs with metadata pool 35 and data pool 36
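
A quick sanity check of the new filesystem and the PG counts just chosen (a sketch; ceph fs status is available from Luminous onwards):

ceph fs status cephfs
ceph osd pool get cephfs_data pg_num
ceph osd pool get cephfs_metadate pg_num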

 

(3) View related information

[ceph-admin@c720181 my-cluster]$ ceph mds stat

cephfs-1/1/1 up  {0=c720182=up:active}

[ceph-admin@c720181 my-cluster]$ ceph osd pool ls

default.rgw.control

default.rgw.meta

default.rgw.log

rbd

.rgw

.rgw.root

.rgw.control

.rgw.gc

.rgw.buckets

.rgw.buckets.index

.rgw.buckets.extra

.log

.intent-log

.usage

.users

.users.email

.users.swift

.users.uid

cephfs_metadate

cephfs_data

[ceph-admin@c720181 my-cluster]$ ceph fs ls

name: cephfs, metadata pool: cephfs_metadate, data pools: [cephfs_data ]

 

(4) Create a user (optional, since a keyring was already generated when the MDS was deployed)

[ceph-admin@c720181 my-cluster]$ ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r,allow rw path=/' osd 'allow rw pool=cephfs_data' -o ceph.client.cephfs.keyring
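
The capabilities actually granted can be double-checked on the admin node (a sketch):

ceph auth get client.cephfs        # prints the key plus the mon/mds/osd caps requested above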

 

(5) Copy the user keyring to the client, in this case c720184

[ceph-admin@c720181 my-cluster]$ cat ceph.client.cephfs.keyring

[client.cephfs]

key = AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==

[ceph-admin@c720181 my-cluster]$ scp ceph.client.cephfs.keyring c720184:/etc/ceph/

scp: /etc/ceph//ceph.client.cephfs.keyring: Permission denied

[ceph-admin@c720181 my-cluster]$ sudo scp ceph.client.cephfs.keyring c720184:/etc/ceph/

ceph.client.cephfs.keyring                                              100%   64     2.4KB/s   00:00

 

 ===========================================================================

Part 1: Mount CephFS via the kernel driver

Note: native support for Ceph was added in Linux kernel 2.6.34 and later.

(1) Create the mount directory

[root@c720184 ceph]# mkdir /mnt/cephfs

 

(2) Mount

[ceph-admin@c720181 my-cluster]$ ceph auth get-key client.cephfs

AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==  // run on the Ceph admin node to fetch the key; the previous step already copied it to /etc/ceph/ceph.client.cephfs.keyring on the client, so it can also be read locally

[root@c720184 ceph]# mount -t ceph c720182:6789:/ /mnt/cephfs -o name=cephfs,secret=AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==       // name is the Ceph user name (without the client. prefix)

[root@c720184 ceph]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

192.168.20.182:6789:/     18G     0   18G   0% /mnt/cephfs
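
To avoid depending on a single monitor, the kernel client also accepts a comma-separated monitor list; a sketch using the three monitors shown earlier in ceph -s:

mount -t ceph c720181:6789,c720182:6789,c720183:6789:/ /mnt/cephfs -o name=cephfs,secret=AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==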

 

Because the command above exposes the key on the command line, which is not very secure, you can mount with a secret file instead:

[root@c720184 ceph]# mount -t ceph c720182:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/ceph.client.cephfs.keyring

secret is not valid base64: Invalid argument.

adding ceph secret key to kernel failed: Invalid argument.

failed to parse ceph_options

 

The error above occurs because the secret file format is wrong: the kernel client expects a file containing only the base64 key, not a full keyring. Make a copy and strip it down:

[root@c720184 ceph]# cp ceph.client.cephfs.keyring cephfskey

Edit it so it contains only the key:

[root@c720184 ceph]# cat cephfskey

AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==
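
Instead of copying and hand-editing the keyring, the bare key can also be exported directly on the admin node and pushed to the client (a sketch, reusing the sudo scp workaround from step (5)):

ceph auth get-key client.cephfs -o cephfskey      # writes only the base64 key, without the [client.cephfs] header
sudo scp cephfskey c720184:/etc/ceph/cephfskey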

Mount again:

[root@c720184 ceph]# mount -t ceph c720182:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfskey

[root@c720184 ceph]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

192.168.20.182:6789:/     18G     0   18G   0% /mnt/cephfs

(3) Mount automatically at boot

[root@c720184 ceph]# vim /etc/fstab

# /etc/fstab

# Created by anaconda on Tue Jul  9 07:05:16 2019

#

# Accessible filesystems, by reference, are maintained under '/dev/disk'

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

#

/dev/mapper/centos-root /                       xfs     defaults        0 0

UUID=9a641a11-b1ab-4ffe-9ed6-584d3bd308a7 /boot                   xfs     defaults        0 0

/dev/mapper/centos-swap swap                    swap    defaults        0 0

c720182:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0 0

 

(4) Unmount and test the fstab entry

[root@c720184 ceph]# umount /mnt/cephfs

[root@c720184 ceph]# mount /mnt/cephfs

[root@c720184 ceph]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

192.168.20.182:6789:/     18G     0   18G   0% /mnt/cephfs

(5) Write test

[root@c720184 ceph]# dd if=/dev/zero of=/mnt/cephfs/file1 bs=1M count=1024

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 34.0426 s, 31.5 MB/s
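
A quick listing and read-back confirm the file really landed on CephFS (a sketch):

ls -lh /mnt/cephfs/file1
dd if=/mnt/cephfs/file1 of=/dev/null bs=1M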

 

Part 2: Mount with the ceph-fuse client (the client mount is recommended because the reported capacity is more accurate: a kernel mount shows the full capacity including the metadata pool, while the FUSE client reports only the data pool capacity)

Note: CephFS is supported natively by the Linux kernel, but if the host runs an older kernel, or there are application dependencies, the FUSE client can always be used to mount CephFS.

 

(1) Install the ceph-fuse client

rpm -qa | grep -i ceph-fuse    # check whether it is already installed

yum install -y ceph-fuse
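
ceph-fuse also needs the cluster configuration to find the monitors (the fstab entry below does not pass -m); if the client does not yet have /etc/ceph/ceph.conf, copy it from the admin node (a sketch, reusing the sudo scp workaround from earlier):

sudo scp /etc/ceph/ceph.conf c720184:/etc/ceph/ceph.conf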

 

(2) Mount

[root@c720184 yum.repos.d]# ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m c720182:6789 /mnt/cephfs

Note: unlike the kernel mount, ceph-fuse uses the keyring file ceph.client.cephfs.keyring in its original format (no modification needed):

cat /etc/ceph/ceph.client.cephfs.keyring

[client.cephfs]

key = AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==

 

2019-08-19 21:09:48.274554 7f9c058790c0 -1 init, newargv = 0x560fdf46e8a0 newargc=9

ceph-fuse[16151]: starting ceph client

ceph-fuse[16151]: starting fuse

 

Note: ceph-fuse reads the monitor addresses from /etc/ceph/ceph.conf, so the fstab entry below does not need to specify them.

(3) Mount automatically at boot

[root@c720184 yum.repos.d]# vim /etc/fstab

# /etc/fstab

# Created by anaconda on Tue Jul  9 07:05:16 2019

#

# Accessible filesystems, by reference, are maintained under '/dev/disk'

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

#

/dev/mapper/centos-root /                       xfs     defaults        0 0

UUID=9a641a11-b1ab-4ffe-9ed6-584d3bd308a7 /boot                   xfs     defaults        0 0

/dev/mapper/centos-swap swap                    swap    defaults        0 0

#c720182:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0 0

id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring /mnt/cephfs fuse.ceph defaults,_netdev 0 0

 

(4) Test the automatic mount

[root@c720184 yum.repos.d]# umount /mnt/cephfs

[root@c720184 yum.repos.d]# mount /mnt/cephfs

2019-08-19 21:20:10.970017 7f217d1fa0c0 -1 init, newargv = 0x558304b7e0e0 newargc=11

ceph-fuse[16306]: starting ceph client

ceph-fuse[16306]: starting fuse

[root@c720184 yum.repos.d]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

ceph-fuse                 18G  1.0G   17G   6% /mnt/cephfs

 

 ===========================================================================

Export CephFS as an NFS server

The Network File System (NFS) is one of the most popular shareable file protocols, and every Unix-based system can use it. Unix-based clients that do not understand the CephFS type can still access the Ceph filesystem over NFS. To do this we need an NFS server that re-exports CephFS as an NFS share. NFS-Ganesha is an NFS server that runs in user space and supports CephFS through its File System Abstraction Layer (FSAL), built on libcephfs.

 

#To build and install nfs-ganesha from source, see this post: https://www.cnblogs.com/flytor/p/11430490.html

#yum install -y nfs-utils

#Start the RPC services required by NFS

systemctl start rpcbind

systemctl enable rpcbind

 

#Edit the configuration file

vim /etc/ganesha/ganesha.conf

......

EXPORT

{

  Export_ID = 1;

  Path = "/";

  Pseudo = "/";

  Access_Type = RW;

  SecType = "none";

  NFS_Protocols = "3";

  Squash = No_Root_Squash;

  Transport_Protocols = TCP;

  FSAL {

      Name = CEPH;

     }

}

LOG {
        ## Default log level for all components
        Default_Log_Level = WARN;

        ## Configure per-component log levels.
        Components {
                FSAL = INFO;
                NFS4 = EVENT;
        }

        ## Where to log
        Facility {
                name = FILE;
                destination = "/var/log/ganesha.log";
                enable = active;
        }
}
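
By default the CEPH FSAL connects as client.admin using /etc/ceph/ceph.conf on the Ganesha host. To use the restricted client.cephfs user instead, the FSAL block can name it; the parameter below is taken from common nfs-ganesha FSAL_CEPH examples, so treat it as an assumption and verify it against your nfs-ganesha version (the matching keyring must be readable by the daemon):

  FSAL {
      Name = CEPH;
      User_Id = "cephfs";    # assumed FSAL_CEPH option; maps to client.cephfs
  }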

#Start the NFS-Ganesha daemon with the configuration file

ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_DEBUG

[root@c720182 ~]# showmount -e
Export list for c720182:
/ (everyone)

 

#Mount on the client

yum install -y nfs-utils

mkdir /mnt/cephnfs

[root@client ~]# mount -o rw,noatime c720182:/ /mnt/cephnfs/
[root@client ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/cl_-root   50G  1.7G   49G   4% /
devtmpfs              910M     0  910M   0% /dev
tmpfs                 920M     0  920M   0% /dev/shm
tmpfs                 920M   17M  904M   2% /run
tmpfs                 920M     0  920M   0% /sys/fs/cgroup
/dev/sda1            1014M  184M  831M  19% /boot
/dev/mapper/cl_-home  196G   33M  195G   1% /home
tmpfs                 184M     0  184M   0% /run/user/0
ceph-fuse              54G     0   54G   0% /mnt/cephfs
c720182:/              54G     0   54G   0% /mnt/cephnfs
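
If the NFS mount should survive a reboot, a standard NFS entry in the client's /etc/fstab works; a sketch, pinning NFSv3 because the export above only enables protocol 3:

c720182:/   /mnt/cephnfs   nfs   defaults,_netdev,vers=3   0 0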

 

