6 Using CephFS
- https://docs.ceph.com/en/pacific/cephfs/
6.1 Deploy the MDS Service
6.1.1 Install ceph-mds
root@ceph-mgr-01:~# apt -y install ceph-mds
6.1.2 Create the MDS Service
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr-01
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-mgr-01
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f82357dae60>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7f82357b9350>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('ceph-mgr-01', 'ceph-mgr-01')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-mgr-01:ceph-mgr-01
ceph@ceph-mgr-01's password:
[ceph-mgr-01][DEBUG ] connection detected need for sudo
ceph@ceph-mgr-01's password:
sudo: unable to resolve host ceph-mgr-01
[ceph-mgr-01][DEBUG ] connected to host: ceph-mgr-01
[ceph-mgr-01][DEBUG ] detect platform information from remote host
[ceph-mgr-01][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mgr-01
[ceph-mgr-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr-01][WARNIN] mds keyring does not exist yet, creating one
[ceph-mgr-01][DEBUG ] create a keyring file
[ceph-mgr-01][DEBUG ] create path if it doesn't exist
[ceph-mgr-01][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mgr-01 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mgr-01/keyring
[ceph-mgr-01][INFO ] Running command: sudo systemctl enable ceph-mds@ceph-mgr-01
[ceph-mgr-01][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph-mgr-01.service → /lib/systemd/system/ceph-mds@.service.
[ceph-mgr-01][INFO ] Running command: sudo systemctl start ceph-mds@ceph-mgr-01
[ceph-mgr-01][INFO ] Running command: sudo systemctl enable ceph.target
6.2 Create the CephFS Metadata and Data Pools
6.2.1 Create the metadata pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
6.2.2 Create the data pool
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
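As an optional sanity check not shown in the original output, the two new pools can be confirmed from the deploy node before continuing; this uses only the standard ceph CLI:
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls detail | grep cephfs
Both cephfs-metadata and cephfs-data should be listed with the PG counts requested above.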
6.2.3 Check the Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_WARN
3 daemons have recently crashed
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 7m)
mgr: ceph-mgr-01(active, since 16m), standbys: ceph-mgr-02
mds: 1/1 daemons up
osd: 9 osds: 9 up (since 44h), 9 in (since 44h)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.3 Create a CephFS File System
6.3.1 Command format for creating a CephFS file system
ceph@ceph-deploy:~/ceph-cluster$ ceph fs new -h
General usage:
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
[--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
[--name CLIENT_NAME] [--cluster CLUSTER]
[--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
[--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
[-W WATCH_CHANNEL] [--version] [--verbose] [--concise]
[-f {json,json-pretty,xml,xml-pretty,plain,yaml}]
[--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]
Ceph administration tool
optional arguments:
-h, --help request mon help
-c CEPHCONF, --conf CEPHCONF
ceph configuration file
-i INPUT_FILE, --in-file INPUT_FILE
input file, or "-" for stdin
-o OUTPUT_FILE, --out-file OUTPUT_FILE
output file, or "-" for stdout
--setuser SETUSER set user file permission
--setgroup SETGROUP set group file permission
--id CLIENT_ID, --user CLIENT_ID
client id for authentication
--name CLIENT_NAME, -n CLIENT_NAME
client name for authentication
--cluster CLUSTER cluster name
--admin-daemon ADMIN_SOCKET
submit admin-socket commands ("help" for help)
-s, --status show cluster status
-w, --watch watch live cluster changes
--watch-debug watch debug events
--watch-info watch info events
--watch-sec watch security events
--watch-warn watch warn events
--watch-error watch error events
-W WATCH_CHANNEL, --watch-channel WATCH_CHANNEL
watch live cluster changes on a specific channel
(e.g., cluster, audit, cephadm, or '*' for all)
--version, -v display version
--verbose make verbose
--concise make less verbose
-f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format {json,json-pretty,xml,xml-pretty,plain,yaml}
--connect-timeout CLUSTER_TIMEOUT
set a timeout for connecting to the cluster
--block block until completion (scrub and deep-scrub only)
--period PERIOD, -p PERIOD
polling period, default 1.0 second (for polling
commands only)
Local commands:
ping <mon.id> Send simple presence/life test to a mon
<mon.id> may be 'mon.*' for all mons
daemon {type.id|path} <cmd>
Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
daemonperf {type.id | path} list|ls [stat-pats] [priority]
Get selected perf stats from daemon/admin socket
Optional shell-glob comma-delim match string stat-pats
Optional selection priority (can abbreviate name):
critical, interesting, useful, noninteresting, debug
List shows a table of all available stats
Run <count> times (default forever),
once per <interval> seconds (default 1)
Monitor commands:
fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay] make new filesystem using named pools <metadata> and <data>
6.3.2 Create the CephFS file system
- A data pool can only be used by one CephFS file system.
ceph@ceph-deploy:~/ceph-cluster$ ceph fs new wgscephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 8
6.3.3 List the created CephFS file systems
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
6.3.4 Check the status of a specific CephFS file system
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 96.0k 56.2G
cephfs-data data 0 56.2G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.3.5 Enable multiple file systems
- Every additional CephFS file system needs its own new data pool (a sketch follows the command below).
ceph@ceph-deploy:~/ceph-cluster$ ceph fs flag set enable_multiple true
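With multiple file systems enabled, a second CephFS such as the wgscephfs01 that appears later in section 6.11 can be created on its own pools. A minimal sketch; the pool names and PG counts here are illustrative:
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata01 32 32
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data02 64 64
ceph@ceph-deploy:~/ceph-cluster$ ceph fs new wgscephfs01 cephfs-metadata01 cephfs-data02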
6.4 Verify the CephFS Service Status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}
6.5 Create a Client Account
6.5.1 Create the account
ceph@ceph-deploy:~/ceph-cluster$ ceph auth add client.wgs mon 'allow rw' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
added key for client.wgs
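On Pacific, an equivalent client account can also be created with ceph fs authorize, which derives the MDS and OSD caps for a given path automatically. Shown only as an optional alternative to the ceph auth add command above, not as the method used in this walkthrough:
ceph@ceph-deploy:~/ceph-cluster$ ceph fs authorize wgscephfs client.wgs / rw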
6.5.2 Verify the account information
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs
[client.wgs]
key = AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
caps mds = "allow rw"
caps mon = "allow rw"
caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.wgs
6.5.3 Create the user keyring file
ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs -o ceph.client.wgs.keyring
exported keyring for client.wgs
6.5.4 Create the key file
ceph@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.wgs > wgs.key
6.5.5 Verify the user keyring file
ceph@ceph-deploy:~/ceph-cluster$ cat ceph.client.wgs.keyring
[client.wgs]
key = AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
caps mds = "allow rw"
caps mon = "allow rw"
caps osd = "allow rwx pool=cephfs-data"
6.6 Install the Ceph Client
6.6.1 Client ceph-client-centos7-01
6.6.1.1 Configure the repository
[root@ceph-client-centos7-01 ~]# yum -y install epel-release
[root@ceph-client-centos7-01 ~]# yum -y install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
6.6.1.2 Install ceph-common
[root@ceph-client-centos7-01 ~]# yum -y install ceph-common
6.6.2 Client ceph-client-ubuntu20.04-01
6.6.2.1 Configure the repository
root@ceph-client-ubuntu20.04-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@ceph-client-ubuntu20.04-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-client-ubuntu20.04-01:~# apt -y update && apt -y upgrade
6.6.2.2 Install ceph-common
root@ceph-client-ubuntu20.04-01:~# apt -y install ceph-common
6.7 Sync the Client Authentication Files
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring wgs.key root@ceph-client-ubuntu20.04-01:/etc/ceph
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring wgs.key root@ceph-client-centos7-01:/etc/ceph
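As a hardening step not shown above, it is common to tighten the permissions of the copied keyring and key files on each client so that only root can read them; the paths assume the files landed in /etc/ceph as in the scp commands:
root@ceph-client-ubuntu20.04-01:~# chmod 600 /etc/ceph/ceph.client.wgs.keyring /etc/ceph/wgs.key
[root@ceph-client-centos7-01 ~]# chmod 600 /etc/ceph/ceph.client.wgs.keyring /etc/ceph/wgs.key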
6.8 Verify Client Permissions
6.8.1 Client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# ceph --id wgs -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 6h)
mgr: ceph-mgr-01(active, since 17h), standbys: ceph-mgr-02
mds: 1/1 daemons up
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.8.2 Client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# ceph --id wgs -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 6h)
mgr: ceph-mgr-01(active, since 17h), standbys: ceph-mgr-02
mds: 1/1 daemons up
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.9 Mount CephFS in Kernel Space (Recommended)
6.9.1 Verify that the clients can mount CephFS
6.9.1.1 Client ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# stat /sbin/mount.ceph
File: ‘/sbin/mount.ceph’
Size: 195512 Blocks: 384 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 51110858 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-09-25 11:52:07.544069156 +0800
Modify: 2021-08-06 01:48:44.000000000 +0800
Change: 2021-09-23 13:57:21.674953501 +0800
Birth: -
6.9.1.2 Client ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# stat /sbin/mount.ceph
File: /sbin/mount.ceph
Size: 260520 Blocks: 512 IO Block: 4096 regular file
Device: fc02h/64514d Inode: 402320 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-09-25 11:54:38.642951083 +0800
Modify: 2021-09-16 22:38:17.000000000 +0800
Change: 2021-09-22 18:01:23.708934550 +0800
Birth: -
6.9.2 Mount CephFS with a key file
6.9.2.1 Command format for mounting with a key file (recommended)
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/ {mount-point} -o name={name},secretfile={key_path}
6.9.2.2 Mount CephFS on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# mkdir /data/cephfs-data
[root@ceph-client-centos7-01 ~]# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 194M 1.9G 10% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 22G 5.9G 16G 28% /
/dev/vdb xfs 215G 9.1G 206G 5% /data
tmpfs tmpfs 399M 0 399M 0% /run/user/1000
tmpfs tmpfs 399M 0 399M 0% /run/user/1003
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
[root@ceph-client-centos7-01 ~]# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.2.3 Mount CephFS on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# mkdir /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 13G 7.9G 61% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 19G 483G 4% /data
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.3 Mount CephFS with a key
6.9.3.1 Command format for mounting with a key
Mount the root of the CephFS file system:
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/ {mount-point} -o name={name},secret={value}
Mount a subdirectory of the CephFS file system:
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/{subvolume/dir1/dir2} {mount-point} -o name={name},secret={value}
6.9.3.2 View the key
ceph@ceph-deploy:~/ceph-cluster$ cat wgs.key
AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
6.9.3.3 Mount CephFS on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# mkdir /data/cephfs-data
[root@ceph-client-centos7-01 ~]# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secret=AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 194M 1.9G 10% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 22G 5.9G 16G 28% /
/dev/vdb xfs 215G 9.1G 206G 5% /data
tmpfs tmpfs 399M 0 399M 0% /run/user/1000
tmpfs tmpfs 399M 0 399M 0% /run/user/1003
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
[root@ceph-client-centos7-01 ~]# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.3.4 Mount CephFS on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# mkdir /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secret=AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 13G 7.9G 61% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 19G 483G 4% /data
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# stat -f /data/cephfs-data/
File: "/data/cephfs-data/"
ID: de6f23f7f8cf0cfc Namelen: 255 Type: ceph
Block size: 4194304 Fundamental block size: 4194304
Blocks: Total: 14397 Free: 14397 Available: 14397
Inodes: Total: 1 Free: -1
6.9.4 Write data from a client and verify
6.9.4.1 Write data on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cd /data/cephfs-data/
[root@ceph-client-centos7-01 cephfs-data]# echo "ceph-client-centos7-01" > ceph-client-centos7-01
[root@ceph-client-centos7-01 cephfs-data]# ls -l
total 1
-rw-r--r-- 1 root root 23 Sep 25 12:28 ceph-client-centos7-01
6.9.4.2 Verify the shared data on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# ls -l
total 1
-rw-r--r-- 1 root root 23 Sep 25 12:28 ceph-client-centos7-01
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# cat ceph-client-centos7-01
ceph-client-centos7-01
6.9.5 Unmount CephFS on the clients
6.9.5.1 Unmount CephFS on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data/
6.9.5.2 Unmount CephFS on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.9.6 Mount CephFS at boot
6.9.6.1 Mount CephFS at boot on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cat /etc/fstab
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data ceph defaults,name=wgs,secretfile=/etc/ceph/wgs.key,noatime,_netdev 0 2
6.9.6.2 Mount CephFS at boot on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# cat /etc/fstab
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data ceph defaults,name=wgs,secretfile=/etc/ceph/wgs.key,noatime,_netdev 0 2
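To confirm an fstab entry is valid without rebooting, the file system can be unmounted and remounted via mount -a; a quick sanity check, not part of the original run:
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# mount -a
root@ceph-client-ubuntu20.04-01:~# df -h /data/cephfs-data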
6.10 Mount CephFS in User Space
- Since Ceph 10.x (Jewel), the kernel client needs at least a 4.x kernel. With older kernels, use the FUSE client instead of the kernel client.
6.10.1 Configure the client repositories
6.10.1.1 Configure the yum repository on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# yum -y install epel-release
[root@ceph-client-centos7-01 ~]# yum -y install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
6.10.1.2 Add the repository on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@ceph-client-ubuntu20.04-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-client-ubuntu20.04-01:~# apt -y update && apt -y upgrade
6.10.2 Install ceph-fuse on the clients
6.10.2.1 Install ceph-fuse on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# yum -y install ceph-common ceph-fuse
6.10.2.2 Install ceph-fuse on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# apt -y install ceph-common ceph-fuse
6.10.3 Sync the authentication files to the clients
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring root@ceph-client-ubuntu20.04-01:/etc/ceph
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.wgs.keyring root@ceph-client-centos7-01:/etc/ceph
6.10.4 Mount CephFS with ceph-fuse
6.10.4.1 ceph-fuse usage
root@ceph-client-centos7-01:~# ceph-fuse -h
usage: ceph-fuse [-n client.username] [-m mon-ip-addr:mon-port] <mount point> [OPTIONS]
--client_mountpoint/-r <sub_directory>
use sub_directory as the mounted root, rather than the full Ceph tree.
usage: ceph-fuse mountpoint [options]
general options:
-o opt,[opt...] mount options
-h --help print help
-V --version print version
FUSE options:
-d -o debug enable debug output (implies -f)
-f foreground operation
-s disable multi-threaded operation
--conf/-c FILE read configuration from the given configuration file
--id ID set ID portion of my name
--name/-n TYPE.ID set name
--cluster NAME set cluster name (default: ceph)
--setuser USER set uid to user or uid (and gid to user's gid)
--setgroup GROUP set gid to group or gid
--version show version and quit
-o opt,[opt...]: mount options.
-d: run in the foreground, send all log output to stderr and enable FUSE debugging (-o debug).
-c ceph.conf, --conf=ceph.conf: use the given ceph.conf instead of the default /etc/ceph/ceph.conf to determine the monitor addresses at startup.
-m monaddress[:port]: connect to the specified monitor (instead of looking it up via ceph.conf).
-n client.{cephx-username}: the name of the CephX user whose key is used for the mount.
-k <path-to-keyring>: path to the keyring; useful when it is not in a standard location.
--client_mountpoint/-r root_directory: use root_directory as the mounted root rather than the full Ceph tree.
-f: run in the foreground; do not generate a pid file.
-s: disable multi-threaded operation.
Usage example:
ceph-fuse --id {name} -m {mon01:socket},{mon02:socket},{mon03:socket} {mountpoint}
Mount a specific directory of the CephFS file system:
ceph-fuse --id wgs -r /path/to/dir /data/cephfs-data
Specify the path to the user keyring file:
ceph-fuse --id wgs -k /path/to/keyring /data/cephfs-data
Specify which file system to mount when multiple CephFS file systems exist:
ceph-fuse --id wgs --client_fs mycephfs2 /data/cephfs-data
6.10.4.2 Mount CephFS with ceph-fuse on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# ceph-fuse --id wgs -m 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789 /data/cephfs-data
ceph-fuse[8979]: starting ceph client
2021-09-25T14:24:32.258+0800 7f2934e9df40 -1 init, newargv = 0x556e4ebb1300 newargc=9
ceph-fuse[8979]: starting fuse
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 194M 1.9G 10% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 xfs 22G 6.0G 16G 28% /
/dev/vdb xfs 215G 9.2G 206G 5% /data
tmpfs tmpfs 399M 0 399M 0% /run/user/1000
tmpfs tmpfs 399M 0 399M 0% /run/user/1003
ceph-fuse fuse.ceph-fuse 61G 0 61G 0% /data/cephfs-data
6.10.4.3 Mount CephFS with ceph-fuse on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.4-01:~# ceph-fuse --id wgs -m 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789 /data/cephfs-data
2021-09-25T14:26:17.664+0800 7f2473c04080 -1 init, newargv = 0x560939e8b8c0 newargc=15
ceph-fuse[8696]: starting ceph client
ceph-fuse[8696]: starting fuse
root@ceph-client-ubuntu20.4-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 13G 7.9G 61% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 21G 480G 5% /data
tmpfs tmpfs 815M 0 815M 0% /run/user/1001
ceph-fuse fuse.ceph-fuse 61G 0 61G 0% /data/cephfs-data
6.10.5 Write data from a client and verify
6.10.5.1 Write data on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cd /data/cephfs-data/
[root@ceph-client-centos7-01 cephfs-data]# mkdir -pv test/test1
mkdir: created directory 'test'
mkdir: created directory 'test/test1'
6.10.5.2 Verify the data on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.4-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.4-01:/data/cephfs-data# tree .
.
└── test
    └── test1
2 directories, 0 files
6.10.6 Unmount CephFS on the clients
6.10.6.1 Unmount CephFS on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data/
6.10.6.2 Unmount CephFS on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.10.7 Mount CephFS at boot with ceph-fuse
6.10.7.1 Mount at boot on ceph-client-centos7-01
[root@ceph-client-centos7-01 ~]# cat /etc/fstab
none /data/cephfs-data fuse.ceph ceph.id=wgs,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
6.10.7.2 Mount at boot on ceph-client-ubuntu20.04-01
root@ceph-client-ubuntu20.04-01:~# cat /etc/fstab
none /data/cephfs-data fuse.ceph ceph.id=wgs,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
6.11 Delete a CephFS File System (Multiple File Systems)
6.11.1 View the CephFS file system information
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
name: wgscephfs01, metadata pool: cephfs-metadata01, data pools: [cephfs-data02 ]
6.11.2 Check whether the file system is currently mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 1 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 13 15 14 2
POOL TYPE USED AVAIL
cephfs-metadata metadata 216k 56.2G
cephfs-data data 0 56.2G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.11.3 Find the clients that have CephFS mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client ls
2021-09-25T18:30:09.856+0800 7f0fbdffb700 0 client.1094242 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:30:09.884+0800 7f0fbdffb700 0 client.1084719 ms_handle_reset on v2:172.16.10.225:6802/1724959904
[
{
"id": 1094171,
"entity": {
"name": {
"type": "client",
"num": 1094171
},
"addr": {
"type": "v1",
"addr": "172.16.0.126:0",
"nonce": 1257114724
}
},
"state": "open",
"num_leases": 0,
"num_caps": 2,
"request_load_avg": 0,
"uptime": 274.89986021499999,
"requests_in_flight": 0,
"num_completed_requests": 0,
"num_completed_flushes": 0,
"reconnecting": false,
"recall_caps": {
"value": 0,
"halflife": 60
},
"release_caps": {
"value": 0,
"halflife": 60
},
"recall_caps_throttle": {
"value": 0,
"halflife": 1.5
},
"recall_caps_throttle2o": {
"value": 0,
"halflife": 0.5
},
"session_cache_liveness": {
"value": 1.6026981127772033,
"halflife": 300
},
"cap_acquisition": {
"value": 0,
"halflife": 10
},
"delegated_inos": [],
"inst": "client.1094171 v1:172.16.0.126:0/1257114724",
"completed_requests": [],
"prealloc_inos": [],
"client_metadata": {
"client_features": {
"feature_bits": "0x00000000000001ff"
},
"metric_spec": {
"metric_flags": {
"feature_bits": "0x"
}
},
"entity_id": "wgs",
"hostname": "bj2d-prod-eth-star-boot-03",
"kernel_version": "4.19.0-1.el7.ucloud.x86_64",
"root": "/"
}
}
]
6.11.4 Unmount CephFS on the clients
6.11.4.1 The client unmounts CephFS itself
[root@ceph-client-ubuntu20.04-01 ~]# umount /data/cephfs-data/
6.11.4.2 Evict the client manually
6.11.4.2.1 Evict the client by its unique ID
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client evict id=1094171
2021-09-25T18:31:02.895+0800 7fc5eeffd700 0 client.1094254 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:31:03.671+0800 7fc5eeffd700 0 client.1084740 ms_handle_reset on v2:172.16.10.225:6802/1724959904
6.11.4.2.2 Check the mount point after the client has been evicted
root@ceph-client-ubuntu20.04-01:~# ls -l /data/
ls: cannot access '/data/cephfs-data': Permission denied
total 32
d????????? ? ? ? ? ? cephfs-data
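The Permission denied error occurs because an evicted client is normally also placed on the OSD blocklist. The blocklist can be inspected from the deploy node and, if the client should be allowed back, the entry can be removed; a sketch using the Pacific blocklist commands, with the address taken from the client ls output above as an illustration:
ceph@ceph-deploy:~/ceph-cluster$ ceph osd blocklist ls
ceph@ceph-deploy:~/ceph-cluster$ ceph osd blocklist rm 172.16.0.126:0/1257114724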
6.11.4.2.3 Recovery on the client
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.11.4.2.4 Verify the mount point on the client
root@ceph-client-ubuntu20.04-01:~# ls -l
total 4
drwxr-xr-x 2 root root 6 Sep 25 11:46 cephfs-data
6.11.5 Delete the CephFS file system
ceph@ceph-deploy:~/ceph-cluster$ ceph fs rm wgscephfs01 --yes-i-really-mean-it
6.11.6 Verify the deletion
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
6.12 Delete a CephFS File System (Single File System)
6.12.1 View the CephFS file system information
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
6.12.2 Check whether the file system is currently mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 1 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 13 15 14 2
POOL TYPE USED AVAIL
cephfs-metadata metadata 216k 56.2G
cephfs-data data 0 56.2G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.12.3 Find the clients that have CephFS mounted
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client ls
2021-09-25T18:30:09.856+0800 7f0fbdffb700 0 client.1094242 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:30:09.884+0800 7f0fbdffb700 0 client.1084719 ms_handle_reset on v2:172.16.10.225:6802/1724959904
[
{
"id": 1094171,
"entity": {
"name": {
"type": "client",
"num": 1094171
},
"addr": {
"type": "v1",
"addr": "172.16.0.126:0",
"nonce": 1257114724
}
},
"state": "open",
"num_leases": 0,
"num_caps": 2,
"request_load_avg": 0,
"uptime": 274.89986021499999,
"requests_in_flight": 0,
"num_completed_requests": 0,
"num_completed_flushes": 0,
"reconnecting": false,
"recall_caps": {
"value": 0,
"halflife": 60
},
"release_caps": {
"value": 0,
"halflife": 60
},
"recall_caps_throttle": {
"value": 0,
"halflife": 1.5
},
"recall_caps_throttle2o": {
"value": 0,
"halflife": 0.5
},
"session_cache_liveness": {
"value": 1.6026981127772033,
"halflife": 300
},
"cap_acquisition": {
"value": 0,
"halflife": 10
},
"delegated_inos": [],
"inst": "client.1094171 v1:172.16.0.126:0/1257114724",
"completed_requests": [],
"prealloc_inos": [],
"client_metadata": {
"client_features": {
"feature_bits": "0x00000000000001ff"
},
"metric_spec": {
"metric_flags": {
"feature_bits": "0x"
}
},
"entity_id": "wgs",
"hostname": "bj2d-prod-eth-star-boot-03",
"kernel_version": "4.19.0-1.el7.ucloud.x86_64",
"root": "/"
}
}
]
6.12.4 Unmount CephFS on the clients
6.12.4.1 The client unmounts CephFS itself
[root@ceph-client-ubuntu20.04-01 ~]# umount /data/cephfs-data/
6.12.4.2 Evict the client manually
6.12.4.2.1 Evict the client by its unique ID
ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client evict id=1094171
2021-09-25T18:31:02.895+0800 7fc5eeffd700 0 client.1094254 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:31:03.671+0800 7fc5eeffd700 0 client.1084740 ms_handle_reset on v2:172.16.10.225:6802/1724959904
6.12.4.2.2 Check the mount point after the client has been evicted
root@ceph-client-ubuntu20.04-01:~# ls -l /data/
ls: cannot access '/data/cephfs-data': Permission denied
total 32
d????????? ? ? ? ? ? cephfs-data
6.12.4.2.3 Recovery on the client
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.12.4.2.4 Verify the mount point on the client
root@ceph-client-ubuntu20.04-01:~# ls -l
total 4
drwxr-xr-x 2 root root 6 Sep 25 11:46 cephfs-data
6.12.5 Delete the CephFS file system
6.12.5.1 Check the CephFS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}
6.12.5.2 Take the CephFS file system offline
ceph@ceph-deploy:~/ceph-cluster$ ceph fs fail wgscephfs
wgscephfs marked not joinable; MDS cannot join the cluster. All MDS ranks marked failed.
6.12.5.3 Check the Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_ERR
1 filesystem is degraded
1 filesystem is offline
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 97m)
mgr: ceph-mgr-01(active, since 25h), standbys: ceph-mgr-02
mds: 0/1 daemons up (1 failed), 1 standby
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 0/1 healthy, 1 failed
pools: 6 pools, 257 pgs
objects: 44 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 257 active+clean
6.12.5.4 Check the CephFS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:0/1 1 up:standby, 1 failed
6.12.5.5 Delete the CephFS file system
ceph@ceph-deploy:~/ceph-cluster$ ceph fs rm wgscephfs --yes-i-really-mean-it
6.12.5.6 List the CephFS file systems
ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
No filesystems enabled
6.12.5.7 Check the Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 9m)
mgr: ceph-mgr-01(active, since 25h), standbys: ceph-mgr-02
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
pools: 6 pools, 257 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 257 active+clean
6.12.5.8 Check the CephFS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
1 up:standby
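If the metadata and data pools are no longer needed once the file system is gone, they can be removed as well. A sketch that assumes mon_allow_pool_delete = true, which is set in the ceph.conf shown in section 6.13.12; note the pool name must be given twice:
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool rm cephfs-metadata cephfs-metadata --yes-i-really-really-mean-it
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool rm cephfs-data cephfs-data --yes-i-really-really-mean-it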
6.13 Ceph MDS High Availability
6.13.1 Overview
- The Ceph MDS is the access entry point for CephFS, so it needs to provide both high performance and redundancy.
6.13.2 Ceph MDS high-availability architecture
- Two active MDS daemons plus two standbys.
6.13.3 Common Ceph MDS configuration options
- mds_standby_replay: when true, standby-replay mode is enabled and the standby MDS continuously replays the journal of the active MDS, so it can take over quickly if the active MDS fails; when false, the standby only catches up after the active MDS has failed, causing a short interruption.
- mds_standby_for_name: the MDS daemon acts as a standby only for the MDS with the given name.
- mds_standby_for_rank: the MDS daemon acts as a standby only for the given rank number; when multiple CephFS file systems exist, mds_standby_for_fscid can additionally restrict this to one file system.
- mds_standby_for_fscid: specifies the CephFS file system ID and is used together with mds_standby_for_rank; if mds_standby_for_rank is set it refers to that rank of the given file system, otherwise to all ranks of that file system (see the note after this list).
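Note: on Pacific, standby-replay can also be enabled per file system with a single command instead of the per-daemon options above; shown here only as an optional alternative:
ceph@ceph-deploy:~/ceph-cluster$ ceph fs set wgscephfs allow_standby_replay true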
6.13.4 Current MDS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}
6.13.5 Add MDS servers
6.13.5.1 Install ceph-mds
root@ceph-mgr-02:~# apt -y install ceph-mds
root@ceph-mon-01:~# apt -y install ceph-mds
root@ceph-mon-02:~# apt -y install ceph-mds
root@ceph-mon-03:~# apt -y install ceph-mds
6.13.5.2 Create the MDS services
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr-02 ceph-mon-01 ceph-mon-02 ceph-mon-03
6.13.6 Check the current MDS service status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-01 Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 96.0k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mgr-02
ceph-mon-02
ceph-mon-03
ceph-mon-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.7 Check the current Ceph cluster status
ceph@ceph-deploy:~/ceph-cluster$ ceph -s
cluster:
id: 6e521054-1532-4bc8-9971-7f8ae93e8430
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 65m)
mgr: ceph-mgr-01(active, since 26h), standbys: ceph-mgr-02
mds: 1/1 daemons up, 4 standby
osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 4 pools, 161 pgs
objects: 43 objects, 24 MiB
usage: 1.4 GiB used, 179 GiB / 180 GiB avail
pgs: 161 active+clean
6.13.8 Check the current CephFS file system status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs get wgscephfs
Filesystem 'wgscephfs' (8)
fs_name wgscephfs
epoch 39
flags 13
created 2021-09-25T20:01:13.237645+0800
modified 2021-09-25T20:05:16.835799+0800
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {0=1104195}
failed
damaged
stopped
data_pools [14]
metadata_pool 13
inline_data disabled
balancer
standby_count_wanted 1
[mds.ceph-mgr-01{0:1104195} state up:active seq 113 addr [v2:172.16.10.225:6802/3134604779,v1:172.16.10.225:6803/3134604779]]
6.13.9 Remove an MDS
- An MDS automatically notifies the Ceph monitors that it is shutting down, which lets the monitors fail over immediately to an available standby, if one exists. No administrative command (e.g. ceph mds fail) is needed to trigger this failover.
6.13.9.1 Stop the MDS service
root@ceph-mon-03:~# systemctl stop ceph-mds@ceph-mon-03
6.13.9.2 Check the current MDS status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-02 Reqs: 0 /s 10 13 12 0
1 active ceph-mgr-01 Reqs: 0 /s 10 13 11 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 168k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mon-02
ceph-mon-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.9.3 Remove /var/lib/ceph/mds/ceph-${id}
root@ceph-mon-03:~# rm -rf /var/lib/ceph/mds/ceph-ceph-mon-03
6.13.10 Set the number of active MDS daemons
# Set the maximum number of simultaneously active MDS daemons to 2
ceph@ceph-deploy:~/ceph-cluster$ ceph fs set wgscephfs max_mds 2
6.13.11 Check the current MDS status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-02 Reqs: 0 /s 10 13 12 0
1 active ceph-mgr-01 Reqs: 0 /s 10 13 11 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 168k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mon-02
ceph-mon-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.12 MDS high-availability tuning
- Currently ceph-mgr-01 and ceph-mgr-02 are in the active state.
- Make ceph-mgr-02 the standby of ceph-mgr-01.
- Make ceph-mon-02 the standby of ceph-mon-01.
ceph@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = 6e521054-1532-4bc8-9971-7f8ae93e8430
public_network = 172.16.10.0/24
cluster_network = 172.16.10.0/24
mon_initial_members = ceph-mon-01
mon_host = 172.16.10.148
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_allow_pool_delete = true
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[mds.ceph-mon-01]
mds_standby_for_name = ceph-mgr-01
mds_standby_replay = true
[mds.ceph-mon-02]
mds_standby_for_name = ceph-mgr-02
mds_standby_replay = true
6.13.13 Push the configuration file
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr-01
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr-02
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon-01
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon-02
6.13.14 Restart the MDS services
root@ceph-mon-02:~# systemctl restart ceph-mds@ceph-mon-02
root@ceph-mon-01:~# systemctl restart ceph-mds@ceph-mon-01
root@ceph-mgr-02:~# systemctl restart ceph-mds@ceph-mgr-02
root@ceph-mgr-01:~# systemctl restart ceph-mds@ceph-mgr-01
6.13.15 Cluster MDS high-availability status
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active ceph-mgr-02 Reqs: 0 /s 10 13 12 0
1 active ceph-mon-01 Reqs: 0 /s 10 13 11 0
POOL TYPE USED AVAIL
cephfs-metadata metadata 168k 56.2G
cephfs-data data 0 56.2G
STANDBY MDS
ceph-mon-02
ceph-mgr-01
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
6.13.16 Verify the one-to-one active/standby pairing
- ceph-mgr-02 and ceph-mgr-01 alternate between the active and standby states.
- ceph-mon-02 and ceph-mon-01 alternate between the active and standby states (a verification sketch follows below).
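One way to verify the pairing (a sketch, not part of the original run) is to stop one of the active MDS daemons, watch its partner take over in ceph fs status, and then start it again:
root@ceph-mgr-02:~# systemctl stop ceph-mds@ceph-mgr-02
ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
root@ceph-mgr-02:~# systemctl start ceph-mds@ceph-mgr-02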
6.14 Export CephFS as NFS with NFS-Ganesha
- https://docs.ceph.com/en/pacific/cephfs/nfs/
6.14.1 Requirements
- The Ceph file system must be Luminous or newer.
- The NFS server host needs libcephfs2 (Luminous or newer) plus the nfs-ganesha and nfs-ganesha-ceph packages (Ganesha v2.5 or newer).
- The NFS-Ganesha server host must be connected to the Ceph public network.
- NFS-Ganesha 3.5 or a later stable release is recommended with Pacific (16.2.x) or later stable Ceph releases.
- Install nfs-ganesha and nfs-ganesha-ceph on a node that has CephFS deployed.
6.14.2 Install the Ganesha Service on a Node Running ceph-mds
6.14.2.1 Check the Ganesha version
root@ceph-mgr-01:~# apt-cache madison nfs-ganesha-ceph
nfs-ganesha-ceph | 2.6.0-2 | http://mirrors.ucloud.cn/ubuntu bionic/universe amd64 Packages
nfs-ganesha | 2.6.0-2 | http://mirrors.ucloud.cn/ubuntu bionic/universe Sources
6.14.2.2 Install the Ganesha service
root@ceph-mgr-01:~# apt -y install nfs-ganesha-ceph nfs-ganesha
6.14.2.3 Ganesha configuration
- https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf
root@ceph-mgr-01:~# mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.back
root@ceph-mgr-01:~# vi /etc/ganesha/ganesha.conf
NFS_CORE_PARAM
{
    # Ganesha can lift the NFS grace period early if NLM is disabled.
    Enable_NLM = false;
    # rquotad doesn't add any value here. CephFS doesn't support per-uid
    # quotas anyway.
    Enable_RQUOTA = false;
    # In this configuration, we're just exporting NFSv4. In practice, it's
    # best to use NFSv4.1+ to get the benefit of sessions.
    Protocols = 4;
}
EXPORT
{
    # Export Id (mandatory, each EXPORT must have a unique Export_Id)
    Export_Id = 77;
    # Exported path (mandatory)
    Path = /;
    # Pseudo Path (required for NFS v4)
    Pseudo = /cephfs-test;
    # Time out attribute cache entries immediately
    Attr_Expiration_Time = 0;
    # We're only interested in NFSv4 in this configuration
    Protocols = 4;
    # NFSv4 does not allow UDP transport
    Transports = TCP;
    # Setting for root squash
    Squash = "No_root_squash";
    # Required for access (default is None)
    # Could use CLIENT blocks instead
    Access_Type = RW;
    # Exporting FSAL
    FSAL {
        Name = CEPH;
        hostname = "172.16.10.225";  # IP address of the current node
    }
}
LOG {
# default log level
Default_Log_Level = WARN;
}
6.14.2.4 Manage the Ganesha service
root@ceph-mgr-01:~# systemctl restart nfs-ganesha
root@ceph-mgr-01:~# systemctl status nfs-ganesha
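If the service started correctly, Ganesha should be listening on the standard NFS port 2049; a quick check not shown in the original output:
root@ceph-mgr-01:~# ss -tnlp | grep 2049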
6.14.3 NFS Client Setup
6.14.3.1 Ubuntu
root@ceph-client-ubuntu18.04-01:~# apt -y install nfs-common
6.14.3.2 CentOS
[root@ceph-client-centos7-01 ~]# yum install -y nfs-utils
6.14.4 Client Mounts
6.14.4.1 Mount via the Ceph kernel client
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs tmpfs 815M 1.1M 814M 1% /run
/dev/vda2 ext4 22G 18G 2.1G 90% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 4.1G 0 4.1G 0% /sys/fs/cgroup
/dev/vdb ext4 528G 32G 470G 7% /data
tmpfs tmpfs 815M 0 815M 0% /run/user/1001
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph 61G 0 61G 0% /data/cephfs-data
6.14.4.2 Write test data
root@ceph-client-ubuntu20.04-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# echo "mount nfs" > nfs.txt
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# ls -l
total 1
-rw-r--r-- 1 root root 10 Sep 26 22:08 nfs.txt
6.14.4.3 Mount via NFS
[root@ceph-client-centos7-01 ~]# mount -t nfs -o nfsvers=4.1,proto=tcp 172.16.10.225:/cephfs-test /data/cephfs-data/
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 2.1G 0 2.1G 0% /dev
tmpfs tmpfs 412M 51M 361M 13% /run
/dev/vda1 ext4 106G 4.1G 97G 5% /
tmpfs tmpfs 2.1G 0 2.1G 0% /dev/shm
tmpfs tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs tmpfs 2.1G 0 2.1G 0% /sys/fs/cgroup
/dev/vdb ext4 106G 16G 85G 16% /data
tmpfs tmpfs 412M 0 412M 0% /run/user/1003
172.16.10.225:/cephfs-test nfs4 61G 0 61G 0% /data/cephfs-data
6.14.4.4 Verify the data over NFS
[root@ceph-client-centos7-01 ~]# ls -l /data/cephfs-data/
total 1
-rw-r--r-- 1 root root 10 Sep 26 22:08 nfs.txt
[root@ceph-client-centos7-01 ~]# cat /data/cephfs-data/nfs.txt
mount nfs
6.14.5 Unmount on the Clients
6.14.5.1 Ubuntu
root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data
6.14.5.2 CentOS
[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data