Using ceph-rbd and cephfs



1 User permission management and the authorization workflow

The user management features let Ceph cluster administrators create, update and delete users directly in the Ceph cluster.

Ceph records each user's name, secret key and capabilities (permissions) in its authentication database; this plays a role similar to the /etc/passwd file on a Linux system.

1.1 Listing users

[ceph@ceph-deploy ceph-cluster]$ ceph auth list 
installed auth entries: 

mds.ceph-mgr1 
	key: AQCOKqJfXRvWFhAAVCdkr5uQr+5tNjrIRcZhSQ== 
    caps: [mds] allow 
    caps: [mon] allow profile mds 
    caps: [osd] allow rwx 
osd.0 
	key: AQAhE6Jf74HbEBAA/6PS57YKAyj9Uy8rNRb1BA== 
    caps: [mgr] allow profile osd 
    caps: [mon] allow profile osd
client.admin
	key: AQAGDKJfQk/dAxAA3Y+9xoE/p8in6QjoHeXmeg== 
	caps: [mds] allow * 
	caps: [mgr] allow * 
	caps: [mon] allow * 
	caps: [osd] allow *

Note: the TYPE.ID notation

Users are referred to with the TYPE.ID notation; for example, osd.0 is a user (daemon) of type osd with ID 0,

and client.admin is a user of type client with ID admin.

Also note that each entry contains one key=xxxx item and one or more caps items.

You can combine the -o <filename> option with ceph auth list to save the output to a file.

[ceph@ceph-deploy ceph-cluster]$ ceph auth list -o 123.key

1.2 User management

Adding a user creates the user name (TYPE.ID), a secret key, and whatever capabilities were included in the command used to create the user. The user then authenticates to the Ceph storage cluster with its key. A user's capabilities grant it the ability to read, write or execute on Ceph monitors (mon), Ceph OSDs (osd) or Ceph metadata servers (mds). Several commands are available for adding users:

1.2.1 ceph auth add

This command is the canonical way to add a user. It creates the user, generates a key, and adds all specified capabilities.

[ceph@ceph-deploy ceph-cluster]$ ceph auth -h 
auth add <entity> {<caps> [<caps>...]} 

#Add an authentication key: 
[ceph@ceph-deploy ceph-cluster]$ ceph auth add client.tom mon 'allow r' osd 'allow rwx pool=mypool'
added key for client.tom 

#Verify the key 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.tom 
exported keyring for client.tom 

[client.tom] 
	key = AQCErsdftuumLBAADUiAfQUI42ZlX1e/4PjpdA== 
	caps mon = "allow r" 
	caps osd = "allow rwx pool=mypool"

1.2.3 ceph auth get-or-create

ceph auth get-or-create is one of the most common ways to create a user. It returns a keyring-format block containing the user name (in square brackets) and the key. If the user already exists, the command simply returns the user name and key in keyring file format. You can also use the -o <filename> option to save the output to a file.

#Create the user 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create client.jack mon 'allow r' osd 'allow rwx pool=mypool' 
[client.jack] 
	key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
	
#Verify the user 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.jack 
exported keyring for client.jack 
[client.jack] 
	key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
	caps mon = "allow r" 
	caps osd = "allow rwx pool=mypool" 

#Create the same user again (only the key is returned) 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create client.jack mon 'allow r' osd 'allow rwx pool=mypool' 
[client.jack] 
	key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ==
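
As noted above, the -o option can be combined with get-or-create to write the keyring output straight to a file; a minimal sketch reusing the client.jack user (the file name is just an example):

#Save the output to a keyring file while creating (or re-fetching) the user
$ ceph auth get-or-create client.jack mon 'allow r' osd 'allow rwx pool=mypool' -o ceph.client.jack.keyring
$ cat ceph.client.jack.keyring
[client.jack]
	key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ==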

1.2.4 ceph auth get-or-create-key

This command creates a user and returns only the user's key. It is useful for clients that only need the key (for example libvirt). If the user already exists, only the key is returned. You can use the -o <filename> option to save the output to a file.

When creating client users, you can create a user that has no capabilities. A user without capabilities can authenticate but cannot do anything else, and such a client cannot even retrieve the cluster map from the monitors. If you want to add capabilities later, you can create a capability-less user first and then use the ceph auth caps command.

A typical user has at least read capability on the Ceph monitors and read/write capability on the Ceph OSDs. In addition, a user's OSD permissions are usually restricted to a specific storage pool.

[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create-key client.jack mon 'allow r' osd 'allow rwx pool=mypool' 
AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== # if the user already has a key it is printed; otherwise the user is created
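
A short sketch of the capability-less workflow described above (the user name client.test is hypothetical):

#Create a user with no capabilities; it can authenticate but do nothing else
$ ceph auth get-or-create client.test
#Grant capabilities later with ceph auth caps
$ ceph auth caps client.test mon 'allow r' osd 'allow rw pool=mypool'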

1.2.5 ceph auth print-key

Retrieve only the key of a single specified user.

[ceph@ceph-deploy ceph-cluster]$ ceph auth print-key client.jack

AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ==

1.2.6 Modifying user capabilities

The ceph auth caps command specifies a user and changes that user's capabilities. Setting new capabilities completely overwrites the current ones, so you must include both the capabilities the user already has and the new ones. To view the current capabilities, run ceph auth get USERTYPE.USERID. When adding capabilities with the format below, the existing capabilities must be specified as well.

For example:

#View the user's current capabilities 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.jack 
exported keyring for client.jack 
[client.jack] 
    key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
    caps mon = "allow r" 
    caps osd = "allow rwx pool=mypool"
    
#Modify the user's capabilities 
[ceph@ceph-deploy ceph-cluster]$ ceph auth caps client.jack mon 'allow r' osd 'allow rw pool=mypool' 
updated caps for client.jack    

#Verify the capabilities again 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.jack 
exported keyring for client.jack 
[client.jack] 
    key = AQAtr8dfi37XMhAADbHWEZ0shY1QZ5A8eBpeoQ== 
    caps mon = "allow r" 
    caps osd = "allow rw pool=mypool"

1.2.7 Deleting a user

To delete a user, use ceph auth del TYPE.ID, where TYPE is one of client, osd, mon or mds, and ID is the user name or the daemon's ID.

[ceph@ceph-deploy ceph-cluster]$ ceph auth del client.tom

updated

1.3 Keyring management

A Ceph keyring is a file (a collection file) that stores secrets, keys and certificates and lets a client access Ceph after authenticating. A single keyring file can hold one or more credentials; each key is associated with an entity name plus capabilities, where the entity name has the form:

{client, mon, mds, osd}.name

When a client accesses the Ceph cluster, Ceph looks for its keyring in the following four preset keyring file locations:

/etc/ceph/<$cluster name>.<user $type>.<user $id>.keyring #keyring for a single user 
/etc/ceph/cluster.keyring #keyring holding multiple users 
/etc/ceph/keyring #keyring for multiple users when no cluster name is defined 
/etc/ceph/keyring.bin #compiled binary keyring file

1.3.1 Backing up and restoring users with keyring files

Users added with commands such as ceph auth add still need a user keyring file, which is created separately with the ceph-authtool command.

Command format for creating a keyring file:

ceph-authtool --create-keyring FILE

1.3.1.1 Exporting user authentication information to a keyring file

Export the user's information to a keyring file so that it is backed up.

#Create the user: 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get-or-create client.user1 mon 'allow r' osd 'allow * pool=mypool' 
[client.user1] 
	key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
	
#Verify the user 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 
exported keyring for client.user1 
[client.user1] 
    key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool" 

#Create the keyring file: 
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool --create-keyring ceph.client.user1.keyring 

#Verify the keyring file: 

[ceph@ceph-deploy ceph-cluster]$ cat ceph.client.user1.keyring 

#It is an empty file 
[ceph@ceph-deploy ceph-cluster]$ file ceph.client.user1.keyring
ceph.client.user1.keyring: empty

#Export the keyring to the specified file 
[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 -o ceph.client.user1.keyring 
exported keyring for client.user1

#Verify the specified user's keyring file: 
[ceph@ceph-deploy ceph-cluster]$ cat ceph.client.user1.keyring 
[client.user1] 
    key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool"    

When creating a keyring that contains a single user, it is usually recommended to name it after the Ceph cluster name, the user type and the user name, with a .keyring suffix, and to store it in the /etc/ceph directory. For example, for the client.user1 user, create ceph.client.user1.keyring.

1.3.1.2 Restoring user authentication information from a keyring file

You can use ceph auth import -i to specify a keyring file and import it into Ceph; in effect this provides user backup and restore:

[ceph@ceph-deploy ceph-cluster]$ cat ceph.client.user1.keyring #verify the user's keyring file 
[client.user1] 
    key = AQAKkgthpbdlIxAABO28D3eK5hTxRfx7Omhquw== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool" 

[ceph@ceph-deploy ceph-cluster]$ ceph auth del client.user1 #simulate accidentally deleting the user 
Updated

[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 #confirm the user has been deleted 
Error ENOENT: failed to find client.user1 in keyring

[ceph@ceph-deploy ceph-cluster]$ ceph auth import -i ceph.client.user1.keyring #import the user 
imported keyring

[ceph@ceph-deploy ceph-cluster]$ ceph auth get client.user1 #verify the restored user 
exported keyring for client.user1 
[client.user1] 
    key = AQAKkgthpbdlIxAABO28D3eK5hTxRfx7Omhquw== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool"

1.3.2 Multiple users in one keyring file

A single keyring file can contain the credentials of several different users.

#Create the keyring file: 
$ ceph-authtool --create-keyring ceph.client.user.keyring #create an empty keyring file 
creating ceph.client.user.keyring

#Import the contents of the admin user's keyring file into the user keyring file: 
$ ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.admin.keyring 
importing contents of ./ceph.client.admin.keyring into ./ceph.client.user.keyring

#Verify the keyring file: 
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool -l ./ceph.client.user.keyring 
[client.admin] 
    key = AQAGDKJfQk/dAxAA3Y+9xoE/p8in6QjoHeXmeg== 
    caps mds = "allow *" 
    caps mgr = "allow *" 
    caps mon = "allow *" 
    caps osd = "allow *"

#Import another user's keyring as well:
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool ./ceph.client.user.keyring --import-keyring ./ceph.client.user1.keyring 
importing contents of ./ceph.client.user1.keyring into ./ceph.client.user.keyring

#Verify again that the keyring file now contains the credentials of multiple users: 
[ceph@ceph-deploy ceph-cluster]$ ceph-authtool -l ./ceph.client.user.keyring 
[client.admin] 
    key = AQAGDKJfQk/dAxAA3Y+9xoE/p8in6QjoHeXmeg== 
    caps mds = "allow *" 
    caps mgr = "allow *" 
    caps mon = "allow *" 
    caps osd = "allow *" 
[client.user1] 
    key = AQAUUchfjpMqGRAARV6h0ofdDEneuaRnxuHjoQ== 
    caps mon = "allow r" 
    caps osd = "allow * pool=mypool"


2 Mounting RBD and CephFS with a regular (non-admin) user

2.1 Creating a storage pool

#Create the pool: 
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool create rbd-data1 32 32
pool 'rbd-data1' created 

#Verify the pool: 
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool ls
device_health_metrics
myrbd1
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
cephfs-metadata
cephfs-data
rbd-data1

#Enable the rbd application on the pool. The relevant subcommand usage is: 
#  osd pool application enable <poolname> <app> {--yes-i-really-mean-it} -- enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname> 

magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool application enable rbd-data1 rbd
enabled application 'rbd' on pool 'rbd-data1'

#Initialize rbd on the pool: 
magedu@ceph-deploy:~/ceph-cluster$ rbd pool init -p rbd-data1

2.2 Creating images

An rbd pool cannot be used as a block device directly; you first create images in it as needed, and those images are then used as block devices. The rbd command can create, list and delete the images in a pool, as well as clone images, create snapshots, roll an image back to a snapshot, view snapshots, and so on. For example, the commands below create images in the rbd-data1 pool:

2.2.1 Creating the images

#Create two images: 
$ rbd create data-img1 --size 3G --pool rbd-data1 --image-format 2 --image-feature layering 
$ rbd create data-img2 --size 5G --pool rbd-data1 --image-format 2 --image-feature layering

#Verify the images: 
$ rbd ls --pool rbd-data1 
data-img1 
data-img2

#List more detailed image information: 
$ rbd ls --pool rbd-data1 -l 
NAME SIZE PARENT FMT PROT LOCK 
data-img1 3 GiB 2 
data-img2 5 GiB 2

2.2.2 Viewing detailed image information

magedu@ceph-deploy:~/ceph-cluster$ rbd --image data-img2 --pool rbd-data1 info
rbd image 'data-img2':
	size 3 GiB in 768 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 121e429921010
	block_name_prefix: rbd_data.121e429921010
	format: 2
	features: layering
	op_features: 
	flags: 
	create_timestamp: Sun Aug 29 20:31:03 2021
	access_timestamp: Sun Aug 29 20:31:03 2021
	modify_timestamp: Sun Aug 29 20:31:03 2021
	
$ rbd --image data-img1 --pool rbd-data1 info

2.2.3 Displaying in JSON format

magedu@ceph-deploy:~/ceph-cluster$ rbd ls --pool rbd-data1 -l --format json --pretty-format
[
    {
        "image": "data-img1",
        "id": "121e1146bfbda",
        "size": 3221225472,
        "format": 2
    },
    {
        "image": "data-img2",
        "id": "121e429921010",
        "size": 3221225472,
        "format": 2
    }
]

2.2.4 Other image features

#Feature overview 
layering: layered snapshot support, used for snapshots and copy-on-write. You can snapshot an image, protect the snapshot and clone new images from it; parent and child images share object data via COW. 

striping: striping v2 support, similar to RAID 0 except that in Ceph the data is spread across different objects; it can improve performance for workloads with many sequential reads and writes. 

exclusive-lock: exclusive lock support, which restricts an image to being used by only one client at a time. 

object-map: object map support (depends on exclusive-lock), which speeds up data import/export and used-space accounting. When enabled, a bitmap of all the image's objects is kept to record whether each object actually exists, which can accelerate I/O in some scenarios. 

fast-diff: fast calculation of differences between an image and its snapshots (depends on object-map). 

deep-flatten: snapshot flattening support, used to resolve snapshot dependencies during snapshot management. 

journaling: records modifications to a journal so that data can be recovered from it (depends on exclusive-lock); enabling this feature increases disk I/O.

Features enabled by default since Jewel: layering / exclusive-lock / object-map / fast-diff / deep-flatten
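
As a sketch, these features can also be enabled at creation time by passing a comma-separated list to --image-feature (the image name data-img3 is hypothetical and not used elsewhere in this document):

#Create an image with several features enabled from the start
$ rbd create data-img3 --size 3G --pool rbd-data1 --image-format 2 --image-feature layering,exclusive-lock,object-map,fast-diff
$ rbd --image data-img3 --pool rbd-data1 info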

2.2.5 Enabling image features

Note: if the client kernel does not support an enabled feature, the image cannot be mapped and mounted.

#Enable features on a specific image in a specific pool: 
$ rbd feature enable exclusive-lock --pool rbd-data1 --image data-img1 
$ rbd feature enable object-map --pool rbd-data1 --image data-img1
$ rbd feature enable fast-diff --pool rbd-data1 --image data-img1

#Verify the image features: 
$ rbd --image data-img1 --pool rbd-data1 info

2.2.6 Disabling image features

#Disable a feature on a specific image in a specific pool: 
$ rbd feature disable fast-diff --pool rbd-data1 --image data-img1

#Verify the image features: 
$ rbd --image data-img1 --pool rbd-data1 info

2.3 Mounting and using RBD from a client with a regular account

Test mounting and using RBD from a client using a regular (non-admin) account.

2.3.1 Creating a regular user and granting permissions

#Create the regular account 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth add client.shijie mon 'allow r' osd 'allow rwx pool=rbd-data1'
added key for client.shijie

#Verify the user information 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.shijie
[client.shijie]
	key = AQCwfithlAyDEBAAG6dylI+XDcJ+21jcKMNtZQ==
	caps mon = "allow r"
	caps osd = "allow rwx pool=rbd-data1"
exported keyring for client.shijie
    
#Create the user keyring file 
magedu@ceph-deploy:~/ceph-cluster$ ceph-authtool --create-keyring ceph.client.shijie.keyring
creating ceph.client.shijie.keyring

#Export the user's keyring 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.shijie -o ceph.client.shijie.keyring
exported keyring for client.shijie

#Verify the specified user's keyring file 
magedu@ceph-deploy:~/ceph-cluster$ cat ceph.client.shijie.keyring
[client.shijie]
	key = AQCwfithlAyDEBAAG6dylI+XDcJ+21jcKMNtZQ==
	caps mon = "allow r"
	caps osd = "allow rwx pool=rbd-data1"

2.3.2 Installing ceph-common

Ubuntu:
~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add - 
~# vim /etc/apt/sources.list 
~# apt install ceph-common
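
For reference, the sources.list edit above would add a Ceph repository line along these lines (a sketch assuming the Pacific release on Ubuntu 18.04 bionic, matching the Tsinghua mirror used elsewhere in this document):

~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" >> /etc/apt/sources.list 
~# apt update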

Centos: 
[root@ceph-client2 ~]# yum install epel-release 
[root@ceph-client2 ~]# yum install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm 
[root@ceph-client2 ~]# yum install ceph-common

2.3.3 Synchronizing the regular user's authentication files

magedu@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.shijie.keyring root@192.168.43.102:/etc/ceph/
The authenticity of host '192.168.43.102 (192.168.43.102)' can't be established.
ECDSA key fingerprint is SHA256:2lyoHBpFm5neq9RephfU/qVeXv9j/KGbyeJERycOFAU.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.43.102' (ECDSA) to the list of known hosts.
root@192.168.43.102's password: 
ceph.conf                                                                                     100%  266   549.1KB/s   00:00    
ceph.client.shijie.keyring                                                                    100%  125   448.4KB/s   00:00 

2.3.4 Verifying permissions on the client

root@ceph-mon2:~# cd /etc/ceph/ 
root@ceph-mon2:/etc/ceph# ls
ceph.client.admin.keyring  ceph.client.shijie.keyring  ceph.conf  rbdmap  tmpsNT_hI
root@ceph-mon2:/etc/ceph# ceph --user shijie -s #without --user, the admin account would be used by default

2.3.5 Mapping the RBD image

Map the RBD image using the regular user's credentials.

#Map the rbd image 
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 map data-img1
/dev/rbd0

#Verify the rbd device 
root@ceph-mon2:/etc/ceph# fdisk -l /dev/rbd0
root@ceph-mon2:/etc/ceph# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk 
└─sda1   8:1    0  200G  0 part /
sr0     11:0    1 1024M  0 rom  
rbd0   252:0    0    3G  0 disk 

2.3.6 Formatting and using the RBD image

root@ceph-mon2:/etc/ceph# mkfs.ext4 /dev/rbd0 
root@ceph-mon2:/etc/ceph# mkdir /data 
root@ceph-mon2:/etc/ceph#  mount /dev/rbd0 /data/
root@ceph-mon2:/etc/ceph# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            962M     0  962M   0% /dev
tmpfs           198M  832K  197M   1% /run
/dev/sda1       196G  5.0G  181G   3% /
tmpfs           986M     0  986M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           986M     0  986M   0% /sys/fs/cgroup
tmpfs           198M     0  198M   0% /run/user/0
/dev/rbd0       2.9G  9.0M  2.8G   1% /data    ###mounted successfully

2.3.7 Verifying the ceph kernel module

After an rbd image is mapped, the kernel automatically loads the libceph.ko module.

root@ceph-mon2:/etc/ceph# lsmod |grep ceph
libceph               315392  1 rbd
libcrc32c              16384  1 libceph
root@ceph-mon2:/etc/ceph# modinfo libceph
filename:       /lib/modules/4.15.0-154-generic/kernel/net/ceph/libceph.ko
license:        GPL
description:    Ceph core library
author:         Patience Warnick <patience@newdream.net>
author:         Yehuda Sadeh <yehuda@hq.newdream.net>
author:         Sage Weil <sage@newdream.net>
srcversion:     89A5EF37D4AA2C7E073D35B
depends:        libcrc32c
retpoline:      Y
intree:         Y
name:           libceph
vermagic:       4.15.0-154-generic SMP mod_unload modversions 
signat:         PKCS#7
signer:         
sig_key:        
sig_hashalgo:   md4

2.3.8 Online RBD resizing

#On the admin node, resize the rbd image
magedu@ceph-deploy:~/ceph-cluster$ rbd ls --pool rbd-data1
data-img1
data-img2
magedu@ceph-deploy:~/ceph-cluster$ rbd resize --pool rbd-data1 --size 10240 data-img1
Resizing image: 100% complete...done.

#Confirm on the client
root@ceph-mon2:/etc/ceph# blockdev --getsize64 /dev/rbd0
10737418240
root@ceph-mon2:/etc/ceph# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk 
└─sda1   8:1    0  200G  0 part /
sr0     11:0    1 1024M  0 rom  
rbd0   252:0    0   10G  0 disk /data  ##the block device has been resized successfully
root@ceph-mon2:/etc/ceph# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            962M     0  962M   0% /dev
tmpfs           198M  832K  197M   1% /run
/dev/sda1       196G  5.0G  181G   3% /
tmpfs           986M     0  986M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           986M     0  986M   0% /sys/fs/cgroup
tmpfs           198M     0  198M   0% /run/user/0
/dev/rbd0       2.9G  9.0M  2.8G   1% /data ##the filesystem is still 3G here

#Grow the ext4 filesystem to use the new space
root@ceph-mon2:/etc/ceph# resize2fs /dev/rbd0
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/rbd0 is mounted on /data; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/rbd0 is now 2621440 (4k) blocks long.

root@ceph-mon2:/etc/ceph# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            962M     0  962M   0% /dev
tmpfs           198M  832K  197M   1% /run
/dev/sda1       196G  5.0G  181G   3% /
tmpfs           986M     0  986M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           986M     0  986M   0% /sys/fs/cgroup
tmpfs           198M     0  198M   0% /run/user/0
/dev/rbd0       9.8G   14M  9.4G   1% /data

#This method only works for block devices formatted with ext4. For XFS, run xfs_growfs instead, e.g. xfs_growfs /dev/rbd0 
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 map data-img2
/dev/rbd1
root@ceph-mon2:/etc/ceph# mkfs.xfs /dev/rbd1
root@ceph-mon2:/etc/ceph# mkdir /data1
root@ceph-mon2:/etc/ceph# mount /dev/rbd1 /data1
root@ceph-mon2:/etc/ceph# df -h
/dev/rbd0       9.8G   14M  9.4G   1% /data
/dev/rbd1       3.0G   36M  3.0G   2% /data1
magedu@ceph-deploy:~/ceph-cluster$ rbd resize --pool rbd-data1 --size 5120 data-img2
root@ceph-mon2:/etc/ceph# lsblk 
rbd0   252:0    0   10G  0 disk /data
rbd1   252:16   0    5G  0 disk /data1
root@ceph-mon2:/etc/ceph# xfs_growfs /dev/rbd1
root@ceph-mon2:/etc/ceph# df -h
/dev/rbd0       9.8G   14M  9.4G   1% /data
/dev/rbd1       5.0G   39M  5.0G   1% /data1 ##resize successful

2.3.9 Configuring automatic mounting at boot

root@ceph-mon2:/etc/ceph# cat /etc/rc.d/rc.local 
rbd --user shijie -p rbd-data1 map data-img1 
mount /dev/rbd0 /data/

root@ceph-mon2:/etc/ceph# chmod a+x /etc/rc.d/rc.local 
root@ceph-mon2:/etc/ceph# reboot

#Check the mappings 
root@ceph-mon2:/etc/ceph#  rbd showmapped 
id  pool       namespace  image      snap  device   
0   rbd-data1             data-img1  -     /dev/rbd0
1   rbd-data1             data-img2  -     /dev/rbd1

#Verify the mounts 
root@ceph-mon2:/etc/ceph# df -TH
/dev/rbd0      ext4       11G   15M   11G   1% /data
/dev/rbd1      xfs       5.4G   40M  5.4G   1% /data1
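
As an alternative sketch to the rc.local approach, the rbdmap facility that ships with ceph-common (the /etc/ceph/rbdmap file listed earlier) can restore mappings at boot; the entry format and service name below follow the standard rbdmap convention, and the exact entries are illustrative:

#Describe the image and credentials in /etc/ceph/rbdmap
root@ceph-mon2:/etc/ceph# cat /etc/ceph/rbdmap
rbd-data1/data-img1 id=shijie,keyring=/etc/ceph/ceph.client.shijie.keyring
#Enable the rbdmap service so the mapping is re-created at boot
root@ceph-mon2:/etc/ceph# systemctl enable rbdmap.service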

2.3.10 Unmounting the RBD image and removing the mapping

root@ceph-mon2:/etc/ceph# umount /data 
root@ceph-mon2:/etc/ceph# umount /data1 
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 unmap data-img1
root@ceph-mon2:/etc/ceph# rbd --user shijie -p rbd-data1 unmap data-img2

Once an image is deleted its data is deleted as well and cannot be recovered, so be careful when performing delete operations.
#Delete the data-img1 image from the rbd-data1 pool: 
magedu@ceph-deploy:~/ceph-cluster$ rbd rm --pool rbd-data1 --image data-img1 

2.4 Mounting CephFS with a regular user

CephFS requires the Metadata Service (MDS), whose daemon is ceph-mds. The ceph-mds process manages the metadata of the files stored on CephFS and coordinates access to the Ceph storage cluster.

2.4.1 Deploying ceph-mds

root@ceph-mgr1:~# apt install ceph-mds

2.4.2 Creating the CephFS metadata and data pools

Before using CephFS you must create a file system in the cluster and assign it a metadata pool and a data pool. Below we create a file system named mycephfs for testing, using cephfs-metadata as its metadata pool and cephfs-data as its data pool:

magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
magedu@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
magedu@ceph-deploy:~/ceph-cluster$ ceph -s
magedu@ceph-deploy:~/ceph-cluster$ ceph fs new mycephfs cephfs-metadata cephfs-data  #create a file system named mycephfs
new fs with metadata pool 7 and data pool 8

magedu@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
magedu@ceph-deploy:~/ceph-cluster$ ceph fs status mycephfs ##check the status of the specified CephFS
mycephfs - 0 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mon2  Reqs:    0 /s    14     13     12      0   
 1    active  ceph-mon1  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   252k  32.6G  
  cephfs-data      data       0   21.7G  
STANDBY MDS  
 ceph-mgr1   
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

2.4.3 Verifying the CephFS service status

magedu@ceph-deploy:~/ceph-cluster$ ceph mds stat 
mycephfs:2 {0=ceph-mon2=up:active,1=ceph-mon1=up:active} 1 up:standby
#the MDS daemons are now in the active state

2.4.4 Creating a client account

#Create the account 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth add client.yanyan mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
added key for client.yanyan

#Verify the account 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.yanyan 
[client.yanyan]
	key = AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==
	caps mds = "allow rw"
	caps mon = "allow r"
	caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.yanyan
	
#Create the user keyring file 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth get client.yanyan -o ceph.client.yanyan.keyring
exported keyring for client.yanyan

#Create the key file: 
magedu@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.yanyan > yanyan.key

#Verify the user's keyring file 
magedu@ceph-deploy:~/ceph-cluster$ cat ceph.client.yanyan.keyring
[client.yanyan]
	key = AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==
	caps mds = "allow rw"
	caps mon = "allow r"
	caps osd = "allow rwx pool=cephfs-data"

2.4.5 Synchronizing the client authentication files

magedu@ceph-deploy:~/ceph-cluster$ scp ceph.conf ceph.client.yanyan.keyring yanyan.key root@192.168.43.102:/etc/ceph/
root@192.168.43.102's password: 
ceph.conf                                                                                     100%  266   808.6KB/s   00:00    
ceph.client.yanyan.keyring                                                                    100%  150   409.1KB/s   00:00    
yanyan.key                                                                                    100%   40   112.7KB/s   00:00 

2.4.6 Verifying permissions on the client

# if ceph-mds is not yet installed on this node, run apt install ceph-mds first 
root@ceph-mon2:/etc/ceph# ceph --user yanyan -s 
  cluster:
    id:     cce50457-e522-4841-9986-a09beefb2d65
    health: HEALTH_WARN
            1/3 mons down, quorum ceph-mon1,ceph-mon2
            Degraded data redundancy: 290/870 objects degraded (33.333%), 97 pgs degraded, 297 pgs undersized
            47 pgs not deep-scrubbed in time
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2 (age 64m), out of quorum: ceph-mon3
    mgr: ceph-mgr1(active, since 64m)
    mds: 2/2 daemons up, 1 standby
    osd: 7 osds: 5 up (since 64m), 5 in (since 7d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   9 pools, 297 pgs
    objects: 290 objects, 98 MiB
    usage:   515 MiB used, 49 GiB / 50 GiB avail
    pgs:     290/870 objects degraded (33.333%)
             200 active+undersized
             97  active+undersized+degraded

2.4.7 Mounting CephFS from kernel space

A client can mount CephFS in two ways: from kernel space or from user space. A kernel-space mount requires kernel support for the ceph module, while a user-space mount requires installing ceph-fuse.
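
For reference, a user-space mount with ceph-fuse might look like the following sketch (the /data3 mount point is hypothetical; the rest of this section uses the kernel client):

#Install the FUSE client and mount CephFS with the yanyan credentials
root@ceph-mon2:~# apt install ceph-fuse
root@ceph-mon2:~# mkdir /data3
root@ceph-mon2:~# ceph-fuse --id yanyan -k /etc/ceph/ceph.client.yanyan.keyring -m 192.168.43.101:6789,192.168.43.102:6789 /data3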

2.4.7.1 Mounting with a secret key file

root@ceph-mon2:/etc/ceph# mount -t ceph 192.168.43.101:6789,192.168.43.102:6789:/ /data2 -o name=yanyan,secretfile=/etc/ceph/yanyan.key
root@ceph-mon2:/etc/ceph# df -TH
/dev/rbd0                                 ext4       11G   15M   11G   1% /data
/dev/rbd1                                 xfs       5.4G   40M  5.4G   1% /data1
192.168.43.101:6789,192.168.43.102:6789:/ ceph       24G     0   24G   0% /data2
#Verify that data can be written 
root@ceph-mon2:/etc/ceph# cp /etc/issue /data2/ 
root@ceph-mon2:/etc/ceph# dd if=/dev/zero of=/data2/testfile bs=2M count=100
100+0 records in
100+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.573734 s, 366 MB/s

2.4.7.2 Mounting with the key value directly

root@ceph-mon2:/data2# tail /etc/ceph/yanyan.key 
AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==

root@ceph-mon2:/# umount /data2
root@ceph-mon2:/# mount -t ceph 192.168.43.101:6789,192.168.43.102:6789:/ /data2 -o name=yanyan,secret=AQDGiithR8i2MBAAaz7HqOni9NxCegRvSh4XZQ==
root@ceph-mon2:/# cd /data2
root@ceph-mon2:/data2# ls
issue  testfile

root@ceph-mon2:/data2# df -TH
/dev/rbd0                                 ext4       11G   15M   11G   1% /data
/dev/rbd1                                 xfs       5.4G   40M  5.4G   1% /data1
192.168.43.101:6789,192.168.43.102:6789:/ ceph       21G  6.5G   14G  32% /data2

2.4.7.3 Mounting at boot

root@ceph-mon2:/# cat /etc/fstab
192.168.43.101:6789,192.168.43.102:6789:/ /data2 ceph defaults,name=yanyan,secretfile=/etc/ceph/yanyan.key,_netdev 0 0
#The IPs are the mon addresses; be sure to specify _netdev so the mount waits for the network

root@ceph-mon2:/# umount /data2
root@ceph-mon2:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       9.8G   14M  9.4G   1% /data
/dev/rbd1       5.0G   39M  5.0G   1% /data1
root@ceph-mon2:/# mount -a
root@ceph-mon2:/# df -TH
Filesystem                                Type      Size  Used Avail Use% Mounted on
/dev/rbd0                                 ext4       11G   15M   11G   1% /data
/dev/rbd1                                 xfs       5.4G   40M  5.4G   1% /data1
192.168.43.101:6789,192.168.43.102:6789:/ ceph       21G  6.5G   14G  32% /data2

3 MDS high availability

3.1 Current MDS server status

[ceph@ceph-deploy ceph-cluster]$ ceph mds stat 
mycephfs-1/1/1 up {0=ceph-mgr1=up:active}

3.2 Adding MDS servers

Add ceph-mgr1, ceph-mon1 and ceph-mon2 to the Ceph cluster in the mds role, ultimately building a two-active/one-standby MDS layout for high availability and performance.

#Install the ceph-mds service on the mds servers 
[root@ceph-mon1 ~]# yum install ceph-mds -y 
[root@ceph-mon2 ~]# yum install ceph-mds -y 

#Add the mds servers 
magedu@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr1 
magedu@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mon1 
magedu@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mon2

#Verify the current mds server status: 
magedu@ceph-deploy:~/ceph-cluster$ ceph mds stat 
mycephfs:2 {0=ceph-mon2=up:active} 2 up:standby

3.3 Verifying the current cluster status

At this point one mds server is active and two are on standby.

magedu@ceph-deploy:~/ceph-cluster$ ceph fs status
mycephfs - 1 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mon2  Reqs:    0 /s    16     15     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata  1148k  19.2G  
  cephfs-data      data    12.0G  19.2G  
STANDBY MDS 
 ceph-mon1
 ceph-mgr1   
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.4 Current file system status

magedu@ceph-deploy:~/ceph-cluster$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name	mycephfs
epoch	12
flags	12
created	2021-08-22T11:43:04.596564+0800
modified	2021-08-22T13:40:18.974219+0800
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
required_client_features	{}
last_failure	0
last_failure_osd_epoch	252
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=54786}
failed	
damaged	
stopped	
data_pools	[8]
metadata_pool	7
inline_data	disabled
balancer	
standby_count_wanted	1
[mds.ceph-mgr1{0:54786} state up:active seq 1868 addr [v2:192.168.43.104:6800/1237850653,v1:192.168.43.104:6801/1237850653]]

3.5 Setting the number of active MDS daemons

There are currently three mds servers, with one active and two on standby. The deployment can be optimized to run two active and one standby.

magedu@ceph-deploy:~/ceph-cluster$ ceph fs set mycephfs max_mds 2
magedu@ceph-deploy:~/ceph-cluster$ ceph fs status #the maximum number of simultaneously active mds daemons is now set to 2
mycephfs - 0 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr1  Reqs:    0 /s    14     13     12      0   
 1    active  ceph-mon1  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   163k  32.8G  
  cephfs-data      data       0   21.9G  
STANDBY MDS  
 ceph-mon2   
MDS version: ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

3.6 MDS high-availability tuning

A standby MDS can be tied to a specific active MDS through configuration:

[ceph@ceph-deploy ceph-cluster]$ vim ceph.conf 
[global] 
fsid = 23b0f9f2-8db3-477f-99a7-35a90eaf3dab 
public_network = 172.31.0.0/21 
cluster_network = 192.168.0.0/21 
mon_initial_members = ceph-mon1 
mon_host = 172.31.6.104 
auth_cluster_required = cephx 
auth_service_required = cephx 
auth_client_required = cephx 

mon clock drift allowed = 2 
mon clock drift warn backoff = 30 

[mds.ceph-mgr2] 
#mds_standby_for_fscid = mycephfs 
mds_standby_for_name = ceph-mgr1 
mds_standby_replay = true 

[mds.ceph-mon3] 
mds_standby_for_name = ceph-mon2 
mds_standby_replay = true

3.7 Distributing the config file and restarting the MDS services

#Push the config file so each mds picks it up on restart 
$ ceph-deploy --overwrite-conf config push ceph-mon1 
$ ceph-deploy --overwrite-conf config push ceph-mon2 
$ ceph-deploy --overwrite-conf config push ceph-mgr1 
 
[root@ceph-mon1 ~]# systemctl restart ceph-mds@ceph-mon1.service 
[root@ceph-mon2 ~]# systemctl restart ceph-mds@ceph-mon2.service 
[root@ceph-mgr1 ~]# systemctl restart ceph-mds@ceph-mgr1.service 

4 Using Ceph RGW

Ceph uses buckets as storage containers to store object data and isolate it between users. Data lives in buckets, and user permissions are also granted per bucket, so different users can be given different permissions on different buckets to implement access control.

Bucket characteristics:

  • A bucket is the container in which you store objects; every object must belong to a bucket. You can set and change bucket attributes to control region, access permissions, lifecycle and so on, and these settings apply to every object in the bucket, so you can create different buckets to implement different management policies.
  • The inside of a bucket is flat: there is no notion of file-system directories, and all objects belong directly to their bucket.
  • Each user can own multiple buckets.
  • A bucket name must be globally unique within the object store (OSS) and cannot be renamed after creation.
  • There is no limit on the number of objects inside a bucket.

4.1 Deploying the radosgw service

apt install radosgw
#run the following on the ceph-deploy node: 
ceph-deploy rgw create ceph-mgr1
magedu@ceph-deploy:~/ceph-cluster$ sudo curl http://192.168.43.104:7480/  #mgr1's IP address, port 7480

<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

4.2 Creating a user and granting access

magedu@ceph-deploy:~/ceph-cluster$ radosgw-admin  user create --uid="user1" --display-name="user1"

{
    "user_id": "user1",
    "display_name": "user1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "user1",
            "access_key": "3PDRWUWJ8ML5G4CQ0XXK",
            "secret_key": "ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

From the output above you can see the access_key and secret_key, as well as the bucket and user quota settings.

  • radosgw-admin user modify — modify user information;
  • radosgw-admin user rm — delete a user;
  • radosgw-admin user enable / radosgw-admin user suspend — enable and suspend a user (a usage sketch follows below).
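
A minimal sketch of how those subcommands are invoked (the flags and values are illustrative only and are not actually run here, since user1 continues to be used below):

# Modify the user's display name
radosgw-admin user modify --uid=user1 --display-name="user1-renamed"
# Temporarily suspend and then re-enable the user
radosgw-admin user suspend --uid=user1
radosgw-admin user enable --uid=user1
# Delete the user
radosgw-admin user rm --uid=user1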

The user has now been created, so we can configure s3cmd to access the cluster. Accessing the cluster requires the RGW access domain name; in production it is best to set up DNS resolution, but for this test we simply use the hosts file:

Note: this cluster can run multiple radosgw instances and pointing to any of them works, but in production you should point to the radosgw VIP address.

Install the s3cmd tool

root@ceph-mon2:/# apt install s3cmd -y
# View the access user
root@ceph-mon2:/# radosgw-admin user info --uid user1
{
    "user_id": "user1",
    "display_name": "user1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "user1",
            "access_key": "3PDRWUWJ8ML5G4CQ0XXK",
            "secret_key": "ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

# Configure s3cmd
root@ceph-mon2:/# echo "192.168.43.104 rgw.zOukun.com" >> /etc/hosts
s3cmd --configure #supply the following values
Access Key: 3PDRWUWJ8ML5G4CQ0XXK
Secret Key: ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE
S3 Endpoint [s3.amazonaws.com]: 192.168.43.104:7480
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.43.104:7480/%(bucket)s
Use HTTPS protocol [Yes]: False
Test access with supplied credentials? [Y/n] y
Save settings? [y/N] y
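
After the wizard finishes, the relevant entries it writes to /root/.s3cfg look roughly like this (a sketch; the values simply mirror the answers given above):

root@ceph-mon2:/# grep -E "access_key|secret_key|host_base|host_bucket|use_https|signature_v2" /root/.s3cfg
access_key = 3PDRWUWJ8ML5G4CQ0XXK
secret_key = ZSm45j0Sq9AjqBSPjfFpQbwHdN4PUl3nuQnAnAkE
host_base = 192.168.43.104:7480
host_bucket = 192.168.43.104:7480/%(bucket)s
use_https = False
signature_v2 = False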

Create a bucket

root@ceph-mon2:/# s3cmd mb s3://z0ukun-rgw-bucket
ERROR: S3 error: 403 (SignatureDoesNotMatch)
#This error is caused by the signature version; enabling the v2 signature in the s3cmd config fixes it:
root@ceph-mon2:/# sed -i '/signature_v2/s/False/True/g' root/.s3cfg
root@ceph-mon2:/# s3cmd mb s3://z0ukun-rgw-bucket
Bucket 's3://z0ukun-rgw-bucket/' created
root@ceph-mon2:/# s3cmd ls
2021-08-29 15:17  s3://z0ukun-rgw-bucket

Upload data

# Upload a file
s3cmd put /etc/fstab s3://z0ukun-rgw-bucket/fstab

# View file details
s3cmd ls s3://z0ukun-rgw-bucket
s3cmd info s3://z0ukun-rgw-bucket

# Download the file
s3cmd get s3://z0ukun-rgw-bucket/fstab test-fstab

root@ceph-mon2:~# s3cmd get s3://z0ukun-rgw-bucket/fstab test-fstab
download: 's3://z0ukun-rgw-bucket/fstab' -> 'test-fstab'  [1 of 1]
 669 of 669   100% in    0s   159.27 kB/s  done
root@ceph-mon2:~# ls
test-fstab
root@ceph-mon2:~# cat test-fstab 
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=d1949cc7-daf3-4b9c-8472-24d3041740b2 /               ext4    errors=remount-ro 0       1
/swapfile                                 none            swap    sw              0       0
192.168.43.101:6789,192.168.43.102:6789:/ /data2 ceph defaults,name=yanyan,secretfile=/etc/ceph/yanyan.key,_netdev 0 0

Besides these common basic functions, s3cmd also provides sync, cp, mv, setpolicy, multipart and more; run s3cmd --help for the full command reference.
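
A short sketch of a couple of those subcommands (the paths and object names are illustrative):

# Sync a local directory into the bucket
s3cmd sync /etc/ceph/ s3://z0ukun-rgw-bucket/ceph-conf/
# Copy an object within the bucket, then delete the copy
s3cmd cp s3://z0ukun-rgw-bucket/fstab s3://z0ukun-rgw-bucket/fstab.bak
s3cmd del s3://z0ukun-rgw-bucket/fstab.bak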

5 Ceph dashboard and monitoring

In newer Ceph releases the dashboard must be installed separately, and it must be installed on the mgr node.

root@ceph-mgr1:~# ceph mgr module enable dashboard #enable the module
root@ceph-mgr1:~# apt-cache madison ceph-mgr-dashboard 
ceph-mgr-dashboard | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
root@ceph-mgr1:~# apt install ceph-mgr-dashboard
Reading package lists... Done
Building dependency tree       
Reading state information... Done
ceph-mgr-dashboard is already the newest version (16.2.5-1bionic).
0 upgraded, 0 newly installed, 0 to remove and 19 not upgraded.

root@ceph-mgr1:~# ceph mgr module ls
{
    "always_on_modules": [
        "balancer",
        "crash",
        "devicehealth",
        "orchestrator",
        "pg_autoscaler",
        "progress",
        "rbd_support",
        "status",
        "telemetry",
        "volumes"
    ],
    "enabled_modules": [
        "iostat",
        "nfs",
        "restful"
    ],
    "disabled_modules": [
        {
            "name": "alerts",
            "can_run": true,
            "error_string": "",
            "module_options": {
                "interval": {
                    "name": "interval",
                    "type": "secs",
                    "level": "advanced",
                    "flags": 1,
                    "default_value": "60",
                    "min": "",
                    "max": "",
                    .................
[ceph@ceph-deploy ceph-cluster]$ ceph mgr module enable dashboard #enable the module                    
Note: after the module is enabled it is not yet reachable; you must disable SSL (or enable SSL) and specify the listen address.               

5.1 Enabling the dashboard module

The Ceph dashboard is configured on the mgr node, where SSL can be enabled or disabled as follows:

root@ceph-mgr1:~# ceph config set mgr mgr/dashboard/ssl false #disable SSL

root@ceph-mgr1:~# ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 192.168.43.104 #set the dashboard listen address

root@ceph-mgr1:~# ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 9009 #set the dashboard listen port

#Verify the ceph cluster status: 
[ceph@ceph-deploy ceph-cluster]$ ceph -s 
cluster: 
    id: 23b0f9f2-8db3-477f-99a7-35a90eaf3dab 
    health: HEALTH_OK 

services: 
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 
    mgr: ceph-mgr1(active), standbys: ceph-mgr2 
    mds: mycephfs-2/2/2 up {0=ceph-mgr1=up:active,1=ceph-mgr2=up:active}, 1 up:standby 
    osd: 12 osds: 12 up, 12 in 
    rgw: 2 daemons active 
    
data: 
    pools: 9 pools, 256 pgs 
    objects: 411 objects, 449 MiB 
    usage: 15 GiB used, 1.2 TiB / 1.2 TiB avail 
    pgs: 256 active+clean 

io:
	client: 8.0 KiB/s rd, 0 B/s wr, 7 op/s rd, 5 op/s wr 
	
	
The first time the dashboard plugin is enabled it can take a while (a few minutes) before it is reachable on the node where it was enabled. If you see the error: Module 'dashboard' has failed: error('No socket could be created',) check whether the mgr service is running properly; restarting the mgr service usually resolves it.
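
A sketch of restarting the mgr daemon on the mgr node (the unit name follows the same ceph-<daemon>@<host> pattern used for the mds restarts above):

root@ceph-mgr1:~# systemctl restart ceph-mgr@ceph-mgr1.service
root@ceph-mgr1:~# systemctl status ceph-mgr@ceph-mgr1.service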

5.2 Verifying the port and process on the mgr node

[root@ceph-mgr1 ~]# lsof -i:9009 
COMMAND  PID  USER FD  TYPE DEVICE SIZE/OFF NODE NAME 
ceph-mgr 2338 ceph 28u  IPv4 23986  0t0      TCP  *:pichat (LISTEN)

5.3 Verifying dashboard access

http://192.168.43.104:9009/#/login

5.4 Setting the dashboard account and password

magedu@ceph-deploy:~/ceph-cluster$ touch pass.txt
magedu@ceph-deploy:~/ceph-cluster$ echo "12345678" > pass.txt 
magedu@ceph-deploy:~/ceph-cluster$ ceph dashboard set-login-credentials jack -i pass.txt

