Installing a Ceph Cluster with cephadm


1 System Planning

1.1 System version

~# cat /etc/issue
Ubuntu 20.04.3 LTS \n \l

1.2 System time synchronization

~# apt -y install chrony
~# systemctl start chrony
~# systemctl enable chrony
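
Optionally, verify that chrony is enabled and actually synchronized before bootstrapping (cephadm's host check expects a running time-sync unit), for example:

~# chronyc sources -v    # list configured time sources and their sync state
~# timedatectl status    # expect "System clock synchronized: yes"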

1.3 Server planning

Hostname        IP               Roles
cephadm-deploy  192.168.174.200  cephadm,monitor,mgr,rgw,mds,osd,nfs
ceph-node01     192.168.174.103  monitor,mgr,rgw,mds,osd,nfs
ceph-node02     192.168.174.104  monitor,mgr,rgw,mds,osd,nfs
ceph-node03     192.168.174.105  monitor,mgr,rgw,mds,osd,nfs
ceph-node04     192.168.174.120  monitor,mgr,rgw,mds,osd,nfs

1.4 Install Docker

~# apt -y install docker-ce
~# systemctl enable --now docker
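
Note that docker-ce is not in the stock Ubuntu repositories; if the Docker CE apt repository is not yet configured, it has to be added before the install above. A minimal sketch following Docker's standard Ubuntu setup:

~# apt -y install curl gnupg lsb-release
~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
~# echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
~# apt update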

1.5 Set hostnames

~# cat /etc/hosts
192.168.174.200 cephadm-deploy
192.168.174.103 ceph-node01
192.168.174.104 ceph-node02
192.168.174.105 ceph-node03
192.168.174.120 ceph-node04
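
Each node's hostname should match its entry above, and this hosts file should be present on every node. A minimal sketch (run the matching hostnamectl command on each node; the scp loop assumes root SSH access to the other hosts):

~# hostnamectl set-hostname cephadm-deploy
~# for h in ceph-node01 ceph-node02 ceph-node03 ceph-node04; do scp /etc/hosts $h:/etc/hosts; done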

1.6 Software list

ceph: 16.2.7  pacific (stable)

cephadm: 16.2.7

2 Installing cephadm

2.1 Installation via curl

2.1.1 Download the cephadm script

root@cephadm-deploy:~/cephadm# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm

2.1.2 Make the script executable

root@cephadm-deploy:~/cephadm# chmod +x cephadm

2.1.3 Add the Ceph repository

root@cephadm-deploy:~/cephadm# ./cephadm add-repo --release pacific
Installing repo GPG key from https://download.ceph.com/keys/release.gpg...
Installing repo file at /etc/apt/sources.list.d/ceph.list...
Updating package list...
Completed adding repo.

2.1.4 Check available cephadm versions

root@cephadm-deploy:~/cephadm# apt-cache madison cephadm
   cephadm | 16.2.7-1focal | https://download.ceph.com/debian-pacific focal/main amd64 Packages
   cephadm | 15.2.14-0ubuntu0.20.04.1 | http://mirrors.aliyun.com/ubuntu focal-updates/universe amd64 Packages
   cephadm | 15.2.12-0ubuntu0.20.04.1 | http://mirrors.aliyun.com/ubuntu focal-security/universe amd64 Packages
   cephadm | 15.2.1-0ubuntu1 | http://mirrors.aliyun.com/ubuntu focal/universe amd64 Packages

2.1.5 Install cephadm

root@cephadm-deploy:~/cephadm# ./cephadm install

2.1.6 Check the cephadm install path

root@cephadm-deploy:~/cephadm# which cephadm
/usr/sbin/cephadm

2.2 Installation via the package manager

2.2.1 Add the repository

root@cephadm-deploy:~/cephadm# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@cephadm-deploy:~/cephadm# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@cephadm-deploy:~/cephadm# apt update

2.2.2 Check available cephadm versions

root@cephadm-deploy:~/cephadm# apt-cache madison cephadm
   cephadm | 16.2.7-1focal | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific focal/main amd64 Packages
   cephadm | 15.2.14-0ubuntu0.20.04.1 | http://mirrors.aliyun.com/ubuntu focal-updates/universe amd64 Packages
   cephadm | 15.2.12-0ubuntu0.20.04.1 | http://mirrors.aliyun.com/ubuntu focal-security/universe amd64 Packages
   cephadm | 15.2.1-0ubuntu1 | http://mirrors.aliyun.com/ubuntu focal/universe amd64 Packages

2.2.3 Install cephadm

root@cephadm-deploy:~/cephadm# apt -y install cephadm

2.3 cephadm usage help

~# cephadm -h
usage: cephadm [-h] [--image IMAGE] [--docker] [--data-dir DATA_DIR] [--log-dir LOG_DIR] [--logrotate-dir LOGROTATE_DIR] [--sysctl-dir SYSCTL_DIR] [--unit-dir UNIT_DIR] [--verbose] [--timeout TIMEOUT]
               [--retry RETRY] [--env ENV] [--no-container-init]
               {version,pull,inspect-image,ls,list-networks,adopt,rm-daemon,rm-cluster,run,shell,enter,ceph-volume,zap-osds,unit,logs,bootstrap,deploy,check-host,prepare-host,add-repo,rm-repo,install,registry-login,gather-facts,exporter,host-maintenance}
               ...

Bootstrap Ceph daemons with systemd and containers.

positional arguments:
{version,pull,inspect-image,ls,list-networks,adopt,rm-daemon,rm-cluster,run,shell,enter,ceph-volume,zap-osds,unit,logs,bootstrap,deploy,check-host,prepare-host,add-repo,rm-repo,install,registry-login,gather-facts,exporter,host-maintenance}
sub-command
    version             # get the Ceph version from the container
    pull                # pull the latest image version
    inspect-image       # inspect the local image
    ls                  # list daemon instances on this host
    list-networks       # list IP networks
    adopt               # adopt a daemon deployed with a different tool
    rm-daemon           # remove a daemon instance
    rm-cluster          # remove all daemons of a cluster
    run                 # run a Ceph daemon in a container, in the foreground
    shell               # run an interactive shell inside a daemon container
    enter               # run an interactive shell inside a running daemon container
    ceph-volume         # run ceph-volume inside a container
    zap-osds            # zap all OSDs associated with a particular fsid
    unit                # operate on a daemon's systemd unit
    logs                # print the logs of a daemon container
    bootstrap           # bootstrap a cluster (mon + mgr daemons)
    deploy              # deploy a daemon
    check-host          # check the host configuration
    prepare-host        # prepare the host for use by cephadm
    add-repo            # configure a package repository
    rm-repo             # remove the package repository configuration
    install             # install Ceph packages
    registry-login      # log the host into an authenticated registry
    gather-facts        # gather and return host-related information (JSON format)
    exporter            # start cephadm in exporter mode (web service), serving host/daemon/disk metadata
    host-maintenance    # manage a host's maintenance state

optional arguments:
  -h, --help            # show this help message and exit
  --image IMAGE         # container image. Can also be set via the "CEPHADM_IMAGE" environment variable (default: None)
  --docker              # use docker instead of podman (default: False)
  --data-dir DATA_DIR   # base directory for daemon data (default: /var/lib/ceph)
  --log-dir LOG_DIR     # base directory for daemon logs (default: /var/log/ceph)
  --logrotate-dir LOGROTATE_DIR
                        # location of logrotate configuration files (default: /etc/logrotate.d)
  --sysctl-dir SYSCTL_DIR
                        # location of sysctl configuration files (default: /usr/lib/sysctl.d)
  --unit-dir UNIT_DIR   # base directory for systemd units (default: /etc/systemd/system)
  --verbose, -v         # show debug-level log messages (default: False)
  --timeout TIMEOUT     # timeout in seconds (default: None)
  --retry RETRY         # maximum number of retries (default: 15)
  --env ENV, -e ENV     # set environment variables (default: [])
  --no-container-init   # do not run podman/docker with `--init` (default: False)

3 Bootstrapping the Ceph Cluster

Official documentation: https://docs.ceph.com/en/pacific/install/

3.1 ceph bootstrap

cephadm bootstrap will:

  • Create a monitor and a manager daemon for the new cluster on the local host.

  • Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.

  • Write a copy of the public key to /etc/ceph/ceph.pub.

  • Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.

  • Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.

  • Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring. (A hedged bootstrap variant with an explicit cluster network is sketched after this list.)
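
The bootstrap in the next section passes only --mon-ip, so, as its output notes, OSD replication defaults to the public network. A hedged sketch of a bootstrap that also sets a dedicated cluster network (the placeholder values are assumptions, not part of this environment):

~# ./cephadm bootstrap --mon-ip <mon-ip> --cluster-network <cluster-cidr>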

3.2 Bootstrap the cluster

root@cephadm-deploy:~/cephadm# ./cephadm bootstrap --mon-ip 192.168.174.200
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
docker is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Verifying IP 192.168.174.200 port 3300 ...
Verifying IP 192.168.174.200 port 6789 ...
Mon IP `192.168.174.200` is in CIDR network `192.168.174.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.174.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 4...
mgr epoch 4 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host cephadm-deploy...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 8...
mgr epoch 8 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
     URL: https://ceph-deploy:8443/
    User: admin
Password: pezc2ncdii

Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:

sudo ./cephadm shell --fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/pacific/mgr/telemetry/

Bootstrap complete.

3.3 View the Ceph containers

root@cephadm-deploy:~/cephadm# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED          STATUS          PORTS     NAMES
0fa08c0335a7   quay.io/prometheus/alertmanager:v0.20.0    "/bin/alertmanager -…"   17 minutes ago   Up 17 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-alertmanager-cephadm-deploy
fb58cbd1517b   quay.io/prometheus/prometheus:v2.18.1      "/bin/prometheus --c…"   17 minutes ago   Up 17 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-prometheus-cephadm-deploy
a37f6d4e5696   quay.io/prometheus/node-exporter:v0.18.1   "/bin/node_exporter …"   17 minutes ago   Up 17 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-node-exporter-cephadm-deploy
331cf9c8b544   quay.io/ceph/ceph                          "/usr/bin/ceph-crash…"   17 minutes ago   Up 17 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-crash-cephadm-deploy
1fb47f7aba04   quay.io/ceph/ceph:v16                      "/usr/bin/ceph-mgr -…"   18 minutes ago   Up 18 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-mgr-cephadm-deploy-jgiulj
c1ac511761ec   quay.io/ceph/ceph:v16                      "/usr/bin/ceph-mon -…"   18 minutes ago   Up 18 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-mon-cephadm-deploy
  • ceph-mgr: the Ceph manager daemon
  • ceph-mon: the Ceph monitor daemon
  • ceph-crash: crash data collection module
  • prometheus: the Prometheus monitoring component
  • grafana: dashboard for visualizing monitoring data
  • alertmanager: the Prometheus alerting component
  • node_exporter: the Prometheus node metrics collector


3.4 Access the Ceph Dashboard

User: admin
Password: pezc2ncdii
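
The dashboard prompts for a password change on first login. The password can also be reset from the CLI inside the cephadm shell (see 3.7); a hedged sketch, with an example password that still has to satisfy the dashboard's password policy:

root@cephadm-deploy:/# echo 'N3wDashb0ardPass!' > /tmp/dashboard_pass.txt
root@cephadm-deploy:/# ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass.txt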

3.5 Access the Prometheus web UI

3.6 Access the Grafana web UI

3.7 Start the Ceph CLI

root@cephadm-deploy:~/cephadm# sudo ./cephadm shell --fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54

3.8 Verify cluster information

root@cephadm-deploy:/# ceph version
ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

root@cephadm-deploy:/# ceph fsid
0888a64c-57e6-11ec-ad21-fbe9db6e2e74

root@cephadm-deploy:/# ceph -s
  cluster:
    id:     0888a64c-57e6-11ec-ad21-fbe9db6e2e74
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum cephadm-deploy (age 22m)
    mgr: cephadm-deploy.jgiulj(active, since 20m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3.9 Check daemon status

root@cephadm-deploy:/# ceph orch ps
NAME                          HOST            PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
alertmanager.cephadm-deploy   cephadm-deploy  *:9093,9094  running (23m)     2m ago  23m    14.7M        -  0.20.0   0881eb8f169f  0fa08c0335a7  
crash.cephadm-deploy          cephadm-deploy               running (23m)     2m ago  23m    10.2M        -  16.2.7   cc266d6139f4  331cf9c8b544  
mgr.cephadm-deploy.jgiulj     cephadm-deploy  *:9283       running (24m)     2m ago  24m     433M        -  16.2.7   cc266d6139f4  1fb47f7aba04  
mon.cephadm-deploy            cephadm-deploy               running (24m)     2m ago  24m    61.5M    2048M  16.2.7   cc266d6139f4  c1ac511761ec  
node-exporter.cephadm-deploy  cephadm-deploy  *:9100       running (23m)     2m ago  23m    17.3M        -  0.18.1   e5a616e4b9cf  a37f6d4e5696  
prometheus.cephadm-deploy     cephadm-deploy  *:9095       running (23m)     2m ago  23m    64.9M        -  2.18.1   de242295e225  fb58cbd1517b

root@cephadm-deploy:/# ceph orch ps --daemon_type mgr
NAME                       HOST            PORTS   STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mgr.cephadm-deploy.jgiulj  cephadm-deploy  *:9283  running (45m)     2m ago  45m     419M        -  16.2.7   cc266d6139f4  1fb47f7aba04

root@cephadm-deploy:/# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager   ?:9093,9094      1/1  2m ago     45m  count:1
crash                           1/1  2m ago     45m  *
grafana        ?:3000           1/1  2m ago     45m  count:1
mgr                             1/2  2m ago     45m  count:2
mon                             1/5  2m ago     45m  count:5
node-exporter  ?:9100           1/1  2m ago     45m  *
prometheus     ?:9095           1/1  2m ago     45m  count:1
root@cephadm-deploy:/# ceph orch ls mgr
NAME  PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
mgr              1/2  2m ago     45m  count:2

3.10 Exit the Ceph CLI

root@cephadm-deploy:/# exit
exit

3.11 Install the ceph-common package

root@cephadm-deploy:~/cephadm# cephadm add-repo --release pacific
root@cephadm-deploy:~/cephadm# ./cephadm install ceph-common
Installing packages ['ceph-common']...
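
With ceph-common installed, Ceph commands can now be run directly on the host instead of through cephadm shell, for example:

root@cephadm-deploy:~/cephadm# ceph -v
root@cephadm-deploy:~/cephadm# ceph -s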

4 Ceph Cluster Host Management

Official documentation: https://docs.ceph.com/en/pacific/cephadm/host-management/#cephadm-adding-hosts

After hosts are added to the cluster, cephadm automatically scales out the number of monitor and manager daemons.

4.1 List cluster hosts

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch host ls
Inferring fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
HOST            ADDR             LABELS  STATUS  
cephadm-deploy  192.168.174.200  _admin          

4.2 Add hosts

4.2.1 Install the cluster's SSH key on the new hosts

root@cephadm-deploy:~/cephadm# ssh-copy-id -f -i /etc/ceph/ceph.pub ceph-node01
root@cephadm-deploy:~/cephadm# ssh-copy-id -f -i /etc/ceph/ceph.pub ceph-node02
root@cephadm-deploy:~/cephadm# ssh-copy-id -f -i /etc/ceph/ceph.pub ceph-node03
root@cephadm-deploy:~/cephadm# ssh-copy-id -f -i /etc/ceph/ceph.pub ceph-node04

4.2.2 Add the hosts to the cluster

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch host add ceph-node01 192.168.174.103
root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch host add ceph-node02 192.168.174.104
root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch host add ceph-node03 192.168.174.105
root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch host add ceph-node04 192.168.174.120
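
Hosts can also be labeled so that placement specs (and the _admin config/keyring distribution described in 3.1) can target them; a hedged sketch:

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch host label add ceph-node01 _admin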

4.3 Verify host information

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch host ls
Inferring fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
HOST            ADDR             LABELS  STATUS  
ceph-node01     192.168.174.103                  
ceph-node02     192.168.174.104                  
ceph-node03     192.168.174.105                  
ceph-node04     192.168.174.120                  
cephadm-deploy  192.168.174.200  _admin   

4.4 View cluster service deployment

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch ls
Inferring fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT  
alertmanager   ?:9093,9094      1/1  -          72m  count:1    
crash                           5/5  9m ago     72m  *          
grafana        ?:3000           1/1  9m ago     72m  count:1    
mgr                             2/2  9m ago     72m  count:2    
mon                             5/5  9m ago     72m  count:5    
node-exporter  ?:9100           5/5  9m ago     72m  *          
prometheus     ?:9095           1/1  -          72m  count:1    

4.5 Verify the current cluster status

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph -s
Inferring fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
  cluster:
    id:     0888a64c-57e6-11ec-ad21-fbe9db6e2e74
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 5 daemons, quorum cephadm-deploy,ceph-node01,ceph-node02,ceph-node03,ceph-node04 (age 3m)
    mgr: cephadm-deploy.jgiulj(active, since 75m), standbys: ceph-node01.anwvfy
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

4.6 Verify the containers on the other nodes

root@ceph-node01:~# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS         PORTS     NAMES
05349d266ffa   quay.io/prometheus/node-exporter:v0.18.1   "/bin/node_exporter …"   7 minutes ago   Up 7 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-node-exporter-ceph-node01
76b58ef1b83a   quay.io/ceph/ceph                          "/usr/bin/ceph-mon -…"   7 minutes ago   Up 7 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-mon-ceph-node01
e673814b6a59   quay.io/ceph/ceph                          "/usr/bin/ceph-mgr -…"   7 minutes ago   Up 7 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-mgr-ceph-node01-anwvfy
60a34da66725   quay.io/ceph/ceph                          "/usr/bin/ceph-crash…"   7 minutes ago   Up 7 minutes             ceph-0888a64c-57e6-11ec-ad21-fbe9db6e2e74-crash-ceph-node01

5 OSD Service

Official documentation: https://docs.ceph.com/en/pacific/cephadm/services/osd/#cephadm-deploy-osds

5.1 Deployment requirements

  • The device must have no partitions.
  • The device must not have any LVM state.
  • The device must not be mounted.
  • The device must not contain a file system.
  • The device must not contain a Ceph BlueStore OSD.
  • The device must be larger than 5 GB. (A hedged sketch for wiping a previously used device follows this list.)
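
A previously used disk that is rejected for one of the reasons above can be wiped so cephadm will reconsider it. A minimal sketch using ceph orch device zap (this destroys all data on the device; the host and device names here are examples only):

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch device zap ceph-node01 /dev/sdb --force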

5.2 List the storage devices on the cluster hosts

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch device ls
Inferring fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
HOST            PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REJECT REASONS  
ceph-node01     /dev/sdb  hdd              21.4G  Yes                        
ceph-node01     /dev/sdc  hdd              21.4G  Yes                        
ceph-node01     /dev/sdd  hdd              21.4G  Yes                        
ceph-node02     /dev/sdb  hdd              21.4G  Yes                        
ceph-node02     /dev/sdc  hdd              21.4G  Yes                        
ceph-node02     /dev/sdd  hdd              21.4G  Yes                        
ceph-node03     /dev/sdb  hdd              21.4G  Yes                        
ceph-node03     /dev/sdc  hdd              21.4G  Yes                        
ceph-node03     /dev/sdd  hdd              21.4G  Yes                        
ceph-node04     /dev/sdb  hdd              21.4G  Yes                        
ceph-node04     /dev/sdc  hdd              21.4G  Yes                        
ceph-node04     /dev/sdd  hdd              21.4G  Yes                        
cephadm-deploy  /dev/sdb  hdd              21.4G  Yes                        
cephadm-deploy  /dev/sdc  hdd              21.4G  Yes                        
cephadm-deploy  /dev/sdd  hdd              21.4G  Yes

5.3 Ways to create OSDs

5.3.1 Create an OSD from a specific device on a specific host

~# ceph orch daemon add osd ceph-node01:/dev/sdd

5.3.2 Create OSDs from a YAML service specification

~# ceph orch apply -i spec.yml
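
A minimal sketch of what spec.yml could contain: an OSD service spec that consumes every available device on every host. The service_id and host_pattern values are illustrative assumptions, and --dry-run previews the result without applying it:

~# cat > spec.yml <<'EOF'
service_type: osd
service_id: all_available_devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
EOF
~# ceph orch apply -i spec.yml --dry-run
~# ceph orch apply -i spec.yml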

5.3.3 Create OSDs on all available devices

~# ceph orch apply osd --all-available-devices

After running the above command:

  • If you add new disks to the cluster, they will automatically be used to create new OSDs.

  • If you remove an OSD and clean up the LVM physical volume, a new OSD will be created automatically.

If you want to avoid this behavior (i.e. disable automatic OSD creation on available devices), pass the --unmanaged=true parameter (see the sketch after this note).

Note:

The default behavior of ceph orch apply causes cephadm to reconcile constantly, which means cephadm creates OSDs as soon as it detects new drives.

Setting unmanaged: True disables OSD creation. With unmanaged: True set, nothing happens even when a new OSD service is applied.

ceph orch daemon add creates OSDs, but does not add an OSD service.
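
A hedged sketch of applying the OSD service with automatic creation disabled, then inspecting the resulting spec:

~# ceph orch apply osd --all-available-devices --unmanaged=true
~# ceph orch ls osd --export    # the exported spec should now show unmanaged: true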

5.3.4 View the devices after OSD creation

root@cephadm-deploy:~/cephadm# ./cephadm shell ceph orch device ls
Inferring fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
HOST            PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REJECT REASONS                                                 
ceph-node01     /dev/sdb  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node01     /dev/sdc  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node01     /dev/sdd  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node02     /dev/sdb  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node02     /dev/sdc  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node02     /dev/sdd  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node03     /dev/sdb  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node03     /dev/sdc  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node03     /dev/sdd  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node04     /dev/sdb  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node04     /dev/sdc  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph-node04     /dev/sdd  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
cephadm-deploy  /dev/sdb  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
cephadm-deploy  /dev/sdc  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
cephadm-deploy  /dev/sdd  hdd              21.4G             Insufficient space (<10 extents) on vgs, LVM detected, locked  

5.3.5 Check the cluster status

root@cephadm-deploy:~/cephadm# cephadm shell ceph -s
Inferring fsid 0888a64c-57e6-11ec-ad21-fbe9db6e2e74
Using recent ceph image quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54
  cluster:
    id:     0888a64c-57e6-11ec-ad21-fbe9db6e2e74
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum cephadm-deploy,ceph-node01,ceph-node02,ceph-node03,ceph-node04 (age 37m)
    mgr: cephadm-deploy.jgiulj(active, since 109m), standbys: ceph-node01.anwvfy
    osd: 15 osds: 15 up (since 82s), 15 in (since 104s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   82 MiB used, 300 GiB / 300 GiB avail
    pgs:     1 active+clean

6 Removing Services

6.1 Automatic removal (remove a service and all of its daemons)

ceph orch rm <service-name>

6.2 Manual removal (remove individual daemons)

ceph orch daemon rm <daemon name>... [--force]
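
A hedged example; the daemon name follows the <service>.<host> pattern shown by ceph orch ps, so substitute a real name from that output:

ceph orch rm node-exporter                              # remove the whole node-exporter service
ceph orch daemon rm node-exporter.ceph-node04 --force   # or remove a single daemon instance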

6.3 Disable automatic management of daemons

cat mgr.yaml
service_type: mgr
unmanaged: true
placement:
  label: mgr
ceph orch apply -i mgr.yaml

After this change is applied in the service specification, cephadm will no longer deploy any new daemons (even if the placement spec matches additional hosts).
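
The spec above places mgr daemons by the mgr label; a hedged sketch of assigning that label to a host so the placement can match once unmanaged is removed again:

ceph orch host label add ceph-node01 mgr
ceph orch host ls    # the LABELS column should now show mgr for ceph-node01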
