Ceph: Getting Started


A First Look at Ceph

1. Ceph Overview

Ceph is a reliable, self-rebalancing, self-healing distributed storage system. By use case it can be divided into three main parts: object storage, block storage and a file system service. In virtualization, the block storage is the most commonly used piece; for example, in OpenStack, Ceph block storage can back Cinder volumes, Glance images and virtual machine data disks. Most visibly, a Ceph cluster can provide a raw-format block device to serve as the disk of a virtual machine instance.

Ceph's advantage over other storage systems is that it is not just storage: it also makes full use of the compute power on the storage nodes. The location of every piece of data is computed at write time, keeping the data distribution as even as possible. Thanks to its design, built on the CRUSH algorithm, hashing and similar techniques, Ceph has no traditional single point of failure, and its performance does not degrade as the cluster grows.

2. Ceph Core Components and Their Functions

Ceph's core components are the Ceph OSD, Ceph Monitor, Ceph Manager and Ceph MDS.

  • Ceph OSD

OSD stands for Object Storage Device; the corresponding daemon (ceph-osd) stores, replicates, rebalances and recovers data, and provides monitoring information to the Ceph Monitors and Managers by checking the heartbeats of other Ceph OSD daemons. At least 3 Ceph OSDs are normally required for redundancy and high availability. Typically one disk maps to one OSD, with the OSD managing that disk, although a single partition can also back an OSD.
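Once OSDs have been added (section 5.4.4 below), their state can be checked from any node that holds the admin keyring. The commands below are only a quick-reference sketch, not part of this article's deployment steps:

# count of OSDs and how many are up/in
ceph osd stat
# CRUSH tree: which OSD sits on which host and disk
ceph osd tree
# per-OSD utilisation and PG count
ceph osd df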

A Ceph OSD is built from a physical disk drive, a Linux filesystem and the Ceph OSD service. For the Ceph OSD daemon, the Linux filesystem provides its extensibility. Several filesystems can be used, such as Btrfs, XFS and ext4; Btrfs has many attractive features but has not yet reached the stability needed for production, so XFS is generally recommended. (This describes the legacy FileStore backend; since Luminous the default backend is BlueStore, which writes to raw devices directly.)

OSDs also come with the concept of a journal disk. When data is written to the Ceph cluster, it is first written to the journal, and every few seconds (for example 5 s) the journal is flushed to the filesystem. To keep read/write latency low the journal is usually placed on an SSD, commonly 10 GB or larger (more is better). Ceph introduced the journal because it lets the OSD commit small writes quickly: a random write first lands in the sequential journal and is later flushed to the filesystem, which gives the filesystem time to merge the writes to disk. Using an SSD as the OSD journal is an effective buffer against bursty workloads.
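For FileStore-based OSDs, the journal size and device are ceph.conf settings. A minimal sketch, assuming a dedicated SSD partition (the partition path is purely illustrative; BlueStore OSDs use WAL/DB devices instead of a journal):

[osd]
# journal size in MB (about 10 G, as suggested above)
osd journal size = 10240
# hypothetical SSD partition used as the journal device
# osd journal = /dev/sdf1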

  • Ceph Monitor

The Monitor watches over the Ceph cluster and maintains its health state, along with the various maps of the cluster, such as the OSD Map, Monitor Map, PG Map and CRUSH Map. Together these are called the Cluster Map, a key RADOS data structure that tracks all cluster members, their relationships and attributes, and how data is distributed. For example, when a client wants to store data in the cluster, it first obtains the latest maps from a Monitor and then computes the data's final location from the maps and the object id.
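Each of these maps can be inspected directly with the ceph CLI once the cluster is running; a short sketch:

ceph mon dump        # Monitor Map
ceph osd dump        # OSD Map
ceph pg dump | head  # PG Map (very large output)
ceph osd crush dump  # CRUSH Map, as JSON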

  • Ceph Manager

The Ceph Manager daemon (ceph-mgr) runs as a single daemon on a host and is responsible for tracking runtime metrics and the current state of the Ceph cluster, including storage utilisation, current performance metrics and system load. It also hosts Python-based modules that manage and expose cluster information, including the web-based Ceph Dashboard and a REST API. At least two managers are normally required for high availability.
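The manager modules can be listed and toggled from the ceph CLI; a sketch (enabling the dashboard additionally requires the ceph-mgr-dashboard package on the mgr node):

ceph mgr module ls                # enabled and available mgr modules
ceph mgr module enable dashboard  # turn on the web dashboard
ceph mgr services                 # endpoints exposed by active modules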

  • Ceph MDS

MDS stands for Ceph Metadata Server. It stores the metadata for the file system service (CephFS); object storage and block devices do not need this service.
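After an MDS and a CephFS filesystem have been created, their state can be checked as follows (shown only as a sketch; this article does not deploy CephFS):

ceph mds stat   # MDS daemon status
ceph fs status  # per-filesystem overview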

3. Ceph Architecture Components

(Figure: the Ceph storage stack, with RADOS at the base, LIBRADOS above it, and RADOSGW, RBD and CephFS on top.)

  • At the very bottom of Ceph is RADOS. RADOS is itself a complete distributed object store with reliability, intelligence and distribution built in. Ceph's high reliability, scalability, performance and automation all come from this layer, and user data is ultimately stored through it; RADOS is the core of Ceph. RADOS consists of two main parts: the OSDs and the Monitors.

  • On top of RADOS sits LIBRADOS, a library that lets applications interact with RADOS directly. It supports several programming languages, such as C, C++ and Python. Built on LIBRADOS are three further layers: RADOSGW, RBD and CephFS (client-side examples for these three interfaces follow this list).

  • RADOSGW is a gateway based on the popular RESTful style, compatible with the S3 and Swift APIs.

  • RBD provides a distributed block device through a Linux kernel client and a QEMU/KVM driver.

  • CephFS provides a POSIX interface that users can mount directly with a client. The kernel client runs in kernel space, so it does not need the user-space librados library; it talks to RADOS through the kernel's networking stack.
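To make the three access layers concrete, here is a hedged sketch of typical client-side commands; the pool name, image name, monitor address and key are placeholders, not values from this article's cluster:

# RADOS object interface (librados, via the rados CLI)
rados -p <pool> put demo-object ./localfile
rados -p <pool> get demo-object ./copy

# RBD block interface
rbd create <pool>/demo-image --size 1024     # size in MB
sudo rbd map <pool>/demo-image               # kernel client, returns e.g. /dev/rbd0

# CephFS POSIX interface (kernel mount)
sudo mount -t ceph <mon-ip>:6789:/ /mnt -o name=admin,secret=<admin-key>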

4. Ceph Data Read/Write Flow

(Figure: Ceph read/write flow.)

Ceph reads and writes follow a primary-replica model: the client sends read/write requests only to the Primary OSD of the OSD set that holds the object, which guarantees strong consistency. When the Primary OSD receives a write request for an object, it is responsible for sending the data to the other replicas; only when the data has been persisted on all of those OSDs does the Primary acknowledge the write, which keeps the replicas consistent. This is somewhat similar to how Kafka handles reads and writes.
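How many replicas the primary must persist before acknowledging is a per-pool setting (3 by default, per osd_pool_default_size). A minimal sketch of inspecting and changing it; the pool name is a placeholder:

ceph osd pool get <pool> size       # replica count of the pool
ceph osd pool get <pool> min_size   # minimum replicas required to accept I/O
ceph osd pool set <pool> size 3     # change the replica count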

  • Writing data

Step 1: compute the file-to-object mapping. Suppose a client wants to store a file; it first derives the oid, where oid (object id) = ino + ono, i.e. the inode number (the file's metadata id) plus the object number (the sequence number of the chunk the file was split into). Ceph stores data in chunks, 4 MB per chunk by default.

Step 2: compute, with a hash, the PG inside the pool that the object belongs to: Object -> PG mapping, hash(oid) & mask -> pgid.

Step 3: map the PG to OSDs with CRUSH. The CRUSH algorithm computes PG -> OSD: [CRUSH(pgid) -> (osd1, osd2, osd3)].

Step 4: the primary OSD of the PG writes the object to disk.

Step 5: the primary OSD replicates the data to the replica OSDs and waits for their acknowledgements.

Step 6: the primary OSD returns the write acknowledgement to the client. (See the addressing example after these steps.)
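The whole addressing chain (object -> PG -> OSD set) can be queried without writing any data, which is handy for verifying the steps above. A sketch; the pool and object names are placeholders and the output values are illustrative:

ceph osd map <pool> <object-name>
# osdmap e.. pool '<pool>' (1) object '<object-name>' -> pg 1.d2e5b8f0 (1.30) -> up ([2,0,1], p2) acting ([2,0,1], p2)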

  • Reading data

To read data, the client performs the same addressing process and contacts the primary OSD directly. In the current Ceph design, reads are served by the Primary OSD by default, but the cluster can be configured to allow reads from other OSDs to spread the read load and improve performance.
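The acting set and primary of a given PG can be checked, and the likelihood of an OSD being chosen as primary can be reduced, with the commands below (a sketch; the PG id, OSD id and weight are illustrative):

ceph pg map 1.30                     # up/acting OSD set and primary for a PG
ceph osd primary-affinity osd.1 0.5  # make osd.1 less likely to be chosen as primary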

5. Ceph Cluster Deployment

5.1 Server Planning

172.31.0.10 ceph-deploy.example.local ceph-deploy 2c2g 30G*1
172.31.0.11 ceph-mon1.example.local ceph-mon1     2c2g 30G*1
172.31.0.12 ceph-mon2.example.local ceph-mon2     2c2g 30G*1
172.31.0.13 ceph-mon3.example.local ceph-mon3     2c2g 30G*1
172.31.0.14 ceph-mgr1.example.local ceph-mgr1     2c2g 30G*1
172.31.0.15 ceph-mgr2.example.local ceph-mgr2     2c2g 30G*1
172.31.0.16 ceph-node1.example.local ceph-node1   2c2g 30G*1 10G*4
172.31.0.17 ceph-node2.example.local ceph-node2   2c2g 30G*1 10G*4
172.31.0.18 ceph-node3.example.local ceph-node3   2c2g 30G*1 10G*4
172.31.0.19 ceph-node4.example.local ceph-node4   2c2g 30G*1 10G*4

# Ceph version
ubuntu@ceph-deploy:~/ceph-cluster$ ceph --version
ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)

# OS version
Ubuntu 18.04

5.2 Initialize the Servers

Network interface configuration

# Each server has two networks: the public network serves client access, the cluster network is used for cluster management and data replication
public 172.31.0.0/24
cluster 192.168.10.0/24

Initialization steps

# Initialize one machine as a template first, then clone it for the other servers
# 1. Configure the NIC
# Rename the network interfaces to the conventional eth0
sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX=""
# change it to:
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
# Regenerate the GRUB boot configuration
sudo grub-mkconfig -o /boot/grub/grub.cfg

# Disable IPv6
root@devops:~# cat > /etc/sysctl.conf <<EOF
# disable ipv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
EOF
root@devops:~# sysctl -p
# Configure the two NICs
root@devops:~# cat /etc/netplan/10-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses: [172.31.0.10/24]
      gateway4: 172.31.0.2
      nameservers:
        addresses: [223.5.5.5,223.6.6.6,114.114.114.114]
    eth1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.10.10/24]
      
root@devops:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:d5:be:99 brd ff:ff:ff:ff:ff:ff
    inet 172.31.0.10/24 brd 172.31.0.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:d5:be:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.10/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever

# 2. Set up passwordless SSH login
ubuntu@devops:~$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ubuntu@devops:~$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
# Disable the SSH host-key confirmation prompt (StrictHostKeyChecking)
sudo sed -i '/ask/{s/#//;s/ask/no/}' /etc/ssh/ssh_config

# 3. Switch the apt sources
sudo mv /etc/apt/{sources.list,sources.list.old}
sudo tee /etc/apt/sources.list > /dev/null <<EOF
# deb-src mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
EOF
# Add the Ceph repository
ubuntu@devops:~$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
ubuntu@devops:~$ sudo apt-add-repository 'deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus/ bionic main'

sudo apt update
sudo apt upgrade
# 4. Install common tools
sudo apt install net-tools vim wget git build-essential -y

# 5. Raise the system resource limits
sudo tee -a /etc/security/limits.conf > /dev/null <<EOF
* soft     nproc          102400
* hard     nproc          102400
* soft     nofile         102400
* hard     nofile         102400

root soft     nproc          102400
root hard     nproc          102400
root soft     nofile         102400
root hard     nofile         102400
EOF

# 6. Time synchronization
sudo apt update
sudo apt install chrony -y
sudo vim /etc/chrony/chrony.conf
# Change to the Aliyun NTP servers
# public servers
server ntp.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp1.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp2.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp3.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp4.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp5.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp6.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp7.aliyun.com minpoll 4 maxpoll 10 iburst

# Restart the chrony service
sudo systemctl restart chrony
sudo systemctl status chrony
sudo systemctl enable chrony
# Check that the time sources are active
sudo chronyc activity
# Check the time sync status
sudo timedatectl status
# Write the system time to the hardware clock
sudo hwclock -w

# Reboot the server
sudo reboot
# Environment preparation before deploying ceph
# Set the hostnames and install python2
for host in ceph-{deploy,mon1,mon2,mon3,mgr1,mgr2,node1,node2,node3,node4}
do
   ssh ubuntu@${host} "sudo sed -ri '/ceph/d' /etc/hosts"
   # Set the hostname
   ssh ubuntu@${host} "sudo hostnamectl set-hostname ${host}"
   # Add /etc/hosts entries
   ssh ubuntu@${host} "echo \"172.31.0.10 ceph-deploy.example.local ceph-deploy\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.11 ceph-mon1.example.local ceph-mon1\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.12 ceph-mon2.example.local ceph-mon2\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.13 ceph-mon3.example.local ceph-mon3\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.14 ceph-mgr1.example.local ceph-mgr1\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.15 ceph-mgr2.example.local ceph-mgr2\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.16 ceph-node1.example.local ceph-node1\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.17 ceph-node2.example.local ceph-node2\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.18 ceph-node3.example.local ceph-node3\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.19 ceph-node4.example.local ceph-node4\" |sudo tee -a /etc/hosts"
   # the ceph services depend on python2
   ssh ubuntu@${host} "sudo apt install python2.7 -y"
   ssh ubuntu@${host} "sudo ln -sv /usr/bin/python2.7 /usr/bin/python2"
   # install ceph-common on every node so ceph management commands can be run there later
   ssh ubuntu@${host} "sudo apt install ceph-common -y"
done

# ceph-deploy must log in to the Ceph nodes as a regular user that has passwordless sudo, because it needs to install packages and write configuration files without being prompted for a password.
for host in ceph-{deploy,mon1,mon2,mon3,mgr1,mgr2,node1,node2,node3,node4}
do
   ssh ubuntu@${host} "echo \"ubuntu ALL = (root) NOPASSWD:ALL\" | sudo tee /etc/sudoers.d/ubuntu"
   ssh ubuntu@${host} "sudo chmod 0440 /etc/sudoers.d/ubuntu"
done

5.3 Install the Ceph Deployment Tool

# 1. Install ceph-deploy
ubuntu@ceph-deploy:~$ sudo apt-cache madison ceph-deploy
ceph-deploy |      2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 Packages
ceph-deploy | 1.5.38-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 Packages
ubuntu@ceph-deploy:~$ sudo apt install ceph-deploy python-setuptools -y

# It is recommended to deploy and run the ceph cluster as a dedicated regular user; the user only needs to be able to run the required privileged commands via non-interactive sudo. Newer ceph-deploy versions accept any sudo-capable user, including root, but a regular user such as ceph, cephuser or cephadmin is still recommended for managing the cluster.

# Allow passwordless SSH login
# Edit ~/.ssh/config on the ceph-deploy admin node so that ceph-deploy can log in to the Ceph nodes as the user you created,
# without having to pass --username {username} on every ceph-deploy run. This also simplifies ssh and scp usage.
# Replace {username} with the user you created.

ubuntu@ceph-deploy:~$ cat > ~/.ssh/config << EOF
Host ceph-mon1
   Hostname ceph-mon1
   User ubuntu
Host ceph-mon2
   Hostname ceph-mon2
   User ubuntu
Host ceph-mon3
   Hostname ceph-mon3
   User ubuntu
Host ceph-mgr1
   Hostname ceph-mgr1
   User ubuntu
Host ceph-mgr2
   Hostname ceph-mgr2
   User ubuntu
Host ceph-node1
   Hostname ceph-node1
   User ubuntu
Host ceph-node2
   Hostname ceph-node2
   User ubuntu
Host ceph-node3
   Hostname ceph-node3
   User ubuntu
Host ceph-node4
   Hostname ceph-node4
   User ubuntu
EOF

5.4 Build the Cluster

# First create a directory on the admin node to hold the configuration files and keyrings generated by ceph-deploy.
ubuntu@ceph-deploy:~$ mkdir ceph-cluster
ubuntu@ceph-deploy:~$ cd ceph-cluster
# Bootstrap a new ceph storage cluster; this generates the cluster configuration file (CLUSTER.conf) and the keyring authentication files.
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy new --cluster-network 192.168.10.0/24 --public-network 172.31.0.0/24 ceph-mon1
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ]     return f(*a, **kw)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 147, in _main
[ceph_deploy][ERROR ]     fh = logging.FileHandler('ceph-deploy-{cluster}.log'.format(cluster=args.cluster))
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/logging/__init__.py", line 920, in __init__
[ceph_deploy][ERROR ]     StreamHandler.__init__(self, self._open())
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/logging/__init__.py", line 950, in _open
[ceph_deploy][ERROR ]     stream = open(self.baseFilename, self.mode)
[ceph_deploy][ERROR ] IOError: [Errno 13] Permission denied: '/home/ubuntu/ceph-cluster/ceph-deploy-ceph.log'
[ceph_deploy][ERROR ]
# Deleting ceph-deploy-ceph.log is enough here: the first attempt was run with sudo ceph-deploy new, so the leftover log file is owned by root. With passwordless sudo configured, running ceph-deploy as the regular user works.
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rm -rf ceph-deploy-ceph.log 
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy new --cluster-network 192.168.10.0/24 --public-network 172.31.0.0/24 ceph-mon1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new --cluster-network 192.168.10.0/24 --public-network 172.31.0.0/24 ceph-mon1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f61b116fe10>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-mon1']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f61ae529ad0>
[ceph_deploy.cli][INFO  ]  public_network                : 172.31.0.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 192.168.10.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-mon1][DEBUG ] connected to host: ceph-deploy 
[ceph-mon1][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-mon1
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph-mon1][INFO  ] Running command: sudo /bin/ip link show
[ceph-mon1][INFO  ] Running command: sudo /bin/ip addr show
[ceph-mon1][DEBUG ] IP addresses found: [u'172.31.0.11', u'192.168.10.11']
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon1
[ceph_deploy.new][DEBUG ] Monitor ceph-mon1 at 172.31.0.11
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-mon1']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'172.31.0.11']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

ubuntu@ceph-deploy:~/ceph-cluster$ ls -l
total 12
-rw-rw-r-- 1 ubuntu ubuntu 3327 Aug 16 05:13 ceph-deploy-ceph.log
-rw-rw-r-- 1 ubuntu ubuntu  263 Aug 16 05:13 ceph.conf
-rw------- 1 ubuntu ubuntu   73 Aug 16 05:13 ceph.mon.keyring
ubuntu@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = b7c42944-dd49-464e-a06a-f3a466b79eb4
public_network = 172.31.0.0/24
cluster_network = 192.168.10.0/24
mon_initial_members = ceph-mon1
mon_host = 172.31.0.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

5.4.1 Install ceph-mon

# Install ceph-mon on the mon node
ubuntu@ceph-mon1:~$ sudo apt install ceph-mon -y

# Initialize the mon
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ffb094d5fa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7ffb094b9ad0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon1 ...
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[ceph-mon1][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] deploying mon to ceph-mon1
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] remote hostname: ceph-mon1
[ceph-mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon1][DEBUG ] create the mon path if it does not exist
[ceph-mon1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][DEBUG ] create the monitor keyring file
[ceph-mon1][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon1 --keyring /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring --setuser 64045 --setgroup 64045
[ceph-mon1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon1][DEBUG ] create the init path if it does not exist
[ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph-mon@ceph-mon1
[ceph-mon1][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon1.service → /lib/systemd/system/ceph-mon@.service.
[ceph-mon1][INFO  ] Running command: sudo systemctl start ceph-mon@ceph-mon1
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][DEBUG ] status for monitor: mon.ceph-mon1
[ceph-mon1][DEBUG ] {
[ceph-mon1][DEBUG ]   "election_epoch": 3, 
[ceph-mon1][DEBUG ]   "extra_probe_peers": [], 
[ceph-mon1][DEBUG ]   "feature_map": {
[ceph-mon1][DEBUG ]     "mon": [
[ceph-mon1][DEBUG ]       {
[ceph-mon1][DEBUG ]         "features": "0x3f01cfb8ffedffff", 
[ceph-mon1][DEBUG ]         "num": 1, 
[ceph-mon1][DEBUG ]         "release": "luminous"
[ceph-mon1][DEBUG ]       }
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   }, 
[ceph-mon1][DEBUG ]   "features": {
[ceph-mon1][DEBUG ]     "quorum_con": "4540138292840890367", 
[ceph-mon1][DEBUG ]     "quorum_mon": [
[ceph-mon1][DEBUG ]       "kraken", 
[ceph-mon1][DEBUG ]       "luminous", 
[ceph-mon1][DEBUG ]       "mimic", 
[ceph-mon1][DEBUG ]       "osdmap-prune", 
[ceph-mon1][DEBUG ]       "nautilus", 
[ceph-mon1][DEBUG ]       "octopus"
[ceph-mon1][DEBUG ]     ], 
[ceph-mon1][DEBUG ]     "required_con": "2449958747315978244", 
[ceph-mon1][DEBUG ]     "required_mon": [
[ceph-mon1][DEBUG ]       "kraken", 
[ceph-mon1][DEBUG ]       "luminous", 
[ceph-mon1][DEBUG ]       "mimic", 
[ceph-mon1][DEBUG ]       "osdmap-prune", 
[ceph-mon1][DEBUG ]       "nautilus", 
[ceph-mon1][DEBUG ]       "octopus"
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   }, 
[ceph-mon1][DEBUG ]   "monmap": {
[ceph-mon1][DEBUG ]     "created": "2021-08-16T06:26:05.290405Z", 
[ceph-mon1][DEBUG ]     "epoch": 1, 
[ceph-mon1][DEBUG ]     "features": {
[ceph-mon1][DEBUG ]       "optional": [], 
[ceph-mon1][DEBUG ]       "persistent": [
[ceph-mon1][DEBUG ]         "kraken", 
[ceph-mon1][DEBUG ]         "luminous", 
[ceph-mon1][DEBUG ]         "mimic", 
[ceph-mon1][DEBUG ]         "osdmap-prune", 
[ceph-mon1][DEBUG ]         "nautilus", 
[ceph-mon1][DEBUG ]         "octopus"
[ceph-mon1][DEBUG ]       ]
[ceph-mon1][DEBUG ]     }, 
[ceph-mon1][DEBUG ]     "fsid": "b7c42944-dd49-464e-a06a-f3a466b79eb4", 
[ceph-mon1][DEBUG ]     "min_mon_release": 15, 
[ceph-mon1][DEBUG ]     "min_mon_release_name": "octopus", 
[ceph-mon1][DEBUG ]     "modified": "2021-08-16T06:26:05.290405Z", 
[ceph-mon1][DEBUG ]     "mons": [
[ceph-mon1][DEBUG ]       {
[ceph-mon1][DEBUG ]         "addr": "172.31.0.11:6789/0", 
[ceph-mon1][DEBUG ]         "name": "ceph-mon1", 
[ceph-mon1][DEBUG ]         "priority": 0, 
[ceph-mon1][DEBUG ]         "public_addr": "172.31.0.11:6789/0", 
[ceph-mon1][DEBUG ]         "public_addrs": {
[ceph-mon1][DEBUG ]           "addrvec": [
[ceph-mon1][DEBUG ]             {
[ceph-mon1][DEBUG ]               "addr": "172.31.0.11:3300", 
[ceph-mon1][DEBUG ]               "nonce": 0, 
[ceph-mon1][DEBUG ]               "type": "v2"
[ceph-mon1][DEBUG ]             }, 
[ceph-mon1][DEBUG ]             {
[ceph-mon1][DEBUG ]               "addr": "172.31.0.11:6789", 
[ceph-mon1][DEBUG ]               "nonce": 0, 
[ceph-mon1][DEBUG ]               "type": "v1"
[ceph-mon1][DEBUG ]             }
[ceph-mon1][DEBUG ]           ]
[ceph-mon1][DEBUG ]         }, 
[ceph-mon1][DEBUG ]         "rank": 0, 
[ceph-mon1][DEBUG ]         "weight": 0
[ceph-mon1][DEBUG ]       }
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   }, 
[ceph-mon1][DEBUG ]   "name": "ceph-mon1", 
[ceph-mon1][DEBUG ]   "outside_quorum": [], 
[ceph-mon1][DEBUG ]   "quorum": [
[ceph-mon1][DEBUG ]     0
[ceph-mon1][DEBUG ]   ], 
[ceph-mon1][DEBUG ]   "quorum_age": 1, 
[ceph-mon1][DEBUG ]   "rank": 0, 
[ceph-mon1][DEBUG ]   "state": "leader", 
[ceph-mon1][DEBUG ]   "sync_provider": []
[ceph-mon1][DEBUG ] }
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][INFO  ] monitor: mon.ceph-mon1 is running
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-mon1
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-mon1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpzxBtYk
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] fetch remote file
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.admin
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-mds
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-mgr
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-osd
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpzxBtYk

# Check the result
ubuntu@ceph-mon1:~$ ps -ef |grep ceph
root       9293      1  0 06:15 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      13339      1  0 06:26 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon1 --setuser ceph --setgroup ceph
ubuntu    13911  13898  0 06:28 pts/0    00:00:00 grep --color=auto ceph

5.4.2 Distribute the Ceph Keys

# From the ceph-deploy node, copy the configuration file and admin keyring to the nodes in the Ceph cluster that need to run ceph management commands, so that later invocations of the ceph command do not have to specify the ceph-mon address and the ceph.client.admin.keyring file every time. The ceph-mon nodes also need the cluster configuration and authentication files synchronized to them.

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy admin ceph-deploy ceph-node{1,2,3,4}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc9e52ec190>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node1', 'ceph-node2', 'ceph-node3', 'ceph-node4']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fc9e5befa50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node2
[ceph-node2][DEBUG ] connection detected need for sudo
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node3
[ceph-node3][DEBUG ] connection detected need for sudo
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node4
[ceph-node4][DEBUG ] connection detected need for sudo
[ceph-node4][DEBUG ] connected to host: ceph-node4 
[ceph-node4][DEBUG ] detect platform information from remote host
[ceph-node4][DEBUG ] detect machine type
[ceph-node4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

We have now deployed a single ceph-mon; check the result:

ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 47m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

5.4.3 Install ceph-mgr

Ceph Luminous and later releases have manager nodes; earlier releases do not.

Deploy the ceph-mgr node

# Install ceph-mgr on the manager node
ubuntu@ceph-mgr1:~$ sudo apt install ceph-mgr -y

# Add ceph-mgr from the ceph-deploy node
# the mgr node needs to read the ceph configuration files in /etc/ceph
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ceph-mgr1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-mgr1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-mgr1', 'ceph-mgr1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5cf08f4c30>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f5cf0d54150>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-mgr1:ceph-mgr1
[ceph-mgr1][DEBUG ] connection detected need for sudo
[ceph-mgr1][DEBUG ] connected to host: ceph-mgr1 
[ceph-mgr1][DEBUG ] detect platform information from remote host
[ceph-mgr1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mgr1
[ceph-mgr1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr1][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mgr1][DEBUG ] create a keyring file
[ceph-mgr1][DEBUG ] create path recursively if it doesn't exist
[ceph-mgr1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mgr1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mgr1/keyring
[ceph-mgr1][INFO  ] Running command: sudo systemctl enable ceph-mgr@ceph-mgr1
[ceph-mgr1][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mgr1.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-mgr1][INFO  ] Running command: sudo systemctl start ceph-mgr@ceph-mgr1
[ceph-mgr1][INFO  ] Running command: sudo systemctl enable ceph.target

# Check the result on the ceph-mgr1 node
ubuntu@ceph-mgr1:~$ sudo ps -ef |grep ceph
root      10148      1  0 07:30 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      15202      1 14 07:32 ?        00:00:04 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr1 --setuser ceph --setgroup ceph
ubuntu    15443  15430  0 07:33 pts/0    00:00:00 grep --color=auto ceph

# Check the result with the ceph command
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim  # insecure global_id reclaim should be disabled (see below)
            OSD count 0 < osd_pool_default_size 3       # the cluster has fewer than 3 OSDs
 
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 70m)
    mgr: ceph-mgr1(active, since 3m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
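The first warning can be cleared once all clients are new enough to use secure global_id reclaim, and the second clears after OSDs are added in section 5.4.4. A sketch of the usual fix for the first warning:

# disable insecure global_id reclaim (run from a node with the admin keyring)
sudo ceph config set mon auth_allow_insecure_global_id_reclaim false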

5.4.4 Install ceph-osd

Before adding OSDs, install the basic environment on the osd nodes.

Turning a server into a storage node amounts to installing the ceph and ceph-radosgw packages on it. Using the default upstream repository can make the initialization time out because of network issues, so it is recommended to point the ceph repository on each storage node at a domestic mirror such as Aliyun or Tsinghua.

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6d9495ac30>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f6d9520ca50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node1', 'ceph-node2', 'ceph-node3', 'ceph-node4']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : True
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node1 ...
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][INFO  ] installing Ceph on ceph-node1
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-node1][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]
[ceph-node1][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease [74.6 kB]
[ceph-node1][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
[ceph-node1][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic InRelease
[ceph-node1][DEBUG ] Fetched 252 kB in 1s (326 kB/s)
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][DEBUG ] Building dependency tree...
[ceph-node1][DEBUG ] Reading state information...
[ceph-node1][DEBUG ] ca-certificates is already the newest version (20210119~18.04.1).
[ceph-node1][DEBUG ] ca-certificates set to manually installed.
[ceph-node1][DEBUG ] The following NEW packages will be installed:
[ceph-node1][DEBUG ]   apt-transport-https
[ceph-node1][DEBUG ] 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
[ceph-node1][DEBUG ] Need to get 4348 B of archives.
[ceph-node1][DEBUG ] After this operation, 154 kB of additional disk space will be used.
[ceph-node1][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.14 [4348 B]
[ceph-node1][DEBUG ] Fetched 4348 B in 0s (28.8 kB/s)
[ceph-node1][DEBUG ] Selecting previously unselected package apt-transport-https.
(Reading database ... 109531 files and directories currently installed.)
[ceph-node1][DEBUG ] Preparing to unpack .../apt-transport-https_1.6.14_all.deb ...
[ceph-node1][DEBUG ] Unpacking apt-transport-https (1.6.14) ...
[ceph-node1][DEBUG ] Setting up apt-transport-https (1.6.14) ...
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-node1][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-node1][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-node1][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-node1][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic InRelease
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][DEBUG ] Building dependency tree...
[ceph-node1][DEBUG ] Reading state information...
[ceph-node1][DEBUG ] The following additional packages will be installed:
[ceph-node1][DEBUG ]   ceph-base ceph-mgr ceph-mgr-modules-core libjs-jquery python-pastedeploy-tpl
[ceph-node1][DEBUG ]   python3-bcrypt python3-bs4 python3-cherrypy3 python3-dateutil
[ceph-node1][DEBUG ]   python3-distutils python3-lib2to3 python3-logutils python3-mako
[ceph-node1][DEBUG ]   python3-paste python3-pastedeploy python3-pecan python3-simplegeneric
[ceph-node1][DEBUG ]   python3-singledispatch python3-tempita python3-waitress python3-webob
[ceph-node1][DEBUG ]   python3-webtest python3-werkzeug
[ceph-node1][DEBUG ] Suggested packages:
[ceph-node1][DEBUG ]   python3-influxdb python3-beaker python-mako-doc httpd-wsgi
[ceph-node1][DEBUG ]   libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[ceph-node1][DEBUG ]   python-waitress-doc python-webob-doc python-webtest-doc ipython3
[ceph-node1][DEBUG ]   python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[ceph-node1][DEBUG ] Recommended packages:
[ceph-node1][DEBUG ]   ceph-fuse ceph-mgr-dashboard ceph-mgr-diskprediction-local
[ceph-node1][DEBUG ]   ceph-mgr-diskprediction-cloud ceph-mgr-k8sevents ceph-mgr-cephadm nvme-cli
[ceph-node1][DEBUG ]   smartmontools javascript-common python3-lxml python3-routes
[ceph-node1][DEBUG ]   python3-simplejson python3-pastescript python3-pyinotify
[ceph-node1][DEBUG ] The following NEW packages will be installed:
[ceph-node1][DEBUG ]   ceph ceph-base ceph-mds ceph-mgr ceph-mgr-modules-core ceph-mon ceph-osd
[ceph-node1][DEBUG ]   libjs-jquery python-pastedeploy-tpl python3-bcrypt python3-bs4
[ceph-node1][DEBUG ]   python3-cherrypy3 python3-dateutil python3-distutils python3-lib2to3
[ceph-node1][DEBUG ]   python3-logutils python3-mako python3-paste python3-pastedeploy
[ceph-node1][DEBUG ]   python3-pecan python3-simplegeneric python3-singledispatch python3-tempita
[ceph-node1][DEBUG ]   python3-waitress python3-webob python3-webtest python3-werkzeug radosgw
[ceph-node1][DEBUG ] 0 upgraded, 28 newly installed, 0 to remove and 0 not upgraded.
[ceph-node1][DEBUG ] Need to get 47.7 MB of archives.
[ceph-node1][DEBUG ] After this operation, 219 MB of additional disk space will be used.
[ceph-node1][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-base amd64 15.2.14-1bionic [5179 kB]
[ceph-node1][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-dateutil all 2.6.1-1 [52.3 kB]
[ceph-node1][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mgr-modules-core all 15.2.14-1bionic [162 kB]
[ceph-node1][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-bcrypt amd64 3.1.4-2 [29.9 kB]
[ceph-node1][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-cherrypy3 all 8.9.1-2 [160 kB]
[ceph-node1][DEBUG ] Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-lib2to3 all 3.6.9-1~18.04 [77.4 kB]
[ceph-node1][DEBUG ] Get:7 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-distutils all 3.6.9-1~18.04 [144 kB]
[ceph-node1][DEBUG ] Get:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-logutils all 0.3.3-5 [16.7 kB]
[ceph-node1][DEBUG ] Get:9 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-mako all 1.0.7+ds1-1 [59.3 kB]
[ceph-node1][DEBUG ] Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-simplegeneric all 0.8.1-1 [11.5 kB]
[ceph-node1][DEBUG ] Get:11 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-singledispatch all 3.4.0.3-2 [7022 B]
[ceph-node1][DEBUG ] Get:12 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webob all 1:1.7.3-2fakesync1 [64.3 kB]
[ceph-node1][DEBUG ] Get:13 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-bs4 all 4.6.0-1 [67.8 kB]
[ceph-node1][DEBUG ] Get:14 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-waitress all 1.0.1-1 [53.4 kB]
[ceph-node1][DEBUG ] Get:15 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-tempita all 0.5.2-2 [13.9 kB]
[ceph-node1][DEBUG ] Get:16 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-paste all 2.0.3+dfsg-4ubuntu1 [456 kB]
[ceph-node1][DEBUG ] Get:17 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python-pastedeploy-tpl all 1.5.2-4 [4796 B]
[ceph-node1][DEBUG ] Get:18 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-pastedeploy all 1.5.2-4 [13.4 kB]
[ceph-node1][DEBUG ] Get:19 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webtest all 2.0.28-1ubuntu1 [27.9 kB]
[ceph-node1][DEBUG ] Get:20 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-pecan all 1.2.1-2 [86.1 kB]
[ceph-node1][DEBUG ] Get:21 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libjs-jquery all 3.2.1-1 [152 kB]
[ceph-node1][DEBUG ] Get:22 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 python3-werkzeug all 0.14.1+dfsg1-1ubuntu0.1 [174 kB]
[ceph-node1][DEBUG ] Get:23 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mgr amd64 15.2.14-1bionic [1309 kB]
[ceph-node1][DEBUG ] Get:24 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mon amd64 15.2.14-1bionic [5952 kB]
[ceph-node1][DEBUG ] Get:25 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-osd amd64 15.2.14-1bionic [22.8 MB]
[ceph-node1][DEBUG ] Get:26 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph amd64 15.2.14-1bionic [3968 B]
[ceph-node1][DEBUG ] Get:27 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mds amd64 15.2.14-1bionic [1854 kB]
[ceph-node1][DEBUG ] Get:28 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 radosgw amd64 15.2.14-1bionic [8814 kB]
[ceph-node1][DEBUG ] Fetched 47.7 MB in 2s (20.8 MB/s)
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-base.
(Reading database ... 109535 files and directories currently installed.)
[ceph-node1][DEBUG ] Preparing to unpack .../00-ceph-base_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-base (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-dateutil.
[ceph-node1][DEBUG ] Preparing to unpack .../01-python3-dateutil_2.6.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-dateutil (2.6.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mgr-modules-core.
[ceph-node1][DEBUG ] Preparing to unpack .../02-ceph-mgr-modules-core_15.2.14-1bionic_all.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mgr-modules-core (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-bcrypt.
[ceph-node1][DEBUG ] Preparing to unpack .../03-python3-bcrypt_3.1.4-2_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking python3-bcrypt (3.1.4-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-cherrypy3.
[ceph-node1][DEBUG ] Preparing to unpack .../04-python3-cherrypy3_8.9.1-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-cherrypy3 (8.9.1-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-lib2to3.
[ceph-node1][DEBUG ] Preparing to unpack .../05-python3-lib2to3_3.6.9-1~18.04_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-lib2to3 (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-distutils.
[ceph-node1][DEBUG ] Preparing to unpack .../06-python3-distutils_3.6.9-1~18.04_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-distutils (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-logutils.
[ceph-node1][DEBUG ] Preparing to unpack .../07-python3-logutils_0.3.3-5_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-logutils (0.3.3-5) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-mako.
[ceph-node1][DEBUG ] Preparing to unpack .../08-python3-mako_1.0.7+ds1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-mako (1.0.7+ds1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-simplegeneric.
[ceph-node1][DEBUG ] Preparing to unpack .../09-python3-simplegeneric_0.8.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-simplegeneric (0.8.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-singledispatch.
[ceph-node1][DEBUG ] Preparing to unpack .../10-python3-singledispatch_3.4.0.3-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-singledispatch (3.4.0.3-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-webob.
[ceph-node1][DEBUG ] Preparing to unpack .../11-python3-webob_1%3a1.7.3-2fakesync1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-webob (1:1.7.3-2fakesync1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-bs4.
[ceph-node1][DEBUG ] Preparing to unpack .../12-python3-bs4_4.6.0-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-bs4 (4.6.0-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-waitress.
[ceph-node1][DEBUG ] Preparing to unpack .../13-python3-waitress_1.0.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-waitress (1.0.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-tempita.
[ceph-node1][DEBUG ] Preparing to unpack .../14-python3-tempita_0.5.2-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-tempita (0.5.2-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-paste.
[ceph-node1][DEBUG ] Preparing to unpack .../15-python3-paste_2.0.3+dfsg-4ubuntu1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-paste (2.0.3+dfsg-4ubuntu1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python-pastedeploy-tpl.
[ceph-node1][DEBUG ] Preparing to unpack .../16-python-pastedeploy-tpl_1.5.2-4_all.deb ...
[ceph-node1][DEBUG ] Unpacking python-pastedeploy-tpl (1.5.2-4) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-pastedeploy.
[ceph-node1][DEBUG ] Preparing to unpack .../17-python3-pastedeploy_1.5.2-4_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-pastedeploy (1.5.2-4) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-webtest.
[ceph-node1][DEBUG ] Preparing to unpack .../18-python3-webtest_2.0.28-1ubuntu1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-webtest (2.0.28-1ubuntu1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-pecan.
[ceph-node1][DEBUG ] Preparing to unpack .../19-python3-pecan_1.2.1-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-pecan (1.2.1-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package libjs-jquery.
[ceph-node1][DEBUG ] Preparing to unpack .../20-libjs-jquery_3.2.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking libjs-jquery (3.2.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-werkzeug.
[ceph-node1][DEBUG ] Preparing to unpack .../21-python3-werkzeug_0.14.1+dfsg1-1ubuntu0.1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mgr.
[ceph-node1][DEBUG ] Preparing to unpack .../22-ceph-mgr_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mgr (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mon.
[ceph-node1][DEBUG ] Preparing to unpack .../23-ceph-mon_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mon (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-osd.
[ceph-node1][DEBUG ] Preparing to unpack .../24-ceph-osd_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-osd (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph.
[ceph-node1][DEBUG ] Preparing to unpack .../25-ceph_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mds.
[ceph-node1][DEBUG ] Preparing to unpack .../26-ceph-mds_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mds (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package radosgw.
[ceph-node1][DEBUG ] Preparing to unpack .../27-radosgw_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking radosgw (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Setting up python3-logutils (0.3.3-5) ...
[ceph-node1][DEBUG ] Setting up libjs-jquery (3.2.1-1) ...
[ceph-node1][DEBUG ] Setting up python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[ceph-node1][DEBUG ] Setting up python3-simplegeneric (0.8.1-1) ...
[ceph-node1][DEBUG ] Setting up python3-waitress (1.0.1-1) ...
[ceph-node1][DEBUG ] update-alternatives: using /usr/bin/waitress-serve-python3 to provide /usr/bin/waitress-serve (waitress-serve) in auto mode
[ceph-node1][DEBUG ] Setting up python3-mako (1.0.7+ds1-1) ...
[ceph-node1][DEBUG ] Setting up python3-tempita (0.5.2-2) ...
[ceph-node1][DEBUG ] Setting up python3-webob (1:1.7.3-2fakesync1) ...
[ceph-node1][DEBUG ] Setting up python3-bcrypt (3.1.4-2) ...
[ceph-node1][DEBUG ] Setting up python3-singledispatch (3.4.0.3-2) ...
[ceph-node1][DEBUG ] Setting up python3-cherrypy3 (8.9.1-2) ...
[ceph-node1][DEBUG ] Setting up python3-bs4 (4.6.0-1) ...
[ceph-node1][DEBUG ] Setting up ceph-base (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
[ceph-node1][DEBUG ] Setting up python3-paste (2.0.3+dfsg-4ubuntu1) ...
[ceph-node1][DEBUG ] Setting up python-pastedeploy-tpl (1.5.2-4) ...
[ceph-node1][DEBUG ] Setting up python3-lib2to3 (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Setting up python3-distutils (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Setting up python3-dateutil (2.6.1-1) ...
[ceph-node1][DEBUG ] Setting up radosgw (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[ceph-node1][DEBUG ] Setting up ceph-osd (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[ceph-node1][DEBUG ] Setting up ceph-mds (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
[ceph-node1][DEBUG ] Setting up ceph-mon (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
[ceph-node1][DEBUG ] Setting up ceph-mgr-modules-core (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Setting up python3-pastedeploy (1.5.2-4) ...
[ceph-node1][DEBUG ] Setting up python3-webtest (2.0.28-1ubuntu1) ...
[ceph-node1][DEBUG ] Setting up python3-pecan (1.2.1-2) ...
[ceph-node1][DEBUG ] update-alternatives: using /usr/bin/python3-pecan to provide /usr/bin/pecan (pecan) in auto mode
[ceph-node1][DEBUG ] update-alternatives: using /usr/bin/python3-gunicorn_pecan to provide /usr/bin/gunicorn_pecan (gunicorn_pecan) in auto mode
[ceph-node1][DEBUG ] Setting up ceph-mgr (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[ceph-node1][DEBUG ] Setting up ceph (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Processing triggers for systemd (237-3ubuntu10.50) ...
[ceph-node1][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[ceph-node1][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...
[ceph-node1][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1.4) ...
[ceph-node1][INFO  ] Running command: sudo ceph --version
[ceph-node1][DEBUG ] ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node2 ...
[ceph-node2][DEBUG ] connection detected need for sudo
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node2][INFO  ] installing Ceph on ceph-node2
[ceph-node2][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node2][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-node2][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]
[ceph-node2][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease [74.6 kB]
[ceph-node2][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
[ceph-node2][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic InRelease

This step configures the ceph repository and installs ceph and ceph-radosgw on the specified osd nodes, one server at a time.

List the disks on the remote storage nodes:

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f847f5fbfa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node1']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f847f5d52d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo fdisk -l
[ceph-node1][INFO  ] Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
[ceph-node1][INFO  ] Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node1][INFO  ] Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
[ceph-node1][INFO  ] Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
[ceph-node1][INFO  ] Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
[ceph-node1][INFO  ] Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors

Use ceph-deploy disk zap to wipe the Ceph data disks on each node:

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node1 /dev/sd{b,c,d,e}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-node1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc58823bfa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph-node1
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fc5882152d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb', '/dev/sdc', '/dev/sdd', '/dev/sde']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph-node1][WARNIN] --> Zapping: /dev/sdb
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0192922 s, 544 MB/s
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdc
[ceph-node1][WARNIN] --> Zapping: /dev/sdc
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0136416 s, 769 MB/s
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdd
[ceph-node1][WARNIN] --> Zapping: /dev/sdd
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdd bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0232056 s, 452 MB/s
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdd>
[ceph_deploy.osd][DEBUG ] zapping /dev/sde on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sde
[ceph-node1][WARNIN] --> Zapping: /dev/sde
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sde bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0235466 s, 445 MB/s
[ceph-node1][WARNIN]  stderr: 
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sde>

Zap the disks on the remaining nodes in the same way:

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node2
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node2 /dev/sd{b,c,d,e}
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node3
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node3 /dev/sd{b,c,d,e}
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node4
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node4 /dev/sd{b,c,d,e}

Add the OSDs

A BlueStore OSD stores its data in up to three parts:
Data (block): the object data that Ceph stores
block.db: the RocksDB database, i.e. the OSD's metadata
block.wal: the RocksDB write-ahead log

OSD IDs are assigned sequentially starting from 0. Create one OSD per data disk on each node (a consolidated loop sketch follows the per-disk command list below):
osd ID: 0-3
ceph-deploy osd create ceph-node1 --data /dev/sdb
ceph-deploy osd create ceph-node1 --data /dev/sdc
ceph-deploy osd create ceph-node1 --data /dev/sdd
ceph-deploy osd create ceph-node1 --data /dev/sde
osd ID: 4-7
ceph-deploy osd create ceph-node2 --data /dev/sdb
ceph-deploy osd create ceph-node2 --data /dev/sdc
ceph-deploy osd create ceph-node2 --data /dev/sdd
ceph-deploy osd create ceph-node2 --data /dev/sde
osd ID: 8-11
ceph-deploy osd create ceph-node3 --data /dev/sdb
ceph-deploy osd create ceph-node3 --data /dev/sdc
ceph-deploy osd create ceph-node3 --data /dev/sdd
ceph-deploy osd create ceph-node3 --data /dev/sde
osd ID: 12-15
ceph-deploy osd create ceph-node4 --data /dev/sdb
ceph-deploy osd create ceph-node4 --data /dev/sdc
ceph-deploy osd create ceph-node4 --data /dev/sdd
ceph-deploy osd create ceph-node4 --data /dev/sde
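The sixteen commands above can also be driven by a loop; the sketch below is a hypothetical shorthand, and the commented --block-db / --block-wal variant (for placing RocksDB and the WAL on a faster device; the /dev/nvme0n1p* paths are made up) is shown only for illustration and is not used in this deployment:

for node in ceph-node{1,2,3,4}; do
  for dev in /dev/sd{b,c,d,e}; do
    ceph-deploy osd create $node --data $dev
    # optionally split metadata/WAL onto a faster device, e.g.:
    # ceph-deploy osd create $node --data $dev --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2
  done
done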

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node1 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node1 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa48c0f33c0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fa48c142250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][WARNIN] osd keyring does not exist yet, creating one
[ceph-node1][DEBUG ] create a keyring file
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node1][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f9b7315f-902f-4f4e-9164-5f25be885754
[ceph-node1][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6 /dev/sdb
[ceph-node1][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph-node1][WARNIN]  stdout: Volume group "ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6" successfully created
[ceph-node1][WARNIN] Running command: /sbin/lvcreate --yes -l 2559 -n osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6
[ceph-node1][WARNIN]  stdout: Logical volume "osd-block-f9b7315f-902f-4f4e-9164-5f25be885754" created.
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node1][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-node1][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node1][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node1][WARNIN] Running command: /bin/ln -s /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 /var/lib/ceph/osd/ceph-0/block
[ceph-node1][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-node1][WARNIN]  stderr: 2021-08-16T08:04:07.244+0000 7f429b575700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node1][WARNIN] 2021-08-16T08:04:07.244+0000 7f429b575700 -1 AuthRegistry(0x7f4294059b20) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node1][WARNIN]  stderr: got monmap epoch 1
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQD2GxphV+RyDBAAQFRomuzg4uDfIloEq5BI1g==
[ceph-node1][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-node1][WARNIN]  stdout: added entity osd.0 auth(key=AQD2GxphV+RyDBAAQFRomuzg4uDfIloEq5BI1g==)
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f9b7315f-902f-4f4e-9164-5f25be885754 --setuser ceph --setgroup ceph
[ceph-node1][WARNIN]  stderr: 2021-08-16T08:04:08.028+0000 7f7b38513d80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-node1][WARNIN]  stderr: 2021-08-16T08:04:08.088+0000 7f7b38513d80 -1 freelist read_size_meta_from_db missing size meta in DB
[ceph-node1][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-node1][WARNIN] Running command: /bin/ln -snf /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 /var/lib/ceph/osd/ceph-0/block
[ceph-node1][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node1][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-f9b7315f-902f-4f4e-9164-5f25be885754
[ceph-node1][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-f9b7315f-902f-4f4e-9164-5f25be885754.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node1][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-node1][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node1][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-node1][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-node1][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph-node1][INFO  ] checking OSD status...
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.

Check the results

# Four ceph-osd processes are running, with the IDs that were generated when we added them
ubuntu@ceph-node1:~$ ps -ef |grep ceph
root        3037       1  0 07:23 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph        7249       1  0 08:04 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
ceph        8007       1  0 08:07 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ceph        9145       1  0 08:07 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
ceph        9865       1  0 08:08 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
ubuntu     10123   10109  0 08:14 pts/0    00:00:00 grep --color=auto ceph

# 16 disks in total, so 16 OSD daemons
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 106m)
    mgr: ceph-mgr1(active, since 39m)
    osd: 16 osds: 16 up (since 114s), 16 in (since 114s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean

# OSD services are enabled to start automatically on boot
ubuntu@ceph-node1:~$ sudo systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon osd.0
   Loaded: loaded (/lib/systemd/system/ceph-osd@.service; indirect; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-16 08:04:10 UTC; 48min ago
 Main PID: 7249 (ceph-osd)
    Tasks: 58
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─7249 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

5.4.5 ceph-mon high availability

ubuntu@ceph-mon2:~$ sudo apt install ceph-mon -y
ubuntu@ceph-mon3:~$ sudo apt install ceph-mon -y

# Add the monitors
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon2 --address 172.31.0.12
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon3 --address 172.31.0.13

# Disable the insecure mode reported as "mons are allowing insecure global_id reclaim"
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph config set mon auth_allow_insecure_global_id_reclaim false
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 111s)
    mgr: ceph-mgr1(active, since 61m)
    osd: 16 osds: 16 up (since 23m), 16 in (since 23m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean
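With all three monitors joined, quorum can also be checked directly; both are standard ceph subcommands:

ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph mon stat
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph quorum_status --format json-pretty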

5.4.6 ceph-mgr high availability

One manager has been deployed so far; add a second so that there are two in total.

# Install ceph-mgr in advance, so that ceph-deploy detects it and skips the package installation when adding the mgr, saving time
ubuntu@ceph-mgr2:~$ sudo apt install ceph-mgr -y

# Add the mgr
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ceph-mgr2

# Check the result
ubuntu@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr2 "ps -ef |grep ceph"
root       9878      1  0 08:37 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      14869      1 11 08:39 ?        00:00:04 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr2 --setuser ceph --setgroup ceph
ubuntu    15034  15033  0 08:40 ?        00:00:00 bash -c ps -ef |grep ceph
ubuntu    15036  15034  0 08:40 ?        00:00:00 grep ceph

# Check the result with the ceph command
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 7m)
    mgr: ceph-mgr1(active, since 67m), standbys: ceph-mgr2
    osd: 16 osds: 16 up (since 29m), 16 in (since 29m)
 
  task status:
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean
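To verify that the standby manager can actually take over, the active one can be failed deliberately; a hedged sketch (ceph-mgr2 should then show up as active in ceph -s):

ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph mgr fail ceph-mgr1
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s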

5.5 Restarting OSDs

# Reboot the first OSD node
ubuntu@ceph-node1:~$ sudo reboot
# Only 12 OSD daemons remain up
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            4 osds down
            1 host (4 osds) down
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 22m)
    mgr: ceph-mgr1(active, since 82m), standbys: ceph-mgr2
    osd: 16 osds: 12 up (since 17s), 16 in (since 44m)
 
  task status:
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean

# Once the server is back up, its OSDs are detected and rejoin automatically
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 24m)
    mgr: ceph-mgr1(active, since 84m), standbys: ceph-mgr2
    osd: 16 osds: 16 up (since 86s), 16 in (since 46m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean
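For a planned reboot it is common practice to set the noout flag first, so the cluster does not start rebalancing while the node's OSDs are briefly down, and to clear it afterwards; a hedged sketch:

# before the reboot
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd set noout
# ... reboot the node and wait for its OSDs to come back up ...
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd unset noout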

5.6 Removing an OSD

What if a disk fails and has to be removed? In a Ceph cluster an OSD is a dedicated daemon running on a node, usually corresponding to one physical disk. When an OSD device fails, or an administrator needs to remove a specific OSD for operational reasons, the related daemon must be stopped first before the OSD can be removed.
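On a production cluster, before purging an OSD it is also worth waiting for its data to be re-replicated and confirming the OSD is safe to destroy; a hedged sketch that is not part of the original steps:

ubuntu@ceph-deploy:~$ sudo ceph osd safe-to-destroy osd.3
ubuntu@ceph-deploy:~$ sudo ceph osd tree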

# Suppose we stop the daemon for the OSD with id 3
ubuntu@ceph-node1:~$ sudo systemctl stop ceph-osd@3.service 
ubuntu@ceph-node1:~$ sudo systemctl status ceph-osd@3.service 

# Check the cluster status
ubuntu@ceph-deploy:~$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            1 osds down
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 48m)
    mgr: ceph-mgr1(active, since 108m), standbys: ceph-mgr2
    osd: 16 osds: 15 up (since 26s), 16 in (since 70m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean

# Take the OSD out of service
ubuntu@ceph-deploy:~$ sudo ceph osd out 3
# Remove the OSD from those tracked by the monitors
# osd purge <id|osd.id> [--force] [--yes-i-really-mean-it]     purge all osd data from the monitors including the OSD id and CRUSH position
ubuntu@ceph-deploy:~$ sudo ceph osd purge 3 --yes-i-really-mean-it
purged osd.3
# Check the status
ubuntu@ceph-deploy:~$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 53m)
    mgr: ceph-mgr1(active, since 113m), standbys: ceph-mgr2
    osd: 15 osds: 15 up (since 5m), 15 in (since 3m)
 
  task status:
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   15 GiB used, 135 GiB / 150 GiB avail
    pgs:     1 active+clean

Repairing the OSD

# Repair the failed OSD and add it back into the cluster
# SSH to osd node1 and change into the /etc/ceph directory
ubuntu@ceph-node1:~$ cd /etc/ceph
# Create the OSD entry; no id needs to be given, the next free id is assigned automatically
ubuntu@ceph-node1:/etc/ceph$ sudo ceph osd create
3
# Create the auth keyring; make sure the entity name matches the OSD id/directory
ubuntu@ceph-node1:/etc/ceph$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.osd.3.keyring --gen-key -n osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
creating /etc/ceph/ceph.osd.3.keyring
# Import the new keyring; again, make sure the entity name matches the OSD id/directory
ubuntu@ceph-node1:/etc/ceph$ sudo ceph auth import -i /etc/ceph/ceph.osd.3.keyring
imported keyring
ubuntu@ceph-node1:/etc/ceph$ sudo ceph auth get-or-create osd.3 -o /var/lib/ceph/osd/ceph-3/keyring
# Add it back into the cluster (CRUSH map) and mark it in
ubuntu@ceph-node1:/etc/ceph$ sudo ceph osd crush add osd.3 0.01900 host=ceph-node1
add item id 3 name 'osd.3' weight 0.019 at location {host=ceph-node1} to crush map
ubuntu@ceph-node1:/etc/ceph$ sudo ceph osd in osd.3
marked in osd.3.
# Restart the OSD daemon
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl restart ceph-osd@3.service
Job for ceph-osd@3.service failed because the control process exited with error code.
See "systemctl status ceph-osd@3.service" and "journalctl -xe" for details.
# If the error above appears, run systemctl reset-failed ceph-osd@3.service
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl reset-failed ceph-osd@3.service
# Restart again
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl restart ceph-osd@3.service
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl status ceph-osd@3.service
● ceph-osd@3.service - Ceph object storage daemon osd.3
   Loaded: loaded (/lib/systemd/system/ceph-osd@.service; indirect; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-16 15:13:27 UTC; 8s ago
  Process: 3901 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 3 (code=exited, status=0/SUCCESS)
 Main PID: 3905 (ceph-osd)
    Tasks: 58
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@3.service
           └─3905 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
           
# Check the result
ubuntu@ceph-node1:/etc/ceph$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 4h)
    mgr: ceph-mgr1(active, since 7h), standbys: ceph-mgr2
    osd: 16 osds: 16 up (since 102s), 16 in (since 8m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean
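If the OSD's on-disk data is still intact, a simpler alternative to the manual keyring steps above is to let ceph-volume re-activate the existing LVM-based OSD; a hedged sketch, assuming the logical volume and its metadata survived:

# list the OSDs ceph-volume knows about on this node (shows the osd id and fsid)
ubuntu@ceph-node1:~$ sudo ceph-volume lvm list
# re-activate them all, which also re-creates the systemd units
ubuntu@ceph-node1:~$ sudo ceph-volume lvm activate --all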

5.7 Testing data upload and download

To store or retrieve data, a client first connects to a storage pool on the RADOS cluster; the object's location is then computed from its name by the relevant CRUSH rules. To test the cluster's data access, first create a test pool named mypool with 32 PGs.

# Create the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool create mypool 32 32
pool 'mypool' created
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls
device_health_metrics
mypool

# The current Ceph environment has no block storage or file system clients deployed yet, nor any object storage clients,
# but the rados command can be used to access Ceph's object store directly.
# Upload the syslog file to mypool with object id syslog1
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados put syslog1 /var/log/syslog --pool=mypool
# List the objects
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados ls --pool=mypool
syslog1

# Object location
# The ceph osd map command shows where an object in a pool is stored; below, syslog1 maps to PG 3.1b on OSDs [10,4,15], with osd.10 as the primary:
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd map mypool syslog1
osdmap e102 pool 'mypool' (3) object 'syslog1' -> pg 3.1dd3f9b (3.1b) -> up ([10,4,15], p10) acting ([10,4,15], p10)

# Download the object
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados get syslog1 --pool=mypool ./syslog

# Delete the object
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados rm syslog1 --pool=mypool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados ls --pool=mypool

When deleting a pool from the cluster, note that any images mapped in that pool are deleted along with it, so be careful with this operation in production. The pool name has to be given twice when deleting.

# Deleting fails with an error:
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool rm mypool mypool --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool

# This happens because the mon_allow_pool_delete option is not set on the mon nodes; the fix is to enable it on the monitors.
# Two ways to do this:
# Option 1: inject the setting at runtime
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph tell mon.* injectargs --mon_allow_pool_delete=true
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool delete tom_test tom_test --yes-i-really-really-mean-it
# After the deletion it is best to set mon_allow_pool_delete back to false to reduce the risk of accidental deletion
# Option 2
# In a test environment where pools may be deleted freely, enable pool deletion globally in the config file
ubuntu@ceph-deploy:~/ceph-cluster$ vim ceph.conf
[mon]
mon allow pool delete = true
# Push the config to the mon nodes
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon{1,2,3}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push ceph-mon1 ceph-mon2 ceph-mon3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : push
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f33765bd2d0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-mon1', 'ceph-mon2', 'ceph-mon3']
[ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7f33766048d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.config][DEBUG ] Pushing config to ceph-mon1
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to ceph-mon2
[ceph-mon2][DEBUG ] connection detected need for sudo
[ceph-mon2][DEBUG ] connected to host: ceph-mon2 
[ceph-mon2][DEBUG ] detect platform information from remote host
[ceph-mon2][DEBUG ] detect machine type
[ceph-mon2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to ceph-mon3
[ceph-mon3][DEBUG ] connection detected need for sudo
[ceph-mon3][DEBUG ] connected to host: ceph-mon3 
[ceph-mon3][DEBUG ] detect platform information from remote host
[ceph-mon3][DEBUG ] detect machine type
[ceph-mon3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

# Restart the ceph-mon service on each mon node
for node in ceph-mon{1,2,3}
do
   ssh $node "sudo systemctl restart ceph-mon.target"
done

# Delete the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool rm mypool mypool --yes-i-really-really-mean-it
pool 'mypool' removed
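To confirm the removal, list the pools again:

ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls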

6. Ceph Block Device (RBD)

RBD (RADOS Block Device) is Ceph's block storage interface. RBD talks to the OSDs through the librbd library and provides a high-performance, virtually unlimited scalable storage backend for virtualization technologies such as KVM and cloud platforms such as OpenStack and CloudStack; those systems integrate with RBD through libvirt and QEMU. Any client linked against librbd can use the RADOS cluster as a block device. A pool intended for rbd, however, must first have the rbd application enabled and then be initialized. For example, the commands below create a pool named myrbd1, enable rbd on it, and initialize it:

  • Create an RBD pool
# Create the pool, specifying the pg and pgp counts; pgp determines how the data in the PGs is combined for placement and is normally equal to pg
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool create myrbd1 64 64
pool 'myrbd1' created

ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls
device_health_metrics
myrbd1

# Enable the rbd application on the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool application enable myrbd1 rbd
enabled application 'rbd' on pool 'myrbd1'

# Initialize the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd pool init -p myrbd1
  • Create images

An rbd pool cannot be used as a block device directly; images must first be created in it as needed, and an image file is what is then used as the block device. The rbd command creates, lists and deletes the images backing block devices, and also handles cloning images, creating snapshots, rolling an image back to a snapshot, listing snapshots, and other management operations.

# 1. Create images
# Syntax: rbd create --size 5G --pool <pool name> <image name>
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd create myimg1 --size 5G --pool myrbd1
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd create myimg2 --size 3G --pool myrbd1 --image-format 2 --image-feature layering

# 2. List images
# Show which images exist in the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd ls --pool myrbd1 -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
myimg1  5 GiB            2
myimg2  3 GiB            2

# Show detailed information about one image
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd --image myimg1 --pool myrbd1 info
rbd image 'myimg1':
	size 5 GiB in 1280 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 8560a5411e51
	block_name_prefix: rbd_data.8560a5411e51
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features:
	flags:
	create_timestamp: Mon Aug 16 15:33:49 2021
	access_timestamp: Mon Aug 16 15:33:49 2021
	modify_timestamp: Mon Aug 16 15:33:49 2021

# Notes on the output:
# size: the image size and the number of objects it is split into.
# order 22: the object size as a power of two; the valid range is 12 to 25 (4 KiB to 32 MiB), and 22 means 2^22 bytes = 4 MiB.
# id: the image's ID.
# block_name_prefix: the name prefix of the image's data objects.
# format: the image format in use, 2 by default.
# features: the features currently enabled on this image.
# op_features: optional operational features.

# Resize an image
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd resize --pool myrbd1 --image myimg2 --size 5G
Resizing image: 100% complete...done.
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd ls --pool myrbd1 -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
myimg1  5 GiB            2
myimg2  5 GiB            2

# rbd resize adjusts the image size; growing is generally safe and recommended, while shrinking additionally requires the --allow-shrink option
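For example, shrinking myimg2 back to 3G would look like this (a hedged example, not run in this walkthrough):

ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd resize --pool myrbd1 --image myimg2 --size 3G --allow-shrink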
  • Verify the block device
# Map the image on the client
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd map --pool myrbd1 --image myimg1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable myrbd1/myimg1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
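# The kernel RBD client cannot map myimg1 because the image has features the kernel does not support
# (object-map, fast-diff, deep-flatten). As the error suggests, they could be disabled and the map retried
# (a hedged aside, not run in this walkthrough):
#   sudo rbd feature disable myrbd1/myimg1 object-map fast-diff deep-flatten
#   sudo rbd map --pool myrbd1 --image myimg1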
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd map --pool myrbd1 --image myimg2
/dev/rbd0
# Format the device
ubuntu@ceph-deploy:~/ceph-cluster$ sudo mkfs.ext4 /dev/rbd0
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: done
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: 89dfe52f-f8a2-4a3f-bdd1-e136fd933ea9
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
# Mount the device
ubuntu@ceph-deploy:~/ceph-cluster$ sudo mount /dev/rbd0 /mnt
ubuntu@ceph-deploy:~/ceph-cluster$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               954M     0  954M   0% /dev
tmpfs                              198M  9.8M  188M   5% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   20G  4.6G   15G  25% /
tmpfs                              986M     0  986M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              986M     0  986M   0% /sys/fs/cgroup
/dev/sda2                          976M  149M  760M  17% /boot
tmpfs                              198M     0  198M   0% /run/user/1000
/dev/rbd0                          4.9G   20M  4.6G   1% /mnt

# Write test
ubuntu@ceph-deploy:~/ceph-cluster$ sudo dd if=/dev/zero of=/mnt/ceph-test bs=1MB count=1024
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 117.308 s, 8.7 MB/s
ubuntu@ceph-deploy:~/ceph-cluster$ sudo dd if=/dev/zero of=/tmp/ceph-test bs=1MB count=1024
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 1.81747 s, 563 MB/s

# The numbers here are a bit underwhelming, but in a real deployment the OSD nodes would use SSDs and 1/10 GbE networking.
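Besides dd through a mounted filesystem, rbd has a built-in benchmark that writes to an image directly through librbd; a hedged sketch (run it only against an image that does not hold a mounted filesystem, e.g. myimg1):

ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 1G --io-pattern seq myrbd1/myimg1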


