Manual Deployment of Ceph Nautilus (N Release)


Deployment Approach

 

Reference: https://docs.ceph.com/en/nautilus/install/manual-deployment/#

Base Environment

Hostname   OS           Ceph version               IP               Roles
node01     CentOS 8.2   14.2.11 nautilus (stable)  10.0.12.100/24   mon, mgr, osd, mds, rgw
node02     CentOS 8.2   14.2.11 nautilus (stable)  10.0.12.101/24   mon, mgr, osd, mds, rgw
node03     CentOS 8.2   14.2.11 nautilus (stable)  10.0.12.102/24   mon, mgr, osd, mds, rgw


1. Base Environment Configuration

Perform the following on all nodes.

1.1 Configure hostnames and static name resolution

# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.12.100 node01
10.0.12.101 node02
10.0.12.102 node03
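
Optionally, name resolution can be verified from each node; a quick check, assuming the hosts file above is in place:

# getent hosts node01 node02 node03
# ping -c 1 node02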

 

1.2 Disable the firewall and SELinux

Disable firewalld
# systemctl disable --now firewalld

Disable SELinux
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
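
To confirm the change, the runtime mode and the persisted setting can be checked (SELinux reports Permissive until the next reboot, after which the config file keeps it disabled):

# getenforce
# grep ^SELINUX= /etc/selinux/config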

 

1.3 Configure time synchronization

# dnf install chrony

Server-side configuration:
    # vim /etc/chrony.conf
    Change "server 0.centos.pool.ntp.org iburst" to:
          server <Server IP> iburst
    Remove the other default "server ... iburst" lines.
    Uncomment the following line and change the network to the management subnet of the primary node:
          allow 10.0.12.0/24
    Uncomment the following line:
          local stratum 10
    # systemctl restart chronyd
    # systemctl enable chronyd

Client-side configuration:
    # vim /etc/chrony.conf
    Change "server 0.centos.pool.ntp.org iburst" to:
          server <Server IP> iburst
    # systemctl restart chronyd
    # systemctl enable chronyd

 

Verify that synchronization is working

# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* node01                       10  10   377   19h     +0ns[+7141ns] +/- 9893ns
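
For more detail on the current offset and the selected time source, chronyc tracking can also be consulted:

# chronyc tracking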

 

1.4 Configure the Ceph repository

Back up the CentOS base repo
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

Configure the Aliyun mirror
# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-8.repo

Install the EPEL repo
# dnf install -y epel-release

Configure the Ceph repo
# cat >/etc/yum.repos.d/ceph.repo <<EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF

Rebuild the dnf metadata cache
# dnf clean all
# dnf makecache
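
To confirm the repositories are usable and the Nautilus packages are visible, a quick check:

# dnf repolist
# dnf list ceph --showduplicates | tail -n 5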

 

1.5 Configure passwordless SSH from the deployment node to the other nodes

# ssh-keygen -t rsa
# ssh-copy-id root@node01
# ssh-copy-id root@node02
# ssh-copy-id root@node03

 

1.6 Install Ceph

# dnf install ceph ceph-radosgw
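
After installation, the package version can be confirmed on each node (it should report 14.2.x nautilus):

# ceph --version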

 

2. MON Deployment

2.1 Create the Ceph configuration file

Generate a UUID to use as the cluster fsid

# uuidgen
78a3f9bf-1f45-4941-873f-749616d2ae65

 

Create the initial Ceph configuration file

# vim /etc/ceph/ceph.conf
[global]
fsid = 78a3f9bf-1f45-4941-873f-749616d2ae65
mon_initial_members = node01,node02,node03
mon_host = 10.0.12.100,10.0.12.101,10.0.12.102
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

 

2.2 Create the cluster mon. keyring

mon. is the cluster's first user; it can be used to create the admin user (admin here means client.admin)

# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring

 

Background
When the whole cluster starts, the Monitors come up first and the other services follow. A mon starts its process with its own keyring file, so it does not need to authenticate against any other process; it will start even if the key it carries is wrong.
The mon database stores the keys of every user except mon.. Authentication only really takes effect once the mons are up: from then on, any user that wants to reach the cluster must first connect to it via the fsid and the MON IPs, and can use the cluster normally after being authenticated. Consequently, if the admin key is ever lost, it can be recovered through the mon. user:

# ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-node02/keyring
2020-09-17 10:36:58.697 7f9fa6ef7700 -1 auth: unable to find a keyring on /etc/ceph/ceph.mon..keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2020-09-17 10:36:58.697 7f9fa6ef7700 -1 AuthRegistry(0x7f9fa0082f68) no keyring found at /etc/ceph/ceph.mon..keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
exported keyring for client.admin
[client.admin]
        key = AQDKVGBflsvcHBAArNV8xcoJYWfaKed+hHvzmw==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

 

Ceph keyring relationships

 

2.3 Create the admin user keyring

The admin user is normally used to operate the cluster

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring

 

2.4 Create the bootstrap-osd keyring

A bootstrap user is used to bootstrap the creation of users of the corresponding type; bootstrap-osd bootstraps OSD users such as osd.0 and osd.1
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring

 

2.5 Import the generated keyrings into the mon keyring

A single keyring file can hold multiple user keyrings. Here the admin and bootstrap-osd keyrings are merged into the mon keyring so that their keys can be imported into the cluster during initialization.
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /tmp/ceph.mon.keyring
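
The merged keyring can be inspected to confirm it now contains the mon., client.admin, and client.bootstrap-osd entries:

# ceph-authtool /tmp/ceph.mon.keyring --list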

 

2.6 Change the owner and group of the mon keyring to ceph

# chown ceph:ceph /tmp/ceph.mon.keyring

 

2.7 Create the mon map

# monmaptool --create --add node01 10.0.12.100 --add node02 10.0.12.101 --add node03 10.0.12.102 --fsid 78a3f9bf-1f45-4941-873f-749616d2ae65 /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to 78a3f9bf-1f45-4941-873f-749616d2ae65
monmaptool: writing epoch 0 to /tmp/monmap (3 monitors)

 

The mon map records the cluster's mon information.
View the mon map (only possible once the mons have been deployed):

# ceph mon getmap -o monmap.bin
got monmap epoch 2

# monmaptool --print monmap.bin
monmaptool: monmap file monmap.bin
epoch 2
fsid 78a3f9bf-1f45-4941-873f-749616d2ae65
last_changed 2020-09-15 15:17:56.345585
created 2020-09-15 14:47:17.159010
min_mon_release 14 (nautilus)
0: [v2:10.0.12.100:3300/0,v1:10.0.12.100:6789/0] mon.node01
1: [v2:10.0.12.101:3300/0,v1:10.0.12.101:6789/0] mon.node02
2: [v2:10.0.12.102:3300/0,v1:10.0.12.102:6789/0] mon.node03

 

2.8 Send the mon map, mon keyring, admin keyring, and ceph.conf to the other mon nodes

# scp /tmp/monmap root@node02:/tmp/
# scp /tmp/ceph.mon.keyring root@node02:/tmp/
# scp -r /etc/ceph/ root@node02:/etc/ 

Send the same files to node03 in the same way (or use the loop sketched below).
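
A small loop can push the files to both remaining nodes in one go; this is just a convenience sketch, assuming the root SSH access set up in step 1.5:

# for node in node02 node03; do
      scp /tmp/monmap /tmp/ceph.mon.keyring root@${node}:/tmp/
      scp -r /etc/ceph/ root@${node}:/etc/
  done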

 

2.9 Create the mon data directory

This directory stores the mon's data

# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node01


Note
For steps 2.9 through 2.12, perform the same operations on node02 and node03, changing only the hostname. For node02, for example:
# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node02

 

2.10 Initialize the mon data directory, building the files the mon daemon needs at runtime

# sudo -u ceph ceph-mon --mkfs -i node01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# ls /var/lib/ceph/mon/ceph-node01
keyring  kv_backend  store.db

 

2.11 Start the mon service

# systemctl start ceph-mon@node01
# systemctl enable ceph-mon@node01
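
To confirm the daemon is running and has joined quorum, the service status and the mon's admin socket can be checked, for example:

# systemctl status ceph-mon@node01
# ceph daemon mon.node01 mon_status | grep '"state"'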

 

2.12 Check the cluster status

# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_WARN
            3 monitors have not enabled msgr2

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 6s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

2.13 Enable msgr2

# ceph mon enable-msgr2

# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 2s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

3. MGR Deployment

On node01 (node02 and node03 are similar)

Create the mgr data directory
# sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-node01

Create the mgr user and write its key into the data directory
# ceph auth get-or-create mgr.node01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node01/keyring

Start the mgr service (repeat on node02 and node03)
# systemctl restart ceph-mgr@node01 ; systemctl enable ceph-mgr@node01

Check that the mgr daemons were created successfully
# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 25m)
    mgr: node01(active, since 5m), standbys: node03, node02
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
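
Beyond ceph -s, the mgr's enabled modules and any published service endpoints can also be listed:

# ceph mgr module ls | head
# ceph mgr services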

 

4. Add OSDs

4.1 Copy the bootstrap-osd keyring to the other nodes

# scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node02:/var/lib/ceph/bootstrap-osd/
# scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node03:/var/lib/ceph/bootstrap-osd/

 

4.2 Create the OSDs

In this lab environment the data, WAL, and DB are placed on the same disk
# ceph-volume lvm create --data /dev/sdb
# ceph-volume lvm create --data /dev/sdc

Check the cluster's OSD state
# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.35156 root default
-3       0.11719     host node01
 0   hdd 0.05859         osd.0       up  1.00000 1.00000
 3   hdd 0.05859         osd.3       up  1.00000 1.00000
-5       0.11719     host node02
 1   hdd 0.05859         osd.1       up  1.00000 1.00000
 4   hdd 0.05859         osd.4       up  1.00000 1.00000
-7       0.11719     host node03
 2   hdd 0.05859         osd.2       up  1.00000 1.00000
 5   hdd 0.05859         osd.5       up  1.00000 1.00000

Data, WAL, and DB on the same disk:
# ceph-volume lvm create --data /dev/sdb

Data disk separate, WAL and DB sharing another device:
# ceph-volume lvm create --data /dev/sdb --block.db /dev/sdx

Data, WAL, and DB all on separate devices:
# ceph-volume lvm create --data /dev/sdb --block.db /dev/sdx --block.wal /dev/sdy
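
To inspect per-OSD utilization and the logical volumes ceph-volume created, the following can be used:

# ceph osd df
# ceph-volume lvm list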

 

5. Add MDS

On node01 (node02 and node03 are similar)

Create the MDS data directory
# sudo -u ceph mkdir -p /var/lib/ceph/mds/ceph-node01

Create the mds user
# ceph auth get-or-create mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds"

Fetch the mds user's key and write it into the data directory
# ceph auth get mds.node01 | tee /var/lib/ceph/mds/ceph-node01/keyring

Add the MDS section to the configuration file
# vim /etc/ceph/ceph.conf
…
[mds.node01]
host = node01
…

Start the daemon
# systemctl restart ceph-mds@node01 ; systemctl enable ceph-mds@node01

Check
# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 15m)
    mgr: node01(active, since 18h), standbys: node03, node02
    mds: 3 up:standby
    osd: 6 osds: 6 up (since 18h), 6 in (since 18h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 354 GiB / 360 GiB avail
    pgs:
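
The MDS daemons remain in standby because no CephFS file system exists yet. If one is wanted, a minimal sketch looks like the following (the pool names and PG counts are illustrative, not part of the original deployment):

# ceph osd pool create cephfs_data 64
# ceph osd pool create cephfs_metadata 32
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph mds stat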

 

6. Add RGW

Create the rgw data directory
# mkdir -p /var/lib/ceph/radosgw/ceph-rgw.node01

Create the rgw user
# ceph auth get-or-create client.rgw.node01 osd "allow rwx" mon "allow rw"

Fetch the rgw user's key and write it into the data directory
# ceph auth get client.rgw.node01 | tee /var/lib/ceph/radosgw/ceph-rgw.node01/keyring

Change the owner and group of the data directory
# chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-rgw.node01

Start the rgw service
# systemctl start ceph-radosgw@rgw.node01 ; systemctl enable ceph-radosgw@rgw.node01

Check
# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 26h)
    mgr: node01(active, since 44h), standbys: node03, node02
    mds: 3 up:standby
    osd: 6 osds: 6 up (since 44h), 6 in (since 44h)
    rgw: 3 daemons active (node01, node02, node03)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 219 objects, 1.2 KiB
    usage:   6.0 GiB used, 354 GiB / 360 GiB avail
    pgs:     128 active+clean

  io:
    client:   4.0 KiB/s rd, 0 B/s wr, 3 op/s rd, 2 op/s wr
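
In Nautilus, radosgw listens on port 7480 by default; assuming that default has not been changed, a quick functional check is to request the S3 endpoint, which should return an XML ListAllMyBucketsResult:

# curl http://node01:7480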

Deployment complete.

 

 

