Manual Deployment of Ceph Nautilus


Deployment Approach

 

Reference: https://docs.ceph.com/en/nautilus/install/manual-deployment/#

Basic Environment

Hostname   OS          Ceph Version               IP               Roles
node01     CentOS 8.2  14.2.11 nautilus (stable)  10.0.12.100/24   mon, mgr, osd, mds, rgw
node02     (same)      (same)                     10.0.12.101/24   mon, mgr, osd, mds, rgw
node03     (same)      (same)                     10.0.12.102/24   mon, mgr, osd, mds, rgw

1. Basic Environment Configuration

These steps must be performed on all nodes.

1.1 Configure hostnames and static name resolution

# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.12.100 node01
10.0.12.101 node02
10.0.12.102 node03
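
To push the same hosts file to the other nodes, a minimal sketch (assuming it was edited on node01; if key-based login is not yet set up, scp will prompt for the root password):

# for n in node02 node03; do scp /etc/hosts root@$n:/etc/hosts; done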

 

1.2 Disable the firewall and SELinux

Disable firewalld
# systemctl disable --now firewalld

Disable SELinux
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
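
If the firewall must stay enabled, a hedged alternative is to open the ports Ceph uses instead of disabling firewalld (the port list is an assumption based on Ceph defaults: 3300/6789 for mon, 6800-7300 for osd/mgr/mds, 7480 for the default rgw frontend):

# firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp --add-port=6800-7300/tcp --add-port=7480/tcp
# firewall-cmd --reload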

 

1.3 Configure time synchronization

# dnf install chrony

Server-side configuration:
    # vim /etc/chrony.conf
    Change "server 0.centos.pool.ntp.org iburst" to:
          server <Server IP> iburst
    Delete the other default "server ... iburst" lines
    Uncomment the following line and change the subnet to the management network of the primary node:
          allow 10.0.12.0/24
    Uncomment the following line:
          local stratum 10
    # systemctl restart chronyd
    # systemctl enable chronyd

Client-side configuration:
    # vim /etc/chrony.conf
    Change "server 0.centos.pool.ntp.org iburst" to:
          server <Server IP> iburst
    # systemctl restart chronyd
    # systemctl enable chronyd

 

Check that the configuration works

# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* node01                       10  10   377   19h     +0ns[+7141ns] +/- 9893ns
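
For a more detailed view of the current offset, stratum, and sync source on any node, chronyc also offers:

# chronyc tracking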

 

1.4 Configure the Ceph repository

Back up the CentOS base repo
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

Configure the Aliyun mirror
# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-8.repo

Install the EPEL repo
# dnf install -y epel-release

Configure the Ceph repo
# cat >/etc/yum.repos.d/ceph.repo <<EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el8/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF

Rebuild the dnf metadata cache
# dnf clean all
# dnf makecache

 

1.5 Configure passwordless SSH from the deployment node to the other nodes

# ssh-keygen -t rsa
# ssh-copy-id root@node01
# ssh-copy-id root@node02
# ssh-copy-id root@node03
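
A quick check that key-based login works from the deployment node:

# for n in node01 node02 node03; do ssh root@$n hostname; done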

 

1.6 Install Ceph

# dnf install ceph ceph-radosgw
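
Verify the installed version on each node; it should report 14.2.11 nautilus, as listed in the environment table above:

# ceph --version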

 

2. MON Deployment

2.1 Create the Ceph configuration file

Generate a UUID to use as the cluster fsid

# uuidgen
78a3f9bf-1f45-4941-873f-749616d2ae65

 

Create the initial Ceph configuration file

# vim /etc/ceph/ceph.conf
[global]
fsid = 78a3f9bf-1f45-4941-873f-749616d2ae65
mon_initial_members = node01,node02,node03
mon_host = 10.0.12.100,10.0.12.101,10.0.12.102
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
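
Optionally, further settings can be added to the [global] section at this point. A hedged sketch (the values below are assumptions based on this lab's 10.0.12.0/24 network and common defaults, not part of the original configuration):

public_network = 10.0.12.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 2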

 

2.2 Create the cluster mon. keyring

mon. is the cluster's first user; it can be used to create the admin user (admin here means client.admin).

# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring

 

Background
When the cluster starts, the monitors come up first and the other services follow. A mon starts with its own keyring file and does not authenticate against any other process; it will start even if it carries the wrong key.
The mon database stores the keys of every user except mon.. Only once the mons are running does authentication actually take effect: every other user that wants to connect must reach the cluster via the fsid and the MON IPs and pass authentication before it can access the cluster. This also means that if the admin key is lost, it can be recovered through the mon. user:

# ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-node02/keyring
2020-09-17 10:36:58.697 7f9fa6ef7700 -1 auth: unable to find a keyring on /etc/ceph/ceph.mon..keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2020-09-17 10:36:58.697 7f9fa6ef7700 -1 AuthRegistry(0x7f9fa0082f68) no keyring found at /etc/ceph/ceph.mon..keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
exported keyring for client.admin
[client.admin]
        key = AQDKVGBflsvcHBAArNV8xcoJYWfaKed+hHvzmw==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

 

Ceph keyring relationships (diagram omitted)

 

2.3 Create the admin user keyring

The admin user is normally used to operate the cluster.

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring

 

2.4 Create the bootstrap-osd keyring

bootstrap users are used to bootstrap users of the corresponding type; bootstrap-osd bootstraps OSD users such as osd.0 and osd.1.
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring

 

2.5 Import the generated keys into the mon keyring

A single keyring file can hold multiple users' keys. Here the admin and bootstrap-osd keyrings are added to the mon keyring so their keys can be imported into the cluster during initialization.
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /tmp/ceph.mon.keyring
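
To confirm that the mon., client.admin, and client.bootstrap-osd keys all ended up in the combined keyring, list its contents:

# ceph-authtool -l /tmp/ceph.mon.keyring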

 

2.6 Change the owner and group of the mon keyring to ceph

# chown ceph:ceph /tmp/ceph.mon.keyring

 

2.7 Create the mon map

# monmaptool --create --add node01 10.0.12.100 --add node02 10.0.12.101 --add node03 10.0.12.102 --fsid 78a3f9bf-1f45-4941-873f-749616d2ae65 /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to 78a3f9bf-1f45-4941-873f-749616d2ae65
monmaptool: writing epoch 0 to /tmp/monmap (3 monitors)

 

The mon map records the cluster's mon information.
To inspect it (only possible once the mons are deployed):

# ceph mon getmap -o monmap.bin
got monmap epoch 2

# monmaptool --print monmap.bin
monmaptool: monmap file monmap.bin
epoch 2
fsid 78a3f9bf-1f45-4941-873f-749616d2ae65
last_changed 2020-09-15 15:17:56.345585
created 2020-09-15 14:47:17.159010
min_mon_release 14 (nautilus)
0: [v2:10.0.12.100:3300/0,v1:10.0.12.100:6789/0] mon.node01
1: [v2:10.0.12.101:3300/0,v1:10.0.12.101:6789/0] mon.node02
2: [v2:10.0.12.102:3300/0,v1:10.0.12.102:6789/0] mon.node03

 

2.8 Copy the mon map, mon keyring, admin keyring, and ceph.conf to the other mon nodes

# scp /tmp/monmap root@node02:/tmp/
# scp /tmp/ceph.mon.keyring root@node02:/tmp/
# scp -r /etc/ceph/ root@node02:/etc/ 

Send the same files to node03 in the same way (a loop version is sketched below).
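
A compact equivalent for both remaining nodes (a minimal sketch):

# for n in node02 node03; do scp /tmp/monmap /tmp/ceph.mon.keyring root@$n:/tmp/; scp -r /etc/ceph/ root@$n:/etc/; done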

 

2.9 Create the mon data directory

This directory stores the mon's data.

# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node01


Note: repeat steps 2.9 through 2.12 on node02 and node03, changing only the hostname. For node02, for example, this step becomes:
# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node02

 

2.10 Initialize the mon data directory, building the files the mon daemon needs at runtime

# sudo -u ceph ceph-mon --mkfs -i node01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# ls /var/lib/ceph/mon/ceph-node01
keyring  kv_backend  store.db

 

2.11 Start the mon service

# systemctl start ceph-mon@node01
# systemctl enable ceph-mon@node01
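
For reference, the equivalent of steps 2.9-2.11 on node02 (only the hostname changes, as noted in 2.9; re-running the chown from 2.6 is included here as a precaution in case scp left /tmp/ceph.mon.keyring owned by root):

# chown ceph:ceph /tmp/ceph.mon.keyring
# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node02
# sudo -u ceph ceph-mon --mkfs -i node02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# systemctl start ceph-mon@node02
# systemctl enable ceph-mon@node02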

 

2.12 Check the cluster status

# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_WARN
            3 monitors have not enabled msgr2

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 6s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

2.13 Enable msgr2

# ceph mon enable-msgr2

# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 2s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

 

3. MGR Deployment

On node01 (node02 and node03 are similar):

Create the mgr data directory
# sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-node01

Create the mgr user and write its key into the data directory
# ceph auth get-or-create mgr.node01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node01/keyring

Start the mgr service (repeat on node02 and node03)
# systemctl restart ceph-mgr@node01 ; systemctl enable ceph-mgr@node01

Check that the mgr daemons came up
# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 25m)
    mgr: node01(active, since 5m), standbys: node03, node02
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
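
To see which manager modules are enabled on the active mgr (useful before enabling extras such as the dashboard):

# ceph mgr module ls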

 

4. Add OSDs

4.1 Copy the bootstrap-osd keyring to the other nodes

# scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node02:/var/lib/ceph/bootstrap-osd/
# scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node03:/var/lib/ceph/bootstrap-osd/

 

4.2 Create OSDs

In this lab environment the data, wal, and db share the same disk
# ceph-volume lvm create --data /dev/sdb
# ceph-volume lvm create --data /dev/sdc

Check the cluster's OSD status
# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.35156 root default
-3       0.11719     host node01
 0   hdd 0.05859         osd.0       up  1.00000 1.00000
 3   hdd 0.05859         osd.3       up  1.00000 1.00000
-5       0.11719     host node02
 1   hdd 0.05859         osd.1       up  1.00000 1.00000
 4   hdd 0.05859         osd.4       up  1.00000 1.00000
-7       0.11719     host node03
 2   hdd 0.05859         osd.2       up  1.00000 1.00000
 5   hdd 0.05859         osd.5       up  1.00000 1.00000

Data, wal, and db on the same disk:
# ceph-volume lvm create --data /dev/sdb
Data separate from wal/db, with wal and db on the same device:
# ceph-volume lvm create --data /dev/sdb --block.db /dev/sdx
Data, wal, and db each on separate devices:
# ceph-volume lvm create --data /dev/sdb --block.db /dev/sdx --block.wal /dev/sdy
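
To see how ceph-volume laid out the logical volumes and how much space each OSD is using:

# ceph-volume lvm list
# ceph osd df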

 

5. Add MDS

On node01 (node02 and node03 are similar):

Create the MDS data directory
# sudo -u ceph mkdir -p /var/lib/ceph/mds/ceph-node01

Create the mds user
# ceph auth get-or-create mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds"

Fetch the mds user's key and write it into the data directory
# ceph auth get mds.node01 | tee /var/lib/ceph/mds/ceph-node01/keyring

Add the section to the configuration file
# vim /etc/ceph/ceph.conf
…
[mds.node01]
host = node01
…

Start the daemon
# systemctl restart ceph-mds@node01 ; systemctl enable ceph-mds@node01

Check
# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 15m)
    mgr: node01(active, since 18h), standbys: node03, node02
    mds: 3 up:standby
    osd: 6 osds: 6 up (since 18h), 6 in (since 18h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 354 GiB / 360 GiB avail
    pgs:
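
All three MDS daemons report up:standby because no CephFS filesystem exists yet. A hedged sketch of creating one (pool names and PG counts are illustrative choices, not part of the original deployment):

# ceph osd pool create cephfs_data 64
# ceph osd pool create cephfs_metadata 64
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs status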

 

6. Add RGW

Create the rgw data directory
# mkdir -p /var/lib/ceph/radosgw/ceph-rgw.node01

Create the rgw user
# ceph auth get-or-create client.rgw.node01 osd "allow rwx" mon "allow rw"

Fetch the rgw user's key and write it into the data directory
# ceph auth get client.rgw.node01 | tee /var/lib/ceph/radosgw/ceph-rgw.node01/keyring

Change the owner and group of the data directory
# chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-rgw.node01

Start the rgw service
# systemctl start ceph-radosgw@rgw.node01 ; systemctl enable ceph-radosgw@rgw.node01

Check
# ceph -s
  cluster:
    id:     78a3f9bf-1f45-4941-873f-749616d2ae65
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 26h)
    mgr: node01(active, since 44h), standbys: node03, node02
    mds: 3 up:standby
    osd: 6 osds: 6 up (since 44h), 6 in (since 44h)
    rgw: 3 daemons active (node01, node02, node03)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 219 objects, 1.2 KiB
    usage:   6.0 GiB used, 354 GiB / 360 GiB avail
    pgs:     128 active+clean

  io:
    client:   4.0 KiB/s rd, 0 B/s wr, 3 op/s rd, 2 op/s wr
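
A quick check that the gateways respond (assuming the default nautilus frontend port 7480; adjust if rgw_frontends is customized):

# curl http://node01:7480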

Deployment complete.

 

 

