Ceph Deployment Architecture
Three hosts, all running CentOS:
- node1: 192.168.122.157  ceph-deploy, mon, osd
- node2: 192.168.122.58   mon, osd
- node3: 192.168.122.54   mon, osd
Ceph version: Mimic
Preparation
1) Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
//Alternatively, leave the firewall running and open only the required ports: Ceph monitors communicate on port **6789** by default, and OSDs use ports in the **6800-7300** range
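If you prefer to keep firewalld running, the default ports mentioned above can be opened like this (a sketch for the firewalld case; adjust if you run a different firewall):

```shell
# Open Ceph's default ports instead of disabling the firewall entirely
firewall-cmd --permanent --add-port=6789/tcp        # monitor traffic
firewall-cmd --permanent --add-port=6800-7300/tcp   # OSD traffic
firewall-cmd --reload                               # apply the permanent rules
```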
vim /etc/selinux/config
SELINUX=disabled
Then reboot for the SELinux change to take effect
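If you want to continue without rebooting right away, SELinux can also be switched off for the running session:

```shell
# Put SELinux into permissive mode immediately; the config-file change
# above still makes the setting permanent after the next reboot
setenforce 0
getenforce    # should now report "Permissive"
```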
2) Edit the hosts file
vim /etc/hosts
192.168.122.157 node1
192.168.122.58 node2
192.168.122.54 node3
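The same entries are needed on every node. One way to avoid editing three files by hand is to copy node1's file out (a sketch; you will be prompted for passwords until the SSH keys from the next step are in place):

```shell
# Push the edited hosts file from node1 to the other nodes
for host in node2 node3; do
    scp /etc/hosts root@${host}:/etc/hosts
done
```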
3) Set up passwordless SSH login (example below)
[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub node2 //the password is required only this first time
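The key has to reach every node ceph-deploy will talk to, including node1 itself, since ceph-deploy connects over SSH even to the local host. A sketch of pushing it everywhere in one go:

```shell
# Distribute the public key to all three nodes
for host in node1 node2 node3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
done
</imports>
```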
4) Install the NTP service, then synchronize time
[root@node1 ~]# yum install ntp -y
[root@node1 ~]# systemctl start ntpd
[root@node1 ~]# systemctl enable ntpd
Have node2 and node3 sync their clocks from node1
Set up a cron job on each of them
[root@node2 ~]# crontab -e
*/5 * * * * ntpdate 192.168.122.157
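Before relying on the cron job, it is worth confirming that node2 and node3 can actually reach node1's ntpd:

```shell
# Query node1's NTP server without stepping the local clock;
# prints the current offset if the server is reachable
ntpdate -q 192.168.122.157
```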
Deployment
1) Install ceph-deploy on node1
[root@node1 ~]# yum install ceph-deploy -y
2) Add the Ceph repository
[root@node1 ~]# export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-mimic/el7/ //a mirror inside China is noticeably faster than the upstream repository
[root@node1 ~]# export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
[root@node1 ~]# vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
3) Create a cluster
[root@node1 ~]# cd /opt
[root@node1 opt]# mkdir cluster && cd cluster
[root@node1 cluster]# ceph-deploy new node1 node2 node3
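`ceph-deploy new` writes ceph.conf, ceph.mon.keyring, and a log file into the working directory. Optionally, the cluster's public network can be pinned in the generated ceph.conf before installing (the subnet below is inferred from the node IPs in this lab; adjust to your own network):

```shell
# Optional: declare the public network in the generated ceph.conf
echo "public network = 192.168.122.0/24" >> /opt/cluster/ceph.conf
```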
4) Install Ceph
[root@node1 cluster]# ceph-deploy install --release mimic node1 node2 node3
5) Create the monitors and push the admin key to every node
[root@node1 cluster]# ceph-deploy mon create-initial //when this finishes, the keyrings are generated in the current directory
[root@node1 cluster]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-rgw.keyring ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring ceph.client.admin.keyring ceph.log
ceph.bootstrap-osd.keyring ceph.conf ceph.mon.keyring
[root@node1 cluster]# ceph-deploy admin node1 node2 node3 //push the admin key to every node
6) Add OSDs
[root@node1 cluster]# ceph-deploy osd create node1 --data /dev/vdb
[root@node1 cluster]# ceph-deploy osd create node1 --data /dev/vdc
//Repeat for each remaining disk. Note that with Mimic, ceph-deploy creates BlueStore OSDs by default and consumes the whole device passed to --data; the old FileStore split of a data partition plus a journal partition no longer applies
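The per-disk commands above can be collapsed into a loop across all three nodes (the device names /dev/vdb and /dev/vdc are specific to this setup):

```shell
# Create one OSD per data disk on every node
for host in node1 node2 node3; do
    for dev in /dev/vdb /dev/vdc; do
        ceph-deploy osd create ${host} --data ${dev}
    done
done
```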
7) Add monitors on node2 and node3
[root@node1 cluster]# ceph-deploy mon add node2
[root@node1 cluster]# ceph-deploy mon add node3
8) Add the manager daemon
//Check the cluster status
[root@node1 cluster]# ceph status
  cluster:
    id:     4e1947bd-23ae-4828-ba5d-0e09779ced22
    health: HEALTH_WARN
            no active mgr
            clock skew detected on mon.node2, mon.node1
  services:
    mon: 3 daemons, quorum node3,node2,node1
    mgr: no daemons active
    osd: 6 osds: 6 up, 6 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
//Two warnings appear: the clock skew should clear on its own once NTP on node2/node3 converges, and "no active mgr" is fixed by deploying a manager daemon
[root@node1 cluster]# ceph-deploy mgr create node1
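With only one mgr, node1 is a single point of failure for monitoring and the dashboard. Optionally, standby managers can be added on the other nodes the same way:

```shell
# Optional: create standby mgr daemons on node2 and node3; one stays
# active while the others take over automatically if it fails
ceph-deploy mgr create node2 node3
```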
//Check the cluster again
[root@node1 cluster]# ceph -s
  cluster:
    id:     4e1947bd-23ae-4828-ba5d-0e09779ced22
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node3,node2,node1
    mgr: node1(active)
    osd: 6 osds: 6 up, 6 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 54 GiB / 60 GiB avail
    pgs:
At this point, the basic Ceph deployment is complete.