002 Deploying Ceph with ceph-deploy


Introduction: the previous post gave a brief overview of Ceph, but it was broad and not very concrete. This time we will use ceph-deploy to build a Ceph cluster and do some day-to-day operations on it, to gain a deeper understanding of how Ceph actually works.

1. Environment preparation

This test uses virtual machines with a minimal install of CentOS Linux release 7.6.1810 (Core).

1.1 Host plan:

Node    Role                 IP                CPU    Memory
ceph1   deploy/admin node    172.25.254.130    2C     4G
ceph2   Monitor, OSD         172.25.254.131    2C     4G
ceph3   OSD                  172.25.254.132    2C     4G
ceph4   OSD                  172.25.254.133    2C     4G
ceph5   client               172.25.254.134    2C     4G

1.2 Target setup

Ceph cluster: monitor, manager, 3 × OSD

1.3 Host preparation

Run all of the following on every node.

Set the hostname and install the necessary tools:

hostnamectl set-hostname cephN    #replace cephN with this node's name, e.g. ceph1
hostname cephN
yum install -y net-tools wget vim
yum update

Configure the Aliyun yum mirrors:

rm -f /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs.com/d' /etc/yum.repos.d/*.repo
echo '#Aliyun Ceph repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0
[ceph-source]
name=ceph-source
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS/
gpgcheck=0
#'>/etc/yum.repos.d/ceph.repo
yum clean all && yum makecache
yum install deltarpm
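All of the repo edits above have to happen on every node. One shortcut, assuming the other hosts are still reachable as root by IP, is to configure the repos once on ceph1 and copy them out (a sketch, not part of the original steps):

for ip in 131 132 133 134; do
  scp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/epel.repo /etc/yum.repos.d/ceph.repo root@172.25.254.${ip}:/etc/yum.repos.d/   #prompts for each node's root password
  ssh root@172.25.254.${ip} 'yum clean all && yum makecache'
done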

Configure time synchronization:

yum install ntp ntpdate ntp-doc
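Installing the packages alone does not keep the clocks in sync; the service still has to be started. A minimal follow-up (ntp.aliyun.com is just one reachable public NTP server):

ntpdate ntp.aliyun.com    #one-off sync so the initial offset is small
systemctl enable ntpd
systemctl start ntpd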

2. Deploy node preparation

2.1 Configure host names on the deploy node

[root@ceph1 ~]# vi /etc/hosts

172.25.254.130  ceph1
172.25.254.131  ceph2
172.25.254.132  ceph3
172.25.254.133  ceph4
172.25.254.134  ceph5
172.25.254.135  ceph6               
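The other nodes need the same name resolution for the ssh loops and ceph-deploy steps that follow; a simple way to get it there (assuming root password login still works) is to push the file out:

[root@ceph1 ~]# for i in {131..134}; do scp /etc/hosts root@172.25.254.${i}:/etc/hosts; done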

2.2 Configure passwordless login from the deploy node to all OSD nodes

[root@ceph1 ~]# useradd cephuser    #create a non-root user

[root@ceph1 ~]# echo redhat | passwd --stdin cephuser

[root@ceph1 ~]# for i in {2..5}; do echo "====ceph${i}====";ssh root@ceph${i} 'useradd -d /home/cephuser -m cephuser; echo "redhat" | passwd --stdin cephuser'; done #create the cephuser user on all remaining nodes

[root@ceph1 ~]# for i in {1..5}; do echo "====ceph${i}====";ssh root@ceph${i} 'echo "cephuser ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephuser'; done

[root@ceph1 ~]# for i in {1..5}; do echo "====ceph${i}====";ssh root@ceph${i} 'chmod 0440 /etc/sudoers.d/cephuser'; done
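A quick sanity check that the sudo rule really took effect on each node (illustrative; every block should print "root"):

[root@ceph1 ~]# for i in {1..5}; do echo "====ceph${i}===="; ssh root@ceph${i} 'su - cephuser -c "sudo -n whoami"'; done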

2.3 Disable SELinux and configure the firewall

[root@ceph1 ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

[root@ceph1 ~]# grep --color=auto '^SELINUX' /etc/selinux/config

SELINUX=permissive
SELINUXTYPE=targeted

Only the SELINUX= line needs to change; SELINUXTYPE keeps its default value (targeted).

[root@ceph1 ~]# setenforce 0

[root@ceph1 ~]# firewall-cmd --list-all

public (active)
target: default
icmp-block-inversion: no
interfaces: ens33
sources:
services: ssh dhcpv6-client
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

[root@ceph1 ~]# systemctl stop firewalld

[root@ceph1 ~]# systemctl disable firewalld

Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
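The SELinux and firewalld changes above were only applied on ceph1; the remaining nodes need the same treatment, for example with a loop like this (a sketch run from the deploy node):

[root@ceph1 ~]# for i in {2..5}; do echo "====ceph${i}===="; ssh root@ceph${i} "sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config; setenforce 0; systemctl stop firewalld; systemctl disable firewalld"; done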

 

2.4 Set up pip and SSH keys on the deploy node

[root@ceph1 ~]#  curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py

[root@ceph1 ~]#  python get-pip.py

[root@ceph1 ~]# su - cephuser

[cephuser@ceph1 ~]$ ssh-keygen -f ~/.ssh/id_rsa -N ''

[cephuser@ceph1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub cephuser@172.25.254.130

[cephuser@ceph1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub cephuser@172.25.254.131

[cephuser@ceph1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub cephuser@172.25.254.132

[cephuser@ceph1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub cephuser@172.25.254.133

[cephuser@ceph1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub cephuser@172.25.254.134

2.5 Edit the SSH config file on the deploy node

[cephuser@ceph1 ~]$ vi ~/.ssh/config

Host node1
  Hostname ceph1
  User cephuser
Host node2
  Hostname ceph2
  User cephuser
Host node3
  Hostname ceph3
  User cephuser
Host node4
  Hostname ceph4
  User cephuser
Host node5
  Hostname ceph5
  User cephuser

[cephuser@ceph1 ~]$ chmod 600 .ssh/config

Test:

[cephuser@ceph1 ~]$ ssh cephuser@ceph2

[cephuser@ceph2 ~]$ exit

3. Create the cluster

All of the following operations are performed on the deploy (admin) node.

[root@ceph1 ~]# su - cephuser

[cephuser@ceph1 ~]$ sudo yum install yum-plugin-priorities

[cephuser@ceph1 ~]$ sudo yum install ceph-deploy

3.1 Create the configuration directory

[cephuser@ceph1 ~]$ cd

[cephuser@ceph1 ~]$ mkdir my-cluster

[cephuser@ceph1 ~]$ cd my-cluster

3.2 Create the monitor node

[cephuser@ceph1 my-cluster]$ ceph-deploy new ceph2

[cephuser@ceph1 my-cluster]$ ll

-rw-rw-r--. 1 cephuser cephuser  197 Mar 14 23:13 ceph.conf
-rw-rw-r--. 1 cephuser cephuser 3166 Mar 14 23:13 ceph-deploy-ceph.log
-rw-------. 1 cephuser cephuser   73 Mar 14 23:13 ceph.mon.keyring

This directory now contains a Ceph configuration file, a monitor keyring, and a log file.
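For reference, the generated ceph.conf is quite small and looks roughly like this (the fsid and monitor address below are filled in from the cluster shown later in this post; yours will differ):

[global]
fsid = 2835ab5a-32fe-40df-8965-32bbe4991222
mon_initial_members = ceph2
mon_host = 172.25.254.131
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx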

3.3 Install Ceph

[cephuser@ceph1 my-cluster]$ ceph-deploy install ceph2 ceph3 ceph4 

[cephuser@ceph2 my-cluster]$ sudo ceph --version

ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)
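Note that although the Aliyun repo configured earlier points at luminous, `ceph-deploy install` by default rewrites the repo configuration on the target nodes, which is why mimic was pulled in here. If you want it to use the repos you already configured, it can be told not to touch them, for example:

[cephuser@ceph1 my-cluster]$ ceph-deploy install --no-adjust-repos ceph2 ceph3 ceph4    #keep the existing yum repos instead of adding the upstream ceph.com repo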

3.4 Initialize the monitor node

[cephuser@ceph1 my-cluster]$ ceph-deploy mon create-initial

[cephuser@ceph1 my-cluster]$ ll

-rw-------. 1 cephuser cephuser     71 Mar 14 23:29 ceph.bootstrap-mds.keyring
-rw-------. 1 cephuser cephuser     71 Mar 14 23:29 ceph.bootstrap-mgr.keyring
-rw-------. 1 cephuser cephuser     71 Mar 14 23:29 ceph.bootstrap-osd.keyring
-rw-------. 1 cephuser cephuser     71 Mar 14 23:29 ceph.bootstrap-rgw.keyring
-rw-------. 1 cephuser cephuser     63 Mar 14 23:29 ceph.client.admin.keyring
-rw-rw-r--. 1 cephuser cephuser    197 Mar 14 23:29 ceph.conf
-rw-rw-r--. 1 cephuser cephuser 310977 Mar 14 23:30 ceph-deploy-ceph.log
-rw-------. 1 cephuser cephuser     73 Mar 14 23:29 ceph.mon.keyring

3.5 Deploy the MGR (manager) daemons

[cephuser@ceph1 my-cluster]$ ceph-deploy mgr create ceph2 ceph3 ceph4

Note: this step can sometimes hang for a long time and fail with errors or other problems. After repeating the same operation several times, one attempt eventually succeeded; the root cause remains unclear.

[ceph4][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph4 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph4/keyring

[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.6.1810 Core

[ceph2][INFO ] Running command: sudo systemctl enable ceph.target

[ceph2][INFO ] Running command: sudo systemctl enable ceph-mon@ceph2

[ceph2][INFO ] Running command: sudo systemctl start ceph-mon@ceph2

[ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status

[ceph2][INFO ] monitor: mon.ceph2 is running

[ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph2.asok mon_status
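If the mgr step hangs or errors out, it is worth checking the mgr unit directly on the affected node to see whether the daemon actually started; a quick check might look like this (ceph4 was the slow one here):

[cephuser@ceph1 my-cluster]$ ssh ceph4 sudo systemctl status ceph-mgr@ceph4
[cephuser@ceph1 my-cluster]$ ssh ceph4 sudo journalctl -u ceph-mgr@ceph4 -n 20 --no-pager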

3.6 Distribute the configuration files

[cephuser@ceph1 my-cluster]$ ceph-deploy admin ceph2 ceph3 ceph4 

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

3.7 Add OSDs

List all disks:

[cephuser@ceph1 my-cluster]$ ceph-deploy disk list ceph2 ceph3 ceph4

[ceph2][INFO  ] Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
[ceph2][INFO  ] Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
[ceph2][INFO  ] Disk /dev/mapper/centos-root: 18.2 GB, 18249416704 bytes, 35643392 sectors
[ceph2][INFO  ] Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
……

[cephuser@ceph1 my-cluster]$ ceph-deploy osd create --data /dev/sdb ceph2

[ceph_deploy.osd][DEBUG ] Host ceph4 is now ready for osd use

[cephuser@ceph1 my-cluster]$ ceph-deploy osd create --data /dev/sdb ceph3

[ceph3][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph3][INFO  ] checking OSD status...
[ceph3][DEBUG ] find the location of an executable
[ceph3][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json

[cephuser@ceph1 my-cluster]$ ceph-deploy osd create --data /dev/sdb ceph4

[ceph4][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph4][INFO  ] checking OSD status...
[ceph4][DEBUG ] find the location of an executable
[ceph4][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json

The same thing happened here: the first OSD was added normally, but the second one failed even though the output eventually claimed success. Later checks showed only the first OSD, while the cluster status still reported healthy. My guess is that the virtual machines are simply under too much CPU and memory pressure and need time. To play it safe, after adding the first OSD I let the VMs sit for a few hours before adding the second one, to see whether it would go through.
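If a retry is needed later, one commonly suggested recovery path is to wipe the half-initialized disk on the stuck node first and then rerun the create (a sketch; note that this destroys everything on /dev/sdb):

[cephuser@ceph1 my-cluster]$ ceph-deploy disk zap ceph3 /dev/sdb          #remove any partial LVM/BlueStore labels left by the failed attempt
[cephuser@ceph1 my-cluster]$ ceph-deploy osd create --data /dev/sdb ceph3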

3.8 Check the cluster status:

[cephuser@ceph1 my-cluster]$ ssh ceph2 sudo ceph health

HEALTH_OK

[cephuser@ceph1 my-cluster]$ ssh ceph2 sudo ceph -s

cluster:
    id:     2835ab5a-32fe-40df-8965-32bbe4991222
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph2
    mgr: ceph2(active)
    osd: 1 osds: 1 up, 1 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   1.0 GiB used, 19 GiB / 20 GiB avail
    pgs:     

The status above shows only one OSD even though three disks were added. The error from ceph3 is shown below; the ceph4 node hit the same problem.

[ceph3][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph3][INFO  ] checking OSD status...
[ceph3][DEBUG ] find the location of an executable
[ceph3][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json

According to what I could find, this is supposed to be a mirror/repository problem, but that did not resolve it here; I will continue with the next experiments and come back to it later.

This effectively leaves a single-node Ceph cluster running on ceph2; the remaining experiments continue on that cluster.
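With only one OSD, any pool created later with the default replica count of 3 will never reach active+clean. For a throwaway test cluster like this one, the defaults can be lowered in ceph.conf and pushed out; a sketch (not something to do on a production cluster):

[cephuser@ceph1 my-cluster]$ ssh ceph2 sudo ceph osd tree                      #confirm which OSDs actually joined
[cephuser@ceph1 my-cluster]$ echo "osd pool default size = 1" >> ceph.conf
[cephuser@ceph1 my-cluster]$ echo "osd pool default min size = 1" >> ceph.conf
[cephuser@ceph1 my-cluster]$ ceph-deploy --overwrite-conf config push ceph2
[cephuser@ceph1 my-cluster]$ ssh ceph2 sudo systemctl restart ceph-mon@ceph2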

3.9 Enable the dashboard

[root@ceph2 ~]# ceph mgr module enable dashboard

[root@ceph2 ~]# ceph dashboard create-self-signed-cert

Self-signed certificate created

[root@ceph2 ~]# ceph dashboard set-login-credentials admin admin

Username and password updated

 

[root@ceph2 ~]# ceph mgr services

{
    "dashboard": "https://ceph2:8443/"
}
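By default the mimic dashboard listens on all addresses on port 8443. If a different bind address or port is needed, the mgr config keys can be changed and the module reloaded, roughly like this:

[root@ceph2 ~]# ceph config set mgr mgr/dashboard/server_addr 172.25.254.131
[root@ceph2 ~]# ceph config set mgr mgr/dashboard/server_port 8443
[root@ceph2 ~]# ceph mgr module disable dashboard
[root@ceph2 ~]# ceph mgr module enable dashboard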

Stop the firewall, then test access from a browser.

[root@ceph2 ~]# systemctl stop firewalld

 

3.10 Purge all configuration

[cephuser@ceph1 my-cluster]$ ceph-deploy purge ceph1 ceph2 ceph3 ceph4 ceph5

[cephuser@ceph1 my-cluster]$ ceph-deploy purgedata ceph1 ceph2 ceph3 ceph4 ceph5

[cephuser@ceph1 my-cluster]$ ceph-deploy forgetkeys

[cephuser@ceph1 my-cluster]$ rm ceph.*

Problems encountered:

[cephuser@ceph1 my-cluster]$ ceph-deploy install ceph1 ceph2 ceph3 ceph4 ceph5

Delta RPMs disabled because /usr/bin/applydeltarpm not installed

Fix:

[cephuser@ceph1 my-cluster]$ yum provides '*/applydeltarpm'

[cephuser@ceph1 my-cluster]$ sudo yum install deltarpm

[cephuser@ceph1 my-cluster]$ ceph-deploy mon create-initial

This also hit errors:

Most of the errors above came down to hardware and network problems. I ended up resetting the environment a dozen times or so; very painful!

References:

https://willireamangel.github.io/2018/06/07/Ceph%E9%83%A8%E7%BD%B2%E6%95%99%E7%A8%8B%EF%BC%88luminous%EF%BC%89/

https://www.cnblogs.com/itzgr/p/10275863.html

