Ceph Deployment Guide (CentOS 7.3)
Introduction to Ceph
Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
Deployment logical architecture

Prepare three hosts and set their hostnames (hostnamectl set-hostname xxx, then reboot):
IP address      Hostname
192.168.1.24    node1 (also serves as the admin and monitor node)
192.168.1.25    node2 (osd.0 node)
192.168.1.26    node3 (osd.1 node)
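For example, a minimal sketch of setting the hostnames from the table above (run each command on the matching host, then reboot):
// On 192.168.1.24
# sudo hostnamectl set-hostname node1
// On 192.168.1.25
# sudo hostnamectl set-hostname node2
// On 192.168.1.26
# sudo hostnamectl set-hostname node3
// Reboot each host afterwards
# sudo reboot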
Install and enable the required software repositories on each node, and enable the optional repositories:
# sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
# sudo yum install yum-plugin-priorities
On node1, edit the /etc/hosts file and add the following entries:
192.168.1.24 node1
192.168.1.25 node2
192.168.1.26 node3
Create a deployment user on each of the three nodes and grant it root privileges (ytcwd is used here).
Run:
# sudo useradd -d /home/ytcwd -m ytcwd
# sudo passwd ytcwd (enter a password; using the same password on all three servers is recommended)
// Grant passwordless sudo privileges
#echo "ytcwd ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/ytcwd
#sudo chmod 0440 /etc/sudoers.d/ytcwd
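To confirm that the passwordless sudo setup works, you can switch to the new user and run a harmless command; this is only an optional sanity check, not part of the original steps:
# su - ytcwd
$ sudo whoami
// Should print "root" without prompting for a password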
Allow passwordless SSH login. Because ceph-deploy does not prompt for passwords, you must generate an SSH key pair on the admin node and distribute the public key to each Ceph node (ceph-deploy will also try to generate SSH keys for the initial monitors). Generate the key pair as the user you created; do not use sudo or root.
# ssh-keygen (when prompted "Enter passphrase", just press Enter to leave the passphrase empty, as shown below)
Generating public/private key pair.
Enter file in which to save the key (/home/ytcwd/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ytcwd/.ssh/id_rsa.
Your public key has been saved in /home/ytcwd/.ssh/id_rsa.pub.
// Copy the public key to each Ceph node
#ssh-copy-id ytcwd@node1
#ssh-copy-id ytcwd@node2
#ssh-copy-id ytcwd@node3
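A quick way to verify the key-based login (assuming you are the ytcwd user on node1) is to run a trivial remote command on each node; no password prompt should appear:
$ ssh ytcwd@node2 hostname
$ ssh ytcwd@node3 hostname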
On the admin node node1, edit the ~/.ssh/config file (create it if it does not exist) and add the following:
Host node1
Hostname 192.168.1.24
User ytcwd
Host node2
Hostname 192.168.1.25
User ytcwd
Host node3
Hostname 192.168.1.26
User ytcwd
Install ntp (to prevent failures caused by clock skew) and openssh-server on each node:
#sudo yum install ntp ntpdate ntp-doc
#sudo yum install openssh-server
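On CentOS 7 the ntp daemon is not started automatically after installation, so a reasonable follow-up (an extra step, not in the original text) is to sync once and then enable the service on every node:
// One-time sync before starting the daemon
# sudo ntpdate time.nist.gov
# sudo systemctl enable ntpd
# sudo systemctl start ntpd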
On each node, configure the firewall to open the required port, configure SELinux, and update the system:
#sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
// Or simply stop and disable the firewall
#sudo systemctl stop firewalld
#sudo systemctl disable firewalld
// Disable SELinux
#sudo vim /etc/selinux/config
Set SELINUX=disabled
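Editing /etc/selinux/config only takes effect after a reboot; to stop enforcement in the running system immediately you can additionally run (a common companion step, added here as a suggestion):
// Switch SELinux to permissive mode right away
# sudo setenforce 0
// Verify the current mode
# getenforce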
Create the Ceph yum repository on each node (the jewel release is used here; the NetEase (163) or Aliyun mirrors are recommended, because the official upstream repository is very slow and downloads frequently fail or get interrupted, which I ran into myself).
Create ceph.repo under /etc/yum.repos.d/ and write the following content:
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
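After writing ceph.repo, it can help to refresh the yum metadata so the new mirror is picked up; a small optional check (assuming the 163 mirror configured above):
# sudo yum clean all
# sudo yum makecache
// Confirm the Ceph repositories are listed
# yum repolist | grep -i ceph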
Prepare the installation on the admin node node1 (as the ytcwd user).
// Create the ceph-cluster directory
$cd ~
$mkdir ceph-cluster
$cd ceph-cluster
// Install ceph-deploy
# sudo yum install ceph-deploy
// If you run into trouble after installing Ceph, the following commands remove the packages and wipe the configuration
#ceph-deploy purge node1 node2 node3
#ceph-deploy purgedata node1 node2 node3
#ceph-deploy forgetkeys
Install Ceph and create the cluster
// In the ceph-cluster directory created above, run:
#ceph-deploy new node1 node2 node3
// Add the following to the generated ceph.conf (under the [global] section)
osd pool default size = 2
// If you have multiple network interfaces, you can also add the public network to the [global] section of the Ceph configuration file
#public network = {ip-address}/{netmask}
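Putting these settings together, the [global] section of ceph.conf might look like the sketch below. The fsid, mon_initial_members and mon_host lines are generated by ceph-deploy new; only the last two lines are added by hand, and the 192.168.1.0/24 network is an assumption based on the node addresses used in this guide:
[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.24,192.168.1.25,192.168.1.26
osd pool default size = 2
public network = 192.168.1.0/24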
// Install Ceph
#ceph-deploy install node1 node2 node3
// Configure the initial monitor(s) and gather all the keys
# ceph-deploy mon create-initial
Create the OSDs
// To add two OSDs, log in to the Ceph nodes and create a directory for each OSD daemon.
#ssh node2
#sudo mkdir /var/local/osd0
#exit
#ssh node3
#sudo mkdir /var/local/osd1
#exit
// Then, from the admin node, run ceph-deploy to prepare the OSDs
#ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
// Finally, activate the OSDs
#ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
// Make sure you have the correct permissions on ceph.client.admin.keyring
#sudo chmod +r /etc/ceph/ceph.client.admin.keyring
// Check the cluster's health
# ceph health
Once peering has completed, the cluster should reach the active + clean state.
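Besides ceph health, a couple of other read-only commands are useful for confirming the cluster state (standard Ceph CLI, shown here as an optional check):
// Overall cluster status
# ceph -s
// OSD layout and up/in state
# ceph osd tree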
Solutions to problems that may come up during installation:
Errors when running ceph-deploy install node1 node2 node3:
(1) [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install epel-release
Solution:
#yum -y remove ceph-release
(2) [admin-node][WARNIN] Another app is currently holding the yum lock; waiting for it to exit...
Solution:
#rm -f /var/run/yum.pid
Errors when running ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1:
(1) If you see an error like: ceph-disk: Error: No cluster conf found in /etc/ceph with fsid c5bf8efd-2aea-4e32-85ca-983f1e5b18e7
Solution:
This is caused by the fsid in that node's configuration file not matching; change the fsid on the affected node so that it is identical to the fsid on the admin node.
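A quick way to spot the mismatch (a sketch, assuming the default /etc/ceph/ceph.conf path) is to compare the fsid on the admin node with the one on the failing node, and push the admin node's configuration out again if they differ:
// On the admin node
# grep fsid /etc/ceph/ceph.conf
// On the affected node
# ssh node2 grep fsid /etc/ceph/ceph.conf
// If they differ, redistribute the admin node's ceph.conf
# ceph-deploy --overwrite-conf config push node2 node3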
(2) If you see [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd0
Solution:
Grant permissions on /var/local/osd0/ and /var/local/osd1/ on the corresponding nodes,
as follows:
chmod 777 /var/local/osd0/
chmod 777 /var/local/osd0/*
chmod 777 /var/local/osd1/
chmod 777 /var/local/osd1/*
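If activation still fails with a permission error on jewel, where the OSD daemon runs as the ceph user, it may also help to hand ownership of the directories to that user (an additional suggestion, not in the original steps):
# sudo chown -R ceph:ceph /var/local/osd0
# sudo chown -R ceph:ceph /var/local/osd1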
When your Ceph cluster shows a status like the following:
$ ceph -s
cluster 3a4399c0-2458-475f-89be-ff961fbac537
health HEALTH_WARN clock skew detected on mon.1, mon.2
monmap e17: 3 mons at {0=192.168.0.5:6789/0,1=192.168.0.6:6789/0,2=192.168.0.7:6789/0}, election epoch 6, quorum 0,1,2 0,1,2
mdsmap e39: 0/0/1 up
osdmap e127: 3 osds: 3 up, 3 in
pgmap v280: 576 pgs, 3 pools, 0 bytes data, 0 objects
128 MB used, 298 GB / 298 GB avail
576 active+clean
$ ceph health detail
HEALTH_WARN clock skew detected on mon.1, mon.2
mon.1 addr 192.168.0.6:6789/0 clock skew 8.37274s > max 0.05s (latency 0.004945s)
mon.2 addr 192.168.0.7:6789/0 clock skew 8.52479s > max 0.05s (latency 0.005965s)
This means that time synchronization between the nodes has drifted.
A simple fix:
1) Stop the ntpd service on all nodes, if it is running
$ sudo systemctl stop ntpd
2) Sync against a public time source
$ ntpdate time.nist.gov
3) If the warning persists after the two steps above, restart all the monitors.
Another approach is to reconfigure the ntp service properly.
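One common way to reconfigure ntp is to let all monitor nodes sync from a single local source, for example treating node1 as the NTP server and pointing node2 and node3 at it; the snippet below is only a sketch of /etc/ntp.conf on node2 and node3 under that assumption:
// In /etc/ntp.conf on node2 and node3, replace the default "server" lines with
server 192.168.1.24 iburst
// then restart the service on those nodes
# sudo systemctl restart ntpd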
Monitor failure:
Symptom: health HEALTH_WARN 1 mons down, quorum 0,1 ceph-mon1,ceph-mon2
Solution:
Rebuild the failed mon:
[root@TDXY-ceph-01 ~]# ceph -s
cluster 00000000-0000-0000-0001-000000000010
health HEALTH_WARN 23 pgs degraded; 41 pgs peering; 31 pgs stale; 12 pgs stuck inactive; 24 pgs stuck unclean; recovery 7/60 objects degraded (11.667%); too few pgs per osd (4 < min 20); 4/45 in osds are down; 1 mons down, quorum 0,1,2,3 TDXY-ceph-02,TDXY-ceph-04,TDXY-ceph-05,TDXY-ceph-07
monmap e1: 5 mons at {TDXY-ceph-01=0.0.0.0:0/1,TDXY-ceph-02=10.10.120.12:6789/0,TDXY-ceph-04=10.10.120.14:6789/0,TDXY-ceph-05=10.10.120.15:6789/0,TDXY-ceph-07=10.10.120.17:6789/0}, election epoch 38, quorum 0,1,2,3 TDXY-ceph-02,TDXY-ceph-04,TDXY-ceph-05,TDXY-ceph-07
[root@TDXY-ceph-01 ~]# ceph mon remove TDXY-ceph-01
[root@TDXY-ceph-01 ~]# rm -rf /var/lib/ceph/mon/ceph-TDXY-ceph-01
[root@TDXY-ceph-01 ~]# ceph-mon --mkfs -i TDXY-ceph-01 --keyring /etc/ceph/ceph.mon.keyring
[root@TDXY-ceph-01 ~]# touch /var/lib/ceph/mon/ceph-TDXY-ceph-01/done
[root@TDXY-ceph-01 ~]# touch /var/lib/ceph/mon/ceph-TDXY-ceph-01/sysvinit
[root@TDXY-ceph-01 ~]# service ceph start mon
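After the monitor is back, you can confirm that it has rejoined the quorum with the standard status commands (shown here only as a verification step):
[root@TDXY-ceph-01 ~]# ceph mon stat
[root@TDXY-ceph-01 ~]# ceph -s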
