Deploying Ceph on CentOS 7 with ceph-deploy


Preparation

Prepare the yum repositories

Remove the default repositories; the overseas mirrors are slow

yum clean all
rm -rf /etc/yum.repos.d/*.repo

Download the Alibaba Cloud base repository

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Download the Alibaba Cloud EPEL repository

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Pin the release version in the repo file to 7.3.1611; the yum repositories for the CentOS point release currently in use may already have been emptied

sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7.3.1611/g' /etc/yum.repos.d/CentOS-Base.repo
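The `$releasever` substitution can be sanity-checked on a scratch file before touching the real repo files; a minimal sketch (the sample baseurl line is made up for illustration):

```shell
# Apply the same release pinning to a scratch copy and inspect the result.
# In single quotes, $releasever is passed to sed literally; since $ is only
# special at the end of a BRE pattern, sed matches the literal string.
tmp=$(mktemp)
echo 'baseurl=http://mirrors.aliyun.com/centos/$releasever/os/x86_64/' > "$tmp"
sed -i 's/$releasever/7.3.1611/g' "$tmp"
cat "$tmp"    # the URL now contains 7.3.1611 instead of $releasever
rm -f "$tmp"
```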

Add the Ceph repository

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1

Prepare the system configuration

Set up the /etc/hosts file on the deploy host

 

192.168.0.39 ceph-admin
192.168.0.40 mon1
192.168.0.41 osd1
192.168.0.42 osd2
192.168.0.43 osd3

 

Edit the ~/.ssh/config file on the deploy host

Host ceph-admin
   Hostname ceph-admin
   User cephuser
Host mon1
   Hostname mon1
   User cephuser
Host osd1
   Hostname osd1
   User cephuser
Host osd2
   Hostname osd2
   User cephuser
Host osd3
   Hostname osd3
   User cephuser
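The five stanzas above all follow one pattern, so the file can also be generated with a loop; a sketch writing to a scratch path rather than the real ~/.ssh/config:

```shell
# Emit one Host stanza per node; cfg is a temporary file for illustration.
cfg=$(mktemp)
for host in ceph-admin mon1 osd1 osd2 osd3; do
  printf 'Host %s\n   Hostname %s\n   User cephuser\n' "$host" "$host" >> "$cfg"
done
cat "$cfg"
rm -f "$cfg"
```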

 

Fix the file permissions

chmod 644 ~/.ssh/config

Add a user

useradd -d /home/cephuser -m cephuser
passwd cephuser

Make sure the new user has passwordless sudo privileges

echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
chmod 0440 /etc/sudoers.d/cephuser
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers

 

Allow the deploy host to access the other nodes without a password

su - cephuser
ssh-keygen
ssh-copy-id ceph-admin
ssh-copy-id mon1
ssh-copy-id osd1
ssh-copy-id osd2
ssh-copy-id osd3
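The key distribution can equally be written as a loop; echoed here so the sketch is safe to run before the nodes are reachable (drop the echo for the real run):

```shell
# Print the ssh-copy-id command for each node in turn.
for host in ceph-admin mon1 osd1 osd2 osd3; do
  echo ssh-copy-id "$host"   # remove 'echo' to actually copy the key
done
```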

 

 

 

Install the NTP service

yum install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org
hwclock --systohc
systemctl enable ntpd.service
systemctl start ntpd.service

Disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

 

Disable the firewall

ssh root@ceph-admin
systemctl stop firewalld
systemctl disable firewalld

Prepare the disks

Note: do not use disks that are too small for testing, otherwise adding the OSDs later will fail; 20 GB or larger is recommended.

Check the disk

sudo fdisk -l /dev/vdb

Partition and format the disk

sudo parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs /dev/vdb -f

 

Check the filesystem type

sudo blkid -o value -s TYPE /dev/vdb

Deployment

Install ceph-deploy

sudo yum update -y && sudo yum install ceph-deploy -y

Create the cluster directory

su - cephuser
mkdir cluster
cd cluster/

Create the cluster

ceph-deploy new mon1

Edit the ceph.conf file

vim ceph.conf
# Your network address
public network = 192.168.0.0/24
osd pool default size = 3
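The same edit can be made non-interactively with a heredoc; sketched here against a scratch file (on the real system this would be ./ceph.conf in the cluster directory):

```shell
# Append the public network and default replica count to the config.
# conf is a temporary file so the sketch is safe to run anywhere.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
# Your network address
public network = 192.168.0.0/24
osd pool default size = 3
EOF
grep 'public network' "$conf"
rm -f "$conf"
```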

Install Ceph

ceph-deploy install ceph-admin mon1 osd1 osd2 osd3

Initialize the monitor and gather all keys

ceph-deploy mon create-initial
ceph-deploy gatherkeys mon1

Add OSDs to the cluster

List all available disks on the OSD nodes

ceph-deploy disk list osd1 osd2 osd3

Use the zap option to delete all partitions on the OSD nodes

ceph-deploy disk zap osd1:/dev/vdb osd2:/dev/vdb osd3:/dev/vdb

Prepare the OSDs

ceph-deploy osd prepare osd1:/dev/vdb osd2:/dev/vdb osd3:/dev/vdb
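The zap, prepare, and activate commands all take the same node:device list; a sketch that builds it from a node list (echoed rather than executed, since the real command needs a live cluster; OSD_NODES and OSD_DEV are assumptions matching the hosts above):

```shell
# Build the node:device arguments for ceph-deploy from a node list.
OSD_NODES="osd1 osd2 osd3"
OSD_DEV=/dev/vdb
args=""
for n in $OSD_NODES; do
  args="$args $n:$OSD_DEV"   # accumulate node:device pairs
done
echo "ceph-deploy osd prepare$args"
# → ceph-deploy osd prepare osd1:/dev/vdb osd2:/dev/vdb osd3:/dev/vdb
```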

Activate the OSDs

ceph-deploy osd activate osd1:/dev/vdb1 osd2:/dev/vdb1 osd3:/dev/vdb1

Check the OSDs

ceph-deploy disk list osd1 osd2 osd3

Each OSD node should now show two partitions:

  1. /dev/vdb1 - Ceph Data
  2. /dev/vdb2 - Ceph Journal

 

Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so that you do not need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command.

ceph-deploy admin ceph-admin mon1 osd1 osd2 osd3

Fix the keyring permissions

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

Done!

Verify Ceph

Check the Ceph status

sudo ceph health
sudo ceph -s

 

