I. Environment Preparation
One CentOS Linux release 7.4.1708 (Core) machine with 4 disks (sda, sdb, sdc, sdd)
192.168.27.130 nceph
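Before starting, it is worth confirming that the extra disks are actually visible to the OS (a quick check; device names may differ on your system):
# lsblk -d -o NAME,SIZE,TYPE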
II. Configure the Environment
1. Set the hostname
# hostnamectl set-hostname nceph
2. Configure the hosts file
# cat <<"EOF">/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.27.130 nceph
EOF
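To confirm the entry resolves, query the system resolver; it should return the 192.168.27.130 line added above:
# getent hosts nceph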
3. Install NTP
# yum -y install ntp
Edit the configuration file /etc/ntp.conf
# vi /etc/ntp.conf
Add the following line (NTP-server is a placeholder for the address of your actual NTP server):
server NTP-server
Start ntpd and enable it at boot
# systemctl start ntpd
# systemctl enable ntpd
Check the NTP status
# ntpq -p
4. Add the Ceph yum repository
# cat <<END >/etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0
[ceph-source]
name=ceph-source
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS/
gpgcheck=0
END
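Optionally refresh the yum metadata so the new repository is picked up; the ceph repos should then appear in the repo list:
# yum clean all
# yum makecache
# yum repolist | grep -i ceph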
5. Disable SELinux and the firewall
# setenforce 0
# sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
# systemctl disable firewalld.service
# systemctl stop firewalld.service
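A quick sanity check: SELinux should now report Permissive and firewalld should be inactive:
# getenforce
# systemctl is-active firewalld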
6. Reboot
# reboot
III. Install and Configure Ceph
1. Install the ceph-deploy package
# yum -y install ceph-deploy-1.5.39
# ceph-deploy --version
1.5.39
2. Create the configuration directory
# mkdir /etc/ceph
# cd /etc/ceph
3. Create the cluster and write its configuration file
# ceph-deploy new nceph
Since this is a single-node deployment, a few settings need to be appended to the configuration file:
# echo "osd crush chooseleaf type = 0" >> ceph.conf
# echo "osd pool default size = 1" >> ceph.conf
# echo "osd journal size = 100" >> ceph.conf
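After these changes, ceph.conf should look roughly like the sketch below; the fsid and other values in [global] are generated by ceph-deploy new and will differ in your environment:
[global]
fsid = <generated-uuid>
mon_initial_members = nceph
mon_host = 192.168.27.130
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd crush chooseleaf type = 0
osd pool default size = 1
osd journal size = 100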
4. Install the basic Ceph packages
# ceph-deploy install nceph
5. Create a cluster monitor
# ceph-deploy mon create nceph
6. Gather the keys from the remote node into the current directory
# ceph-deploy gatherkeys nceph
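If gatherkeys succeeds, the admin and bootstrap keyrings should now sit alongside ceph.conf (exact file names can vary by release):
# ls /etc/ceph/*.keyring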
7. Create and start the OSDs
Zap the disks
# ceph-deploy disk zap nceph:sdb nceph:sdc nceph:sdd
Create the OSDs
# ceph-deploy --overwrite-conf osd create nceph:sdb nceph:sdc nceph:sdd
8. Verify
# ceph osd tree
# ceph -s
# lsblk
IV. Provide Block Storage Service
1. Create a storage pool
# ceph osd pool create test 128
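Here 128 is the pool's placement group count (pg_num), which can be confirmed afterwards. On Luminous the pool should also be tagged with the application that will use it, otherwise ceph -s may show a health warning about an application not being enabled (both commands below assume the pool name test used in this guide):
# ceph osd pool get test pg_num
# ceph osd pool application enable test rbd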
2. Create a 10G block image
# rbd create --size 10G disk01 --pool test
3. List the RBD images
# rbd ls test -l
4. Map the 10G image to the local host
# rbd map test/disk01
Check that the image was created:
# rbd info test/disk01
Mapping the 10G image failed because the CentOS 7 kernel RBD client does not support some of the default image features, so those features need to be disabled:
# rbd feature disable test/disk01 exclusive-lock object-map fast-diff deep-flatten
Mapping now succeeds:
# rbd map test/disk01
Check the cluster status
# ceph -s
5. Show the mappings
# rbd showmapped
6. Format it as XFS
# mkfs.xfs /dev/rbd0
7. Mount rbd0 on a local directory
Create the mount point
# mkdir /cephStore
Mount it
# mount /dev/rbd0 /cephStore
Verify
# df -h
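As an optional final check, overall capacity and per-pool usage of the new cluster can be inspected with:
# ceph df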
At this point, the single-node Ceph installation is complete.