VM image: CentOS 7 1908
1. Download the Ceph Nautilus yum repository packages
URL: https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/
Download the RPMs matching 14.2.5-0.el7 from these two directories:
noarch/ 14-Jan-2020 23:21
x86_64/ 14-Jan-2020 23:24
1.1 Download the RPMs of that version from the x86_64/ directory (on the physical host):
]# mkdir /var/ftp/pub/ceph
]# cd /var/ftp/pub/ceph
ceph]# mkdir ceph noarch
ceph]# ls
ceph noarch
Enter the /var/ftp/pub/ceph/ceph directory and create x86_64.txt:
ceph]# vim x86_64.txt
Note: with the mouse, select and copy all of the text on the page
"https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/"
and paste it into x86_64.txt (screenshot omitted). Alternatively, the listing can be fetched from the command line, as sketched below.
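A minimal sketch for generating the same file list without manual copy-paste, assuming GNU grep with -P (PCRE) support on the physical host; it extracts the RPM file names from the href attributes of the index page:
]# curl -s https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/ | grep -oP 'href="\K[^"]+\.rpm' > x86_64.txt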
1.2 Write the download script:
ceph]# cat get.sh
#!/bin/bash
# $1 is the architecture directory name (e.g. x86_64); the matching
# .txt file holds the directory listing copied from the mirror page.
rpm_file=/var/ftp/pub/ceph/ceph/$1.txt
rpm_netaddr=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$1
for i in `cat $rpm_file`
do
    # keep only tokens that look like 14.2.5-0 rpm file names
    if [[ $i =~ rpm ]] && [[ $i =~ 14.2.5-0 ]]
    then
        wget $rpm_netaddr/$i
    fi
done
1.3 Run the script to download the RPM files:
ceph]# bash get.sh x86_64
Check the result:
ceph]# ls
ceph-14.2.5-0.el7.x86_64.rpm
ceph-base-14.2.5-0.el7.x86_64.rpm
ceph-common-14.2.5-0.el7.x86_64.rpm
ceph-debuginfo-14.2.5-0.el7.x86_64.rpm
cephfs-java-14.2.5-0.el7.x86_64.rpm
ceph-fuse-14.2.5-0.el7.x86_64.rpm
...
ceph]# mv get.sh x86_64.txt ../noarch/
ceph]# createrepo .
Spawning worker 0 with 11 pkgs
Spawning worker 1 with 11 pkgs
Spawning worker 2 with 10 pkgs
Spawning worker 3 with 10 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
1.4 In the same way, download the 14.2.5 RPM files from the noarch/ directory into /var/ftp/pub/ceph/noarch (and run createrepo . there as well).
Note:
In the noarch/ directory listing, some RPM file names are displayed truncated, for example:
ceph-mgr-diskprediction-cloud-14.2.5-0.el7.noar..> 14-Jan-2020 23:18 85684
The script cannot download these, so they have to be fetched manually by clicking the links (or see the sketch after this list).
In addition, the following must be downloaded manually (don't ask why):
ceph-deploy-2.0.1-0.noarch.rpm
ceph-medic-1.0.4-16.g60cf7e9.el7.noarch.rpm
ceph-release-1-1.el7.noarch.rpm
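A hedged alternative that avoids the truncated-name problem: the href attributes in the index page HTML carry the full file names even when the displayed text is cut off, so they can be parsed directly (assuming curl and GNU grep -P on the physical host):
]# cd /var/ftp/pub/ceph/noarch
]# base=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
]# curl -s $base/ | grep -oP 'href="\K[^"]+\.rpm' \
>      | grep -E '14\.2\.5-0|ceph-deploy|ceph-medic|ceph-release' \
>      | while read f; do wget $base/$f; done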
1.5 Turn the downloaded RPM files into a local yum repository for the virtual-machine Ceph cluster:
]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph repo
baseurl=ftp://192.168.4.1/pub/ceph/ceph
gpgcheck=0
enabled=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=ftp://192.168.4.1/pub/ceph/noarch
gpgcheck=0
enabled=1
Then copy this file to every virtual machine.
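A minimal distribution sketch, assuming the repo file is at /etc/yum.repos.d/ceph.repo on the physical host and the VMs use the 192.168.4.10-14 addresses listed in section 2:
]# for i in 10 11 12 13 14
> do
>     scp /etc/yum.repos.d/ceph.repo 192.168.4.$i:/etc/yum.repos.d/
> done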
2. Create the virtual machines and prepare the cluster environment
2.1 Create the VMs, set the hostnames, and bring up the network interfaces
192.168.4.10 client
192.168.4.11 node1 admin,osd, mon,mgr
192.168.4.12 node2 osd,mds
192.168.4.13 node3 osd,mds
192.168.4.14 node4 spare
2.2 Configure passwordless SSH between all of the machines (including to themselves):
]# ssh-keygen -f /root/.ssh/id_rsa -N ''
]# for i in 10 11 12 13 14
> do
> ssh-copy-id 192.168.4.$i
> done
2.3 Edit /etc/hosts and sync it to all hosts (a sync sketch follows the file contents below).
Warning: the names resolved by /etc/hosts must match each machine's actual hostname!
]# vim /etc/hosts
... ...
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3
192.168.4.14 node4
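A minimal sync sketch, assuming the file was edited on one of the machines and the others are reachable at the addresses above (copying to itself is harmless):
]# for i in 10 11 12 13 14
> do
>     scp /etc/hosts 192.168.4.$i:/etc/hosts
> done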
2.4 Configure NTP time synchronization
Set up the NTP server on the physical host:
]# yum -y install chrony
]# vim /etc/chrony.conf
server ntp.aliyun.com iburst
allow 192.168.4.0/24
local stratum 10
]# systemctl restart chronyd
]# chronyc sources -v    # a * in the output means time synchronization succeeded
...
^* 203.107.6.88...
All other nodes synchronize time with this NTP server (node1 shown as an example).
]# vim /etc/chrony.conf
server 192.168.4.1 iburst
]# systemctl restart chronyd
]# chronyc sources -v    # a * in the output means time synchronization succeeded
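Optionally (not part of the original steps), if a node's clock is far off, chrony can be told to step it immediately instead of slewing:
]# chronyc makestep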
2.5 Prepare the storage disks
On the physical host, attach 3 disks to each virtual machine (one possible way is sketched after the lsblk output below):
]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdb 252:16 0 20G 0 disk
vdc 252:32 0 20G 0 disk
vdd 252:48 0 20G 0 disk
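One hedged way to attach such a disk with libvirt on the physical host; the domain name node1, the image path, and the target vdb are illustrative assumptions only:
]# qemu-img create -f qcow2 /var/lib/libvirt/images/node1-vdb.qcow2 20G
]# virsh attach-disk node1 /var/lib/libvirt/images/node1-vdb.qcow2 vdb --driver qemu --subdriver qcow2 --persistent
# repeat for vdc and vdd on this VM, and likewise for node2 and node3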
3. Deploy the Ceph cluster
Install the deployment tool ceph-deploy
Create the Ceph cluster
Prepare the journal disk partitions
Create the OSD storage
Check and verify the Ceph status
3.1 Pre-deployment installation:
node1:
Install pip:
]# yum -y install python3
]# wget --no-check-certificate https://pypi.python.org/packages/ff/d4/209f4939c49e31f5524fa0027bf1c8ec3107abaf7c61fdaad704a648c281/setuptools-21.0.0.tar.gz#md5=81964fdb89534118707742e6d1a1ddb4
]# wget --no-check-certificate https://pypi.python.org/packages/41/27/9a8d24e1b55bd8c85e4d022da2922cb206f183e2d18fee4e320c9547e751/pip-8.1.1.tar.gz#md5=6b86f11841e89c8241d689956ba99ed7
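Extract the two tarballs before installing (a small assumed step; the prompts below are inside the extracted directories):
]# tar -xf setuptools-21.0.0.tar.gz
]# tar -xf pip-8.1.1.tar.gz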
setuptools-21.0.0]# python setup.py install
pip-8.1.1]# python setup.py install
Install ceph-deploy:
]# yum -y install ceph-deploy
]# ceph-deploy --version
2.0.1
Create a working directory:
]# mkdir ceph-cluster
]# cd ceph-cluster/
On all nodes (note: this means all nodes):
]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
]# yum clean all;yum repolist
]# yum -y install yum-priorities
]# yum -y install epel-release
]# rm -rf /etc/yum.repos.d/epel.repo.rpmnew
]# yum -y install ceph-release
]# rm -rf /etc/yum.repos.d/ceph.repo.rpmnew
]# yum -y install ceph
An error occurs:
Error: Package: 2:ceph-mgr-14.2.5-0.el7.x86_64 (ceph)
           Requires: python-werkzeug
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
]# wget http://rpmfind.net/linux/mageia/distrib/6/x86_64/media/core/updates/python-werkzeug-0.11.3-1.1.mga6.noarch.rpm
]# yum -y install python-werkzeug-0.11.3-1.1.mga6.noarch.rpm
Continue installing Ceph:
]# yum -y install ceph
]# yum -y install ceph-radosgw
Check the version:
]# ceph --version
ceph version 14.2.5 (ad5bd132e1492173c85fda2cc863152730b16a92) nautilus (stable)
3.2 Create the Ceph cluster configuration (run on node1)
cluster]# ceph-deploy new node1 node2 node3
Install the packages on all nodes:
cluster]# ceph-deploy install node1 node2 node3
...
[node3][INFO ] Running command: ceph --version
[node3][DEBUG ] ceph version 14.2.5 (ad5bd132e1492173c85fda2cc863152730b16a92) nautilus (stable)
Deploy the mon service:
cluster]# ceph-deploy mon create-initial
Copy the ceph.client.admin.keyring file:
cluster]# ceph-deploy admin node1 node2 node3
Check the cluster status:
cluster]# ceph -s
cluster:
id: 5e96cf02-b3c0-42b2-b357-d1186569d720
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node2,node3 (age 42s)
mgr: no daemons active
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
3.3 Create the OSDs
cluster]# lsblk
vdb 253:16 0 20G 0 disk
vdc 253:32 0 20G 0 disk
vdd 253:48 0 20G 0 disk
Prepare the disk partitions (node1, node2, and node3 all need the same steps; a loop sketch for node2/node3 follows the lsblk output below):
cluster]# parted /dev/vdb mklabel gpt
cluster]# parted /dev/vdb mkpart primary 1M 50%
cluster]# parted /dev/vdb mkpart primary 50% 100%
cluster]# chown ceph.ceph /dev/vdb1
cluster]# chown ceph.ceph /dev/vdb2
# these two partitions are used as the journal disks for the storage servers
cluster]# lsblk
vdb 253:16 0 20G 0 disk
├─vdb1 253:17 0 10G 0 part
└─vdb2 253:18 0 10G 0 part
vdc 253:32 0 20G 0 disk
vdd 253:48 0 20G 0 disk
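A hedged way to repeat the same partitioning on node2 and node3 from node1, relying on the passwordless SSH set up in 2.2 (parted is given -s here so it never waits for interactive input):
cluster]# for h in node2 node3
> do
>     ssh $h "parted -s /dev/vdb mklabel gpt; parted -s /dev/vdb mkpart primary 1M 50%; parted -s /dev/vdb mkpart primary 50% 100%; chown ceph.ceph /dev/vdb1 /dev/vdb2"
> done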
cluster]# vim /etc/udev/rules.d/70-vdb.rules
ENV{DEVNAME}=="/dev/vdb1",OWNER="ceph",GROUP="ceph"
ENV{DEVNAME}=="/dev/vdb2",OWNER="ceph",GROUP="ceph"
Initialize and wipe the disk data (run from node1 only):
cluster]# ceph-deploy disk zap node1 /dev/vd{c,d}
cluster]# ceph-deploy disk zap node2 /dev/vd{c,d}
cluster]# ceph-deploy disk zap node3 /dev/vd{c,d}
Create the OSD storage (run from node1 only):
# create the OSD storage devices: vdc provides storage space for the cluster, vdb1 provides the JOURNAL cache
# each storage device is paired with one cache device; the cache should be an SSD and does not need to be large
cluster]# ceph-deploy osd create node1 --data /dev/vdc --journal /dev/vdb1
cluster]# ceph-deploy osd create node1 --data /dev/vdd --journal /dev/vdb2
cluster]# ceph-deploy osd create node2 --data /dev/vdc --journal /dev/vdb1
cluster]# ceph-deploy osd create node2 --data /dev/vdd --journal /dev/vdb2
cluster]# ceph-deploy osd create node3 --data /dev/vdc --journal /dev/vdb1
cluster]# ceph-deploy osd create node3 --data /dev/vdd --journal /dev/vdb2
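Besides ceph -s below, the OSD-to-host layout can be checked at any time with the standard command (output omitted here):
cluster]# ceph osd tree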
Verify:
cluster]# ceph -s
cluster:
id: 5e96cf02-b3c0-42b2-b357-d1186569d720
health: HEALTH_WARN
no active mgr
services:
mon: 3 daemons, quorum node1,node2,node3 (age 5m)
mgr: no daemons active
osd: 6 osds: 6 up (since 7s), 6 in (since 7s)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
The status warns: no active mgr.
Configure the mgr.
Create an mgr named mgr1 on node1:
cluster]# ceph-deploy mgr create node1:mgr1
The warning disappears:
cluster]# ceph -s
cluster:
id: 5e96cf02-b3c0-42b2-b357-d1186569d720
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node2,node3 (age 5m)
mgr: mgr1(active, since 6s)
osd: 6 osds: 6 up (since 56s), 6 in (since 56s)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 6.0 GiB used, 108 GiB / 114 GiB avail
pgs:
4. Create Ceph block storage
Create a block storage image
Map the image on the client
Create image snapshots
Restore data from a snapshot
Clone an image from a snapshot
Delete snapshots and images
4.1 Create images (node1)
List the storage pools:
]# ceph osd lspools
cluster]# ceph osd pool create pool-zk 100
pool 'pool-zk' created
Mark the pool for use as RBD block devices:
cluster]# ceph osd pool application enable pool-zk rbd
Rename the pool to rbd (rbd commands default to the pool named rbd when no pool is specified):
cluster]# ceph osd pool rename pool-zk rbd
Create the images:
cluster]# rbd create demo-image --image-feature layering --size 10G
cluster]# rbd create rbd/image --image-feature layering --size 10G
cluster]# rbd list
demo-image
image
Inspect an image:
cluster]# rbd info demo-image
rbd image 'demo-image':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 1111d288dfb3
block_name_prefix: rbd_data.1111d288dfb3
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:25:16 2020
access_timestamp: Mon Jan 20 00:25:16 2020
modify_timestamp: Mon Jan 20 00:25:16 2020
4.2 Resize images dynamically
Shrink the image:
cluster]# rbd resize --size 7G image --allow-shrink
Resizing image: 100% complete...done.
cluster]# rbd info image
rbd image 'image':
size 7 GiB in 1792 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 10ee3721a5af
block_name_prefix: rbd_data.10ee3721a5af
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:25:47 2020
access_timestamp: Mon Jan 20 00:25:47 2020
modify_timestamp: Mon Jan 20 00:25:47 2020
Expand the image:
]# rbd resize --size 15G image
Resizing image: 100% complete...done.
cluster]# rbd info image
rbd image 'image':
size 15 GiB in 3840 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 10ee3721a5af
block_name_prefix: rbd_data.10ee3721a5af
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:25:47 2020
access_timestamp: Mon Jan 20 00:25:47 2020
modify_timestamp: Mon Jan 20 00:25:47 2020
4.3 Access via KRBD
Map the image as a local disk inside the cluster:
cluster]# rbd map demo-image
/dev/rbd0
]# lsblk
… …
rbd0 251:0 0 10G 0 disk
cluster]# mkfs.xfs /dev/rbd0
cluster]# mount /dev/rbd0 /mnt
cluster]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0 10G 33M 10G 1% /mnt
Client access via KRBD (on client)
# the client needs the ceph-common package
# copy the config file (otherwise the client does not know where the cluster is)
# copy the keyring (otherwise the client has no permission to connect)
]# yum -y install ceph-common
]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph/
]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
]# rbd map image
/dev/rbd0
]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 251:0 0 15G 0 disk
]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
Format and mount the device on the client (client):
]# mkfs.xfs /dev/rbd0
]# mount /dev/rbd0 /mnt/
]# echo "test" > /mnt/test.txt
]# ls /mnt/
test.txt
4.4 Create image snapshots (node1)
List the image's snapshots:
cluster]# rbd snap ls image    # (empty)
Create a snapshot of the image:
cluster]# rbd snap create image --snap image-snap1
Check:
cluster]# rbd snap ls image
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 image-snap1 15 GiB Mon Jan 20 00:36:47 2020
### the snapshot created at this point contains test.txt ###
Delete the test file written by the client:
client ~]# rm -rf /mnt/test.txt
Restore the snapshot (the rollback must be done with the image offline).
Take the client offline:
client ~]# umount /mnt/    # do not run this from inside the mounted directory
client ~]# rbd unmap image
Roll back on node1:
cluster]# rbd snap rollback image --snap image-snap1
Rolling back to snapshot: 100% complete...done.
Remap and remount on the client:
client ~]# rbd map image
client ~]# mount /dev/rbd0 /mnt/
Check whether the data is back:
client ~]# ls /mnt/
test.txt
4.5 Create a snapshot clone (node1): image-clone
Clone the snapshot:
cluster]# rbd snap protect image --snap image-snap1
cluster]# rbd snap rm image --snap image-snap1    # this fails because the snapshot is protected
cluster]# rbd clone image --snap image-snap1 image-clone --image-feature layering
# clone a new image named image-clone from image's snapshot image-snap1
Check the relationship between the clone and its parent snapshot:
cluster]# rbd info image-clone
rbd image 'image-clone':
size 15 GiB in 3840 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 115f22e6cb86
block_name_prefix: rbd_data.115f22e6cb86
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:53:42 2020
access_timestamp: Mon Jan 20 00:53:42 2020
modify_timestamp: Mon Jan 20 00:53:42 2020
parent: rbd/image@image-snap1
overlap: 15 GiB
# much of the clone's data comes from the snapshot chain
# to make the clone work independently, all of the data in the parent snapshot has to be copied, which takes time!
Make the clone independent:
cluster]# rbd flatten image-clone
cluster]# rbd info image-clone
rbd image 'image-clone':
size 15 GiB in 3840 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 115f22e6cb86
block_name_prefix: rbd_data.115f22e6cb86
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:53:42 2020
access_timestamp: Mon Jan 20 00:53:42 2020
modify_timestamp: Mon Jan 20 00:53:42 2020
# note that the parent snapshot information is gone!
4.6 Other operations
Unmap the disk on the client (client):
]# umount /mnt
]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
]# rbd unmap /dev/rbd0
]# rbd showmapped    # (empty)
Delete the snapshot and images (node1):
cluster]# umount /mnt
]# rbd unmap /dev/rbd0
Remove the snapshot protection:
cluster]# rbd snap unprotect image --snap image-snap1
Delete the snapshot:
cluster]# rbd snap rm image --snap image-snap1
List the images:
cluster]# rbd list
demo-image
image
image-clone
Delete the images:
cluster]# rbd rm demo-image
cluster]# rbd rm image
cluster]# rbd rm image-clone