Deploying a Ceph storage cluster and testing block devices


 

Cluster environment

Configure the base environment

Add ceph.repo

wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
yum makecache

Configure NTP

yum -y install ntpdate ntp
ntpdate cn.ntp.org.cn
systemctl restart ntpd ntpdate;systemctl enable ntpd ntpdate

Create a user and set up passwordless SSH

useradd ceph-admin
echo "ceph-admin"|passwd --stdin ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin

Configure host resolution

cat >>/etc/hosts<<EOF
10.1.10.201 ceph01
10.1.10.202 ceph02
10.1.10.203 ceph03
EOF

Configure sudo to not require a tty

sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers

 

Deploy the cluster with ceph-deploy

Configure passwordless SSH login

su - ceph-admin
ssh-keygen
ssh-copy-id ceph-admin@ceph01
ssh-copy-id ceph-admin@ceph02
ssh-copy-id ceph-admin@ceph03
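
ceph-deploy logs in to the other nodes over SSH, so a ~/.ssh/config entry saves passing --username on every call. A minimal optional sketch for the ceph-admin user created above:

cat >> ~/.ssh/config <<EOF
Host ceph01
    User ceph-admin
Host ceph02
    User ceph-admin
Host ceph03
    User ceph-admin
EOF
chmod 600 ~/.ssh/config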

Install ceph-deploy

sudo yum install -y ceph-deploy python-pip

Create the cluster on the deploy node

mkdir my-cluster;cd my-cluster
ceph-deploy new ceph01 ceph02 ceph03
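
ceph-deploy new writes an initial ceph.conf into my-cluster; if the nodes have more than one network, it is common to pin the public network there before installing. A sketch assuming the 10.1.10.0/24 subnet implied by the hosts file above:

echo "public network = 10.1.10.0/24" >> ceph.conf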

Install the Ceph packages (instead of ceph-deploy install node1 node2; the commands below must be run on every node)

sudo wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
sudo yum install -y ceph ceph-radosgw

Configure the initial monitor(s) and gather all keys

sudo systemctl stop firewalld;sudo systemctl disable firewalld  # make sure the firewall is off
ceph-deploy mon create-initial
ls -l *.keyring

Copy the configuration to each node

ceph-deploy admin ceph01 ceph02 ceph03
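
Once the admin keyring has been pushed out, it is usually made readable on each node so the ceph CLI works without sudo, and the cluster state can be checked; a short sketch:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring    # run on every node
ceph -s                                              # cluster status and monitor quorum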

Configure the OSDs

su - ceph-admin
cd ~/my-cluster    # the my-cluster directory created earlier
for dev in /dev/sdb /dev/sdc /dev/sdd
do
ceph-deploy disk zap ceph01 $dev
ceph-deploy osd create ceph01 --data $dev
ceph-deploy disk zap ceph02 $dev
ceph-deploy osd create ceph02 --data $dev
ceph-deploy disk zap ceph03 $dev
ceph-deploy osd create ceph03 --data $dev
done
sudo ceph osd tree    # list the OSD tree

Deploy mgr (only needed from the Luminous release onward)

ceph-deploy mgr create ceph01 ceph02 ceph03

Enable the dashboard module

sudo chown -R ceph-admin /etc/ceph/
ceph mgr module enable dashboard
netstat -lntup|grep 7000

http://10.1.10.201:7000
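
By default the Luminous dashboard listens on all addresses on port 7000. If it needs to be pinned to a specific address or port, the mgr dashboard module reads these from config-keys; a sketch, assuming ceph01's address from the hosts file above:

ceph config-key set mgr/dashboard/server_addr 10.1.10.201
ceph config-key set mgr/dashboard/server_port 7000
sudo systemctl restart ceph-mgr@ceph01    # restart the active mgr so the settings take effect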

Remove the cluster

ceph-deploy purge ceph01 ceph02 ceph03
ceph-deploy purgedata ceph01 ceph02 ceph03
ceph-deploy forgetkeys

 

Configure Ceph block storage

Check that the environment meets the block device requirements

uname -r
sudo modprobe rbd
echo $?

Create a pool and a block device

ceph osd lspools
ceph osd pool create rbd 128
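
Since Luminous, ceph -s warns about pools that have no application associated with them, so the new pool is normally tagged for RBD use as well; a short sketch:

ceph osd pool application enable rbd rbd    # mark the pool as an RBD pool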

Choosing a value for pg_num is mandatory because it cannot be calculated automatically; a few commonly used values are listed below.

Fewer than 5 OSDs: set pg_num to 128
5 to 10 OSDs: set pg_num to 512
10 to 50 OSDs: set pg_num to 4096
More than 50 OSDs: understand the trade-offs and calculate pg_num yourself (a small sketch follows this list)
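
For larger clusters the commonly cited rule of thumb is (number of OSDs × 100) / replica count, rounded up to a power of two. A small shell sketch with assumed example values (9 OSDs, 3 replicas; not taken from this cluster):

osds=9; replicas=3
raw=$((osds * 100 / replicas))                                   # 300
pg_num=1; while [ $pg_num -lt $raw ]; do pg_num=$((pg_num * 2)); done
echo $pg_num                                                     # 512, the value to pass to "ceph osd pool create"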

Create the block device on the client

sudo rbd create rbd1 --size 1G --image-feature layering --name client.admin

Map the block device

sudo rbd map --image rbd1 --name client.admin
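
Before creating a filesystem, the kernel device that the image was mapped to can be confirmed:

sudo rbd showmapped    # should show image rbd1 mapped to /dev/rbd0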

Create a filesystem and mount it

fdisk -l /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir /mnt/ceph-disk1
mount /dev/rbd0 /mnt/ceph-disk1
df -h /mnt/ceph-disk1

Test writing data

dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M

 

Configure Ceph object storage

Install the Ceph object gateway

ceph-deploy install --rgw ceph01 ceph02 ceph03

Create the object gateway instances

ceph-deploy rgw create ceph01 ceph02 ceph03

Once the gateway is running, you can reach it on port 7480 (for example http://client-node:7480).
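
A quick way to confirm the gateway is answering is an unauthenticated request, which should return a small ListAllMyBucketsResult XML document; a sketch against the first node:

curl http://ceph01:7480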

 

Stress testing with fio

Install the fio benchmarking tool

yum install libaio-devel -y
yum install zlib-devel -y
yum install ceph-devel -y
git clone git://git.kernel.dk/fio.git
cd fio/
./configure
make;make install

Test disk performance

fio -direct=1 -iodepth=1 -rw=read -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=readiops
fio -direct=1 -iodepth=1 -rw=write -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=writeiops
fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randreadiops
fio -direct=1 -iodepth=1 -rw=randwrite -ioengine=libaio -bs=2k -size=100G -numjobs=128 -runtime=30 -group_reporting -filename=/dev/rbd0 -name=randwriteiops
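
Because fio was built with ceph-devel available, the rbd ioengine should also be compiled in; it benchmarks an image through librbd without mapping a kernel device. A hedged sketch, reusing the rbd pool and rbd1 image created earlier:

fio -direct=1 -iodepth=32 -rw=randwrite -ioengine=rbd -clientname=admin -pool=rbd -rbdname=rbd1 -bs=4k -runtime=30 -group_reporting -name=rbd-randwrite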

 

