Preparation: see the earlier post "ceph之centos7.6.1810手动部署ceph-luminous" (manual deployment of ceph-luminous on CentOS 7.6.1810).
After running yum -y install ceph, confirm the Ceph version:
ceph -v
ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
When creating the key for the admin user that manages the cluster and granting it access caps:
in the N (Nautilus) release, ceph-authtool no longer accepts the --set-uid parameter; simply drop it.
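For reference, a minimal sketch of the admin-keyring command with that flag dropped (adapted from the upstream manual-deployment steps; adjust the caps to your needs):

# create the client.admin keyring without --set-uid
ceph-authtool /etc/ceph/ceph.client.admin.keyring --create-keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'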
After deploying the mon, run ceph -s:
ceph -s
  cluster:
    id:     871025c6-a6df-43e2-9193-0c857fe38617
    health: HEALTH_WARN
            1 monitors have not enabled msgr2

  services:
    mon: 1 daemons, quorum node1 (age 50s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
Before resolving the warning:
netstat -tnlp | grep ceph-mon
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp   0      0      10.1.1.51:6789   0.0.0.0:*        LISTEN  8216/ceph-mon
ceph health detail shows the hint to "enable the msgr2 protocol on port 3300": in Nautilus the mon also needs to listen with the v2 protocol on port 3300.
ceph mon enable-msgr2
Verify:
netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp   0      0      10.1.1.51:3300   0.0.0.0:*        LISTEN  8216/ceph-mon
tcp   0      0      10.1.1.51:6789   0.0.0.0:*        LISTEN  8216/ceph-mon
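The protocols can also be pinned explicitly in the monitor address list. A minimal ceph.conf sketch, assuming a single mon on 10.1.1.51 (check it against your own configuration before applying):

# /etc/ceph/ceph.conf (excerpt): advertise msgr2 on 3300 and legacy v1 on 6789
[global]
mon host = [v2:10.1.1.51:3300,v1:10.1.1.51:6789]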
Problems you may hit when adding additional mons
If adding node2 breaks the cluster, first go back to node1 and fix the configuration there, then add the new mon again starting from step 1. The steps on node1 are as follows:
systemctl stop ceph-mon@node1
monmaptool /tmp/monmap --rm node2
ceph-mon -i node1 --inject-monmap /tmp/monmap
systemctl start ceph-mon@node1
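Note that /tmp/monmap has to be obtained before monmaptool can edit it. A minimal sketch of one way to get it (assumption: pulling it straight from node1's mon store while the daemon is stopped, which also works without quorum):

# with ceph-mon@node1 stopped, dump its current monmap
ceph-mon -i node1 --extract-monmap /tmp/monmap
# inspect it before removing node2
monmaptool --print /tmp/monmap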
Configuring OSDs
According to the official documentation, BlueStore is more efficient than FileStore and is well optimized for SSDs.
The OSDs deployed previously used FileStore; this time BlueStore is used (an optional SSD layout is sketched right below).
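As a hedged aside, ceph-volume can also put the BlueStore DB/WAL on a faster device; the sketch below assumes a separate SSD at /dev/sdc, which is not what this walkthrough uses:

# data on the slower disk, RocksDB metadata on a (hypothetical) SSD
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc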
List the disks:
lsblk
ceph-volume lvm create --data /dev/sdb
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3979389f-b57f-43ce-ac9d-cccd80fbd55e
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1 /dev/sdb
 stdout: Physical volume "/dev/sdb" successfully created.
 stdout: Volume group "ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1" successfully created
Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1
 stdout: Logical volume "osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-0
Running command: /bin/chown -h ceph:ceph /dev/ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1/osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e
Running command: /bin/chown -R ceph:ceph /dev/dm-2
Running command: /bin/ln -s /dev/ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1/osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e /var/lib/ceph/osd/ceph-0/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: 2019-07-01 17:31:59.586 7f23d5b95700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
 2019-07-01 17:31:59.586 7f23d5b95700 -1 AuthRegistry(0x7f23d0063bc8) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
 stderr: got monmap epoch 4
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAE0xldXWz4BRAAVrIaajRFQIukwqMxZRBT5Q==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth(key=AQAE0xldXWz4BRAAVrIaajRFQIukwqMxZRBT5Q==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 3979389f-b57f-43ce-ac9d-cccd80fbd55e --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/sdb
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1/osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /bin/ln -snf /dev/ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1/osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e /var/lib/ceph/osd/ceph-0/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /bin/chown -R ceph:ceph /dev/dm-2
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /bin/systemctl enable ceph-volume@lvm-0-3979389f-b57f-43ce-ac9d-cccd80fbd55e
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-3979389f-b57f-43ce-ac9d-cccd80fbd55e.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: /bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: /dev/sdb
Check the result:
ceph-volume lvm list

====== osd.0 =======

  [block]       /dev/ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1/osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e

      block device              /dev/ceph-32b11805-c05f-4d93-a3ef-270b9e0203e1/osd-block-3979389f-b57f-43ce-ac9d-cccd80fbd55e
      block uuid                VZmqut-RKpL-5sZJ-4StW-YLcW-I9Rs-ytGxCX
      cephx lockbox secret
      cluster fsid              871025c6-a6df-43e2-9193-0c857fe38617
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  3979389f-b57f-43ce-ac9d-cccd80fbd55e
      osd id                    0
      type                      block
      vdo                       0
      devices                   /dev/sdb
The output shows the assigned OSD ID (osd.0 here).
The OSD service has already been started automatically.
Check the OSD's running status:
systemctl status ceph-osd@0
Confirm the disk is now managed by LVM, with vg: ceph--32b11805--c05f--4d93--a3ef--270b9e0203e1 and lv: osd--block--3979389f--b57f--43ce--ac9d--cccd80fbd55e
lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0   20G  0 disk
├─sda1                                                                                                  8:1    0    1G  0 part /boot
└─sda2                                                                                                  8:2    0   19G  0 part
  ├─centos-root                                                                                       253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                       253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                     8:16   0    5G  0 disk
└─ceph--32b11805--c05f--4d93--a3ef--270b9e0203e1-osd--block--3979389f--b57f--43ce--ac9d--cccd80fbd55e 253:2    0    4G  0 lvm
sdc                                                                                                     8:32   0    5G  0 disk
sr0                                                                                                    11:0    1  4.3G  0 rom
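The new OSD should also appear in the CRUSH tree and count toward cluster capacity; a quick check (output differs per cluster):

# confirm the OSD is up and in
ceph osd tree
ceph -s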
To scale out with more OSDs, the bootstrap-osd keyring is required:
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
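A minimal sketch of the rest of the scale-out flow on a new node (node2 and /dev/sdb are placeholders; adjust to your environment):

# copy the cluster config and the exported bootstrap-osd keyring to the new node
scp /etc/ceph/ceph.conf node2:/etc/ceph/ceph.conf
scp /var/lib/ceph/bootstrap-osd/ceph.keyring node2:/var/lib/ceph/bootstrap-osd/ceph.keyring
# then, on node2, create the OSD the same way as above
ceph-volume lvm create --data /dev/sdb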
Block storage
When using Ceph block devices, the rbd pool has to be created manually:
ceph osd lspools
ceph osd pool create rbd 128
pool 'rbd' created
ceph osd lspools
1 rbd
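The 128 PGs here follow the usual rule of thumb of roughly (OSD count × 100) / replica size, rounded to a power of two; with only a handful of OSDs a smaller value is also fine. The setting can be inspected, and on Nautilus adjusted, later:

# check and, if needed, change the PG count of the rbd pool
ceph osd pool get rbd pg_num
ceph osd pool set rbd pg_num 64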
Initialize the rbd pool:
rbd pool init rbd
rbd create volume1 --size 100M
rbd ls -l
NAME    SIZE    PARENT FMT PROT LOCK
volume1 100 MiB
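The new image can be inspected, and grown later if needed (a sketch; resizing only changes the image, any filesystem on it must be grown separately):

# show image details, including the enabled features
rbd info volume1
# grow the image to 200 MiB
rbd resize --size 200M volume1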
rbd map volume1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable volume1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
rbd feature disable volume1 object-map fast-diff deep-flatten exclusive-lock
rbd map volume1
/dev/rbd0
rbd showmapped
id pool namespace image   snap device
0  rbd            volume1 -    /dev/rbd0
mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=4, agsize=7168 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=624, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
mkdir /root/test
mount /dev/rbd0 /root/test
df -Th | grep /dev/rbd0
/dev/rbd0      xfs       98M  5.3M   93M   6% /root/test
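The mapping above does not survive a reboot. One common approach, sketched here under the assumption that the rbdmap unit shipped with ceph-common is present, is to register the image in /etc/ceph/rbdmap and enable the rbdmap service:

# /etc/ceph/rbdmap: one image per line as pool/image followed by map options
rbd/volume1  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# have it mapped automatically at boot
systemctl enable rbdmap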
File storage
Configure the MDS service:
sudo -u ceph mkdir /var/lib/ceph/mds/ceph-node1
ceph auth get-or-create mds.node1 osd "allow rwx" mds "allow" mon "allow profile mds"
[mds.node1]
        key = AQDUAxtdGFLEHxAAJgxIv4hcgeQ4kN2wXRRQDQ==
ceph auth get mds.node1 -o /var/lib/ceph/mds/ceph-node1/keyring
exported keyring for mds.node1
vi /etc/ceph/ceph.conf
[mds.node1]
host = 10.1.1.51
Start the service:
systemctl start ceph-mds@node1
systemctl enable ceph-mds@node1
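Until a filesystem exists, the new MDS simply waits in standby; a quick sanity check (output will vary):

# verify the daemon is running and registered with the cluster
systemctl status ceph-mds@node1
ceph mds stat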
Create the metadata pool and data pool for the file store:
ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
ceph mds stat
cephfs:1 {0=node1=up:active}
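The mount below uses the admin key; for ordinary clients a restricted key is usually preferable. A hedged sketch, where client.fsuser is a made-up name:

# create a key limited to the cephfs filesystem, read/write from the root
ceph fs authorize cephfs client.fsuser / rw -o /etc/ceph/ceph.client.fsuser.keyring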
Mounting CephFS on a client
yum -y install ceph-fuse
ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
chmod 600 admin.key
mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key
df -hT
10.1.1.51:6789:/ ceph      5.6G     0  5.6G   0% /mnt
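Since ceph-fuse was installed above, the FUSE client is an alternative when the kernel client is too old for the cluster (a sketch; it picks up /etc/ceph/ceph.conf and the admin keyring by default):

# mount CephFS via FUSE instead of the kernel client
ceph-fuse -n client.admin -m 10.1.1.51:6789 /mnt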
Check:
ceph fs volume ls
[
    {
        "name": "cephfs"
    }
]
ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
ceph fs status
cephfs - 1 clients
======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | node1 | Reqs:    0 /s |   12  |   15  |
+------+--------+-------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 1536k | 5717M |
|   cephfs_data   |   data   |  192k | 5717M |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)