Deploying ceph-nautilus


Objective:

  Try out Ceph and its features by deploying a single-node cluster and exercising RBD, RGW, and CephFS. CephFS requires the MDS service; RBD and RGW do not.

Environment:

  • Ubuntu 18.04.3 LTS
  • ceph-nautilus

Note: deploying ceph-octopus hit many errors and was unstable, so I rolled back to the previous release, ceph-nautilus.

Procedure:

Hosts / firewall / disks

root@ubuntu:~# hostname
ubuntu
root@ubuntu:~# ping ubuntu
PING ubuntu (192.168.3.103) 56(84) bytes of data.
64 bytes from ubuntu (192.168.3.103): icmp_seq=1 ttl=64 time=0.015 ms

root@ubuntu:~# ufw status
Status: inactive   ### firewall is disabled

root@ubuntu:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
└─sda1 8:1 0 20G 0 part /
sdb 8:16 0 20G 0 disk    ### a spare, unused disk
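The spare-disk check above can be scripted. A minimal sketch: `free_disks` is a hypothetical helper (not a ceph tool) that filters `lsblk -dn -o NAME,MOUNTPOINT` output down to disks with no mountpoint, i.e. OSD candidates.

```shell
# Sketch: print block devices that have no mountpoint (OSD candidates).
# `free_disks` is a hypothetical helper, not part of ceph.
free_disks() {
  # lsblk -dn -o NAME,MOUNTPOINT prints "NAME MOUNTPOINT";
  # a line with a single field has no mountpoint.
  awk 'NF==1 {print $1}'
}

printf 'sda /\nsdb\n' | free_disks   # sample output from the node above -> sdb
# on a real node: lsblk -dn -o NAME,MOUNTPOINT | free_disks
```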

Add a domestic (Tencent) mirror

root@ubuntu:~# wget -q -O- 'https://mirrors.cloud.tencent.com/ceph/keys/release.asc' | sudo apt-key add -       ### add the repository signing key
OK
root@ubuntu:~# echo deb https://mirrors.cloud.tencent.com/ceph/debian-nautilus/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list

deb https://mirrors.cloud.tencent.com/ceph/debian-nautilus/ bionic main
root@ubuntu:~#
root@ubuntu:~# apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
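The apt source line above is just the mirror base, the ceph release, and the Ubuntu codename glued together. A small sketch of that composition (the `ceph_apt_line` helper is hypothetical):

```shell
# Build the apt source line for a given mirror base, ceph release,
# and Ubuntu codename.
ceph_apt_line() {
  printf 'deb %s/debian-%s/ %s main\n' "$1" "$2" "$3"
}

ceph_apt_line https://mirrors.cloud.tencent.com/ceph nautilus bionic
# -> deb https://mirrors.cloud.tencent.com/ceph/debian-nautilus/ bionic main
```

On the node you would pipe this into `tee /etc/apt/sources.list.d/ceph.list`, as in the transcript above; switching releases is then a one-argument change.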

Install ceph-deploy and initialize the cluster mon

root@ubuntu:~# apt-get install -y ceph-deploy

root@ubuntu:~# mkdir -p /etc/cluster-ceph  ### directory for the cluster config files

root@ubuntu:~# cd /etc/cluster-ceph
root@ubuntu:/etc/cluster-ceph#
root@ubuntu:/etc/cluster-ceph# pwd
/etc/cluster-ceph
root@ubuntu:/etc/cluster-ceph# ceph-deploy new `hostname`

vi ceph.conf    ### add two options so a single OSD can reach active+clean with one replica

osd pool default size = 1
osd pool default min size = 1
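The same edit can be made non-interactively; a sketch, run in the ceph-deploy working directory where `ceph-deploy new` generated ceph.conf. Note the correct key names are `osd pool default size` and `osd pool default min size`:

```shell
# Append single-replica overrides to the generated ceph.conf so one OSD
# can reach active+clean.
cat >> ceph.conf <<'EOF'
osd pool default size = 1
osd pool default min size = 1
EOF
```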

Install the Ceph packages on the node

root@ubuntu:/etc/cluster-ceph# export CEPH_DEPLOY_REPO_URL=https://mirrors.cloud.tencent.com/ceph/debian-nautilus/
root@ubuntu:/etc/cluster-ceph# export CEPH_DEPLOY_GPG_URL=https://mirrors.cloud.tencent.com/ceph/keys/release.asc
root@ubuntu:/etc/cluster-ceph# ceph-deploy install --release nautilus `hostname`

###ERROR

[ubuntu][DEBUG ] Unpacking radosgw (15.1.0-1bionic) ...
[ubuntu][DEBUG ] Errors were encountered while processing:
[ubuntu][DEBUG ]  /tmp/apt-dpkg-install-sKeUKm/35-ceph-base_15.1.0-1bionic_amd64.deb
[ubuntu][WARNIN] E: Sub-process /usr/bin/dpkg returned an error code (1)
[ubuntu][ERROR ] RuntimeError: command returned non-zero exit status: 100
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw

[ubuntu][WARNIN] E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).   ### try the suggested fix
[ubuntu][ERROR ] RuntimeError: command returned non-zero exit status: 100
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
root@ubuntu:/etc/cluster-ceph# apt --fix-broken install
Reading package lists... Done
Building dependency tree       


(Reading database ... 120818 files and directories currently installed.)
Preparing to unpack .../ceph-base_15.1.0-1bionic_amd64.deb ...
Unpacking ceph-base (15.1.0-1bionic) ...
dpkg: error processing archive /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb (--unpack):
 trying to overwrite '/usr/share/man/man8/ceph-deploy.8.gz', which is also in package ceph-deploy 1.5.38-0ubuntu1
Errors were encountered while processing:
 /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Fix:

The ceph-base package failed to install, which made every package that depends on it fail in turn.

root@ubuntu:/etc/cluster-ceph# ll /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb
-rw-r--r-- 1 root root 5167392 Jan 29 16:43 /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb

root@ubuntu:/etc/cluster-ceph# useradd ceph   ### create the ceph account, then run `apt --fix-broken install` again to auto-repair the remaining broken debs
root@ubuntu:/etc/cluster-ceph# dpkg -i --force-overwrite /var/cache/apt/archives/ceph-base_15.1.0-1bionic_amd64.deb
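If several cached debs hit the same man-page conflict, the `--force-overwrite` fix can be applied in a loop. A sketch (the `force_overwrite_debs` helper is hypothetical; `echo` is left in so it prints the commands instead of running them):

```shell
# Print (or, with `echo` removed, run) dpkg --force-overwrite for every
# cached ceph-base deb in the given archive directory.
force_overwrite_debs() {
  for deb in "$1"/ceph-base_*.deb; do
    [ -e "$deb" ] || continue              # glob matched nothing
    echo dpkg -i --force-overwrite "$deb"  # drop `echo` to actually install
  done
}

force_overwrite_debs /var/cache/apt/archives
```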

root@ubuntu:/etc/cluster-ceph# ceph-deploy install  --release octopus  `hostname`

root@ubuntu:/etc/cluster-ceph# ceph -v
ceph version 15.1.0 (49b0421165765bbcfb07e5aa7a818a47cc023df7) octopus (rc)

Initialize the mon and mgr

ceph-deploy mon create-initial

ceph-deploy admin `hostname`

ceph-deploy  mgr create `hostname`

root@c1:~# systemctl list-units 'ceph*' --type=service    ### list the running ceph services
UNIT LOAD ACTIVE SUB DESCRIPTION
ceph-crash.service loaded active running Ceph crash dump collector
ceph-mgr@c1.service loaded active running Ceph cluster manager daemon
ceph-mon@c1.service loaded active running Ceph cluster monitor daemon

Initialize the OSD (bluestore) and the MDS

root@ubuntu:/etc/cluster-ceph# ceph-deploy osd create -h    ### check the help output first; the arguments differ between releases, which is frustrating
usage: ceph-deploy osd create [-h] [--data DATA] [--journal JOURNAL]
                              [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt]
                              [--dmcrypt-key-dir KEYDIR] [--filestore]
                              [--bluestore] [--block-db BLOCK_DB]
                              [--block-wal BLOCK_WAL] [--debug]
                              [HOST]

positional arguments:
  HOST                  Remote host to connect

optional arguments:
  -h, --help            show this help message and exit
  --data DATA           The OSD data logical volume (vg/lv) or absolute path   ###vg/lv/device
                        to device
  --journal JOURNAL     Logical Volume (vg/lv) or path to GPT partition
  --zap-disk            DEPRECATED - cannot zap when creating an OSD
  --fs-type FS_TYPE     filesystem to use to format DEVICE (xfs, btrfs)
  --dmcrypt             use dm-crypt on DEVICE
  --dmcrypt-key-dir KEYDIR
                        directory where dm-crypt keys are stored
  --filestore           filestore objectstore
  --bluestore           bluestore objectstore
  --block-db BLOCK_DB   bluestore block.db path
  --block-wal BLOCK_WAL
                        bluestore block.wal path
  --debug               Enable debug mode on remote ceph-volume calls

root@ubuntu:/etc/cluster-ceph# ceph-deploy osd create --bluestore `hostname` --data /dev/sdb

root@ubuntu:/etc/cluster-ceph# ceph -s
  cluster:
    id:     081a571c-cb0b-452f-b583-ab4f82f8344a
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ubuntu (age 5m)
    mgr: ubuntu(active, since 4m)
    osd: 1 osds: 1 up (since 3m), 1 in (since 3m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 18 GiB / 19 GiB avail
    pgs:

root@ubuntu:/etc/cluster-ceph# ceph-deploy mds create `hostname`    ### CephFS storage requires the MDS (metadata server)

root@ubuntu:/etc/cluster-ceph# systemctl list-units 'ceph*' --type=service    ### list the running ceph services; they are enabled at boot by default
UNIT LOAD ACTIVE SUB DESCRIPTION
ceph-crash.service loaded active running Ceph crash dump collector
ceph-mgr@ubuntu.service loaded active running Ceph cluster manager daemon
ceph-mon@ubuntu.service loaded active running Ceph cluster monitor daemon
ceph-osd@0.service loaded active running Ceph object storage daemon osd.0

ceph-mds@ubuntu.service loaded active running Ceph metadata server daemon

Mount CephFS

root@ubuntu:/etc/cluster-ceph# ceph osd pool create -h    ### read the help output to learn the basic command structure

osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<rule>} {<int>} {<int>} {<int[0-]>}
    {<int[0-]>} {<float[0.0-1.0]>}                                     create pool

root@ubuntu:/etc/cluster-ceph# ceph osd pool create cephfs_data 128 128
pool 'cephfs_data' created
root@ubuntu:/etc/cluster-ceph# ceph osd pool create cephfs_metadata 128 128
pool 'cephfs_metadata' created
root@ubuntu:/etc/cluster-ceph#
root@ubuntu:/etc/cluster-ceph# ceph osd pool ls
cephfs_data
cephfs_metadata
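The 128 used above follows the common rule of thumb: roughly 100 placement groups per OSD, divided by the replica count, rounded up to a power of two. A sketch of that arithmetic (the `pg_count` helper is hypothetical, not a ceph command):

```shell
# Heuristic pg_num: (osds * 100) / replicas, rounded up to the next
# power of two.
pg_count() {
  local target=$(( $1 * 100 / $2 )) pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

pg_count 1 1    # one OSD, size=1 -> 128
```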

root@ubuntu:/etc/cluster-ceph# ceph fs new -h

fs new <fs_name> <metadata> <data> {--force} {--allow-dangerous-metadata-overlay}    make new filesystem using named pools <metadata> and <data>
root@ubuntu:/etc/cluster-ceph# ceph fs new cephfs cephfs_metadata cephfs_data   ### create the CephFS filesystem
new fs with metadata pool 2 and data pool 1
root@ubuntu:/etc/cluster-ceph#
root@ubuntu:/etc/cluster-ceph# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
root@ubuntu:/etc/cluster-ceph# mkdir /ceph   ### create the mount point

root@ubuntu:/etc/cluster-ceph# cat ceph.client.admin.keyring   ### the auth credentials needed to mount CephFS
[client.admin]
key = AQDDUD5e4S95LhAAVgxDj5jC+QxU0KEvZ6XgBA==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"

root@ubuntu:/etc/cluster-ceph# mount -o name=admin,secret=AQDDUD5e4S95LhAAVgxDj5jC+QxU0KEvZ6XgBA== -t ceph 192.168.3.103:6789:/ /ceph/
root@ubuntu:/etc/cluster-ceph#
root@ubuntu:/etc/cluster-ceph# df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs tmpfs 393M 1020K 392M 1% /run
/dev/sda1 ext4 20G 5.5G 14G 30% /
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
tmpfs tmpfs 393M 0 393M 0% /run/user/0
tmpfs tmpfs 2.0G 52K 2.0G 1% /var/lib/ceph/osd/ceph-0
192.168.3.103:6789:/ ceph 18G 0 18G 0% /ceph
root@ubuntu:/etc/cluster-ceph# ll /ceph/
total 0
root@ubuntu:/etc/cluster-ceph# touch /ceph/sb
root@ubuntu:/etc/cluster-ceph# ll /ceph/
total 0
-rw-r--r-- 1 root root 0 Feb 7 22:43 sb
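Passing the key on the mount command line leaves it visible in `ps` output and shell history. A sketch of the usual alternative: extract the key from the keyring into a root-only file and mount with `secretfile=` instead (the `keyring_key` helper is hypothetical; the keyring text is the one shown above):

```shell
# Pull the "key = ..." value out of a keyring section.
keyring_key() {
  awk -F' = ' '/^[[:space:]]*key/ {print $2; exit}'
}

keyring='[client.admin]
	key = AQDDUD5e4S95LhAAVgxDj5jC+QxU0KEvZ6XgBA=='
printf '%s\n' "$keyring" | keyring_key
# -> AQDDUD5e4S95LhAAVgxDj5jC+QxU0KEvZ6XgBA==
```

On the node: `keyring_key < ceph.client.admin.keyring > /etc/ceph/admin.secret && chmod 600 /etc/ceph/admin.secret`, then `mount -t ceph 192.168.3.103:6789:/ /ceph -o name=admin,secretfile=/etc/ceph/admin.secret`.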

###ERROR

tail -f /var/log/kern.log     ### watch the kernel log during the mount

Feb  7 22:42:08 ubuntu kernel: [ 2436.477124] libceph: mon0 192.168.3.103:6789 session established
Feb  7 22:42:08 ubuntu kernel: [ 2436.477983] libceph: client4207 fsid 081a571c-cb0b-452f-b583-ab4f82f8344a
Feb  7 22:42:08 ubuntu kernel: [ 2436.478050] ceph: probably no mds server is up
Feb  7 22:42:45 ubuntu kernel: [ 2473.042195] libceph: mon0 192.168.3.103:6789 session established
Feb  7 22:42:45 ubuntu kernel: [ 2473.042338] libceph: client4215 fsid 081a571c-cb0b-452f-b583-ab4f82f8344a

According to the log, the mount failed because the MDS service had not been created.

root@ubuntu:/etc/cluster-ceph# ceph-deploy mds create `hostname`
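To avoid this on future runs, check for an active MDS before mounting. A sketch (assuming nautilus-era `ceph mds stat` output of the form `cephfs:1 {0=ubuntu=up:active}`; `mds_ready` is a hypothetical helper):

```shell
# Succeed only if the mds stat output reports an active MDS.
mds_ready() {
  grep -q 'up:active'
}

printf 'cephfs:1 {0=ubuntu=up:active}\n' | mds_ready && echo "MDS up"
# on a real node: ceph mds stat | mds_ready || echo "create an mds first"
```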

Summary: some commands differ between releases, which is a pain; when asking questions or filing issues, always state the exact ceph-xxx version.

