Building highly available distributed storage with CephFS and mounting it locally


Server hardware and environment

Item               Description
CPU                1 core
Memory             1 GB
Disk               40 GB
OS                 CentOS 7.5
Time sync service  chrony
ceph               13.2.2-0

Node deployment diagram

[Figure: node deployment diagram]

Node roles

Item              Description
yum repo          If every node in the deployment environment can reach the internet, nothing needs to be done; the deployment scripts add the public yum repos automatically. If there is no internet access, you must host the centos, epel, and ceph yum repos yourself. Every node must be able to reach all of these repos.
Time sync server  Must be reachable from every node; if the environment has no internet access, set up your own time sync server.
client-x          The machines that mount the storage; they must be able to reach every storage-ha-x node, the yum repos, and the time server.
storage-deploy-1  The workstation used to drive the ceph cluster deployment; runs CentOS 7.5.
storage-ha-x      The server nodes that host the ceph services; run CentOS 7.5.
mon               Monitors: cluster map and authentication management; at least 3 nodes are needed for redundancy and high availability.
osd               Object storage daemons: the object storage service; at least 3 nodes are needed for redundancy and high availability.
mgr               Manager: tracks runtime metrics, cluster state, and performance.
mds               Metadata Server: stores the metadata for cephfs.

Default ports

Item          Description
ssh           tcp: 22
mon           tcp: 6789
mds/mgr/osd   tcp: 6800-7300
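The deployment scripts below simply disable firewalld; if you would rather keep the firewall on, the ports above can be opened explicitly instead. A minimal sketch, assuming firewalld's default zone on each storage-ha-x node:

# Alternative to disabling firewalld: open the default ceph ports
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=6789/tcp
firewall-cmd --permanent --add-port=6800-7300/tcp
firewall-cmd --reload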

Default paths

Item                    Description
Main config file        /etc/ceph/ceph.conf
Config directory        /etc/ceph
Log directory           /var/log/ceph
Per-service keyring     /var/lib/ceph/{service name}/{hostname}/keyring
admin keyring           ceph.client.admin.keyring
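As an illustration of the keyring path pattern (hostnames taken from the deployment above; note that osd directories are keyed by OSD id rather than hostname):

# Example keyring locations on storage-ha-1
cat /var/lib/ceph/mon/ceph-storage-ha-1/keyring
cat /var/lib/ceph/mds/ceph-storage-ha-1/keyring
# osd keyrings use the OSD id, e.g.:
cat /var/lib/ceph/osd/ceph-0/keyring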

Deployment scripts

  • node-init.sh: initialization script run first on each storage-ha-x node.
  • admin-init.sh: initialization script run on storage-deploy-1; run it only after node-init.sh has completed on every storage-ha-x node.
  • ceph-deploy.sh: the ceph deployment script; run it only on storage-deploy-1, and only after both node-init.sh and admin-init.sh have completed successfully.

NOTE: Adjust the IPs and any other environment-specific values in the scripts before running them.

Running the scripts

Place the scripts from the 'Appendix: script contents' section or the 'Script Git repository' section anywhere on the corresponding servers and run them in order with the commands below.

NOTE: The scripts must be executed strictly in the order given in the 'Deployment scripts' section.

NOTE: Adjust anything that differs from your environment (IPs, yum repos, passwords, hostnames, etc.) before running.
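For example, if your nodes sit on a different subnet, the addresses can be rewritten in one pass before anything is run (a sketch; 10.0.0.x is a hypothetical target subnet):

# Hypothetical example: rewrite 192.168.60.x to 10.0.0.x in all three scripts
sed -i 's/192\.168\.60\./10.0.0./g' node-init.sh admin-init.sh ceph-deploy.sh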

  • Initialize the ceph nodes

/bin/bash node-init.sh

Command output:

[Screenshot: ceph node initialization]

  • Initialize the deploy node

/bin/bash admin-init.sh

Command output:

[Screenshots: deploy node initialization]

  • Start the deployment

/bin/bash ceph-deploy.sh

Command output:

[Screenshot: deployment result 1]

Under pgs in the output above you may see creating+peering, which means the OSDs are still being created and syncing; wait for this to finish.

Meanwhile, you can run the following command on any storage-ha-x node that has the admin role to check whether it has completed:

ceph -s

When pgs shows active+clean as in the figure below, all nodes have finished syncing.

[Screenshot: deployment result 2]

If the cluster never reaches the active+clean state, see the following article:
TROUBLESHOOTING PGS
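While waiting, the PG states can also be followed continuously instead of re-running ceph -s by hand:

# One-shot summary of PG states
ceph pg stat
# Or stream cluster state changes until interrupted with Ctrl-C
ceph -w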

Mounting the storage

Creating a test user

Run the following on any storage-ha-x server:

# This creates a user named client.fs-test-1 with read-only access to the
# mount root '/' and read-write access to the '/test_1' directory.
ceph fs authorize cephfs client.fs-test-1 / r /test_1 rw
# The command prints something like:
# [client.fs-test-1]
# key = AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==

Retrieving user credentials

Run the following on any storage-ha-x server that has the admin role:

# Get the credentials of the client.admin user
ceph auth get client.admin
# The command prints something like:
# [client.admin]
# key = AQAm4L5b60alLhAARxAgr9jQDLopr9fbXfm87w==
# caps mds = "allow *"
# caps mgr = "allow *"
# caps mon = "allow *"
# caps osd = "allow *"

# Get the credentials of the client.fs-test-1 user
ceph auth get client.fs-test-1
# The command prints something like:
# [client.fs-test-1]
# key = AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==
# caps mds = "allow r, allow rw path=/test_1"
# caps mon = "allow r"
# caps osd = "allow rw tag cephfs data=cephfs"

Mount methods

There are two mount methods: the cephfs kernel driver and fuse. Pick one of the two.

For the differences and trade-offs between the two methods, see:
WHICH CLIENT?

cephfs (kernel driver) method

Run the following on any client that needs to mount the storage.

NOTE: This mount method depends on the ceph package; add the ceph and epel yum repos first.

# Mount via the cephfs kernel driver
yum install ceph -y
mkdir -p /etc/ceph
mkdir -p /mnt/mycephfs
# Replace the secret below with the 'key' obtained in the 'Retrieving user credentials' section
cat > /etc/ceph/admin_secret.key << EOF
AQAm4L5b60alLhAARxAgr9jQDLopr9fbXfm87w==
EOF

# Replace the secret below with the 'key' obtained in the 'Retrieving user credentials' section
cat > /etc/ceph/test_cephfs_1_secret.key << EOF
AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==
EOF

# Mount the cephfs root as the 'admin' user
# Adjust the IPs or hostnames to your environment
# 'name=admin' is the part of 'client.admin' after the dot.
mount.ceph 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin_secret.key

# Mount as the restricted user (read-only on '/', read-write on '/test_1')
mkdir -p /mnt/mycephfs/test_1
mkdir -p /mnt/test_cephfs_1
# Mount the cephfs root as the 'fs-test-1' user
# Adjust the IPs or hostnames to your environment
# 'name=fs-test-1' is the part of 'client.fs-test-1' after the dot.
mount.ceph 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789:/ /mnt/test_cephfs_1 -o name=fs-test-1,secretfile=/etc/ceph/test_cephfs_1_secret.key

# Mount automatically at boot
# (the secretfile below must point at the file created above)
cat >> /etc/fstab << EOF
192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789:/ /mnt/mycephfs ceph name=admin,secretfile=/etc/ceph/admin_secret.key,noatime,_netdev 0 2
EOF
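To confirm that the caps granted to client.fs-test-1 actually behave as intended, a quick check on the client (using the mounts created above):

# Should fail with 'Permission denied': '/' is read-only for fs-test-1
touch /mnt/test_cephfs_1/should_fail.txt
# Should succeed: '/test_1' is read-write for fs-test-1
touch /mnt/test_cephfs_1/test_1/should_succeed.txt
ls -l /mnt/test_cephfs_1/test_1/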

fuse method

Run the following on any client that needs to mount the storage.

NOTE: This mount method depends on ceph-fuse; add the ceph and epel yum repos first.

yum install ceph-fuse -y
mkdir -p /etc/ceph
mkdir -p /mnt/mycephfs

# Fetch the ceph config file from any storage-ha-x node
scp storage@storage-ha-1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf

# Replace the keys below with the 'key' values obtained in the 'Retrieving user credentials' section
cat > /etc/ceph/ceph.keyring << EOF
[client.admin]
key = AQAm4L5b60alLhAARxAgr9jQDLopr9fbXfm87w==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[client.fs-test-1]
key = AQA0Cr9b9afRDBAACJ0M8HxsP41XmLhbSxWkqA==
caps mds = "allow r, allow rw path=/test_1"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=cephfs"
EOF

# Mount the cephfs root as the 'admin' user
# Adjust the IPs or hostnames to your environment
ceph-fuse -m 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789 /mnt/mycephfs
# Mount automatically at boot
cat >> /etc/fstab << EOF
none /mnt/mycephfs fuse.ceph ceph.id=admin,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
EOF

# Mount as the restricted user (read-only on '/', read-write on '/test_1')
mkdir -p /mnt/mycephfs/test_1
mkdir -p /mnt/test_cephfs_1
# Mount the cephfs root as the 'fs-test-1' user
# Adjust the IPs or hostnames to your environment
# '-n client.fs-test-1' is the full 'client.fs-test-1' name.
ceph-fuse -m 192.168.60.111:6789,192.168.60.112:6789,192.168.60.113:6789 -n client.fs-test-1 /mnt/test_cephfs_1
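If the restricted mount should also come up at boot, a matching fstab entry can be added using the same fuse.ceph syntax as above (a sketch):

# Boot-time fuse mount for the 'fs-test-1' user
cat >> /etc/fstab << EOF
none /mnt/test_cephfs_1 fuse.ceph ceph.id=fs-test-1,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults 0 0
EOF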

Mount results

Check the result with:

df -h

[Screenshot: mount results]

Operations commands

  • Status checks

# Overall cluster status
ceph -s

# Cluster health
ceph health

# Detailed cluster health
ceph health detail

# List cephfs filesystems
ceph fs ls

# MDS status
ceph mds stat

# OSD tree / node status
ceph osd tree

# Monitor quorum status
ceph quorum_status --format json-pretty
  • Simple write performance test

# A quick write test on a client that has the storage mounted
time dd if=/dev/zero of=/mnt/mycephfs/test.dbf bs=8k count=3000 oflag=direct

[Screenshot: test results]
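A matching read test can be run the same way; a sketch (drop the page cache first so the read actually hits the cluster rather than local memory):

# Simple read test of the file written above (run as root)
echo 3 > /proc/sys/vm/drop_caches
time dd if=/mnt/mycephfs/test.dbf of=/dev/null bs=8k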

  • Deleting the cephfs and its pools

# Preparation before deleting the cephfs:
# stop the mds service on every mds node
# (run on every mds node)
systemctl stop ceph-mds.target

# Run the following on any one 'storage-ha-x' node
# Delete the cephfs
ceph fs rm cephfs --yes-i-really-mean-it

# Delete the pools
# Deleting a pool requires repeating the pool name twice plus the
# '--yes-i-really-really-mean-it' flag
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

# Start the mds service again on every mds node
# (run on every mds node)
systemctl start ceph-mds.target
  • Syncing the ceph config file

# Sync the config file
# If a node already has a config file that differs from the one being pushed,
# the '--overwrite-conf' flag is required
# This pushes the 'ceph.conf' in the current directory to the listed nodes
ceph-deploy --overwrite-conf config push storage-ha-1 storage-ha-2 storage-ha-3

# Restart the ceph services on every node
# Run each of the following on the nodes that host the corresponding role
systemctl restart ceph-osd.target
systemctl restart ceph-mds.target
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
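To confirm that a pushed option actually took effect after the restart, the live value can be queried through the daemon's admin socket; a sketch, run on a mon node (the daemon name is assumed to be the node's short hostname, which is what ceph-deploy uses):

# Query the running mon for a config option, e.g. mon_max_pg_per_osd
ceph daemon mon.$(hostname -s) config show | grep mon_max_pg_per_osd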

FAQ

  • Q: health_warn: clock skew detected on mon
    A: Use chrony to synchronize the clocks of all server nodes.

  • Q: Error ERANGE: pg_num "*" size "*" would mean "*" total pgs, which exceeds max "*" (mon_max_pg_per_osd 250 num_in_osds "*")
    A: Add mon_max_pg_per_osd = 1000 (adjust the value to your situation) to ceph.conf, push it to every node as described in the 'Syncing the ceph config file' section, and restart ceph-mon.target. A sketch of these steps follows this list.

  • Q: too many PGs per OSD
    A: Same fix as above: raise mon_max_pg_per_osd in ceph.conf, push the config to every node, and restart ceph-mon.target.
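A minimal sketch of the fix for the two pg-related warnings above, run from the ${HOME}/ceph-cluster directory on storage-deploy-1 (1000 is just the example value from the answers):

# Append the option to the cluster config kept on the deploy node
cat >> ceph.conf << EOF
mon_max_pg_per_osd = 1000
EOF
# Push it to every node and restart the monitors
ceph-deploy --overwrite-conf config push storage-ha-1 storage-ha-2 storage-ha-3
ssh storage@storage-ha-1 'sudo systemctl restart ceph-mon.target'
ssh storage@storage-ha-2 'sudo systemctl restart ceph-mon.target'
ssh storage@storage-ha-3 'sudo systemctl restart ceph-mon.target'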

References

ceph cephx authentication reference
Setting cephfs access permissions
ceph user management
Mounting with ceph-fuse
Ceph operations manual
Red Hat Ceph Storage: "Understanding the Ceph Architecture"
Routine Ceph operations and common issues

Script Git repository

https://github.com/x22x22/cephfs-verify-script

Appendix: script contents

  • node-init.sh
#!/bin/bash

# Disable ipv6 and raise the kernel pid limit
cat >>/etc/sysctl.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
kernel.pid_max = 4194303
EOF

sysctl -p
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Write this environment's hostname/IP mappings into /etc/hosts as a simple stand-in for a DNS server
cat >>/etc/hosts <<EOF

192.168.60.110 storage-deploy-1
192.168.60.111 storage-ha-1
192.168.60.112 storage-ha-2
192.168.60.113 storage-ha-3
EOF

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

# Add a 'storage' user that ceph-deploy will use to install and manage the nodes
useradd -d /home/storage -m storage
echo 'fullstackmemo***' | passwd --stdin storage
echo "storage ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/storage
chmod 0440 /etc/sudoers.d/storage

# Add the ceph yum repo; if you cannot reach the internet, host your own mirror and adjust this
cat >/etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

EOF

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Replace the CentOS base yum repo; if you cannot reach the internet, host your own mirror and adjust this
cat >/etc/yum.repos.d/CentOS-Base.repo <<'EOF'
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF

yum makecache fast
# Install the epel yum repo
yum install -y epel-release

# Replace the epel yum repo; if you cannot reach the internet, host your own mirror and adjust this
cat >/etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

yum makecache
yum install yum-plugin-priorities chrony parted xfsprogs -y
mv /etc/chrony.conf /etc/chrony.conf.bk

# Configure the time sync server; if you cannot reach the internet, set up your own and adjust this
cat > /etc/chrony.conf << EOF
server ntp.api.bz iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF

systemctl enable chronyd
systemctl restart chronyd
sleep 10
chronyc activity
chronyc sources -v
hwclock -w

# /dev/sdb is used here as the ceph storage disk, so wipe /dev/sdb first; adjust to your actual disk layout
parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
partprobe /dev/sdb
mkfs.xfs /dev/sdb -f
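After node-init.sh finishes on a node, a quick sanity check can confirm time sync and the prepared disk before moving on (a sketch):

# chrony should report a valid time source
chronyc tracking
# /dev/sdb should now show a gpt label and an xfs filesystem signature
lsblk -f /dev/sdb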
  • admin-init.sh
#!/bin/bash

# Disable ipv6
cat >>/etc/sysctl.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF

sysctl -p
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Write this environment's hostname/IP mappings into /etc/hosts as a simple stand-in for a DNS server
cat >>/etc/hosts <<EOF

192.168.60.110 storage-deploy-1
192.168.60.111 storage-ha-1
192.168.60.112 storage-ha-2
192.168.60.113 storage-ha-3
EOF

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config

# Add the ceph yum repo; if you cannot reach the internet, host your own mirror and adjust this
cat >/etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1

EOF

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Replace the CentOS base yum repo; if you cannot reach the internet, host your own mirror and adjust this
cat >/etc/yum.repos.d/CentOS-Base.repo <<'EOF'
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF

yum makecache fast
# Install the epel yum repo
yum install -y epel-release

# Replace the epel yum repo; if you cannot reach the internet, host your own mirror and adjust this
cat >/etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF

yum makecache
yum install yum-plugin-priorities chrony sshpass ceph-deploy ceph -y
mv /etc/chrony.conf /etc/chrony.conf.bk

# Configure the time sync server; if you cannot reach the internet, set up your own and adjust this
cat > /etc/chrony.conf << EOF
server ntp.api.bz iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF

systemctl enable chronyd
systemctl restart chronyd
sleep 10
chronyc activity
chronyc sources -v
hwclock -w

rm -f "${HOME}"/.ssh/ceph_id_rsa
ssh-keygen -t rsa -b 4096 -f "${HOME}"/.ssh/ceph_id_rsa -N ''
cat >"${HOME}"/.ssh/config <<EOF
Host storage-ha-1
Hostname storage-ha-1
User storage
IdentityFile ${HOME}/.ssh/ceph_id_rsa
IdentitiesOnly yes
StrictHostKeyChecking no
Host storage-ha-2
Hostname storage-ha-2
User storage
IdentityFile ${HOME}/.ssh/ceph_id_rsa
IdentitiesOnly yes
StrictHostKeyChecking no
Host storage-ha-3
Hostname storage-ha-3
User storage
IdentityFile ${HOME}/.ssh/ceph_id_rsa
IdentitiesOnly yes
StrictHostKeyChecking no
EOF
chmod 0400 "${HOME}"/.ssh/config
sshpass -p "fullstackmemo***" ssh-copy-id -i ~/.ssh/ceph_id_rsa storage@storage-ha-1
sshpass -p "fullstackmemo***" ssh-copy-id -i ~/.ssh/ceph_id_rsa storage@storage-ha-2
sshpass -p "fullstackmemo***" ssh-copy-id -i ~/.ssh/ceph_id_rsa storage@storage-ha-3

mkdir -p "${HOME}"/ceph-cluster
cd "${HOME}"/ceph-cluster || exit
  • ceph-deploy.sh
#!/bin/bash

mkdir -p "${HOME}"/ceph-cluster
cd "${HOME}"/ceph-cluster || exit
ceph-deploy new storage-ha-1 storage-ha-2 storage-ha-3

cat >>ceph.conf <<EOF
# 'public network':
# the subnet the whole cluster lives on
# adjust to your environment
public network = 192.168.60.0/24
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 100
osd pool default pgp num = 100
# 'mon allow pool delete':
# allows pools to be deleted; enabled for convenience in this PoC environment, comment it out in production
mon allow pool delete = true

[osd]
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
EOF

# Install ceph on each node, pointing at a public ceph yum mirror; if you cannot reach the internet, host your own and adjust this
ceph-deploy install storage-ha-1 storage-ha-2 storage-ha-3 --repo-url http://mirrors.ustc.edu.cn/ceph/rpm-mimic/el7 --gpg-url 'http://mirrors.ustc.edu.cn/ceph/keys/release.asc'
# Initialize the mon service and key material
ceph-deploy mon create-initial
ceph-deploy mon add storage-ha-2
ceph-deploy mon add storage-ha-3
ceph-deploy admin storage-ha-1 storage-ha-2 storage-ha-3
ceph-deploy mgr create storage-ha-1 storage-ha-2 storage-ha-3

# Add the raw disks on the storage nodes as OSDs
ceph-deploy osd create --data /dev/sdb storage-ha-1
ceph-deploy osd create --data /dev/sdb storage-ha-2
ceph-deploy osd create --data /dev/sdb storage-ha-3

ceph-deploy mds create storage-ha-1 storage-ha-2 storage-ha-3

ssh storage@storage-ha-1 << EOF
# Create the two pools cephfs needs: one for metadata, one for data
sudo ceph osd pool create cephfs_data 100
# To store data RAID-5 style, use an erasure-coded pool; erasure beats replicated when the average file size is above 8k.
# sudo ceph osd pool create cephfs_data 100 100 erasure
# sudo ceph osd pool set cephfs_data allow_ec_overwrites true
# The metadata pool must be a replicated pool.
sudo ceph osd pool create cephfs_metadata 100
# Skip this step if you used an erasure-coded pool
sudo ceph osd pool set cephfs_data size 3

sudo ceph osd pool set cephfs_metadata size 3
sudo ceph fs new cephfs cephfs_metadata cephfs_data

# Inspect the cluster
sudo ceph quorum_status --format json-pretty
sudo ceph fs ls
sudo ceph mds stat
sudo ceph health
sudo ceph -s
EOF
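Once ceph-deploy.sh completes, the filesystem and pool layout can be double-checked from any node with the admin keyring (a sketch; both commands are available in mimic):

# Show the cephfs, its active/standby mds daemons, and pool usage
sudo ceph fs status cephfs
# List the pools together with their size/min_size settings
sudo ceph osd pool ls detail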

