1. Deploying the RadosGW service
Deploy ceph-mgr1 and ceph-mgr2 as a highly available RadosGW service.
1.1 Install the radosgw package
root@mgr1:~# apt install radosgw
root@mgr2:~# apt install radosgw
1.2 Initialize the rgw nodes
$ ceph-deploy --overwrite-conf rgw create mgr1
$ ceph-deploy --overwrite-conf rgw create mgr2
1.3 Verify the radosgw service status
$ ceph -s
  cluster:
    id:     54ed6318-9830-4152-917c-f1af7fa1002a
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum mon1,mon2,mon3 (age 3d)
    mgr: mgr1(active, since 3d), standbys: mgr2
    mds: 2/2 daemons up, 2 standby
    osd: 20 osds: 20 up (since 3d), 20 in (since 3d)
    rgw: 2 daemons active (2 hosts, 1 zones)
  data:
    volumes: 1/1 healthy
    pools:   8 pools, 217 pgs
    objects: 477 objects, 774 MiB
    usage:   11 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     217 active+clean
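This check can also be scripted, e.g. from a monitoring cron job. A minimal sketch; here the captured output above is inlined as sample input instead of calling `ceph -s` live:

```shell
# Sample of the `ceph -s` output above; in practice: status="$(ceph -s)"
status='health: HEALTH_OK
rgw: 2 daemons active (2 hosts, 1 zones)'
# Fail loudly unless the cluster reports HEALTH_OK
if echo "$status" | grep -q 'HEALTH_OK'; then
  echo "cluster healthy"
else
  echo "cluster NOT healthy" >&2
fi
# Confirm the rgw daemons show up in the service map
echo "$status" | grep 'rgw:'
```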
1.4 Verify the radosgw processes
root@mgr1:~# ps -ef |grep radosgw
ceph 1261 1 0 10:40 ? 00:00:02 /usr/bin/radosgw -f --cluster ceph --name client.rgw.mgr1 --setuser ceph --setgroup ceph
root@mgr2:~# ps -ef |grep radosgw
ceph 32197 1 0 10:41 ? 00:00:02 /usr/bin/radosgw -f --cluster ceph --name client.rgw.mgr2 --setuser ceph --setgroup ceph
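Beyond checking the process, the service itself can be probed: an anonymous HTTP request to radosgw (default port 7480) returns an empty S3 bucket listing. A sketch, with a typical response inlined in place of a live `curl http://mgr1:7480`:

```shell
# Typical anonymous response from radosgw (stands in for a live curl here)
resp='<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID></Owner><Buckets></Buckets></ListAllMyBucketsResult>'
# The S3 ListAllMyBucketsResult element confirms radosgw is serving requests
case "$resp" in
  *ListAllMyBucketsResult*) result="radosgw answered" ;;
  *) result="unexpected response" ;;
esac
echo "$result"
```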
2. Configuring the radosgw service
2.1 Customizing the port
2.1.1 The configuration file can be edited on the ceph-deploy server and then pushed out to all nodes, or each radosgw server's configuration can be edited individually to the same settings.
[client.rgw.mgr1]
rgw_host = mgr1
rgw_frontends = civetweb port=8080

[client.rgw.mgr2]
rgw_host = mgr2
rgw_frontends = civetweb port=8080
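Note: civetweb was the default rgw frontend on older releases; recent Ceph releases default to the beast frontend instead. An equivalent fragment for beast (a sketch; verify the frontend name against your release) would be:

```ini
[client.rgw.mgr1]
rgw_host = mgr1
rgw_frontends = beast port=8080

[client.rgw.mgr2]
rgw_host = mgr2
rgw_frontends = beast port=8080
```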
2.1.2 Push the configuration file to rgw1 (mgr1) and rgw2 (mgr2)
cephuser@ceph-deploy:~/ceph-cluster$ scp ceph.conf root@mgr1:/etc/ceph/
cephuser@ceph-deploy:~/ceph-cluster$ scp ceph.conf root@mgr2:/etc/ceph/
2.1.3 Restart the rgw service on the rgw nodes
root@mgr1:~# systemctl restart ceph-radosgw@rgw.mgr1.service
root@mgr2:~# systemctl restart ceph-radosgw@rgw.mgr2.service
root@mgr2:~# netstat -ntlp |grep radosgw
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      32197/radosgw
2.2 Configuring an nginx proxy
2.2.1 Install and configure nginx
apt install nginx
rm -f /etc/nginx/sites-enabled/default
vim /etc/nginx/sites-enabled/ceph_rgw.conf
upstream rgw_yanceph {
    server 192.168.2.31:8080;
    server 192.168.2.32:8080;
}
server {
    listen 80;
    server_name rgw.yanceph.com;
    charset utf-8;
    location / {
        proxy_pass http://rgw_yanceph;
        proxy_set_header Host $host;
        access_log /var/log/nginx/rgw.yanceph.com.log;
    }
}
# check the nginx configuration
nginx -t
# start nginx in the background
nginx
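With two servers and no weights, nginx's default upstream scheduling is round-robin, so requests alternate between the two rgw backends. A sketch of that behaviour (a simulation only; nginx itself does the real scheduling):

```shell
# Simulate nginx's default round-robin over the two upstream servers
servers="192.168.2.31:8080 192.168.2.32:8080"
for req in 1 2 3 4; do
  # pick the servers alternately: odd requests -> .31, even -> .32
  n=$(( (req - 1) % 2 + 1 ))
  backend=$(echo $servers | cut -d' ' -f$n)
  echo "request $req -> $backend"
done
```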
2.2.2 Configure hosts on the Windows test machine
192.168.2.2 rgw.yanceph.com
2.2.3 Access rgw.yanceph.com to test:

2.2.4 If HTTPS is required, obtain an SSL certificate for the domain and configure it in nginx; modify the configuration as follows
listen 443 ssl;
server_name rgw.yanceph.com;
ssl_certificate cert/rgw.yanceph.com.pem;
ssl_certificate_key cert/rgw.yanceph.com.key;
charset utf-8;
3. Ceph dashboard
The Ceph dashboard is a web interface for viewing the status of a running Ceph cluster and configuring its features.
3.1 Enable the dashboard plugin
Ceph mgr is a modular (plugin-based) component; its modules can be enabled or disabled individually.
root@mgr1:~# apt install ceph-mgr-dashboard
3.2 List the enabled modules
cephuser@ceph-deploy:~/ceph-cluster$ ceph mgr module ls |head -n 30
{
...
"enabled_modules": [
"dashboard",
"iostat",
"nfs",
"restful"
],
...
}
3.3 Enable the dashboard module
The Ceph dashboard is enabled on the mgr nodes, and SSL can be turned on or off.
# enable the dashboard module
$ ceph mgr module enable dashboard
# enable SSL for the dashboard module (false to disable)
$ ceph config set mgr mgr/dashboard/ssl true
# set the listen address
ceph config set mgr mgr/dashboard/mgr1/server_addr 192.168.2.31
# set the listen port
ceph config set mgr mgr/dashboard/mgr1/server_port 9009
If the cluster reports an error, restart the mgr service
cephuser@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     54ed6318-9830-4152-917c-f1af7fa1002a
    health: HEALTH_ERR
            Module 'dashboard' has failed: OSError("Port 8080 not free on '192.168.2.31'",)
root@mgr1:~# systemctl restart ceph-mgr@mgr1.service
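The HEALTH_ERR above came from pointing the dashboard at port 8080, which radosgw already occupies on this host. Before assigning a port it is worth checking that it is actually free; a sketch using a python3 bind test for portability:

```shell
# Try to bind the candidate dashboard port; a bind failure means it is taken
check_port_free() {
  python3 - "$1" <<'PY'
import socket, sys
s = socket.socket()
try:
    s.bind(("0.0.0.0", int(sys.argv[1])))
except OSError:
    sys.exit(1)
PY
}
if check_port_free 9009; then state=free; else state=in-use; fi
echo "port 9009 is $state"
```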
3.4 Check the dashboard service status
cephuser@ceph-deploy:~/ceph-cluster$ ceph mgr services
{
"dashboard": "http://192.168.2.31:9009/"
}
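For scripting (e.g. a health probe), the dashboard URL can be pulled out of that JSON. A sketch using python3, with the JSON above inlined in place of a live `ceph mgr services` call:

```shell
# In practice: services="$(ceph mgr services)"
services='{"dashboard": "http://192.168.2.31:9009/"}'
# Extract the dashboard endpoint from the JSON service map
url=$(echo "$services" | python3 -c 'import json,sys; print(json.load(sys.stdin)["dashboard"])')
echo "$url"
```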
3.5 Set the dashboard username and password
$ touch ceph-dashboard-passwd.txt
$ echo admin123123 >> ceph-dashboard-passwd.txt
$ ceph dashboard set-login-credentials admin -i ceph-dashboard-passwd.txt
******************************************************************
*** WARNING: this command is deprecated. ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
(On recent releases, use the ac-user-* commands instead, e.g. ceph dashboard ac-user-create admin -i ceph-dashboard-passwd.txt administrator.)
3.6 Verify and access the dashboard
3.7 Configure dashboard SSL
# generate a ceph self-signed certificate
$ ceph dashboard create-self-signed-cert
Self-signed certificate created
$ ceph config set mgr mgr/dashboard/ssl true
If the change does not take effect, restart the mgr service
root@mgr1:~# systemctl restart ceph-mgr@mgr1.service
3.8 Check the dashboard SSL endpoint
cephuser@ceph-deploy:~/ceph-cluster$ ceph mgr services
{
"dashboard": "https://192.168.2.31:8443/"
}
3.9 Access the dashboard over HTTPS

4. Monitoring the Ceph node hosts with Prometheus
4.1 Deploy Prometheus
# create a dedicated directory
mkdir /apps
cd /apps
# download the release tarball
root@mgr1:/apps# wget https://mirrors.tuna.tsinghua.edu.cn/github-release/prometheus/prometheus/LatestRelease/prometheus-2.29.2.linux-amd64.tar.gz
# extract the tarball
root@mgr1:/apps# tar xf prometheus-2.29.2.linux-amd64.tar.gz
# create a symlink; future upgrades then only require updating the link
root@mgr1:/apps# ln -s /apps/prometheus-2.29.2.linux-amd64 /apps/prometheus
4.2 Configure Prometheus
# create the unit file
# vim /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network.target
[Service]
Restart=on-failure
WorkingDirectory=/apps/prometheus/
ExecStart=/apps/prometheus/prometheus --config.file=/apps/prometheus/prometheus.yml
[Install]
WantedBy=multi-user.target
# start the service
root@mgr1:/apps/prometheus# systemctl daemon-reload
root@mgr1:/apps/prometheus# systemctl restart prometheus
root@mgr1:/apps/prometheus# systemctl enable prometheus
4.3 Access Prometheus in a browser
4.4 Deploy node_exporter
Install node_exporter on each node (osd) host
# create the directory
root@osd1:~# mkdir /apps && cd !$
# download the release tarball
root@osd1:/apps# wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
# extract the tarball
root@osd1:/apps# tar xf node_exporter-1.2.2.linux-amd64.tar.gz
# create a symlink for easier future upgrades
root@osd1:/apps# ln -sv /apps/node_exporter-1.2.2.linux-amd64 /apps/node_exporter
4.5 Configure node_exporter
# create the unit file
root@osd1:/apps# vim /etc/systemd/system/node-exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network.target
[Service]
ExecStart=/apps/node_exporter/node_exporter
[Install]
WantedBy=multi-user.target
# start the service
root@osd1:/apps# systemctl daemon-reload
root@osd1:/apps# systemctl restart node-exporter
root@osd1:/apps# systemctl enable node-exporter
# check the process and listen port
root@osd1:/apps# ps -ef |grep exporter
root 29396 1 0 16:23 ? 00:00:00 /apps/node_exporter/node_exporter
root 29457 29017 0 16:23 pts/0 00:00:00 grep --color=auto exporter
root@osd1:/apps# netstat -ntlp |grep exporter
tcp6 0 0 :::9100 :::* LISTEN 29396/node_exporter
The other node hosts need the same installation.
4.6 Configure the prometheus server: add scrape targets for the node hosts
root@mgr1:/apps/prometheus# vim prometheus.yml
  - job_name: 'ceph-node-data'
    static_configs:
    - targets: ['192.168.2.41:9100','192.168.2.42:9100','192.168.2.43:9100','192.168.2.44:9100']
# restart the service
root@mgr1:/apps/prometheus# systemctl restart prometheus.service
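A quick way to confirm the four targets respond before waiting for prometheus to scrape them is a curl loop. Sketched here with the live calls commented out so the loop itself runs anywhere:

```shell
# node_exporter targets from the scrape config above
for t in 192.168.2.41 192.168.2.42 192.168.2.43 192.168.2.44; do
  echo "checking http://$t:9100/metrics"
  # Uncomment on a host that can actually reach the nodes:
  # curl -fsS --max-time 2 "http://$t:9100/metrics" | head -n 1
done
```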
4.7 Access in a browser

5. Monitoring the Ceph cluster with Prometheus
The Ceph manager includes a built-in prometheus module that listens on port 9283 of each manager node; that port serves the collected metrics to prometheus over an HTTP endpoint.
5.1 Enable the Ceph prometheus module
$ ceph mgr module enable prometheus
5.2 Verify the listen port
root@mgr1:/apps/prometheus# netstat -ntlp |grep 9283
tcp 0 0 192.168.2.31:9283 0.0.0.0:* LISTEN 2687/ceph-mgr
5.3 Verify the manager metrics
curl "http://192.168.2.31:9283/metrics"
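The module returns Prometheus plain-text exposition format; for example, `ceph_health_status` reports overall cluster health (0 = HEALTH_OK, 1 = HEALTH_WARN, 2 = HEALTH_ERR). A sketch parsing a sample of such output (sample lines stand in for a live curl):

```shell
# Sample exposition-format lines as returned by the mgr prometheus module
metrics='# HELP ceph_health_status Cluster health status
# TYPE ceph_health_status untyped
ceph_health_status 0.0'
# Extract the health value from the sample
health=$(echo "$metrics" | awk '$1 == "ceph_health_status" {print $2}')
echo "health=$health"
```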
5.4 Configure prometheus to scrape the data
root@mgr1:/apps/prometheus# vim prometheus.yml
  - job_name: 'ceph-cluster-data'
    static_configs:
    - targets: ['192.168.2.31:9283']
6. Displaying the metrics in Grafana
Use grafana to display the Ceph cluster metrics and the node metrics.
6.1 Install grafana
# install dependencies
root@mgr1:/apps# apt-get install -y adduser libfontconfig1
# download the package
root@mgr1:/apps# wget https://dl.grafana.com/oss/release/grafana_8.1.2_amd64.deb
# install the package
root@mgr1:/apps# dpkg -i grafana_8.1.2_amd64.deb
# start the service
root@mgr1:/apps# systemctl restart grafana-server.service
# enable at boot
root@mgr1:/apps# systemctl enable grafana-server.service
6.2 Access in a browser
The initial username and password are admin:admin
6.3 Configure the data source
configuration -> Data sources

6.4 Import dashboard templates
create -> import
Dashboard ID: 2842

Dashboard ID: 5346

