Terminology
Region: a logical, geography-based partition, e.g. North China or South China. A Ceph cluster that contains multiple regions must designate one master region; a region contains one or more zones.
Zone: roughly an availability zone; it contains a group of Ceph rgw instances. Each region must designate a master zone to handle client requests.
Deployment topology
The multi-zone topology described in this article is:

```
        Ceph
          |
          SH
         /  \
     SH-1    SH-2
       |       |
  SH-SH-1   SH-SH-2
```

A region named SH is configured in the Ceph cluster, containing two zones, SH-1 and SH-2. SH-1 is set as the master and SH-2 as the backup; data can be replicated between them with radosgw-agent. Each zone runs one rgw instance, SH-SH-1 and SH-SH-2 respectively.
rgw components
As a client, rgw comprises the following basic elements:

- rgw instance name; in this article the two instances are SH-SH-1 and SH-SH-2
- rgw instance user
- storage pools
- a configuration section in ceph.conf
- the instance's runtime data directory
- the frontend configuration file
Configuring rgw
Creating pools
Ceph rgw uses multiple pools to store its configuration and user data. If the rgw user created later has the required permissions, some pools will be created automatically when the instance starts; it is nevertheless generally recommended to create them yourself. To tell the zones apart, each pool name is prefixed with .{region-name}-{zone-name}. The pools for SH-1 and SH-2 are:
```
.SH-SH-1.rgw.root
.SH-SH-1.rgw.control
.SH-SH-1.rgw.gc
.SH-SH-1.rgw.buckets
.SH-SH-1.rgw.buckets.index
.SH-SH-1.rgw.buckets.extra
.SH-SH-1.log
.SH-SH-1.intent-log
.SH-SH-1.usage
.SH-SH-1.users
.SH-SH-1.users.email
.SH-SH-1.users.swift
.SH-SH-1.users.uid
.SH-SH-2.rgw.root
.SH-SH-2.rgw.control
.SH-SH-2.rgw.gc
.SH-SH-2.rgw.buckets
.SH-SH-2.rgw.buckets.index
.SH-SH-2.rgw.buckets.extra
.SH-SH-2.log
.SH-SH-2.intent-log
.SH-SH-2.usage
.SH-SH-2.users
.SH-SH-2.users.email
.SH-SH-2.users.swift
.SH-SH-2.users.uid
```
Pools are created with the following command:

```
ceph osd pool create {pool_name} 128 128
```

Note: do not forget the leading '.' in the pool names, otherwise the rgw instance will fail to start.
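Rather than issuing the create command 26 times by hand, the pool list above can be generated with a small loop. A minimal sketch, shown as a dry run that only prints the commands; remove the echo to execute them against a live cluster:

```shell
# Print the pool-creation command for every pool of both zones.
# Remove "echo" to actually run the commands.
for zone in SH-SH-1 SH-SH-2; do
  for suffix in rgw.root rgw.control rgw.gc rgw.buckets rgw.buckets.index \
                rgw.buckets.extra log intent-log usage users users.email \
                users.swift users.uid; do
    echo ceph osd pool create ".${zone}.${suffix}" 128 128
  done
done
```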
Creating the rgw users and keys

- Create the keyring file

Create a keyring file under /etc/ceph/ and make it readable:

```
#ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
#chmod +r /etc/ceph/ceph.client.radosgw.keyring
```

- Create the rgw users and keys

Generate a user and key for each instance and store them in the keyring file created above:

```
#ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.SH-SH-1 --gen-key
#ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.SH-SH-2 --gen-key
```

- Grant capabilities

Give the users just created the appropriate capabilities:

```
#ceph-authtool -n client.radosgw.SH-SH-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
#ceph-authtool -n client.radosgw.SH-SH-2 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
```

- Register the users

Add the users to the Ceph cluster:

```
#ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.SH-SH-1 -i /etc/ceph/ceph.client.radosgw.keyring
#ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.SH-SH-2 -i /etc/ceph/ceph.client.radosgw.keyring
```
Adding the rgw instance settings to the configuration file

- Add the following to ceph.conf:

```
[global]
rgw region root pool = .SH.rgw.root             # stores the region info; created automatically

[client.radosgw.SH-SH-1]                        # instance name, format: {type}.{id}
rgw region = SH                                 # region name
rgw zone = SH-1                                 # zone name
rgw zone root pool = .SH-SH-1.rgw.root          # root pool, stores the zone info
keyring = /etc/ceph/ceph.client.radosgw.keyring # keyring file
rgw dns name = {hostname}                       # DNS name
;rgw socket path = /var/run/ceph/$name.sock     # unix socket path
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
host = {host-name}                              # host name, as given by `hostname -s`

[client.radosgw.SH-SH-2]
rgw region = SH
rgw zone = SH-2
rgw zone root pool = .SH-SH-2.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
host = {host-name}
```
The meaning of each setting is explained in the comments above. One point deserves emphasis: rgw socket path and the socket_port in rgw frontends are mutually exclusive, with rgw socket path taking precedence. If rgw socket path is set, the rgw instance listens on a unix socket and ignores socket_port; only when rgw socket path is absent does the instance listen on socket_port and socket_host.
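For example, switching SH-SH-1 to a unix socket only requires uncommenting rgw socket path (a sketch; the socket path shown is an arbitrary choice, and the frontend's ProxyPass would have to point at the same socket):

```
[client.radosgw.SH-SH-1]
rgw socket path = /var/run/ceph/client.radosgw.SH-SH-1.sock
# socket_port/socket_host in "rgw frontends" are now ignored:
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
```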
- Update the configuration on the ceph nodes

Push the configuration file to the rgw instance nodes with ceph-deploy:

```
#ceph-deploy --overwrite-conf config push {inst-1} {inst-2}
```
Creating the Region

- Create the region configuration file

Create a configuration file named sh.json containing the region information (JSON does not allow comments, so the fields are described below):

```json
{
  "name": "SH",
  "api_name": "SH",
  "is_master": "true",
  "endpoints": "",
  "master_zone": "SH-1",
  "zones": [
    { "name": "SH-1",
      "endpoints": ["http://{fqdn}:80/"],
      "log_meta": "true",
      "log_data": "true" },
    { "name": "SH-2",
      "endpoints": ["http://{fqdn}:80/"],
      "log_meta": "true",
      "log_data": "true" }
  ],
  "placement_targets": [
    { "name": "default-placement", "tags": [] }
  ],
  "default_placement": "default-placement"
}
```

name is the region name, is_master marks this region as the master, and master_zone names the master zone. The endpoints are the addresses used for replication between regions and between zones; they should point at the rgw frontends ({fqdn} above must be replaced with the host name). In a single-region or single-zone setup they can be left unset.

placement_targets lists the available placement targets; only one is configured here. The placement_pools settings of each zone take their values from this list; the placement pools store the zone's user data, such as buckets and bucket indexes.
- Create the region

Create the region from the sh.json file above (running this on one instance is sufficient):

```
#radosgw-admin region set --infile sh.json --name client.radosgw.SH-SH-1
```
- Set the default region:

```
#radosgw-admin region default --rgw-region=SH --name client.radosgw.SH-SH-1
```
- Update the region map:

```
#radosgw-admin regionmap update --name client.radosgw.SH-SH-1
#radosgw-admin regionmap update --name client.radosgw.SH-SH-2
```
Creating the Zones

- Create two configuration files named sh-1.json and sh-2.json, holding the information for zones SH-1 and SH-2 respectively. sh-1.json is shown below (sh-2.json is identical except that every SH-SH-1 is replaced with SH-SH-2):

```json
{
  "domain_root": ".SH-SH-1.domain.rgw",
  "control_pool": ".SH-SH-1.rgw.control",
  "gc_pool": ".SH-SH-1.rgw.gc",
  "log_pool": ".SH-SH-1.log",
  "intent_log_pool": ".SH-SH-1.intent-log",
  "usage_log_pool": ".SH-SH-1.usage",
  "user_keys_pool": ".SH-SH-1.users",
  "user_email_pool": ".SH-SH-1.users.email",
  "user_swift_pool": ".SH-SH-1.users.swift",
  "user_uid_pool": ".SH-SH-1.users.uid",
  "system_key": { "access_key": "", "secret_key": "" },
  "placement_pools": [
    { "key": "default-placement",
      "val": { "index_pool": ".SH-SH-1.rgw.buckets.index",
               "data_pool": ".SH-SH-1.rgw.buckets" } }
  ]
}
```
Note the placement_pools setting, which stores the bucket and bucket-index data: each key must be one of the names defined in the region's placement_targets.
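Side by side, these are the two fields that must agree (fragments repeating the values used above):

```
# sh.json (region): the available target names
"placement_targets": [ { "name": "default-placement", "tags": [] } ]

# sh-1.json (zone): each key must be one of those names
"placement_pools": [ { "key": "default-placement", ... } ]
```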
- Create zone SH-1:

```
#radosgw-admin zone set --rgw-zone=SH-1 --infile sh-1.json --name client.radosgw.SH-SH-1
```
- Create zone SH-2:

```
#radosgw-admin zone set --rgw-zone=SH-2 --infile sh-2.json --name client.radosgw.SH-SH-2
```
- Update the region map:

```
#radosgw-admin regionmap update --name client.radosgw.SH-SH-1
#radosgw-admin regionmap update --name client.radosgw.SH-SH-2
```
Creating the Zone users
Zone user information is stored in the zone's pools, so the zones must be configured before the users are created. After creating the users, keep their access_key and secret_key; they are needed later to update the zone information:

```
#radosgw-admin user create --uid="sh-1" --display-name="Region-SH Zone-SH-1" --name client.radosgw.SH-SH-1 --system
#radosgw-admin user create --uid="sh-2" --display-name="Region-SH Zone-SH-2" --name client.radosgw.SH-SH-2 --system
```
Updating the Zones
Copy the access_key and secret_key of the two users just created into sh-1.json and sh-2.json respectively (the files created in the zone section), then update the zone configuration:

```
#radosgw-admin zone set --rgw-zone=SH-1 --infile sh-1.json --name client.radosgw.SH-SH-1
#radosgw-admin zone set --rgw-zone=SH-2 --infile sh-2.json --name client.radosgw.SH-SH-2
```
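After the keys are copied in, the system_key section of sh-1.json is no longer empty; it looks like this (the key values below are made-up placeholders):

```json
"system_key": {
  "access_key": "XXXXXXXXXXXXXXXXXXXX",
  "secret_key": "YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY"
}
```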
Configuring the frontend

- Generate the rgw frontend configuration

Apache, Nginx, and civetweb can all serve as the rgw frontend; I have previously written an article on the various rgw frontend configurations that interested readers may consult. Below is the Apache configuration used in this article (/etc/httpd/conf.d/rgw.conf):

```
<VirtualHost *:80>
    ServerName {fqdn}
    DocumentRoot /var/www/html
    ErrorLog /var/log/httpd/rgw_error.log
    CustomLog /var/log/httpd/rgw_access.log combined
    # LogLevel debug
    RewriteEngine On
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    SetEnv proxy-nokeepalive 1
    #ProxyPass / unix:///var/run/ceph/ceph-client.radosgw.SH-SH-1.asok
    ProxyPass / fcgi://{fqdn}:9000/
</VirtualHost>
```
{fqdn} must be replaced with the node's host name, and the ProxyPass target must match the rgw instance settings in ceph.conf.
- Create the data directories

On each instance node, create the data directory for its rgw instance; the directory name has the form {type}.{id}:

```
#mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.SH-SH-1
#mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.SH-SH-2
```
Starting the instances
Start the instance and its frontend on each of the two nodes:

```
#radosgw -c /etc/ceph/ceph.conf -n client.radosgw.SH-SH-1
#systemctl restart httpd.service
#radosgw -c /etc/ceph/ceph.conf -n client.radosgw.SH-SH-2
#systemctl restart httpd.service
```
Master-slave replication
Once rgw is configured, radosgw-agent can replicate the data in the master zone SH-1 to the slave zone SH-2, improving availability and read performance.
Configuration
Create a file named cluster-data-sync.conf with the following content:

```
src_access_key: {source-access-key}         # key of the master zone user
src_secret_key: {source-secret-key}
destination: https://{fqdn}:port            # the slave zone's endpoint address
dest_access_key: {destination-access-key}   # key of the slave zone user
dest_secret_key: {destination-secret-key}
log_file: {log.filename}                    # log file
```
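With the configuration in place, the agent is launched by pointing it at that file. A minimal sketch (the config path below is an assumption; shown as a dry run that prints the command, remove the echo to actually start replication):

```shell
# Dry run: print the command that starts one-way sync from SH-1 to SH-2.
# Remove "echo" to launch the agent for real (radosgw-agent must be installed).
echo radosgw-agent -c /etc/ceph/cluster-data-sync.conf
```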

