Deploying a highly available Ceph RGW object gateway cluster
Deploying the RGW object gateway nodes
To use the Ceph Object Gateway component, you must deploy RGW instances. Perform the following steps to create new RGW instances (skip this section if they are already deployed):
Official reference: https://docs.ceph.com/docs/master/install/ceph-deploy/install-ceph-gateway/
Install the radosgw package. It is usually installed by default, but it can also be installed manually. Taking a cluster deployed with ceph-deploy as an example:
# ceph-deploy install --no-adjust-repos --rgw node1 node2 node3

# rpm -qa | grep radosgw
ceph-radosgw-14.2.9-0.el7.x86_64
Create the RGW instances; here the object gateway is enabled on all three nodes:
ceph-deploy rgw create node1 node2 node3
By default an RGW instance listens on port 7480. For easier access, edit ceph.conf on the ceph-deploy admin node to change the port and push it to the nodes running RGW, as shown below:
# On the ceph-deploy admin node
cat >> /etc/ceph/ceph.conf <<EOF
[client.rgw.node1]
rgw frontends = civetweb port=81
[client.rgw.node2]
rgw frontends = civetweb port=81
[client.rgw.node3]
rgw frontends = civetweb port=81
EOF

# Push the updated config to all nodes
ceph-deploy --overwrite-conf config push node1 node2 node3

# Restart the radosgw service on every node
systemctl restart ceph-radosgw.target
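To confirm each instance has rebound to the new port after the restart, you can check the listening sockets on any RGW node, for example with ss:

# Verify radosgw is now listening on port 81
ss -tnlp | grep radosgw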
Verify RGW by browsing to http://192.168.93.40:81 (or using curl); it returns the following:
[root@node1 my-cluster]# curl http://192.168.93.40:81
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
By default, deploying RGW automatically creates the following four storage pools:
[root@node1 my-cluster]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
Deploying the RGW high availability cluster
Basic architecture: keepalived manages a floating VIP (192.168.93.50) across the three nodes, and haproxy on each node load-balances HTTP traffic to the RGW instances on port 81. (Architecture diagram omitted.)
Install haproxy and keepalived on all three RGW nodes:
yum install -y haproxy keepalived
Edit the keepalived configuration file on the three nodes; it runs in non-preempt mode and the configuration is identical on all three.
Only the interface and virtual_ipaddress fields need adjusting. The VIP must be an unused address on the same subnet as the nodes; everything else can stay at the defaults.
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id 10
    vrrp_version 2
    vrrp_garp_master_delay 1
    vrrp_garp_master_refresh 1
    vrrp_mcast_group4 224.0.0.18
}

vrrp_script chk-haproxy {
    script "killall -0 haproxy"
    timeout 1
    interval 1   # check every 1 second
    fall 2       # require 2 failures for KO
    rise 2       # require 2 successes for OK
}

vrrp_instance rgw {
    state BACKUP
    interface ens33
    virtual_router_id 100
    priority 1
    advert_int 1
    nopreempt
    track_script {
        chk-haproxy
    }
    authentication {
        auth_type PASS
        auth_pass haproxy
    }
    virtual_ipaddress {
        192.168.93.50/24 dev ens33
    }
}
EOF
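Note that the chk-haproxy health check relies on the killall command from the psmisc package, which may be absent on a minimal CentOS install; if it is missing the check silently fails, so install it first:

yum install -y psmisc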
Edit the haproxy configuration file on the three nodes; the configuration is identical on all three.
Only the server lines need adjusting, using the three nodes' actual IP addresses; everything else can stay at the defaults.
cat > /etc/haproxy/haproxy.cfg << EOF
global
    #chroot /var/lib/haproxy
    daemon
    #group haproxy
    #user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8

defaults
    log global
    mode http
    retries 3
    option redispatch

listen http-web
    bind *:80
    mode http
    balance roundrobin
    timeout server 15s
    timeout connect 15s
    server node1 192.168.93.40:81 check port 81 inter 5000 fall 5
    server node2 192.168.93.41:81 check port 81 inter 5000 fall 5
    server node3 192.168.93.42:81 check port 81 inter 5000 fall 5
EOF
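Before starting the services it is worth validating the haproxy configuration; check mode parses the file and reports any errors without starting the proxy:

# Validate the configuration file (exits non-zero on errors)
haproxy -c -f /etc/haproxy/haproxy.cfg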
Start the keepalived and haproxy services:
systemctl enable --now keepalived haproxy
Check the VIP status:
[root@node1 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.93.40/24 brd 192.168.93.255 scope global noprefixroute ens33
    inet 192.168.93.50/24 scope global secondary ens33
Verify that the VIP can fail over: stop haproxy on node1, and the VIP moves to node3:
[root@node1 ~]# systemctl stop haproxy

[root@node3 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.93.42/24 brd 192.168.93.255 scope global noprefixroute ens33
    inet 192.168.93.50/24 scope global secondary ens33
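To see at a glance which node currently holds the VIP, a short loop over the nodes works; this sketch assumes passwordless SSH from the admin node to node1, node2 and node3:

# Report which node carries the VIP on ens33
for h in node1 node2 node3; do
    echo -n "$h: "
    ssh $h "ip -4 addr show ens33 | grep -q 192.168.93.50 && echo VIP || echo -"
done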
Verify that requests to the VIP are forwarded to the backends correctly:
[root@node1 ~]# curl http://192.168.93.50:80
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
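To confirm the VIP stays reachable during a failover, you can poll it from another terminal while stopping haproxy on the active node; the reported HTTP status should remain 200 throughout:

# Poll the VIP once a second and print the HTTP status code
while true; do curl -s -o /dev/null -w "%{http_code}\n" http://192.168.93.50:80; sleep 1; done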
Update the host settings in the s3cmd configuration file (.s3cfg) to point at the VIP:
[root@node1 my-cluster]# cat /root/.s3cfg | grep host_
host_base = 192.168.93.50:80
host_bucket = 192.168.93.50:80/%(bucket)s
Verify access to the object store with the s3cmd command:
[root@node1 ~]# s3cmd ls
2020-07-03 06:49  s3://my-new-bucket
2020-07-03 07:03  s3://s3cmd-demo
2020-07-03 07:37  s3://swift-demo

[root@node1 ~]# s3cmd mb s3://test-1
Bucket 's3://test-1/' created
Update the swift_openrc.sh file to point at the VIP as well:
[root@node1 my-cluster]# cat swift_openrc.sh | grep ST_AUTH
export ST_AUTH=http://192.168.93.50:80/auth
Verify access to the object store with the swift command:
[root@node1 my-cluster]# source swift_openrc.sh
[root@node1 my-cluster]# swift list
my-new-bucket
s3cmd-demo
swift-demo
test-1
The high availability environment is now in place; the sections below demonstrate several ways of working with the Ceph object store.
S3-style API operations
This section operates the object store from a Python script.
Create a user:
[root@node1 my-cluster]# radosgw-admin user create --uid="ceph-s3-user" --display-name="ceph s3 user demo"
{
    "user_id": "ceph-s3-user",
    "display_name": "ceph s3 user demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "ceph-s3-user",
            "access_key": "W4UQON1266AZX7H4R78A",
            "secret_key": "LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
View the created user's information:
[root@node1 my-cluster]# radosgw-admin user list
[root@node1 my-cluster]# radosgw-admin user info --uid ceph-s3-user
Connect from a client. Install the Python boto library:
yum install -y python-boto
Create a Python script:
cat > s3test.py <<EOF
import boto.s3.connection

access_key = 'W4UQON1266AZX7H4R78A'
secret_key = 'LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r'
conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='192.168.93.50', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
EOF
Note: replace the host and port values with your own.
Run the script:
[root@node1 my-cluster]# python s3test.py
my-new-bucket 2020-07-03T06:49:45.867Z
At this point a default.rgw.buckets.index pool is created automatically:
[root@node1 my-cluster]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 default.rgw.buckets.index
This approach is typically suited to developers working against the S3 API through an SDK.
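The same connection can also upload and download objects and generate pre-signed URLs. Below is a minimal sketch building on s3test.py above; the object name hello.txt and its contents are illustrative only:

cat > s3upload.py <<EOF
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='W4UQON1266AZX7H4R78A',
    aws_secret_access_key='LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r',
    host='192.168.93.50', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_bucket('my-new-bucket')

# Upload an object from a string
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello ceph rgw')

# Read the object back
print key.get_contents_as_string()

# Generate a signed download URL valid for 3600 seconds
print key.generate_url(3600, query_auth=True, force_http=True)
EOF
python s3upload.py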
s3cmd command-line operations
This section operates the object store with the s3cmd command-line tool.
Reference: https://github.com/s3tools/s3cmd
yum install -y s3cmd
Configuration:
[root@node1 my-cluster]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: W4UQON1266AZX7H4R78A
Secret Key: LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.93.50:80

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.93.50:80/%(bucket)s

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: n

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: W4UQON1266AZX7H4R78A
  Secret Key: LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r
  Default Region: US
  S3 Endpoint: 192.168.93.50:80
  DNS-style bucket+hostname:port template for accessing a bucket: 192.168.93.50:80/%(bucket)s
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
List buckets:
[root@node1 my-cluster]# s3cmd ls
2020-07-03 06:49  s3://my-new-bucket
Enable signature_v2:
# vi /root/.s3cfg
signature_v2 = True
Create a bucket:
[root@node1 my-cluster]# s3cmd mb s3://s3cmd-demo
Bucket 's3://s3cmd-demo/' created
List the created buckets:
[root@node1 my-cluster]# s3cmd ls
2020-07-03 06:49  s3://my-new-bucket
2020-07-03 07:03  s3://s3cmd-demo
Upload a file or a directory to the bucket:
s3cmd put /etc/fstab s3://s3cmd-demo/fstab-bak
s3cmd put /var/log/ --recursive s3://s3cmd-demo/log/
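s3cmd can also synchronize a directory incrementally, transferring only files that have changed since the last run; for example, the log upload above could be repeated as a sync:

s3cmd sync /var/log/ s3://s3cmd-demo/log/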
List the files in the bucket:
[root@node1 my-cluster]# s3cmd ls s3://s3cmd-demo/
                       DIR   s3://s3cmd-demo/log/
2020-07-03 07:05       541   s3://s3cmd-demo/fstab-bak
Download a file:
[root@node1 my-cluster]# s3cmd get s3://s3cmd-demo/fstab-bak fstab-bak
Delete a file or a directory:
s3cmd rm s3://s3cmd-demo/fstab-bak
s3cmd rm --recursive s3://s3cmd-demo/log/
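An empty bucket can be removed with rb; a quick example using the test-1 bucket created earlier:

s3cmd rb s3://test-1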
After the first file is uploaded to a bucket, a default.rgw.buckets.data pool is created automatically:
[root@node1 my-cluster]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 default.rgw.buckets.index
6 default.rgw.buckets.data
View the contents of the data pool:
[root@node1 my-cluster]# rados -p default.rgw.buckets.data ls
2f105b56-46fe-4230-a9ae-9bd0b0ac1f1d.4323.2_fstab-bak
The bucket index objects (the .dir.<bucket-id> entries prefixing object names) are stored in default.rgw.buckets.index:
[root@node1 my-cluster]# rados -p default.rgw.buckets.index ls
.dir.2f105b56-46fe-4230-a9ae-9bd0b0ac1f1d.4323.2
.dir.2f105b56-46fe-4230-a9ae-9bd0b0ac1f1d.4323.1
Swift-style API operations
This section operates the object store with the swift command-line tool.
Create a Swift subuser:
[root@node1 my-cluster]# radosgw-admin user list
[
    "ceph-s3-user"
]

[root@node1 my-cluster]# radosgw-admin subuser create --uid=ceph-s3-user --subuser=ceph-s3-user:swift --access=full
{
    "user_id": "ceph-s3-user",
    "display_name": "ceph s3 user demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        {
            "id": "ceph-s3-user:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "ceph-s3-user",
            "access_key": "W4UQON1266AZX7H4R78A",
            "secret_key": "LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r"
        }
    ],
    "swift_keys": [
        {
            "user": "ceph-s3-user:swift",
            "secret_key": "3zM2goJKoiRFUswG6MBoNEwTXwb3EaP4fU3SF4pA"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
Create the secret key:
[root@node1 my-cluster]# radosgw-admin key create --subuser=ceph-s3-user:swift --key-type=swift --gen-secret
{
    "user_id": "ceph-s3-user",
    "display_name": "ceph s3 user demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        {
            "id": "ceph-s3-user:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "ceph-s3-user",
            "access_key": "W4UQON1266AZX7H4R78A",
            "secret_key": "LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r"
        }
    ],
    "swift_keys": [
        {
            "user": "ceph-s3-user:swift",
            "secret_key": "ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
Install the Swift client:
sudo yum install python-setuptools
#sudo easy_install pip
yum install -y python-pip

mkdir ~/.pip
cat > ~/.pip/pip.conf << EOF
[global]
trusted-host=mirrors.aliyun.com
index-url=https://mirrors.aliyun.com/pypi/simple/
EOF

pip install -U pip
sudo pip install --upgrade setuptools
sudo pip install --upgrade python-swiftclient
List buckets, passing the credentials explicitly:
[root@node1 my-cluster]# swift -V 1 -A http://192.168.93.50:80/auth -U ceph-s3-user:swift -K 'ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT' list
my-new-bucket
s3cmd-demo
Note: replace the IP address and the key with your own.
Running swift without credentials shows which environment variables are required:
[root@node1 my-cluster]# swift list
Auth version 1.0 requires ST_AUTH, ST_USER, and ST_KEY environment variables to be set or overridden with -A, -U, or -K.
Auth version 2.0 requires OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, and OS_TENANT_NAME OS_TENANT_ID to be set or overridden with --os-auth-url, --os-username, --os-password, --os-tenant-name or os-tenant-id.
Note: adding "-V 2" is necessary for this.
Define the environment variables:
cat > swift_openrc.sh <<EOF
export ST_AUTH=http://192.168.93.50:80/auth
export ST_USER=ceph-s3-user:swift
export ST_KEY=ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT
EOF

[root@node1 my-cluster]# source swift_openrc.sh
[root@node1 my-cluster]# set | grep ST_
ST_AUTH=http://192.168.93.50:80/auth
ST_KEY=ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT
ST_USER=ceph-s3-user:swift
Verify:
[root@node1 my-cluster]# swift list
my-new-bucket
s3cmd-demo
Create a bucket (container):
[root@node1 my-cluster]# swift post swift-demo
[root@node1 my-cluster]# swift list
my-new-bucket
s3cmd-demo
swift-demo
Upload a file or a directory:
swift upload swift-demo /etc/passwd
swift upload swift-demo /etc/
List the files in the bucket:
swift list swift-demo
Download a file from the bucket (swift strips the leading slash on upload, so the object is stored as etc/passwd):
swift download swift-demo etc/passwd
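For programmatic access in the same style, the python-swiftclient library installed above can also be used directly. A minimal sketch using the subuser and key created earlier; the object name hello.txt and its contents are illustrative only:

cat > swifttest.py <<EOF
import swiftclient.client

# Auth v1 connection using the subuser credentials created above
conn = swiftclient.client.Connection(
    authurl='http://192.168.93.50:80/auth',
    user='ceph-s3-user:swift',
    key='ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT',
)

# Upload an object into the swift-demo container
conn.put_object('swift-demo', 'hello.txt', contents='hello from swiftclient')

# List all containers in the account
headers, containers = conn.get_account()
for c in containers:
    print c['name']
EOF
python swifttest.py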
Reference: https://edu.51cto.com/center/course/lesson/index?id=553461