Deploying a Highly Available Ceph RGW Object Storage Gateway Cluster



Deploying the RGW object gateway nodes

To use the Ceph Object Gateway component, an RGW instance must be deployed. Perform the following steps to create a new RGW instance (skip this section if one is already deployed):

Official reference: https://docs.ceph.com/docs/master/install/ceph-deploy/install-ceph-gateway/

Install the radosgw package. It is normally installed already, but it can also be installed manually. Using a cluster deployed with ceph-deploy as an example:

# ceph-deploy install --no-adjust-repos --rgw node1 node2 node3
# rpm -qa | grep radosgw
ceph-radosgw-14.2.9-0.el7.x86_64

Create the RGW instances. Here the RGW object storage gateway is enabled on 3 nodes:

ceph-deploy rgw create node1 node2 node3

By default, an RGW instance listens on port 7480. For easier access, change this port by editing ceph.conf and pushing it to the nodes running RGW, as shown below:

# On the ceph-deploy node
cat >> /etc/ceph/ceph.conf <<EOF
[client.rgw.node1]
rgw frontends = civetweb port=81
[client.rgw.node2]
rgw frontends = civetweb port=81
[client.rgw.node3]
rgw frontends = civetweb port=81
EOF

# Push the updated config to all nodes
ceph-deploy --overwrite-conf config push node1 node2 node3

# Restart the radosgw service on each node
systemctl restart ceph-radosgw.target
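Note: this example uses the civetweb frontend. Nautilus (14.x) also ships the Beast frontend, which is the default in newer releases. If you prefer Beast, the per-node sections could look roughly like the sketch below; this is an untested alternative, not the configuration used in this deployment:

[client.rgw.node1]
# Beast frontend listening on the same port 81 (sketch only)
rgw frontends = beast port=81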

Verify RGW by visiting http://192.168.93.40:81 in a browser (or with curl); it should return the following:

[root@node1 my-cluster]# curl http://192.168.93.40:81
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

By default, deploying RGW automatically creates the following 4 pools:

[root@node1 my-cluster]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log

Deploying the RGW high-availability cluster

Basic architecture: a keepalived-managed VIP fronts haproxy on each node, and haproxy load-balances requests across the three RGW instances. (Architecture diagram omitted.)
Install haproxy and keepalived on the 3 RGW nodes:

yum install -y haproxy keepalived

Edit the keepalived configuration on the 3 nodes so that it runs in non-preemptive mode; the configuration is identical on all 3 nodes.
Only the interface and virtual_ipaddress fields need to be changed: provide an unused IP address in the same subnet as the nodes and leave everything else at the defaults.

cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id 10
    vrrp_version 2
    vrrp_garp_master_delay 1
    vrrp_garp_master_refresh 1
    vrrp_mcast_group4 224.0.0.18
}
vrrp_script chk-haproxy {
    script "killall -0 haproxy"
    timeout 1
    interval 1   # check every 1 second
    fall 2       # require 2 failures for KO
    rise 2       # require 2 successes for OK
}
vrrp_instance rgw {
    state BACKUP
    interface ens33
    virtual_router_id 100
    priority 1
    advert_int 1
    nopreempt
    track_script {
        chk-haproxy
    }
    authentication {
        auth_type PASS
        auth_pass haproxy
    }
    virtual_ipaddress {
        192.168.93.50/24 dev ens33
    }
}
EOF

Edit the haproxy configuration on the 3 nodes; the configuration is identical on all 3 nodes.
Only the server lines need to be changed to the actual IP addresses of the 3 nodes; leave everything else at the defaults.

cat > /etc/haproxy/haproxy.cfg << EOF
global
    #chroot /var/lib/haproxy
    daemon
    #group haproxy
    #user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8

defaults
    log     global
    mode    http
    retries 3
    option  redispatch

listen http-web
    bind *:80
    mode http
    balance roundrobin
    timeout server  15s
    timeout connect 15s
    server node1 192.168.93.40:81 check port 81 inter 5000 fall 5
    server node2 192.168.93.41:81 check port 81 inter 5000 fall 5
    server node3 192.168.93.42:81 check port 81 inter 5000 fall 5
EOF

Start the keepalived and haproxy services:

systemctl enable --now keepalived haproxy

Check the VIP status:

[root@node1 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.93.40/24 brd 192.168.93.255 scope global noprefixroute ens33
    inet 192.168.93.50/24 scope global secondary ens33

Verify that the VIP can fail over. Here, after stopping haproxy on node1, the VIP moves to node3:

[root@node1 ~]# systemctl stop haproxy

[root@node3 ~]# ip a | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.93.42/24 brd 192.168.93.255 scope global noprefixroute ens33
    inet 192.168.93.50/24 scope global secondary ens33

Verify that requests to the VIP are forwarded to the backends correctly:

[root@node1 ~]# curl http://192.168.93.50:80
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

Change the connection parameters in the s3cmd configuration file (.s3cfg) to point at the VIP address:

[root@node1 my-cluster]# cat /root/.s3cfg | grep host_
host_base = 192.168.93.50:80
host_bucket = 192.168.93.50:80/%(bucket)s

Verify access to the object store with the s3cmd command:

[root@node1 ~]# s3cmd ls
2020-07-03 06:49  s3://my-new-bucket
2020-07-03 07:03  s3://s3cmd-demo
2020-07-03 07:37  s3://swift-demo
[root@node1 ~]# s3cmd mb s3://test-1
Bucket 's3://test-1/' created

Change the swift_openrc.sh file to use the VIP address:

[root@node1 my-cluster]# cat swift_openrc.sh | grep ST_AUTH
export ST_AUTH=http://192.168.93.50:80/auth

Verify access to the object store with the swift command:

[root@node1 my-cluster]# source swift_openrc.sh
[root@node1 my-cluster]# swift list
my-new-bucket
s3cmd-demo
swift-demo
test-1

This completes the high-availability deployment. The Ceph object store can now be accessed in several ways, described in the following sections.

S3-style API access

Access the object store from a Python script.

Create a user:

[root@node1 my-cluster]# radosgw-admin user create --uid="ceph-s3-user" --display-name="ceph s3 user demo"
{
    "user_id": "ceph-s3-user",
    "display_name": "ceph s3 user demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        { "user": "ceph-s3-user", "access_key": "W4UQON1266AZX7H4R78A", "secret_key": "LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r" }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 },
    "user_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

View the created user's information:

[root@node1 my-cluster]# radosgw-admin user list
[root@node1 my-cluster]# radosgw-admin user info --uid ceph-s3-user

Connect from a client

yum install -y python-boto

Create a Python script:

cat > s3test.py <<EOF
import boto.s3.connection

access_key = 'W4UQON1266AZX7H4R78A'
secret_key = 'LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='192.168.93.50', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
EOF

Note: replace the host and port values with your own.

Run the script:

[root@node1 my-cluster]# python s3test.py
my-new-bucket 2020-07-03T06:49:45.867Z

At this point a default.rgw.buckets.index pool is created automatically:

[root@node1 my-cluster]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 default.rgw.buckets.index

This approach is typically used by developers who access the object store through an SDK.
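Uploading and downloading objects through python-boto follows the same pattern as s3test.py. The snippet below is a minimal sketch reusing the connection parameters from that script; the object name hosts-bak and the local file paths are hypothetical examples, not part of the original deployment.

import boto.s3.connection

access_key = 'W4UQON1266AZX7H4R78A'
secret_key = 'LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='192.168.93.50', port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# open the bucket created by s3test.py
bucket = conn.get_bucket('my-new-bucket')

# upload a local file as the object 'hosts-bak' (hypothetical name)
key = bucket.new_key('hosts-bak')
key.set_contents_from_filename('/etc/hosts')

# download the object back to a local file
key.get_contents_to_filename('/tmp/hosts-bak')

# list all objects in the bucket
for obj in bucket.list():
    print "{name} {size}".format(name=obj.name, size=obj.size)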

s3cmd command-line access

Access the object store with the s3cmd command.

Reference: https://github.com/s3tools/s3cmd

yum install -y s3cmd

Configuration:

[root@node1 my-cluster]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: W4UQON1266AZX7H4R78A
Secret Key: LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.93.50:80

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.93.50:80/%(bucket)s

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: n

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: W4UQON1266AZX7H4R78A
  Secret Key: LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r
  Default Region: US
  S3 Endpoint: 192.168.93.50:80
  DNS-style bucket+hostname:port template for accessing a bucket: 192.168.93.50:80/%(bucket)s
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

List buckets:

[root@node1 my-cluster]# s3cmd ls
2020-07-03 06:49  s3://my-new-bucket

Enable signature_v2:

# vi /root/.s3cfg
signature_v2 = True

Create a bucket:

[root@node1 my-cluster]# s3cmd mb s3://s3cmd-demo
Bucket 's3://s3cmd-demo/' created

List the created buckets:

[root@node1 my-cluster]# s3cmd ls
2020-07-03 06:49  s3://my-new-bucket
2020-07-03 07:03  s3://s3cmd-demo

Upload a file or directory to the bucket:

s3cmd put /etc/fstab s3://s3cmd-demo/fstab-bak
s3cmd put /var/log/ --recursive s3://s3cmd-demo/log/

List the files in the bucket:

[root@node1 my-cluster]# s3cmd ls s3://s3cmd-demo/
                       DIR   s3://s3cmd-demo/log/
2020-07-03 07:05       541   s3://s3cmd-demo/fstab-bak

Download a file:

[root@node1 my-cluster]# s3cmd get s3://s3cmd-demo/fstab-bak fstab-bak

Delete a file or directory:

s3cmd rm s3://s3cmd-demo/fstab-bak
s3cmd rm --recursive s3://s3cmd-demo/log/

After files are uploaded to a bucket, a default.rgw.buckets.data pool is created automatically:

[root@node1 my-cluster]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 default.rgw.buckets.index
6 default.rgw.buckets.data

View the contents of the data pool:

[root@node1 my-cluster]# rados -p default.rgw.buckets.data ls
2f105b56-46fe-4230-a9ae-9bd0b0ac1f1d.4323.2_fstab-bak

The bucket index entries (the object prefixes) are stored in default.rgw.buckets.index:

[root@node1 my-cluster]# rados -p default.rgw.buckets.index ls
.dir.2f105b56-46fe-4230-a9ae-9bd0b0ac1f1d.4323.2
.dir.2f105b56-46fe-4230-a9ae-9bd0b0ac1f1d.4323.1

Swift-style API access

Access the object store with the swift command.

Create a Swift subuser:

[root@node1 my-cluster]# radosgw-admin user list
[
    "ceph-s3-user"
]
[root@node1 my-cluster]# radosgw-admin subuser create --uid=ceph-s3-user --subuser=ceph-s3-user:swift --access=full
{
    "user_id": "ceph-s3-user",
    "display_name": "ceph s3 user demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        { "id": "ceph-s3-user:swift", "permissions": "full-control" }
    ],
    "keys": [
        { "user": "ceph-s3-user", "access_key": "W4UQON1266AZX7H4R78A", "secret_key": "LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r" }
    ],
    "swift_keys": [
        { "user": "ceph-s3-user:swift", "secret_key": "3zM2goJKoiRFUswG6MBoNEwTXwb3EaP4fU3SF4pA" }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 },
    "user_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Create the secret key:

[root@node1 my-cluster]# radosgw-admin key create --subuser=ceph-s3-user:swift --key-type=swift --gen-secret
{
    "user_id": "ceph-s3-user",
    "display_name": "ceph s3 user demo",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        { "id": "ceph-s3-user:swift", "permissions": "full-control" }
    ],
    "keys": [
        { "user": "ceph-s3-user", "access_key": "W4UQON1266AZX7H4R78A", "secret_key": "LjaAgGJOTZ0cLhVUHSlOZ45NuJtt2OElYF83el9r" }
    ],
    "swift_keys": [
        { "user": "ceph-s3-user:swift", "secret_key": "ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT" }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 },
    "user_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0, "max_objects": -1 },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Install the Swift client:

sudo yum install python-setuptools
#sudo easy_install pip
yum install -y python-pip

mkdir ~/.pip
cat > ~/.pip/pip.conf << EOF
[global]
trusted-host=mirrors.aliyun.com
index-url=https://mirrors.aliyun.com/pypi/simple/
EOF

pip install -U pip
sudo pip install --upgrade setuptools
sudo pip install --upgrade python-swiftclient

List buckets:

[root@node1 my-cluster]# swift -V 1 -A http://192.168.93.50:80/auth -U ceph-s3-user:swift -K 'ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT' list
my-new-bucket
s3cmd-demo

Note: replace the IP address and key with your own.

If the environment variables are not set, the swift command reports what is required:

[root@node1 my-cluster]# swift list
Auth version 1.0 requires ST_AUTH, ST_USER, and ST_KEY environment variables to be set or overridden with -A, -U, or -K.
Auth version 2.0 requires OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, and OS_TENANT_NAME OS_TENANT_ID to be set or overridden with
--os-auth-url, --os-username, --os-password, --os-tenant-name or os-tenant-id. Note: adding "-V 2" is necessary for this.
[root@node1 my-cluster]#

Define the environment variables:

cat > swift_openrc.sh <<EOF
export ST_AUTH=http://192.168.93.50:80/auth
export ST_USER=ceph-s3-user:swift
export ST_KEY=ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT
EOF

[root@node1 my-cluster]# set | grep ST_
ST_AUTH=http://192.168.93.50:80/auth
ST_KEY=ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT
ST_USER=ceph-s3-user:swift

Verify:

[root@node1 my-cluster]# swift list
my-new-bucket
s3cmd-demo

Create a bucket (container):

[root@node1 my-cluster]# swift post swift-demo
[root@node1 my-cluster]# swift list
my-new-bucket
s3cmd-demo
swift-demo

Upload a file or directory:

swift upload swift-demo /etc/passwd
swift upload swift-demo /etc/

List the files in the bucket:

swift list swift-demo

Download a file from the bucket:

swift download swift-demo etc/passwd
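Besides the CLI, the python-swiftclient library installed above can also be used programmatically. The snippet below is a minimal sketch using the same v1 auth endpoint and credentials as swift_openrc.sh; the object name hosts-bak and the local file paths are hypothetical examples.

from swiftclient.client import Connection

conn = Connection(
    authurl='http://192.168.93.50:80/auth',
    user='ceph-s3-user:swift',
    key='ZLy2kCT1AJA6T2tKAhl1yjMKtMwYK9VJfZLAavJT',
)

# create a container (bucket) and upload a local file into it
conn.put_container('swift-demo')
with open('/etc/hosts', 'rb') as f:
    conn.put_object('swift-demo', 'hosts-bak', contents=f)

# list all containers in the account
headers, containers = conn.get_account()
for c in containers:
    print c['name']

# download the object back to a local file
headers, body = conn.get_object('swift-demo', 'hosts-bak')
with open('/tmp/hosts-bak', 'wb') as f:
    f.write(body)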

Reference: https://edu.51cto.com/center/course/lesson/index?id=553461

