010 Ceph RGW Object Storage


1. Object Storage

1.1 Introduction

With object storage, data is stored as objects; each object contains not only the data itself but also its own metadata.

Objects are retrieved by object ID. They cannot be accessed through ordinary filesystem operations; access goes through an API, or through third-party clients (which are themselves wrappers around the API).

Objects are not organized into a directory tree; they live in a flat namespace. Amazon S3 calls this flat namespace a bucket, while Swift calls it a container.

Neither buckets nor containers can be nested.

A bucket must be authorized before it can be accessed; one account can be granted access to multiple buckets, with different permissions on each.

Advantages of object storage: easy to scale and fast to look up.
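The point is easiest to see through the API itself. Below is a minimal Python/boto3 sketch (the endpoint, credentials, and bucket name are placeholders, not values from this lab) that stores an object together with user-defined metadata and then retrieves both by key; no directory tree or filesystem path is involved:

import boto3

# Placeholder endpoint and credentials: substitute a real gateway and keys.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='demo-bucket')

# An object is data plus its own metadata, addressed only by bucket + key.
s3.put_object(
    Bucket='demo-bucket',
    Key='hello.txt',
    Body=b'hello object storage',
    Metadata={'owner': 'joy', 'purpose': 'demo'},
)

# Retrieval also goes through the API; bucket + key is the only handle.
obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt')
print(obj['Body'].read())   # the data
print(obj['Metadata'])      # the user-defined metadata stored with it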

1.2 Introduction to the RADOS Gateway

The RADOS Gateway, also known as the Ceph Object Gateway, RADOSGW, or RGW, is a service that lets clients access a Ceph cluster through standard object storage APIs. It supports both the S3 and Swift APIs.

RGW runs on top of librados; in practice it is an embedded web server called Civetweb that answers API requests.

Clients talk to RGW using the standard APIs, and RGW in turn talks to the Ceph cluster through librados.

An RGW client authenticates as an RGW user through the S3 or Swift API; the gateway then authenticates to the Ceph storage cluster on the user's behalf using cephx.

2. Deploying the RADOS Gateway

2.1 Configure radosgw

Create a cephx keyring for the gateway instance (client.rgw.ceph5, with rwx caps on mon and osd), then add an RGW instance section to the cluster configuration file:

[root@ceph5 ~]#  ceph auth get-or-create client.rgw.ceph5  mon 'allow rwx' osd 'allow rwx' -o /etc/ceph/backup.client.rgw.ceph5.keyring --cluster backup

[root@ceph5 ~]# vim /etc/ceph/backup.conf

fsid = 51dda18c-7545-4edb-8ba9-27330ead81a7
mon_initial_members = ceph5
mon_host = 172.25.250.14

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public_network = 172.25.250.0/24
cluster_network = 172.25.250.0/24

[mgr]
mgr modules = dashboard

[client.rgw.ceph5]
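# the host this gateway instance runs on, its cephx keyring, and the civetweb HTTP frontend on port 80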
host = ceph5
keyring = /etc/ceph/backup.client.rgw.ceph5.keyring
rgw_frontends = civetweb port=80

[root@ceph5 ~]# systemctl restart ceph-radosgw@rgw.ceph5

[root@ceph5 ~]# ps -ef|grep rados

root     13828     1  0 18:07 ?        00:00:00 /usr/bin/radosgw -f --cluster backup --name client.rgw.ceph5 --setuser ceph --setgroup ceph

[root@ceph5 ~]# netstat -ntlp|grep 80

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      13828/radosgw

Starting the gateway creates its pools automatically. List the pools and check cluster status, then tag the rbd and rbdmirror pools with an application type (this clears the "application not enabled on pool" health warning):

[root@ceph5 ~]# ceph osd pool ls

[root@ceph5 ~]# ceph -s

[root@ceph5 ~]# ceph osd pool application enable rbd rbd

[root@ceph5 ~]# ceph osd pool application enable rbdmirror rbd

[root@ceph5 ~]# ceph -s

[root@ceph5 ~]#  cat /usr/lib/systemd/system/ceph-radosgw@.service

[Unit]
Description=Ceph rados gateway
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
PartOf=ceph-radosgw.target

[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
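# Environment= only provides a default; values read from EnvironmentFile (/etc/sysconfig/ceph)
# override it, which is presumably how this instance ends up running with --cluster backup.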
ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.%i --setuser ceph --setgroup ceph
PrivateDevices=yes
ProtectHome=true
ProtectSystem=full
PrivateTmp=true
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30s
StartLimitBurst=5

[Install]
WantedBy=ceph-radosgw.target

2.2 Set the number of frontend threads

[root@ceph5 ~]# vim /etc/ceph/backup.conf

[client.rgw.ceph5]
host = ceph5
keyring = /etc/ceph/backup.client.rgw.ceph5.keyring
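# num_threads caps how many civetweb worker threads serve client requests in parallel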
rgw_frontends = civetweb port=80 num_threads=100
log file = /var/log/ceph/$cluster.$name.log

[root@ceph5 ~]# systemctl restart ceph-radosgw@rgw.ceph5

[root@ceph5 ~]# ps -ef|grep rados

ceph 15553 1 1 20:26 ? 00:00:00 /usr/bin/radosgw -f --cluster backup --name client.rgw.ceph5 --setuser ceph --setgroup ceph

2.3 Access the RADOS gateway

An unauthenticated request returns an empty ListAllMyBucketsResult with an anonymous owner, which confirms the S3 frontend is answering:

[root@ceph5 ~]# curl http://ceph5

<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

3. S3 Object Storage

3.1 About S3

S3, launched by Amazon in 2006, stands for Simple Storage Service.

S3 defined the object storage model and is its de facto standard; in that sense, "S3" and "object storage" are often used interchangeably.

S3 dominates the object storage market, and most later object stores model their APIs on it.

3.2 Users and permissions

Create a radosgw user

[root@ceph5 ~]#  radosgw-admin user create --uid joy --display-name 'Joy Ning'

{
    "user_id": "joy",
    "display_name": "Joy Ning",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "joy",
            "access_key": "X0CVIF04TAJVTN9D29UL",
            "secret_key": "vMmPqPap0FC0IRC5J3t9AIPgXNoiw1H9TOWELd5B"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
"Joy Ning"

Modify user information

[root@ceph5 ~]# radosgw-admin user modify --uid joy --display-name 'joy Ningrui' --max_buckets 2000

Suspend (disable) a user

[root@ceph5 ~]# radosgw-admin user suspend --uid joy

Re-enable the user

[root@ceph5 ~]# radosgw-admin user enable --uid joy

List users

[root@ceph5 ~]# radosgw-admin user list

 

Delete a user

[root@ceph5 ~]# radosgw-admin user rm --uid joy

[root@ceph5 ~]# radosgw-admin user list

Re-create the user (a new key pair is generated):

[root@ceph5 ~]# radosgw-admin user create --uid joy --display-name 'Joy Ning'

{
    "user_id": "joy",
    "display_name": "Joy Ning",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "joy",
            "access_key": "5XCV68WUQJFFJPVM3UHK",
            "secret_key": "xhaA2YB1CA3xH54xLbmwPcglqjDyuFez36F8XGuG"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

Create an additional S3 key pair for the user

[root@ceph5 ~]# radosgw-admin key create --uid joy --display-name 'Joy Ning' --key-type=s3 --gen-access-key --gen-secret

 

Delete a key

[root@ceph5 ~]# radosgw-admin key rm --uid joy --display-name 'Joy Ning' --key-type=s3 --access-key HPT1SBAXCXW46ZACKPY0

 

3.3 Set quotas

Per-user quota (--max-size is given in bytes):

[root@ceph5 ~]# radosgw-admin quota set --quota-scope=user --uid=joy --max-size 1

[root@ceph5 ~]# radosgw-admin user info --uid joy

 

Enable the quota

[root@ceph5 ~]# radosgw-admin quota enable --quota-scope=user --uid joy

[root@ceph5 ~]# radosgw-admin user info --uid joy

 

Per-bucket quota

[root@ceph5 ~]# radosgw-admin quota set --quota-scope=bucket --uid=joy --max-size 1

[root@ceph5 ~]# radosgw-admin quota enable --quota-scope=bucket --uid=joy

[root@ceph5 ~]# radosgw-admin user info --uid joy

Note: if both user and bucket quotas are configured, whichever limit is reached first takes effect.

Disable quotas

Either disable the quota explicitly:

[root@ceph5 ~]# radosgw-admin quota disable --quota-scope=bucket --uid=joy

or set the limit back to -1 (unlimited):

[root@ceph5 ~]# radosgw-admin quota set --quota-scope=user --uid joy --max-size -1

[root@ceph5 ~]# radosgw-admin user info --uid joy

3.4 Usage statistics

Show usage for a user, either in full or restricted to a time range (quote timestamps that contain a space):

[root@ceph5 ~]# radosgw-admin usage show --uid joy

[root@ceph5 ~]# radosgw-admin usage show --uid joy --start-date "2019-03-19 21:00:00" --end-date "2019-03-19 22:00:00"

3.5 Access S3 objects through the RADOS gateway

[root@ceph5 ~]#  vim /etc/ceph/backup.conf

[root@ceph5 ~]# systemctl restart ceph-radosgw@rgw.ceph5
[root@ceph5 ~]# ps -ef|grep rados

ceph     18072     1  2 21:52 ?        00:00:00 /usr/bin/radosgw -f --cluster backup --name client.rgw.ceph5 --setuser ceph --setgroup ceph

4. Verify the configuration

4.1 Configure s3cmd

[root@ceph1 ceph]# yum -y install s3cmd

[root@ceph1 ceph]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: 5XCV68WUQJFFJPVM3UHK
Secret Key: xhaA2YB1CA3xH54xLbmwPcglqjDyuFez36F8XGuG
Default Region [US]: 

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: redhat
Path to GPG program [/usr/bin/gpg]: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: ceph5.lab.example.com
HTTP Proxy server port [3128]: 80

New settings:
  Access Key: 5XCV68WUQJFFJPVM3UHK
  Secret Key: xhaA2YB1CA3xH54xLbmwPcglqjDyuFez36F8XGuG
  Default Region: US
  Encryption password: redhat
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: ceph5.lab.example.com
  HTTP Proxy server port: 80

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

[root@ceph1 ceph]# vim /root/.s3cfg

host_base = ceph5
host_bucket = %(bucket)s.ceph5.lab.example.com
cloudfront_host = cloudfront.amazonaws.com
website_endpoint = http://%(bucket)s.ceph5.lab.example.com/
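The rest of this section drives the gateway with s3cmd, but the same operations can also be done programmatically. A minimal boto3 sketch, reusing the access/secret key created earlier and assuming the civetweb endpoint on ceph5.lab.example.com (path-style addressing is forced so no per-bucket DNS entry is needed):

import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    endpoint_url='http://ceph5.lab.example.com',      # assumed RGW endpoint (port 80)
    aws_access_key_id='5XCV68WUQJFFJPVM3UHK',
    aws_secret_access_key='xhaA2YB1CA3xH54xLbmwPcglqjDyuFez36F8XGuG',
    config=Config(s3={'addressing_style': 'path'}),   # avoid bucket.<host> DNS names
)

s3.create_bucket(Bucket='test')                                       # ~ s3cmd mb s3://test
s3.put_object(Bucket='test', Key='demoobject', Body=b'11111\n')       # ~ s3cmd put
print(s3.get_object(Bucket='test', Key='demoobject')['Body'].read())  # ~ s3cmd get
print([b['Name'] for b in s3.list_buckets()['Buckets']])              # list buckets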

4.2 Create a bucket

[root@ceph1 ceph]# s3cmd mb s3://test

4.3 Upload data

[root@ceph1 ceph]# echo 11111 >/tmp/demoobject

[root@ceph1 ceph]# s3cmd put --acl-public /tmp/demoobject s3://test/demoobject

Add a host entry so the bucket's virtual-host style name (test.ceph5.lab.example.com) resolves:

[root@ceph1 ceph]# vim /etc/hosts

172.25.250.10  ceph1    ceph1.lab.example.com servera
172.25.250.11  ceph2    ceph2.lab.example.com serverb
172.25.250.12  ceph3    ceph3.lab.example.com serverc
172.25.250.13  ceph4    ceph4.lab.example.com serverd
172.25.250.14  ceph5    ceph5.lab.example.com servere  test.ceph5.lab.example.com

4.4 Access the bucket

[root@ceph1 ceph]# curl http://test.ceph5.lab.example.com/demoobject

4.5 Inspect the bucket

On the server side:

[root@ceph5 ~]# radosgw-admin bucket list

[root@ceph5 ~]# radosgw-admin bucket stats --bucket=test

{
    "bucket": "test",
    "zonegroup": "e80133e1-a513-44f5-ba90-e25b6c987b26",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "1b85c5b1-19d2-48a1-bb45-3ac75895aeed.4235.1",
    "marker": "1b85c5b1-19d2-48a1-bb45-3ac75895aeed.4235.1",
    "index_type": "Normal",
    "owner": "joy",
    "ver": "0#3",
    "master_ver": "0#0",
    "mtime": "2019-03-19 22:02:50.726716",
    "max_marker": "0#",
    "usage": {
        "rgw.main": {
            "size": 6,
            "size_actual": 4096,
            "size_utilized": 6,
            "size_kb": 1,
            "size_kb_actual": 4,
            "size_kb_utilized": 1,
            "num_objects": 1
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

[root@ceph5 ~]# radosgw-admin bucket check --bucket=test

Delete the bucket (a non-empty bucket also needs --purge-objects, otherwise the removal is refused)

[root@ceph5 ~]# radosgw-admin bucket rm --bucket=test

[root@ceph1 ceph]# s3cmd put --acl-public  /etc/ceph/ceph.conf  s3://test/ceph

upload: '/etc/ceph/ceph.conf' -> 's3://test/ceph'  [1 of 1]
 589 of 589   100% in    0s    20.96 kB/s  done
Public URL of the object is: http://test.ceph5/ceph

 [root@ceph1 ceph]# curl http://test.ceph5.lab.example.com/ceph

# Please do not change this file directly since it is managed by Ansible and will be overwritten

[global]
fsid = 35a91e48-8244-4e96-a7ee-980ab989d20d



mon initial members = ceph2,ceph3,ceph4
mon host = 172.25.250.11,172.25.250.12,172.25.250.13

public network = 172.25.250.0/24
cluster network = 172.25.250.0/24

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,largeio,inode64,swalloc
osd journal size = 5120

[mon]
mon_allow_pool_delete = true

[root@ceph1 ceph]# s3cmd get s3://test/demoobject ./demoobject

download: 's3://test/demoobject' -> './demoobject' [1 of 1]
6 of 6 100% in 0s 1346.20 B/s done

[root@ceph1 ceph]# cat ./demoobject

4.6 Look at the underlying data

[root@ceph5 ~]# ceph osd pool ls
rbd
rbdmirror
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
default.rgw.buckets.index
default.rgw.buckets.data
[root@ceph5 ~]# rados -p default.rgw.buckets.index ls --cluster backup
.dir.1b85c5b1-19d2-48a1-bb45-3ac75895aeed.4235.1
[root@ceph5 ~]# rados -p default.rgw.buckets.data ls
error opening pool default.rgw.buckets.data: (2) No such file or directory
(this error comes from omitting --cluster backup, so rados queried the default "ceph" cluster, which has no such pool)
[root@ceph5 ~]# rados -p default.rgw.buckets.data ls --cluster backup
1b85c5b1-19d2-48a1-bb45-3ac75895aeed.4235.1_demoobject
1b85c5b1-19d2-48a1-bb45-3ac75895aeed.4235.1_ceph

Note that the bucket index object is named .dir.<bucket instance id> and the data objects are prefixed with that same bucket instance id, matching the "id"/"marker" fields shown by bucket stats above.

Lab complete


 

Author's note: the content of this article is largely based on material from Mr. Yan Wei of Yutian Education; I verified all of the operations myself in the lab. If you wish to repost it, please obtain permission from Yutian Education (http://www.yutianedu.com/) or from Mr. Yan himself (https://www.cnblogs.com/breezey/). Thank you!

 

