Ceph High-Availability Distributed Storage Cluster 05: Exporting RGW as an NFS File Interface with nfs-ganesha


Overview
We know that Ceph is a unified distributed storage system that offers applications three interfaces for accessing data: object (RGW), block (RBD), and file (CephFS). The object interface is usually accessed over HTTP.
Below we introduce another way to use Ceph's object (RGW) interface: nfs-ganesha. It allows the Ceph Object Gateway namespace to be accessed through file-based protocols such as NFSv3 and NFSv4. For details, see the Ceph documentation: http://docs.ceph.com/docs/master/radosgw/nfs/.
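As that documentation describes, the export presents each bucket as a top-level directory and objects as files beneath it. The sketch below (a hypothetical helper, not part of any Ceph API) illustrates that namespace convention:

```python
# Sketch of how the RGW NFS namespace maps paths to S3 entities:
# a top-level directory is a bucket, anything below it is an object key.
# This illustrates the convention only; it is not Ceph code.

def nfs_path_to_s3(path: str):
    """Map an NFS path under the export root to (bucket, key)."""
    parts = [p for p in path.split("/") if p]
    if not parts:
        return None, None           # the export root lists the buckets
    bucket, key = parts[0], "/".join(parts[1:])
    return bucket, key or None      # a bare bucket directory has no key

print(nfs_path_to_s3("/testbucket"))            # ('testbucket', None)
print(nfs_path_to_s3("/testbucket/dir/a.txt"))  # ('testbucket', 'dir/a.txt')
```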
1、Environment Preparation
1.1、Prepare the virtual machine
This is purely a test of availability, so I built the Ceph cluster on a single virtual machine and then configured nfs-ganesha to export RGW as an NFS file interface.
[root@ceph-osd-232 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@ceph-osd-232 ~]# uname -a
Linux ceph-osd-232 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
1.2、Configure the yum repositories
[root@ceph-osd-232 ~]# ll /etc/yum.repos.d/
total 48
-rw-r--r--. 1 root root 1664 Nov 23  2018 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 Nov 23  2018 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Nov 23  2018 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 Nov 23  2018 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Nov 23  2018 CentOS-Media.repo
-rw-r--r--  1 root root  717 Mar 24  2020 CentOS-NFS-Ganesha-28.repo
-rw-r--r--. 1 root root 1331 Nov 23  2018 CentOS-Sources.repo
-rw-r--r--  1 root root  353 Jul 31  2018 CentOS-Storage-common.repo
-rw-r--r--. 1 root root 5701 Nov 23  2018 CentOS-Vault.repo
-rw-r--r--  1 root root  557 Feb  7 16:39 ceph.repo
-rw-r--r--  1 root root  664 Dec 26 19:31 epel.repo  
1.2.1、Base repository
[root@ceph-osd-232 ~]# cat /etc/yum.repos.d/CentOS-Base.repo
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
 
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
1.2.2、EPEL repository
[root@ceph-osd-232 ~]# cat /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
1.2.3、Ceph repository
Here I configure the Nautilus Ceph repository.
[root@ceph-osd-232 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
2、Package Preparation
2.1、Install the Ceph packages
Note: the librgw2-devel package is needed if you later compile nfs-ganesha from source, so I install it here as well.
[root@ceph-osd-232 ~]# yum install ceph librgw2-devel libcephfs2 -y
[root@ceph-osd-232 ~]# ceph -v
ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)
Deploy the Ceph cluster. Below is the status of the cluster I deployed:
[root@ceph-osd-232 ~]# ceph -s
  cluster:
    id:     56863ba7-82f6-45db-b687-987f3d4cfa7c
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
  services:
    mon: 3 daemons, quorum ceph-osd-231,ceph-osd-232,ceph-osd-233 (age 2d)
    mgr: ceph-osd-232(active, since 2d), standbys: ceph-osd-231, ceph-osd-233
    osd: 8 osds: 8 up (since 2d), 8 in (since 6d)
    rgw: 3 daemons active (ceph-osd-231, ceph-osd-232, ceph-osd-233)
  data:
    pools:   8 pools, 480 pgs
    objects: 2.17M objects, 6.5 TiB
    usage:   9.9 TiB used, 48 TiB / 58 TiB avail
    pgs:     480 active+clean
2.2、Install nfs-ganesha
Because the Ceph in use is Nautilus 14.2.9, the matching nfs-ganesha version here is V2.8.4.
Configure the nfs-ganesha repository on the ganesha node.
#vi /etc/yum.repos.d/nfs-ganesha.repo
[nfs-ganesha]
name=nfs-ganesha
baseurl=http://us-west.ceph.com/nfs-ganesha/rpm-V2.8-stable/nautilus/x86_64/
enabled=1
priority=1
 
# yum install nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rgw -y
 
3、Configuration
3.1、Prepare the RGW user
# radosgw-admin user create --uid=qf --display-name="qf" --access-key=DS1WUIMV2NZHK6KGURTG --secret=ppi22WJN9ElnxhOyDbjmA3gEyuKxHP8y6Vm8JSrH
3.2、Prepare the nfs-ganesha configuration file
The configuration is fairly simple (though this is only a minimal configuration):
[root@ceph-osd-232 ~]# cat /etc/ganesha/ganesha.conf
EXPORT
{
        Export_ID=1;
        Path = "/";
        Pseudo = "/";
        Access_Type = RW;
        Protocols = 4;
        Transports = TCP;
        FSAL {
                Name = RGW;
                User_Id = "qf";
                Access_Key_Id ="DS1WUIMV2NZHK6KGURTG";
                Secret_Access_Key = "ppi22WJN9ElnxhOyDbjmA3gEyuKxHP8y6Vm8JSrH";
        }
}
RGW {
        ceph_conf = "/etc/ceph/ceph.conf";
        # for vstart cluster, name = "client.admin"
        name = "client.rgw.ceph-osd-232";
        cluster = "ceph";
#       init_args = "-d --debug-rgw=16";
}
The name value in the RGW section can be queried with the command ceph auth list.
 
A template for the RGW-NFS configuration file ships at:
/usr/share/doc/ganesha/config_samples/rgw.conf
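One easy mistake is letting the credentials in the FSAL block drift from the keys created with radosgw-admin above. A minimal sketch of a sanity check follows (the naive regex is an illustration, not a full ganesha.conf parser):

```python
import re

# Hypothetical sanity check: the Access_Key_Id / Secret_Access_Key in the
# FSAL RGW block must match the keys the RGW user was created with.
# The simple regex below is illustrative only, not a real conf parser.

GANESHA_CONF = '''
FSAL {
        Name = RGW;
        User_Id = "qf";
        Access_Key_Id = "DS1WUIMV2NZHK6KGURTG";
        Secret_Access_Key = "ppi22WJN9ElnxhOyDbjmA3gEyuKxHP8y6Vm8JSrH";
}
'''

def conf_value(text: str, option: str) -> str:
    """Extract a quoted option value like `Option = "value";`."""
    m = re.search(r'%s\s*=\s*"([^"]*)"' % option, text)
    return m.group(1) if m else ""

# Keys from the `radosgw-admin user create` command earlier in this article.
assert conf_value(GANESHA_CONF, "User_Id") == "qf"
assert conf_value(GANESHA_CONF, "Access_Key_Id") == "DS1WUIMV2NZHK6KGURTG"
print("FSAL credentials match the RGW user")
```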
 
3.3、Start the nfs-ganesha service
systemctl start nfs-ganesha
systemctl enable nfs-ganesha
After starting, run ps -ef | grep ganesha.nfsd to check whether the process was created; if not, inspect /var/log/ganesha/ganesha.log and troubleshoot based on its output.
 
3.4、Check that the nfs-ganesha service started correctly
[root@ceph-osd-232 ganesha]# ps aux|grep ganesha
root       68675  0.3  0.3 7938392 55348 ?       Ssl  16:44   0:02 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
You can see that the nfs-ganesha service has started normally.
# ceph -w
  cluster:
    id:     9e9cc600-9f75-4621-8094-26082d390578
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
            1 daemons have recently crashed
  services:
    mon:     3 daemons, quorum ceph-osd-231,ceph-osd-232,ceph-osd-233 (age 97m)
    mgr:     ceph-osd-231(active, since 9h), standbys: ceph-osd-233, ceph-osd-232
    osd:     12 osds: 12 up (since 97m), 12 in (since 9h)
    rgw:     3 daemons active (ceph-osd-231, ceph-osd-232, ceph-osd-233)
    rgw-nfs: 1 daemon active (ceph-osd-232)
  data:
    pools:   9 pools, 528 pgs
    objects: 642.62k objects, 2.1 TiB
    usage:   6.4 TiB used, 13 TiB / 19 TiB avail
    pgs:     528 active+clean
  io:
    client:   6.7 KiB/s rd, 9.9 MiB/s wr, 9 op/s rd, 14 op/s wr
    cache:    2.3 MiB/s flush, 8.0 MiB/s evict, 1 op/s promote
 
4、Mount the Export with an NFS Client
Now move to another server.
4.1、Install nfs-utils
[root@host-10-2-110-11 ~]# yum install -y nfs-utils
4.2、Mount the NFS export
Here 10.2.110.232 is the IP of the nfs-ganesha server configured above.
[root@host-10-2-110-11 ~]# mount -t nfs4 10.2.110.232:/ /mnt/
[root@host-10-2-110-11 ~]# mount |grep mnt
10.2.110.232:/ on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.2.110.11,local_lock=none,addr=10.2.110.232)
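The option string in that mount output can be cross-checked against the export: vers=4.1 must be covered by Protocols = 4, and proto=tcp by Transports = TCP, in ganesha.conf. Below is a small illustrative parser for such an option string (assuming plain comma-separated key=value/flag options):

```python
def parse_mount_opts(opts: str) -> dict:
    """Parse a /proc/mounts-style option string like 'rw,relatime,vers=4.1'."""
    out = {}
    for item in opts.split(","):
        key, _, val = item.partition("=")
        out[key] = val if val else True   # bare flags become True
    return out

opts = parse_mount_opts("rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,"
                        "proto=tcp,sec=sys,addr=10.2.110.232")
assert opts["vers"].startswith("4")   # allowed by Protocols = 4 in ganesha.conf
assert opts["proto"] == "tcp"         # allowed by Transports = TCP
print(opts["vers"], opts["addr"])
```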
 
At this point there are no objects for this user in the RGW data pool. Now try creating a directory under the NFS mount (a top-level directory corresponds to a bucket):
[root@host-10-2-110-11 ~]# mkdir -pv /mnt/testbucket
We can see that one more bucket has appeared:
[root@ceph-osd-233 ~]# radosgw-admin bucket list
[
    "qfpool",
    "testbucket"
]
 
Summary
The main steps above were:
* Install Ceph and set up the Ceph cluster
* Create an RGW user with radosgw-admin (creating the user automatically creates the pools RGW needs)
* Configure the nfs-ganesha repository and install nfs-ganesha with its RGW FSAL packages
* Configure nfs-ganesha
* Finally, start the nfs-ganesha service, then mount and test it from an NFS client
 
Friendly reminder: when you export RGW as NFS with nfs-ganesha, performance becomes very poor once the object store holds millions of files; in that case goofys is recommended instead.
 
Author: Dexter_Wang | Role: senior cloud computing and storage engineer at an Internet company | Contact: 993852246@qq.com

