Building a Highly Available NFS and S3 Cluster on Ceph

At the chief engineer's request, I set up a Ceph cluster that provides a highly available NFS service and a highly available S3 service. It is mainly for testing; once enough experience has been gained, the same setup can be used to build our own backup storage.

1. Environment setup

Create three virtual machines. Each VM has three disks (two of which are used as OSDs) and two NICs on two separate subnets. The planned IP addresses are:

  • node1 public:192.168.40.61; cluster:172.18.0.61
  • node2 public:192.168.40.62; cluster:172.18.0.62
  • node3 public:192.168.40.63; cluster:172.18.0.63

Edit /etc/hosts on all virtual machines:

[root@node1 ~]# vim /etc/hosts
192.168.40.61   node1
192.168.40.62   node2
192.168.40.63   node3

Set up passwordless SSH login

[root@node1 ~]# ssh-copy-id root@node1  # if no key pair exists yet, generate one first with ssh-keygen
[root@node1 ~]# ssh-copy-id root@node2
[root@node1 ~]# ssh-copy-id root@node3
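A minimal sketch of the same steps in one go, assuming no key pair exists yet and the root password of each node is typed interactively:

[root@node1 ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa  # generate a key pair without a passphrase
[root@node1 ~]# for h in node1 node2 node3; do ssh-copy-id root@$h; done  # push the public key to every node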

Kernel upgrade

The official documentation recommends kernel 4.x or later. Upgrade the kernel on CentOS 7 as follows:

[root@node1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org  # import the ELRepo public key
[root@node1 ~]# yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm  # install the ELRepo yum repository
[root@node1 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available  # list the available kernel packages; 5.4 and 5.16 were offered here
[root@node1 ~]# yum --enablerepo=elrepo-kernel install kernel-ml  # --enablerepo enables the given repository for this command; elrepo-kernel is used instead of the default elrepo
# after installation the new kernel must be set as the default boot entry, and the node rebooted, before it takes effect
[root@node1 ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg  # list all kernels available on the system
[root@node1 ~]# grub2-set-default 0  # 0 is the index of the desired kernel from the list above
[root@node1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg  # regenerate the grub configuration
[root@node1 ~]# reboot  # reboot
[root@node1 ~]# uname -r  # verify the running kernel version

Disable the firewall and SELinux

[root@node1 ~]# systemctl stop firewalld
[root@node1 ~]# systemctl disable firewalld
[root@node1 ~]# setenforce 0
[root@node1 ~]# vi /etc/selinux/config
Change the SELINUX line to disabled:
SELINUX=disabled

Or simply run the following command instead:
[root@node1 ~]# sed -i 's/=enforcing/=disabled/' /etc/selinux/config

Time synchronization

Install chrony on all nodes:

yum -y install  chrony

Configure the chrony service on node1:

[root@node1 ~]# vim /etc/chrony.conf
server ntp.aliyun.com iburst  # comment out the other server lines and keep only this one
......
#allow 192.168.0.0/16
allow 192.168.40.0/24  # allow clients from the public network
[root@node1 ~]# systemctl enable chronyd
[root@node1 ~]# systemctl start chronyd

On node2 and node3, remove the other server lines so that only one server remains:

[root@node2 ~]# vim /etc/chrony.conf
......
server 192.168.40.61 iburst
[root@node2 ~]# systemctl enable chronyd
[root@node2 ~]# systemctl start chronyd
[root@node2 ~]# chronyc sources -v
210 Number of sources = 1
...
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? node1                         0   8     0     -     +0ns[   +0ns] +/-    0ns
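Before moving on, it is worth confirming that each node has actually synchronized; a quick check (chronyc tracking prints the current reference source and offset):

[root@node2 ~]# chronyc tracking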

Configure the yum repositories

This must be done on all nodes.

[root@node1 ~]# yum install -y epel-release
[root@node1 ~]# vim /etc/yum.repos.d/ceph.repo 
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[root@node1 ~]# yum clean all && yum makecache
[root@node1 ~]# yum update
[root@node1 ~]# yum install ceph-deploy -y  # install only on the deployment node (node1)
[root@node1 ~]# yum install -y ceph ceph-mon ceph-mgr ceph-mgr-dashboard ceph-radosgw ceph-mds ceph-osd  # install on all nodes

2. Install and configure Ceph

[root@node1 ~]# mkdir -p /root/my-cluster  # directory to hold the initial Ceph configuration files
[root@node1 ~]# cd ~/my-cluster
[root@node1 my-cluster]# ceph-deploy new --public-network 192.168.40.0/24 --cluster-network 172.18.0.0/24 node1  # create a new ceph cluster with node1 as the first mon
[root@node1 my-cluster]# ceph-deploy mon create-initial  # initialize the monitor
[root@node1 my-cluster]# ceph-deploy admin node1 node2 node3  # push the admin keyring and config file to the nodes
[root@node1 my-cluster]# ceph-deploy mgr create node1  # deploy the Manager daemon
[root@node1 my-cluster]# ceph-deploy mgr create node2 node3
[root@node1 my-cluster]# ceph-deploy mon create node2  # add more mon nodes
[root@node1 my-cluster]# ceph-deploy mon create node3
[root@node1 my-cluster]# ceph -s
[root@node1 my-cluster]# ceph-deploy osd create node1 --data /dev/sdb  # add the OSDs; confirm the device names with lsblk first
[root@node1 my-cluster]# ceph-deploy osd create node2 --data /dev/sdb
[root@node1 my-cluster]# ceph-deploy osd create node3 --data /dev/sdb
[root@node1 my-cluster]# ceph-deploy osd create node1 --data /dev/sdc
[root@node1 my-cluster]# ceph-deploy osd create node2 --data /dev/sdc
[root@node1 my-cluster]# ceph-deploy osd create node3 --data /dev/sdc
[root@node1 my-cluster]# ceph osd tree  # check the OSD status
[root@node1 my-cluster]# ceph -s  # check the cluster status
[root@node1 my-cluster]# ceph mgr module enable dashboard  # enable the dashboard module
[root@node1 my-cluster]# ceph dashboard create-self-signed-cert  # create a self-signed certificate
[root@node1 my-cluster]# ceph dashboard set-login-credentials admin 123456  # set the web login user and password
[root@node1 my-cluster]# ceph mgr services  # show the service URLs

With the dashboard enabled, the cluster status can now be viewed at https://192.168.40.61:8443/.
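A quick reachability check from the command line (a sketch; -k skips certificate verification because the certificate is self-signed):

[root@node1 my-cluster]# curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.40.61:8443/  # should print an HTTP status code such as 200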

3. Install and configure nfs-ganesha

nfs-ganesha must be installed on all nodes.

[root@node1 my-cluster]# vim /etc/yum.repos.d/nfs-ganasha.repo
[nfsganesha]
name=nfsganesha
baseurl=https://mirrors.cloud.tencent.com/ceph/nfs-ganesha/rpm-V2.8-stable/nautilus/x86_64/
gpgcheck=0
enabled=1
[root@node1 my-cluster]# yum makecache
[root@node1 my-cluster]# yum install -y nfs-ganesha nfs-ganesha-ceph  nfs-ganesha-rados-grace nfs-ganesha-rgw nfs-utils rpcbind haproxy keepalived
[root@node1 my-cluster]# ceph-deploy mds create node1 node2 node3
[root@node1 my-cluster]# ceph osd pool create fs-meta 32
[root@node1 my-cluster]# ceph osd pool create fs-data 128
[root@node1 my-cluster]# ceph fs new cephfs fs-meta fs-data  # create the cephfs filesystem
[root@node1 my-cluster]# ceph fs ls  # list cephfs filesystems
[root@node1 my-cluster]# ceph-deploy --overwrite-conf admin node1 node2 node3   # push the updated config file
[root@node1 my-cluster]# mkdir /mnt/cephfs
[root@node1 my-cluster]# mount -t ceph 192.168.40.61:/ /mnt/cephfs/ -o name=admin,secret=AQDXrtNhaD/VOBAAuVtilymuHIkb9elyH6bCVQ==  # test mount; the secret value comes from /etc/ceph/ceph.client.admin.keyring
[root@node1 my-cluster]# mkdir -p /mnt/cephfs/nfs1  # create two directories to export
[root@node1 my-cluster]# mkdir -p /mnt/cephfs/nfs2
[root@node1 my-cluster]# vim /etc/ganesha/ganesha.conf  # add the following configuration
NFS_CORE_PARAM {
    Enable_NLM = false;
    NFS_Port = 52049;
    Enable_RQUOTA = false;
}
EXPORT_DEFAULTS {
    Access_Type = RW;
    Anonymous_uid = 65534;
    Anonymous_gid = 65534;
}
LOG {
    Default_Log_Level = INFO;
    Facility {
        name = FILE;
        destination = "/var/log/ganesha/ganesha.log";
        enable = active;
    }
}

NFSv4 {
    # Delegations = false;
    # RecoveryBackend = 'rados_cluster';
    # Minor_Versions = 1,2;
}

EXPORT
{
        Export_Id = 1;
        Path = /nfs1;
        Pseudo = /nfs1;
        Squash = no_root_squash;
        Access_Type = RW;
        FSAL {
            secret_access_key = "AQDXrtNhaD/VOBAAuVtilymuHIkb9elyH6bCVQ==";
            user_id = "admin";
            name = "CEPH";
            filesystem = "cephfs";
        }
}
EXPORT
{
        Export_Id = 2;
        Path = /nfs2;
        Pseudo = /nfs2;
        Squash = no_root_squash;
        Access_Type = RW;
        FSAL {
            secret_access_key = "AQDXrtNhaD/VOBAAuVtilymuHIkb9elyH6bCVQ==";
            user_id = "admin";
            name = "CEPH";
            filesystem = "cephfs";
        }
}

[root@node1 my-cluster]# systemctl start nfs-ganesha
[root@node1 my-cluster]# systemctl enable nfs-ganesha
[root@node1 my-cluster]# scp /etc/ganesha/ganesha.conf root@node2:/etc/ganesha/ganesha.conf  # copy the config file to the other two nodes
[root@node1 my-cluster]# scp /etc/ganesha/ganesha.conf root@node3:/etc/ganesha/ganesha.conf
[root@node1 my-cluster]# ssh root@node2 systemctl start nfs-ganesha  # start nfs-ganesha on node2 and node3
[root@node1 my-cluster]# ssh root@node3 systemctl start nfs-ganesha
[root@node1 my-cluster]# ssh root@node2 systemctl enable nfs-ganesha
[root@node1 my-cluster]# ssh root@node3 systemctl enable nfs-ganesha
[root@node1 my-cluster]# systemctl status nfs-ganesha
[root@node1 my-cluster]# showmount -e node1
Export list for node1:
/nfs1 (everyone)
/nfs2 (everyone)
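A quick mount test from a separate client (a sketch; the client hostname and mount point are hypothetical). ganesha listens on port 52049, so the port must be passed when mounting a node directly; the standard port 2049 is only exposed later through the haproxy/keepalived VIP:

[root@client ~]# mkdir -p /mnt/nfs1
[root@client ~]# mount -t nfs -o port=52049 node1:/nfs1 /mnt/nfs1
[root@client ~]# df -h /mnt/nfs1  # the export should show up with the size of the cephfs pool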

4. Configure the S3 service

[root@node1 my-cluster]# ceph-deploy rgw create node1 node2
[root@node1 my-cluster]# systemctl status ceph-radosgw@rgw.node1.service
[root@node1 my-cluster]# ceph -s

While configuring S3, the rgw service failed to start. After some googling, the rough cause turned out to be: pg_num < pgp_num or mon_max_pg_per_osd exceeded. Debug it with the following command:

[root@node1 my-cluster]# /usr/bin/radosgw -d --cluster ceph --name client.rgw.node1 --setuser ceph --setgroup ceph --debug-rgw=2
It looks like there were too many PGs: several pools had been created by hand earlier, but the rgw service automatically creates the pools it needs (except for the data pool), so there was no need to create them manually.
Delete the redundant pools:
[root@node1 my-cluster]# ceph osd pool delete .rgw.control .rgw.control --yes-i-really-really-mean-it 
[root@node1 my-cluster]# systemctl start ceph-radosgw@rgw.node1.service
[root@node1 my-cluster]# systemctl status ceph-radosgw@rgw.node1.service  # rgw服務正常了

Create the storage pools

[root@node1 my-cluster]# ceph osd crush class ls
[
    "hdd"
]
[root@node1 my-cluster]# ceph osd crush rule create-replicated rule-hdd default host hdd
[root@node1 my-cluster]# ceph osd crush rule ls
replicated_rule
rule-hdd
[root@node1 my-cluster]# ceph osd pool create default.rgw.buckets.data 64  # create the data pool; default.rgw.buckets.index already exists, so there is no need to create it again
[root@node1 my-cluster]# ceph osd pool application enable default.rgw.buckets.data rgw
[root@node1 my-cluster]# ceph osd pool application enable default.rgw.buckets.index rgw
Change the crush rule of all pools; run on node1:
[root@node1 my-cluster]# for i in `ceph osd lspools | grep -v data | awk '{print $2}'`; do ceph osd pool set $i crush_rule rule-hdd; done
[root@node1 my-cluster]# ceph osd pool set default.rgw.buckets.data crush_rule rule-hdd
[root@node1 my-cluster]# radosgw-admin user create --uid="testuser" --display-name="First User"  # create an S3 user
{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "testuser",
            "access_key": "LOLDHV9L1CS12586AQ4Y",
            "secret_key": "9aHAOD8vOTwTI5OpBAbVPL35QqJA8yfZLQOI7jHJ"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
[root@node1 my-cluster]# yum install python-boto  # install the python-boto module
[root@node1 my-cluster]# vi s3test.py  # a simple script that creates a bucket and lists all buckets
import boto.s3.connection

access_key = 'LOLDHV9L1CS12586AQ4Y'
secret_key = '9aHAOD8vOTwTI5OpBAbVPL35QqJA8yfZLQOI7jHJ'
conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host='node1', port=7480,
        is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
       )

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
[root@node1 my-cluster]# python s3test.py
my-new-bucket 2022-01-21T01:39:11.186Z  # the bucket was created successfully

You can also use S3 Browser to test creating and deleting buckets and uploading and downloading objects.

5. Configure haproxy

[root@node1 my-cluster]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     8000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
#    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 8000

listen stats
   bind *:9090
   mode http
   stats enable
   stats uri /
   stats refresh 5s
   stats realm Haproxy\ Stats
   stats auth admin:admin

frontend nfs-in
    bind 192.168.40.64:2049
    mode tcp
    option tcplog
    default_backend         nfs-back

frontend s3-in
    bind 192.168.40.64:58080
    mode tcp
    option tcplog
    default_backend         s3-back

frontend dashboard-in
    bind 192.168.40.64:8888
    mode tcp
    option tcplog
    default_backend         dashboard-back

backend nfs-back
    balance     source
    mode        tcp
    log         /dev/log local0 debug
    server      node1   192.168.40.61:52049 check
    server      node2   192.168.40.62:52049 check
    server      node3   192.168.40.63:52049 check

backend s3-back
    balance     source
    mode        tcp
    log         /dev/log local0 debug
    server      node1   192.168.40.61:7480 check
    server      node2   192.168.40.62:7480 check

backend dashboard-back
    balance     source
    mode        tcp
    log         /dev/log local0 debug
    server      node1   192.168.40.61:8443 check  
    server      node2   192.168.40.62:8443 check
    server      node3   192.168.40.63:8443 check

[root@node1 my-cluster]# systemctl start haproxy
[root@node1 my-cluster]# systemctl enable haproxy
[root@node1 my-cluster]# scp /etc/haproxy/haproxy.cfg root@node2:/etc/haproxy/haproxy.cfg
[root@node1 my-cluster]# ssh root@node2 systemctl start haproxy
[root@node1 my-cluster]# ssh root@node2 systemctl enable haproxy
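A quick sanity check that haproxy came up (a sketch; the stats page uses the admin:admin credentials configured above):

[root@node1 my-cluster]# systemctl status haproxy
[root@node1 my-cluster]# ss -tlnp | grep haproxy  # the frontend and stats ports should be listening
[root@node1 my-cluster]# curl -u admin:admin http://192.168.40.61:9090/  # HAProxy statistics page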

6. Configure keepalived

Configure keepalived in preemptive mode: node1 is MASTER with priority 200, and node2/node3 are BACKUP with priorities 150 and 100 respectively.

[root@node1 my-cluster]# vim /etc/keepalived/keepalived.conf
global_defs {
   router_id CEPH_NFS  # identifier; any string will do
}

vrrp_script check_haproxy {  # health-check script to run
    script "killall -0 haproxy"
    weight -20
    interval 2
    rise 2
    fall 2
}

vrrp_instance VI_0 {
    state MASTER  # MASTER on node1; change to BACKUP on the other nodes
    priority 200  # the highest priority holds the VIP; use 150 and 100 on the other two nodes
    interface ens192  # network interface for the VIP
    virtual_router_id 51  # must be 51 on all three nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.40.64/24 dev ens192  # virtual IP
    }
    track_script {
        check_haproxy
    }
}

[root@node1 my-cluster]# systemctl start keepalived
[root@node1 my-cluster]# systemctl enable keepalived
Remember to adjust the keepalived configuration (state and priority) on node2 and node3 first.
[root@node1 my-cluster]# ssh root@node2 systemctl start keepalived
[root@node1 my-cluster]# ssh root@node2 systemctl enable keepalived
[root@node1 my-cluster]# ssh root@node3 systemctl start keepalived
[root@node1 my-cluster]# ssh root@node3 systemctl enable keepalived
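Verify that the VIP has landed on node1 (a sketch; ens192 is the interface configured above):

[root@node1 my-cluster]# ip addr show ens192 | grep 192.168.40.64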

7. Test and verify high availability

  • NFS

Take a test machine, mount nfs1 through the virtual IP, and write some files. At this point the VIP sits on node1.
Reboot node1: the process that is writing will hang. This is expected, because NFS v4.1 is not in use, so the session cannot be resumed.
Kill the hung process with Ctrl-C and write again; the write succeeds, which shows the VIP has migrated and NFS has reached basic high availability. A sketch of the test is shown below.
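The client-side steps, roughly (the client hostname and mount point are hypothetical; mounting through the VIP uses the standard port 2049):

[root@client ~]# mount -t nfs 192.168.40.64:/nfs1 /mnt/nfs1
[root@client ~]# dd if=/dev/zero of=/mnt/nfs1/testfile bs=1M count=2048  # keep writing while node1 reboots
# when the write hangs, interrupt it with Ctrl-C and run it again; it should now complete through the new MASTER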

  • S3

Upload a file through S3 Browser and reboot node1: the upload stalls for a few seconds and then continues.
The S3 service thus also reaches basic high availability. A quick endpoint check through the VIP is shown below.
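To confirm that the S3 endpoint behind the VIP is answering (a sketch; an anonymous request to radosgw should return a ListAllMyBucketsResult XML document):

[root@node1 my-cluster]# curl http://192.168.40.64:58080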

