CTDB-based nfs-ganesha + glusterfs



About nfs-ganesha

Page 11 of the upstream docs covers high availability for nfs-ganesha.

nfs-ganesha does not provide its own clustering support, but HA can be achieved with Linux HA tooling.

The FSAL used here is GLUSTER.

I. Pacemaker-based nfs-ganesha + glusterfs only applies to GlusterFS 3.10

II. CTDB-based nfs-ganesha + glusterfs

Set up HA for nfs-ganesha using CTDB

1. Install the storhaug package on all participating nodes; this pulls in all of its dependencies, such as ctdb, nfs-ganesha-gluster, glusterfs, and their related packages.

yum install storhaug-nfs

  

2. Configure passwordless SSH

On one of the participating nodes.

Create the directory:

mkdir -p /etc/sysconfig/storhaug.d/

Generate a key pair:

ssh-keygen -f /etc/sysconfig/storhaug.d/secret.pem

Copy the public key to the other nodes:

ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@nas5

Verify passwordless login works:

ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /etc/sysconfig/storhaug.d/secret.pem root@nas5
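
If there are more than two participating nodes, the copy-and-verify steps can be looped; a minimal sketch, where the NODES list is illustrative and should be replaced with the real hostnames:

# NODES is an assumed placeholder -- substitute the actual participating nodes
NODES="nas5 nas6"
for node in $NODES; do
    # push the public key, then confirm passwordless login works
    ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@$node
    ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
        -i /etc/sysconfig/storhaug.d/secret.pem root@$node true
done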

  

3. Populate the CTDB node files

Use the fixed (static) IPs of the participating nodes:

/etc/ctdb/nodes

10.1.1.14
10.1.1.15

  

Use the floating IPs (VIPs) of the participating nodes; these must be different from the fixed IPs in /etc/ctdb/nodes:

/etc/ctdb/public_addresses

10.1.1.114/24 ens33
10.1.1.115/24 ens33
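
Before committing to these VIPs, it is worth confirming they are not already in use on the network; a quick check (addresses taken from the file above):

# each ping should time out if the VIP is genuinely free
ping -c1 -W1 10.1.1.114 || echo "10.1.1.114 is free"
ping -c1 -W1 10.1.1.115 || echo "10.1.1.115 is free"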

  

4. Configure CTDB's main configuration file

/etc/ctdb/ctdbd.conf

CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_NFS=yes
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
# additional entries
CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
CTDB_NFS_STATE_FS_TYPE=glusterfs
CTDB_NFS_STATE_MNT=/run/gluster/shared_storage
CTDB_NFS_SKIP_SHARE_CHECK=yes
NFS_HOSTNAME=localhost

 

5. Create ganesha.conf; it can be edited later to set global configuration options

touch /etc/ganesha/ganesha.conf
echo "### NFS-Ganesha.config" > /etc/ganesha/ganesha.conf
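
As a reference for what the global section might later contain, a minimal sketch (the option below is illustrative, not required by storhaug):

# /etc/ganesha/ganesha.conf -- example global options only
LOG {
    # overall log verbosity for ganesha.nfsd
    Default_Log_Level = INFO;
}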

  

6. Create the trusted storage pool and start the gluster shared storage volume

On all participating nodes:

systemctl start glusterd
systemctl enable glusterd

  

On the bootstrap node, peer-probe the other nodes:

gluster peer probe nas5

  

Enable the gluster shared storage volume; a volume is then created and mounted automatically and is used to hold the configuration:

gluster volume set all cluster.enable-shared-storage enable
volume set: success

  

Verify that gluster_shared_storage is mounted at /run/gluster/shared_storage:

gluster volume list
gluster_shared_storage
gv0

df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/centos-root        17G  1.6G   16G  10% /
devtmpfs                      475M     0  475M   0% /dev
tmpfs                         487M   38M  449M   8% /dev/shm
tmpfs                         487M  7.7M  479M   2% /run
tmpfs                         487M     0  487M   0% /sys/fs/cgroup
/dev/sda1                    1014M  133M  882M  14% /boot
/dev/mapper/datavg-lv1        1.8G   33M  1.8G   2% /data/brick1
tmpfs                          98M     0   98M   0% /run/user/0
nas4:/gv0                     1.8G   33M  1.8G   2% /mnt
nas4:/gv0                     1.8G   32M  1.8G   2% /root/test
nas4:/gluster_shared_storage   17G  1.6G   16G  10% /run/gluster/shared_storage

  

7. Start the ctdbd and ganesha.nfsd daemons

systemctl start nfs-ganesha
systemctl enable nfs-ganesha
systemctl start ctdb
systemctl enable ctdb
systemctl status ctdb
● ctdb.service - CTDB
   Loaded: loaded (/usr/lib/systemd/system/ctdb.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-05-10 16:00:00 CST; 7min ago
     Docs: man:ctdbd(1)
           man:ctdb(7)
 Main PID: 9581 (ctdbd)
   CGroup: /system.slice/ctdb.service
           ├─9581 /usr/sbin/ctdbd --pidfile=/run/ctdb/ctdbd.pid --nlist=/etc/ctdb/nodes --public-addresses=/etc/ctdb/public_a...
           ├─9583 /usr/libexec/ctdb/ctdb_eventd -e /etc/ctdb/events.d -s /var/run/ctdb/eventd.sock -P 9581 -l file:/var/log/l...
           └─9654 /usr/sbin/ctdbd --pidfile=/run/ctdb/ctdbd.pid --nlist=/etc/ctdb/nodes --public-addresses=/etc/ctdb/public_a...

May 10 15:59:57 nas4 systemd[1]: Starting CTDB...
May 10 15:59:57 nas4 ctdbd_wrapper[9575]: No recovery lock specified. Starting CTDB without split brain prevention.
May 10 16:00:00 nas4 systemd[1]: Started CTDB.
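
Once ctdb is running on every node, cluster health and VIP placement can be checked with ctdb's own tools (output varies per cluster, so it is omitted here):

ctdb status   # every node should eventually report OK
ctdb ip       # shows which node currently hosts each public address (VIP)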

  

On the lead node:

storhaug setup
Setting up
nfs-ganesha is already running

  

Watch CTDB:
/var/log/log.ctdb

Watch ganesha:
/var/log/ganesha/ganesha.log
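
For example, follow both logs while running storhaug setup:

tail -f /var/log/log.ctdb
tail -f /var/log/ganesha/ganesha.log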

 

8. Export a gluster volume

 

1) First set the disk up with LVM

pvcreate /dev/sdc
vgcreate bricks /dev/sdc
vgs
lvcreate -L 1.9G -T bricks/thinpool
  Rounding up size to full physical extent 1.90 GiB
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "thinpool" created.

The LV is created as a thin volume: the size given is only the maximum it may grow to, not space carved out up front (which would otherwise be wasted). -T creates the thin pool, and the LV is then created on top of that pool; as the output above notes, a pool with a 64.00 KiB chunk size can address at most 15.81 TiB of data.

lvcreate -V 1.9G -T bricks/thinpool -n brick-1
  Rounding up size to full physical extent 1.90 GiB
  Logical volume "brick-1" created.
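
Because both the pool and brick-1 are thin-provisioned, lvs confirms that barely any space is consumed until data is actually written:

lvs bricks    # Data% for thinpool and brick-1 should be close to 0 at this point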

  

2) Format the brick

mkfs.xfs -i size=512 /dev/bricks/brick-1 
meta-data=/dev/bricks/brick-1    isize=512    agcount=8, agsize=62336 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=498688, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

  

3) Create the mount-point directory

mkdir -p /bricks/vol

  

4) Mount it manually

mount /dev/bricks/brick-1 /bricks/vol

  

5) Add an /etc/fstab entry so it mounts automatically

/etc/fstab

/dev/bricks/brick-1 /bricks/vol		xfs	defaults	0 0

  

6) Create a sub-directory under the mount point to serve as the brick

mkdir /bricks/vol/myvol

  

7) Create the gluster volume

gluster volume create  myvol replica 2 nas4:/bricks/vol/myvol nas5:/bricks/vol/myvol 
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: myvol: success: please start the volume to access data

  

8) Start the gluster volume

gluster volume start myvol
volume start: myvol: success

  

9) Export the gluster volume from ganesha

storhaug export myvol

  

First error: the exports directory did not exist yet

/usr/sbin/storhaug: line 247: /run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf: No such file or directory
ls: cannot access /run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf: No such file or directory
sed: can't read /run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf: No such file or directory

Fix: create the exports directory

mkdir -p /run/gluster/shared_storage/nfs-ganesha/exports/

  

Second error: an Export_Id conflict. Before setting up automatic exports, a hand-written export had been given Export_Id = 1, and ganesha's automatically generated exports also number from 1 by default.

Error org.freedesktop.DBus.Error.InvalidFileContent: Selected entries in /run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf already active!!!
WARNING: Command failed on 10.1.1.14: dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/run/gluster/shared_storage/nfs-ganesha/exports/export.myvol.conf string:EXPORT\(Path=/myvol\)

Fix: change the hand-written export's Export_Id to 101; that resolves the Export_Id conflict.
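
For illustration, the renumbered hand-written export would look roughly like the following (it is assumed here to export the existing gv0 volume; only Export_Id matters for the conflict):

EXPORT {
  Export_Id = 101;      # moved out of the range the automatic exports start from
  Path = "/gv0";
  Pseudo = "/gv0";
  Access_Type = RW;
  FSAL {
    Name = "GLUSTER";
    Hostname = localhost;
    Volume = "gv0";
  }
}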

Third attempt at exporting:

storhaug export myvol
export myvol
method return time=1557478537.439483 sender=:1.64 -> destination=:1.66 serial=51 reply_serial=2
   string "1 exports added"
method return time=1557478538.752212 sender=:1.84 -> destination=:1.87 serial=21 reply_serial=2
   string "1 exports added"

  

Inspect the automatically generated export file:

cd /run/gluster/shared_storage/nfs-ganesha/exports/
cat export.myvol.conf 
EXPORT {
  Export_Id = 1;
  Path = "/myvol";
  Pseudo = "/myvol";
  Access_Type = RW;
  Squash = No_root_squash;
  Disable_ACL = true;
  Protocols = "3","4";
  Transports = "UDP","TCP";
  SecType = "sys";
  FSAL {
    Name = "GLUSTER";
    Hostname = localhost;
    Volume = "myvol";
  }
}
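
To verify the export from an NFS client, mounting through one of the VIPs should work; a sketch (the mount point /mnt/nfstest is illustrative):

mkdir -p /mnt/nfstest
mount -t nfs -o vers=4 10.1.1.114:/myvol /mnt/nfstest
df -h /mnt/nfstest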

  

 

