1. Install the various libraries with zypper
zypper in bison openssl* libacl* sqlite libxml2*
zypper in libxml++* fuse fuse-devel
zypper in openssl-devel libaio-devel bison bison-devel flex systemtap-sdt-devel readline-devel
cd /home/src/glusterfs-3.8.9
./configure --prefix=/home/rzrk/server/glusterfs
Error:
configure: error: libxml2 devel libraries not found
Couldn't get past this one, so depressing...
configure: error: pass --disable-tiering to build without sqlite
./configure --prefix=/home/rzrk/server/glusterfs --disable-tiering    -- fine, build it this way
In the end it still wouldn't build, and there was no obvious error either.
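Both configure errors above usually just mean the corresponding -devel packages are missing. On SLES the headers should come from libxml2-devel and sqlite3-devel (the package names are my assumption, not something from the doc I followed), so something like this ought to let configure get past both checks:
zypper in libxml2-devel sqlite3-devel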
Check whether the fuse kernel module is loaded:
# lsmod |grep fuse
fuse 95758 3
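If fuse doesn't show up in lsmod, it can presumably be loaded by hand before mounting anything:
# modprobe fuse
# lsmod | grep fuse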
2. Can't get the source build to work
zypper in glusterfs
zypper in glusterfs-devel
lsb_release -a   # check the OS first
LSB Version: n/a
Distributor ID: SUSE LINUX
Description: SUSE Linux Enterprise Server 12 SP1
Release: 12.1
Codename: n/a
I. RPM install
--- The most badass doc ever hahaha, this is the one I followed and it worked
1. Add the zypper repo (downloaded from the official site)
zypper ar http://download.opensuse.org/repositories/home:/kkeithleatredhat:/SLES12-3.8/SLE_12_SP2/ glusterfs
zypper refresh
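A quick sanity check (my own habit) that the repo really got added:
# zypper lr -u | grep -i gluster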
zypper in glusterfs-3.8.10 libgfapi0-3.8.10 libgfchangelog0-3.8.10 libgfrpc0-3.8.10 libgfxdr0-3.8.10 libglusterfs0-3.8.10
All of the packages above have to be installed, otherwise there will be problems...
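To double-check that everything actually landed (just a sanity check, nothing official):
# rpm -qa | grep -i gluster
# glusterfs --version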
Project requirements:
A cluster of four nodes, with each pair of nodes replicating each other
You can do it, Zhiqing xi~~
Four machines: make a cluster with .4 and .18
172.30.5.4
172.30.5.17
172.30.5.18
172.30.5.19
.4 and .17 replicate each other
.18 and .19 replicate each other; .17 and .19 are the clients
2. Start the service
# service glusterd start
ps -ef |grep glusterd
root 78162 1 0 16:31 ? 00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
# netstat -tunlp|grep gluster
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 78162/glusterd
Never mind, I just wanted to see whether it would start.
- # To enable glusterd at boot (see the systemd note below)
- chkconfig glusterd on
- yum install glusterfs{,-server,-fuse,-geo-replication}   --- this is how other people install it, but that's CentOS, not SUSE: wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/gluster-epel.repo -O /etc/yum.repos.d/glusterfs.repo
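Systemd note: SLES 12 is systemd-based, so the native equivalents should be the following (my assumption; service/chkconfig also seem to work through the compatibility wrappers):
# systemctl start glusterd
# systemctl enable glusterd
# systemctl status glusterd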
3. Make the .4 and .17 servers replicate each other
GlusterFS management
- $gluster peer probe host|ip
- $gluster peer status   # show the status of all nodes other than the local one
- $gluster peer detach host|ip   # to remove a node from the storage pool
Before creating a volume, a group of storage servers has to be formed into a storage pool; the volume is then built from the bricks those servers provide.
Once glusterd is running on a server, it can be added to the pool by its hostname or IP address.
Run this on the .4 machine:
gluster peer probe 172.30.5.18
peer probe: failed: Probe returned with Transport endpoint is not connected
Fix for the error: start the gluster service on 5.18
# gluster peer probe s3
# gluster peer status
Number of Peers: 1
Hostname: s3
Uuid: 0e0230ea-74e3-48b4-a595-81be72a36309
State: Peer in Cluster (Connected)
# cat /etc/hosts
172.30.5.4 s1
172.30.5.17 s2
172.30.5.18 s3
172.30.5.19 s4
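Probably worth checking from the other side too; on s3 (5.18) the same command should list .4 as a connected peer (just a sanity check, same command as above):
# gluster peer status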
1) Create a GlusterFS logical volume (Volume)
Since .4 and .18 are the servers, this only needs to be run on one of them
# gluster volume create gv0 replica 2 172.30.5.4:/data/gluster 172.30.5.18:/data/gluster
It errors out as follows:
volume create: gv0: failed: The brick 172.30.5.4:/data/gluster is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
So it complains. That's because the brick we created is on the system disk, which gluster doesn't allow by default; in production you should keep bricks off the system disk too. If you really must, append force to the command.
# gluster volume create gv0 replica 2 172.30.5.4:/data/gluster 172.30.5.18:/data/gluster force
volume create: gv0: success: please start the volume to access data
Start the GlusterFS volume:
# gluster volume start gv0
volume start: gv0: success
Check:
# gluster volume info
Mount on a client; let's mount on .17
# mkdir /gluster
# mount -t glusterfs 172.30.5.4:/gv0 /gluster
# df -h
172.30.5.4:/gv0 80G 4.1G 76G 6% /gluster
Oh shit... the client isn't what I expected at all.
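If I understand it correctly, with replica 2 the mount only reports the free space of one brick's filesystem, and /data/gluster sits on the 80G root partition, which is why df shows 80G instead of the big /home. Can be confirmed on either server with:
# df -h /data/gluster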
Delete the volume, on 5.4:
# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv0: success
# gluster volume delete gv0
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv0: success
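One thing to watch out for (from what I remember of the gluster docs): if you want to reuse the same brick directory for a new volume, gluster refuses with "already part of a volume" unless the old extended attributes and the .glusterfs directory are cleared first, roughly like this:
# setfattr -x trusted.glusterfs.volume-id /data/gluster
# setfattr -x trusted.gfid /data/gluster
# rm -rf /data/gluster/.glusterfs
Not needed below, since the new volume uses /home/gluster instead.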
Do it over again, on 5.4
# gluster volume create gv0 replica 2 172.30.5.4:/home/gluster 172.30.5.18:/home/gluster force
volume create: gv0: success: please start the volume to access data
# gluster volume start gv0
volume start: gv0: success
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: e28cf751-38db-4081-a686-dc218959de97
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.30.5.4:/home/gluster
Brick2: 172.30.5.18:/home/gluster
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
----------------------------------------------------------- Mounted like this there is only 4.5T ----------------------------------------
Remove it
Then put all four machines into one storage pool
# gluster volume create dr-volume replica 2 s1:/home/data_fluster s2:/home/data_fluster s3:/home/data_fluster s4:/home/data_fluster
volume create: dr-volume: success: please start the volume to access data
# gluster volume info
Volume Name: dr-volume
Type: Distributed-Replicate
Volume ID: 578babc5-bd40-45d7-867b-b21fd970be3f
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: s1:/home/data_fluster
Brick2: s2:/home/data_fluster
Brick3: s3:/home/data_fluster
Brick4: s4:/home/data_fluster
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
route add default gw 172.30.5.1
Mount on each of the four clients:
4 mount -t glusterfs 172.30.5.4:/dr-volume /gluster/
17 mount -t glusterfs 172.30.5.4:/dr-volume /gluster_data
18 mount -t glusterfs 172.30.5.18:/dr-volume /gluster_data
19 mount -t glusterfs 172.30.5.18:/dr-volume /gluster_data
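Quick replication sanity check (my own test, paths as above): write a file through the .17 mount and it should appear on the bricks of exactly one replica pair, either s1+s2 or s3+s4 depending on the filename hash:
# touch /gluster_data/hello.txt          (run on 17)
# ls -l /home/data_fluster/              (run on the servers; the file should show up on two of them)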
Start at boot: chkconfig glusterd on
Test concurrency
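What I have in mind for the concurrency test (nothing scientific, just hammering the mount from all four clients at once; adjust the mount point, since .4 uses /gluster):
for i in 1 2 3 4 5; do
  dd if=/dev/zero of=/gluster_data/$(hostname)_$i bs=1M count=100 &
done
wait
ls -lh /gluster_data/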
Automatic mounting:
# cat /etc/fstab
UUID=70af5fe1-a9b4-408e-9b81-6c34048e5a10 swap swap defaults 0 0
UUID=b560683d-0afb-45fe-a86a-359c6c0ae104 / xfs defaults 1 1
UUID=5715b418-7bcb-4a37-8b8c-901769a5b3be /home xfs defaults 1 2
172.30.5.4:/dr-volume /gluster/ glusterfs defaults,_netdev 0 0
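To test the fstab entry without rebooting (my usual check):
# umount /gluster
# mount -a
# df -h | grep dr-volume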