GlusterFS Distributed File System Deployment and Basic Usage (CentOS 7.6)
Author: 尹正傑 (Yin Zhengjie)
Copyright notice: this is original work; reproduction without permission is prohibited, otherwise legal liability will be pursued.
Gluster File System is free software, originally developed by Z Research (later Gluster Inc., now part of Red Hat) and maintained by an active community of developers. The documentation is fairly complete, so it is not hard to get started. Gluster is a distributed scale-out file system that lets you quickly provision additional storage as your consumption grows, and automatic failover is one of its primary features. Official quick-start documentation: https://docs.gluster.org/en/latest/Install-Guide/Overview/.
I. Installing Gluster
1>.What is Gluster
Gluster is a scalable distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.
2>.Benefits of Gluster
- Scales to several petabytes
- Handles thousands of clients
- POSIX compatible
- Uses commodity hardware
- Can use any on-disk filesystem that supports extended attributes (see the brick-preparation sketch after this list)
- Accessible using industry-standard protocols such as NFS and SMB
- Provides replication, quotas, geo-replication, snapshots and bitrot detection
- Allows optimization for different workloads
- Open source
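Since a brick can sit on any local filesystem that supports extended attributes, the official quick start dedicates an XFS filesystem to each brick. A minimal brick-preparation sketch, assuming a spare disk /dev/sdb on each node (the lab below simply uses directories under /home instead):

mkfs.xfs -i size=512 /dev/sdb                  # XFS with 512-byte inodes leaves room for Gluster's extended attributes
mkdir -p /data/brick1
echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
mount -a
getfattr -d -m . -e hex /data/brick1           # confirm extended attributes are readable (getfattr comes from the attr package)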
3>.Check the latest version; as shown below, the latest release at the time of writing is Gluster 5
[root@node101 ~]# cat /etc/yum.repos.d/glusterfs.repo
[myglusterfs]
name=glusterfs
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[root@node101 ~]# 
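(The gpgkey line above is a leftover from a copied MySQL repo file; it is ignored because gpgcheck=0.) As an alternative to writing the .repo file by hand, the CentOS Storage SIG ships release packages that configure the repository for you. A hedged sketch, assuming the SIG package for Gluster 5 is available in the extras repository (package names vary by Gluster release):

yum -y install centos-release-gluster5      # or centos-release-gluster for whatever release the SIG currently tracks
yum -y install glusterfs-server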
4>.Install the glusterfs-server package
[root@node101 yum.repos.d]# yum -y install glusterfs-server Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.jdcloud.com * extras: mirrors.163.com * updates: mirrors.shu.edu.cn myglusterfs | 2.9 kB 00:00:00 myglusterfs/primary_db | 76 kB 00:00:03 Resolving Dependencies --> Running transaction check ---> Package glusterfs-server.x86_64 0:5.3-1.el7 will be installed --> Processing Dependency: glusterfs-libs = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: glusterfs-fuse = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: glusterfs-client-xlators = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: glusterfs-cli = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: glusterfs-api = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: glusterfs = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: rpcbind for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_PRIVATE_3.7.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_PRIVATE_3.4.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_3.7.4)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_3.7.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_3.6.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_3.5.1)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_3.4.2)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0(GFAPI_3.4.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: liburcu-cds.so.6()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: liburcu-bp.so.6()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libglusterfs.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfxdr.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfrpc.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfchangelog.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Processing Dependency: libgfapi.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64 --> Running transaction check ---> Package glusterfs.x86_64 0:5.3-1.el7 will be installed ---> Package glusterfs-api.x86_64 0:5.3-1.el7 will be installed ---> Package glusterfs-cli.x86_64 0:5.3-1.el7 will be installed ---> Package glusterfs-client-xlators.x86_64 0:5.3-1.el7 will be installed ---> Package glusterfs-fuse.x86_64 0:5.3-1.el7 will be installed --> Processing Dependency: psmisc for package: glusterfs-fuse-5.3-1.el7.x86_64 --> Processing Dependency: attr for package: glusterfs-fuse-5.3-1.el7.x86_64 ---> Package glusterfs-libs.x86_64 0:5.3-1.el7 will be installed ---> Package rpcbind.x86_64 0:0.2.0-47.el7 will be installed --> Processing Dependency: libtirpc >= 0.2.4-0.7 for package: rpcbind-0.2.0-47.el7.x86_64 --> Processing Dependency: libtirpc.so.1()(64bit) for package: rpcbind-0.2.0-47.el7.x86_64 ---> Package userspace-rcu.x86_64 0:0.10.0-3.el7 will be 
installed --> Running transaction check ---> Package attr.x86_64 0:2.4.46-13.el7 will be installed ---> Package libtirpc.x86_64 0:0.2.4-0.15.el7 will be installed ---> Package psmisc.x86_64 0:22.20-15.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved =================================================================================================================================================================== Package Arch Version Repository Size =================================================================================================================================================================== Installing: glusterfs-server x86_64 5.3-1.el7 myglusterfs 1.4 M Installing for dependencies: attr x86_64 2.4.46-13.el7 base 66 k glusterfs x86_64 5.3-1.el7 myglusterfs 668 k glusterfs-api x86_64 5.3-1.el7 myglusterfs 106 k glusterfs-cli x86_64 5.3-1.el7 myglusterfs 202 k glusterfs-client-xlators x86_64 5.3-1.el7 myglusterfs 989 k glusterfs-fuse x86_64 5.3-1.el7 myglusterfs 147 k glusterfs-libs x86_64 5.3-1.el7 myglusterfs 415 k libtirpc x86_64 0.2.4-0.15.el7 base 89 k psmisc x86_64 22.20-15.el7 base 141 k rpcbind x86_64 0.2.0-47.el7 base 60 k userspace-rcu x86_64 0.10.0-3.el7 myglusterfs 92 k Transaction Summary =================================================================================================================================================================== Install 1 Package (+11 Dependent packages) Total download size: 4.3 M Installed size: 16 M Downloading packages: (1/12): attr-2.4.46-13.el7.x86_64.rpm | 66 kB 00:00:00 (2/12): glusterfs-api-5.3-1.el7.x86_64.rpm | 106 kB 00:00:05 (3/12): glusterfs-5.3-1.el7.x86_64.rpm | 668 kB 00:00:06 (4/12): glusterfs-cli-5.3-1.el7.x86_64.rpm | 202 kB 00:00:02 (5/12): glusterfs-fuse-5.3-1.el7.x86_64.rpm | 147 kB 00:00:01 (6/12): glusterfs-client-xlators-5.3-1.el7.x86_64.rpm | 989 kB 00:00:05 (7/12): libtirpc-0.2.4-0.15.el7.x86_64.rpm | 89 kB 00:00:00 (8/12): psmisc-22.20-15.el7.x86_64.rpm | 141 kB 00:00:00 (9/12): rpcbind-0.2.0-47.el7.x86_64.rpm | 60 kB 00:00:00 (10/12): glusterfs-libs-5.3-1.el7.x86_64.rpm | 415 kB 00:00:03 (11/12): userspace-rcu-0.10.0-3.el7.x86_64.rpm | 92 kB 00:00:01 (12/12): glusterfs-server-5.3-1.el7.x86_64.rpm | 1.4 MB 00:00:06 ------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 238 kB/s | 4.3 MB 00:00:18 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : glusterfs-libs-5.3-1.el7.x86_64 1/12 Installing : glusterfs-5.3-1.el7.x86_64 2/12 Installing : glusterfs-client-xlators-5.3-1.el7.x86_64 3/12 Installing : glusterfs-api-5.3-1.el7.x86_64 4/12 Installing : glusterfs-cli-5.3-1.el7.x86_64 5/12 Installing : libtirpc-0.2.4-0.15.el7.x86_64 6/12 Installing : rpcbind-0.2.0-47.el7.x86_64 7/12 Installing : psmisc-22.20-15.el7.x86_64 8/12 Installing : attr-2.4.46-13.el7.x86_64 9/12 Installing : glusterfs-fuse-5.3-1.el7.x86_64 10/12 Installing : userspace-rcu-0.10.0-3.el7.x86_64 11/12 Installing : glusterfs-server-5.3-1.el7.x86_64 12/12 Verifying : glusterfs-libs-5.3-1.el7.x86_64 1/12 Verifying : glusterfs-cli-5.3-1.el7.x86_64 2/12 Verifying : glusterfs-fuse-5.3-1.el7.x86_64 3/12 Verifying : rpcbind-0.2.0-47.el7.x86_64 4/12 Verifying : glusterfs-api-5.3-1.el7.x86_64 5/12 Verifying : glusterfs-5.3-1.el7.x86_64 6/12 Verifying : userspace-rcu-0.10.0-3.el7.x86_64 7/12 Verifying : glusterfs-server-5.3-1.el7.x86_64 8/12 
Verifying : attr-2.4.46-13.el7.x86_64 9/12 Verifying : psmisc-22.20-15.el7.x86_64 10/12 Verifying : glusterfs-client-xlators-5.3-1.el7.x86_64 11/12 Verifying : libtirpc-0.2.4-0.15.el7.x86_64 12/12 Installed: glusterfs-server.x86_64 0:5.3-1.el7 Dependency Installed: attr.x86_64 0:2.4.46-13.el7 glusterfs.x86_64 0:5.3-1.el7 glusterfs-api.x86_64 0:5.3-1.el7 glusterfs-cli.x86_64 0:5.3-1.el7 glusterfs-client-xlators.x86_64 0:5.3-1.el7 glusterfs-fuse.x86_64 0:5.3-1.el7 glusterfs-libs.x86_64 0:5.3-1.el7 libtirpc.x86_64 0:0.2.4-0.15.el7 psmisc.x86_64 0:22.20-15.el7 rpcbind.x86_64 0:0.2.0-47.el7 userspace-rcu.x86_64 0:0.10.0-3.el7 Complete! [root@node101 yum.repos.d]#
The official tutorial is taught with 2 machines, so 2 virtual machines are enough for this walkthrough as well. The glusterfs-server package must be installed on both of them.
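If firewalld is enabled on the nodes, the Gluster management and brick ports must be reachable between them before the cluster will form (in a throw-away lab the firewall is often simply stopped). A minimal sketch, assuming the stock CentOS 7 firewalld:

firewall-cmd --permanent --add-port=24007-24008/tcp      # glusterd management traffic
firewall-cmd --permanent --add-port=49152-49251/tcp      # brick ports, allocated one per brick starting at 49152
firewall-cmd --reload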
5>.Start the glusterd service

[root@node101 ~]# systemctl status glusterd ● glusterd.service - GlusterFS, a clustered file-system server Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled) Active: inactive (dead) [root@node101 ~]# [root@node101 ~]# systemctl start glusterd [root@node101 ~]# [root@node101 ~]# systemctl status glusterd ● glusterd.service - GlusterFS, a clustered file-system server Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled) Active: active (running) since Mon 2019-02-18 16:23:27 CST; 2s ago Process: 6581 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) Main PID: 6582 (glusterd) CGroup: /system.slice/glusterd.service └─6582 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO Feb 18 16:23:27 node101.yinzhengjie.org.cn systemd[1]: Starting GlusterFS, a clustered file-system server... Feb 18 16:23:27 node101.yinzhengjie.org.cn systemd[1]: Started GlusterFS, a clustered file-system server. [root@node101 ~]# [root@node101 ~]# systemctl enable glusterd Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service. [root@node101 ~]#

[root@node102 ~]# systemctl status glusterd ● glusterd.service - GlusterFS, a clustered file-system server Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled) Active: inactive (dead) [root@node102 ~]# [root@node102 ~]# [root@node102 ~]# systemctl start glusterd [root@node102 ~]# [root@node102 ~]# systemctl status glusterd ● glusterd.service - GlusterFS, a clustered file-system server Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled) Active: active (running) since Mon 2019-02-18 16:22:09 CST; 2s ago Process: 14000 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) Main PID: 14001 (glusterd) CGroup: /system.slice/glusterd.service └─14001 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO Feb 18 16:22:09 node102.yinzhengjie.org.cn systemd[1]: Starting GlusterFS, a clustered file-system server... Feb 18 16:22:09 node102.yinzhengjie.org.cn systemd[1]: Started GlusterFS, a clustered file-system server. [root@node102 ~]# [root@node102 ~]# [root@node102 ~]# systemctl enable glusterd Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service. [root@node102 ~]# [root@node102 ~]#
II. Configuring and using GlusterFS (https://docs.gluster.org/en/latest/Install-Guide/Configure/)
1>.Configure the trusted storage pool
[root@node101 ~]# gluster peer probe node102.yinzhengjie.org.cn          #On node101.yinzhengjie.org.cn, add node102.yinzhengjie.org.cn to the trusted pool
peer probe: success.
[root@node101 ~]# 
[root@node101 ~]# gluster peer status          #Check the trusted pool status on node101.yinzhengjie.org.cn
Number of Peers: 1
Hostname: node102.yinzhengjie.org.cn
Uuid: ec348557-e9c3-46c8-8ce9-bac6c1b4c298
State: Peer in Cluster (Connected)
[root@node101 ~]# 
[root@node101 ~]# ssh node102.yinzhengjie.org.cn          #Log in to the other node, node102.yinzhengjie.org.cn
Last login: Mon Feb 18 16:19:47 2019 from 172.30.1.2
[root@node102 ~]# 
[root@node102 ~]# gluster peer status          #Check the trusted pool status that node102.yinzhengjie.org.cn already knows about
Number of Peers: 1
Hostname: node101.yinzhengjie.org.cn
Uuid: 9ed5663a-72ec-44c2-92f6-118c6f6cabed
State: Peer in Cluster (Connected)
[root@node102 ~]# 
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]# 
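Probing by hostname assumes each node can resolve the other's name. A sketch of the /etc/hosts entries this lab depends on; the 172.30.1.x addresses are assumptions, substitute your real IPs:

cat >> /etc/hosts <<'EOF'
172.30.1.101 node101.yinzhengjie.org.cn
172.30.1.102 node102.yinzhengjie.org.cn
EOF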
2>.Create a distributed volume

The following types of volumes can be created in a storage environment:
- Distributed - distributes files across the bricks in the volume. Use it where you need to scale storage and redundancy is either not important or is provided by other hardware/software layers.
- Replicated - replicates files across the bricks in the volume. Use it in environments where high availability and high reliability are critical.
- Distributed Replicated - distributes files across replicated sets of bricks. Use it where you need to scale storage and high reliability is critical; it also gives better read performance in most environments.
- Dispersed - based on erasure codes, it provides space-efficient protection against disk or server failures. It stores an encoded fragment of the original file on each brick in such a way that only a subset of the fragments is needed to recover the file. The administrator configures, at volume-creation time, how many bricks may be lost without losing access to data.
- Distributed Dispersed - distributes files across dispersed subvolumes. This has the same advantages as distributed replicated volumes, but uses dispersed storage instead of replication.
- Striped [Deprecated] - stripes data across the bricks in the volume. For best results it should only be used in high-concurrency environments that access very large files.
- Distributed Striped [Deprecated] - stripes data across two or more nodes in the cluster. Use it where you need to scale storage and access to very large files in high-concurrency environments is critical.
- Distributed Striped Replicated [Deprecated] - distributes striped data across replicated bricks in the cluster. For best results it should be used in highly concurrent environments with parallel access to very large files where performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
- Striped Replicated [Deprecated] - stripes data across replicated bricks in the cluster. For best results it should be used in highly concurrent environments with parallel access to very large files where performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
All of these are created with gluster volume create; a sketch of the generic syntax follows.
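A sketch of that generic syntax (NAME, HOST and BRICK_PATH are placeholders; pick at most one of the type keywords):

gluster volume create NAME \
    [replica N [arbiter 1]] [disperse N redundancy M] [stripe N] \
    [transport tcp] \
    HOST1:/BRICK_PATH HOST2:/BRICK_PATH ... \
    [force]          # force overrides safety checks, e.g. bricks placed on the root filesystem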
[root@node101 ~]# mkdir -p /home/yinzhengjie/glusterfs/file1          #Create a directory on each of the two servers to hold a GlusterFS brick
[root@node101 ~]# 
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb 18 16:28:23 2019 from node101.yinzhengjie.org.cn
[root@node102 ~]# 
[root@node102 ~]# mkdir -p /home/yinzhengjie/glusterfs/file1
[root@node102 ~]# 
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]# 
[root@node101 ~]# gluster volume create test-volume node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1/ node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1/          #Create the distributed volume
volume create: test-volume: success: please start the volume to access data
[root@node101 ~]# 
[root@node101 ~]# gluster volume info          #Show the information of the volume we just created
Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306-1984-4fea-8fe2-a37771b471d5
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]# 
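One reason to choose a distributed volume is that it can be grown later by adding bricks and rebalancing existing files onto them. A hedged sketch (the third node node103.yinzhengjie.org.cn and its brick path are hypothetical):

gluster volume add-brick test-volume node103.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
gluster volume rebalance test-volume start
gluster volume rebalance test-volume status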
3>.Create a replicated volume (similar to RAID 1)
[root@node101 ~]# mkdir -p /home/yinzhengjie/glusterfs/file2
[root@node101 ~]# 
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb 18 16:35:57 2019 from node101.yinzhengjie.org.cn
[root@node102 ~]# 
[root@node102 ~]# mkdir -p /home/yinzhengjie/glusterfs/file2
[root@node102 ~]# 
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]# 
[root@node101 ~]# gluster volume create replicated-volume replica 2 transport tcp node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y          #Note: the official recommendation is replica 3 or an arbiter brick, because a 2-replica volume risks split-brain. Since this lab only has 2 virtual machines, I answered y and continued.
volume create: replicated-volume: success: please start the volume to access data
[root@node101 ~]# 
[root@node101 ~]# gluster volume info          #By now we have created both a distributed volume and a replicated volume; this command shows their information. You can query a single volume by name; with no name it lists every volume that has been created.
Volume Name: replicated-volume
Type: Replicate
Volume ID: abbcc657-9170-40bc-b64f-a48af4c46e70
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306-1984-4fea-8fe2-a37771b471d5
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]# 

[root@node101 ~]# gluster volume info Volume Name: replicated-volume Type: Replicate Volume ID: abbcc657-9170-40bc-b64f-a48af4c46e70 Status: Created Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off Volume Name: test-volume Type: Distribute Volume ID: d73f1306-1984-4fea-8fe2-a37771b471d5 Status: Created Snapshot Count: 0 Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Options Reconfigured: transport.address-family: inet nfs.disable: on [root@node101 ~]# [root@node101 ~]# [root@node101 ~]# [root@node101 ~]# gluster volume info replicated-volume Volume Name: replicated-volume Type: Replicate Volume ID: abbcc657-9170-40bc-b64f-a48af4c46e70 Status: Created Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off [root@node101 ~]# [root@node101 ~]# [root@node101 ~]# gluster volume info test-volume Volume Name: test-volume Type: Distribute Volume ID: d73f1306-1984-4fea-8fe2-a37771b471d5 Status: Created Snapshot Count: 0 Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Options Reconfigured: transport.address-family: inet nfs.disable: on [root@node101 ~]#
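As the warning during creation says, a plain replica 2 volume is prone to split-brain. With a third node, an arbiter brick that stores only metadata can break ties without tripling the storage cost. A sketch, assuming a hypothetical third node node103.yinzhengjie.org.cn and unused brick paths:

gluster volume create safe-replicated-volume replica 3 arbiter 1 transport tcp \
    node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2-arb \
    node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2-arb \
    node103.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2-arb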
4>.Create a striped volume (similar to RAID 0)
[root@node101 ~]# mkdir -p /home/yinzhengjie/glusterfs/file3 [root@node101 ~]# [root@node101 ~]# ssh node102.yinzhengjie.org.cn Last login: Mon Feb 18 16:46:11 2019 from node101.yinzhengjie.org.cn [root@node102 ~]# [root@node102 ~]# mkdir -p /home/yinzhengjie/glusterfs/file3 [root@node102 ~]# [root@node102 ~]# exit logout Connection to node102.yinzhengjie.org.cn closed. [root@node101 ~]# [root@node101 ~]# gluster volume create raid0-volume stripe 2 transport tcp node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 volume create: raid0-volume: success: please start the volume to access data [root@node101 ~]#

[root@node101 ~]# gluster volume info Volume Name: raid0-volume Type: Stripe Volume ID: c40ad86c-adc4-42a7-9dd4-d9086755403b Status: Created Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 Options Reconfigured: transport.address-family: inet nfs.disable: on Volume Name: replicated-volume Type: Replicate Volume ID: abbcc657-9170-40bc-b64f-a48af4c46e70 Status: Created Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off Volume Name: test-volume Type: Distribute Volume ID: d73f1306-1984-4fea-8fe2-a37771b471d5 Status: Created Snapshot Count: 0 Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Options Reconfigured: transport.address-family: inet nfs.disable: on [root@node101 ~]# [root@node101 ~]# [root@node101 ~]# gluster volume info raid0-volume Volume Name: raid0-volume Type: Stripe Volume ID: c40ad86c-adc4-42a7-9dd4-d9086755403b Status: Created Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 Options Reconfigured: transport.address-family: inet nfs.disable: on [root@node101 ~]#
The three volume types created above are the most common, and they can be combined when creating a volume. For production we recommend distributed replicated volumes; a dispersed-volume alternative to striping is sketched below.
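Striped volumes are deprecated in Gluster 5 and removed in later releases; dispersed (erasure-coded) volumes are the modern way to spread a file across bricks while still surviving a brick failure. A sketch with hypothetical brick paths; disperse 3 redundancy 1 tolerates the loss of any one brick (force is needed here because two bricks land on the same node):

gluster volume create dispersed-volume disperse 3 redundancy 1 transport tcp \
    node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/ec1 \
    node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/ec2 \
    node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/ec3 force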
5>.Start the volumes
Note that a volume cannot be used right after it is created; it must be started first. The steps are as follows:
[root@node101 ~]# gluster volume status          #Check the volume status: none of the 3 volumes created above is started yet
Volume raid0-volume is not started
Volume replicated-volume is not started
Volume test-volume is not started
[root@node101 ~]# 
[root@node101 ~]# gluster volume start raid0-volume          #Since they are not started, start each of the 3 volumes
volume start: raid0-volume: success
[root@node101 ~]# 
[root@node101 ~]# gluster volume start replicated-volume
volume start: replicated-volume: success
[root@node101 ~]# 
[root@node101 ~]# gluster volume start test-volume
volume start: test-volume: success
[root@node101 ~]# 
[root@node101 ~]# gluster volume status          #Check the status again: the volumes are now started successfully
Status of volume: raid0-volume
Gluster process                                                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3    49152     0          Y       6801
Brick node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3    49152     0          Y       14522

Task Status of Volume raid0-volume
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: replicated-volume
Gluster process                                                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2    49153     0          Y       6850
Brick node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2    49153     0          Y       14559
Self-heal Daemon on localhost                                         N/A       N/A        Y       6873
Self-heal Daemon on node102.yinzhengjie.org.cn                        N/A       N/A        Y       14582

Task Status of Volume replicated-volume
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: test-volume
Gluster process                                                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1    49154     0          Y       6911
Brick node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1    49154     0          Y       14608

Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks

[root@node101 ~]# 

[root@node101 ~]# gluster volume info Volume Name: raid0-volume Type: Stripe Volume ID: c40ad86c-adc4-42a7-9dd4-d9086755403b Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 Options Reconfigured: transport.address-family: inet nfs.disable: on Volume Name: replicated-volume Type: Replicate Volume ID: abbcc657-9170-40bc-b64f-a48af4c46e70 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off Volume Name: test-volume Type: Distribute Volume ID: d73f1306-1984-4fea-8fe2-a37771b471d5 Status: Started Snapshot Count: 0 Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1 Options Reconfigured: transport.address-family: inet nfs.disable: on [root@node101 ~]# [root@node101 ~]# [root@node101 ~]# gluster volume status replicated-volume Status of volume: replicated-volume Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick node101.yinzhengjie.org.cn:/home/yinz hengjie/glusterfs/file2 49153 0 Y 6850 Brick node102.yinzhengjie.org.cn:/home/yinz hengjie/glusterfs/file2 49153 0 Y 14559 Self-heal Daemon on localhost N/A N/A Y 6873 Self-heal Daemon on node102.yinzhengjie.org .cn N/A N/A Y 14582 Task Status of Volume replicated-volume ------------------------------------------------------------------------------ There are no active volume tasks [root@node101 ~]#
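For completeness, the reverse life-cycle looks like this; a volume must be stopped before it can be deleted, and deleting the volume does not remove the data already sitting in the bricks (don't run this now if you want to keep the test volumes):

gluster volume stop raid0-volume
gluster volume delete raid0-volume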
6>.Mount the volumes we just started
[root@node101 ~]# mkdir /mnt/gluster1 /mnt/gluster2 /mnt/gluster3
[root@node101 ~]# 
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/test-volume /mnt/gluster1
[root@node101 ~]# 
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/replicated-volume /mnt/gluster2
[root@node101 ~]# 
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/raid0-volume /mnt/gluster3
[root@node101 ~]# 
[root@node101 ~]# df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root                    50G  3.8G   43G   8% /
devtmpfs                                       1.9G     0  1.9G   0% /dev
tmpfs                                          1.9G     0  1.9G   0% /dev/shm
tmpfs                                          1.9G  8.9M  1.9G   1% /run
tmpfs                                          1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                                      477M  114M  335M  26% /boot
/dev/mapper/VolGroup-lv_home                    12G   41M   11G   1% /home
Home                                           234G  182G   52G  78% /media/psf/Home
迅雷影音                                        79M   60M   20M  76% /media/psf/迅雷影音
tmpfs                                          379M     0  379M   0% /run/user/0
node101.yinzhengjie.org.cn:/test-volume         23G  311M   22G   2% /mnt/gluster1          #These 3 entries are the volumes we just mounted; this one is the distributed volume
node101.yinzhengjie.org.cn:/replicated-volume   12G  156M   11G   2% /mnt/gluster2          #This is the replicated volume
node101.yinzhengjie.org.cn:/raid0-volume        23G  311M   22G   2% /mnt/gluster3          #This is the striped volume
[root@node101 ~]# 
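Mounts made with mount.glusterfs do not survive a reboot. A minimal sketch of making one of them persistent through /etc/fstab; _netdev delays the mount until the network is up:

echo 'node101.yinzhengjie.org.cn:/test-volume /mnt/gluster1 glusterfs defaults,_netdev 0 0' >> /etc/fstab
mount -a          # re-reads fstab and mounts anything not yet mounted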
7>.Write test data to the distributed volume

[root@node101 ~]# yum -y install tree Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.jdcloud.com * extras: mirrors.163.com * updates: mirrors.shu.edu.cn Resolving Dependencies --> Running transaction check ---> Package tree.x86_64 0:1.6.0-10.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved =================================================================================================================================================================== Package Arch Version Repository Size =================================================================================================================================================================== Installing: tree x86_64 1.6.0-10.el7 base 46 k Transaction Summary =================================================================================================================================================================== Install 1 Package Total download size: 46 k Installed size: 87 k Downloading packages: tree-1.6.0-10.el7.x86_64.rpm | 46 kB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : tree-1.6.0-10.el7.x86_64 1/1 Verifying : tree-1.6.0-10.el7.x86_64 1/1 Installed: tree.x86_64 0:1.6.0-10.el7 Complete! [root@node101 ~]#
[root@node101 ~]# echo "https://www.cnblogs.com/yinzhengjie/" > /mnt/gluster1/blog.txt          #Write test data to the distributed volume
[root@node101 ~]# 
[root@node101 ~]# tree /home/yinzhengjie/glusterfs/*          #Check the local bricks. Strange, where did the data go? It is not here. Don't panic, let's look on node102.yinzhengjie.org.cn!
/home/yinzhengjie/glusterfs/file1
/home/yinzhengjie/glusterfs/file2
/home/yinzhengjie/glusterfs/file3

0 directories, 0 files
[root@node101 ~]# 
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb 18 17:03:32 2019 from node101.yinzhengjie.org.cn
[root@node102 ~]# 
[root@node102 ~]# tree /home/yinzhengjie/glusterfs/*          #See? The test data written on node101.yinzhengjie.org.cn is actually stored on node102.yinzhengjie.org.cn!
/home/yinzhengjie/glusterfs/file1
└── blog.txt
/home/yinzhengjie/glusterfs/file2
/home/yinzhengjie/glusterfs/file3

0 directories, 1 file
[root@node102 ~]# 
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file1/blog.txt          #Look at the written data: it is intact and directly readable, and even the file name is unchanged
https://www.cnblogs.com/yinzhengjie/
[root@node102 ~]# 
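The distribute (DHT) translator hashes the file name against a per-directory layout stored in extended attributes on each brick, which is why blog.txt happened to land on node102's brick. A hedged way to peek at that layout using getfattr (installed earlier as part of the attr package); look for the trusted.glusterfs.dht attribute:

getfattr -d -m . -e hex /home/yinzhengjie/glusterfs/file1      # run as root on either node; dumps the trusted.* xattrs on the brick directory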
8>.Write test data to the replicated volume
[root@node101 ~]# echo "尹正傑到此一游!" > /mnt/gluster2/msg.log          #Write test data to the replicated volume
[root@node101 ~]# 
[root@node101 ~]# tree /home/yinzhengjie/glusterfs/          #Clearly, this time the written data is stored on the local brick
/home/yinzhengjie/glusterfs/
├── file1
├── file2
│   └── msg.log
└── file3

3 directories, 1 file
[root@node101 ~]# 
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file2/msg.log          #The file content is intact as well
尹正傑到此一游!
[root@node101 ~]# 
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb 18 17:36:14 2019 from node101.yinzhengjie.org.cn
[root@node102 ~]# 
[root@node102 ~]# tree /home/yinzhengjie/glusterfs/          #Note that on the replicated volume the file exists not only on node101.yinzhengjie.org.cn but also on node102.yinzhengjie.org.cn!
/home/yinzhengjie/glusterfs/
├── file1
│   └── blog.txt
├── file2
│   └── msg.log
└── file3

3 directories, 2 files
[root@node102 ~]# 
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file2/msg.log          #The complete data can be read on node102.yinzhengjie.org.cn as well
尹正傑到此一游!
[root@node102 ~]# 
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]# 
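Because every brick in a replicated volume holds a full copy, Gluster runs the self-heal daemon (visible in the volume status output earlier) to resynchronize a brick that missed writes while it was offline. A quick way to check for pending heals:

gluster volume heal replicated-volume info      # lists entries on each brick that still need to be healed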
9>.Write test data to the striped volume
[root@node101 ~]# echo "Jason Yin 2019" > /mnt/gluster3/access.log          #Write test data to the striped volume
[root@node101 ~]# 
[root@node101 ~]# tree /home/yinzhengjie/glusterfs/          #Clearly, the file name exists on node101.yinzhengjie.org.cn
/home/yinzhengjie/glusterfs/
├── file1
├── file2
│   └── msg.log
└── file3
    └── access.log

3 directories, 2 files
[root@node101 ~]# 
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file3/access.log          #The test data we wrote was stored in the brick on node101.yinzhengjie.org.cn
Jason Yin 2019
[root@node101 ~]# 
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb 18 17:46:06 2019 from node101.yinzhengjie.org.cn
[root@node102 ~]# 
[root@node102 ~]# tree /home/yinzhengjie/glusterfs/          #Careful readers will notice that the same file name also exists on node102.yinzhengjie.org.cn
/home/yinzhengjie/glusterfs/
├── file1
│   └── blog.txt
├── file2
│   └── msg.log
└── file3
    └── access.log

3 directories, 3 files
[root@node102 ~]# 
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file3/access.log          #On node102.yinzhengjie.org.cn the file with the same name is actually empty! Keep this in mind: it is exactly why we say a striped volume behaves much like RAID 0.
[root@node102 ~]# 
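The 15-byte access.log is smaller than the stripe block size (128KB by default), so all of its data lands on the first brick and node102 only holds an empty placeholder file. A sketch that makes the striping visible by writing something larger than one block and comparing brick usage on the two nodes (the file name and size are illustrative):

dd if=/dev/zero of=/mnt/gluster3/big.img bs=1M count=64     # write a 64MB file through the striped mount
du -h /home/yinzhengjie/glusterfs/file3/big.img             # run on node101 and node102: each brick should hold roughly half of the data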
III. Simulating production use of a distributed replicated volume
1>.Create storage directories on each node
[root@node101 ~]# mkdir /home/yinzhengjie/glusterfs/file6 /home/yinzhengjie/glusterfs/file7 [root@node101 ~]# [root@node101 ~]# ssh node102.yinzhengjie.org.cn Last login: Mon Feb 18 18:34:09 2019 from node101.yinzhengjie.org.cn [root@node102 ~]# [root@node102 ~]# mkdir /home/yinzhengjie/glusterfs/file6 /home/yinzhengjie/glusterfs/file7 [root@node102 ~]# [root@node102 ~]# exit logout Connection to node102.yinzhengjie.org.cn closed. [root@node101 ~]#
2>.Create and start the distributed replicated volume
[root@node101 ~]# gluster volume create my-distributed-replication-volume replica 2 transport tcp node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6 node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7 force volume create: my-distributed-replication-volume: success: please start the volume to access data [root@node101 ~]# [root@node101 ~]# gluster volume start my-distributed-replication-volume volume start: my-distributed-replication-volume: success [root@node101 ~]# [root@node101 ~]# gluster volume status my-distributed-replication-volume Status of volume: my-distributed-replication-volume Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick node101.yinzhengjie.org.cn:/home/yinz hengjie/glusterfs/file6 49157 0 Y 8875 Brick node102.yinzhengjie.org.cn:/home/yinz hengjie/glusterfs/file6 49157 0 Y 16229 Brick node101.yinzhengjie.org.cn:/home/yinz hengjie/glusterfs/file7 49158 0 Y 8897 Brick node102.yinzhengjie.org.cn:/home/yinz hengjie/glusterfs/file7 49158 0 Y 16251 Self-heal Daemon on localhost N/A N/A Y 8920 Self-heal Daemon on node102.yinzhengjie.org .cn N/A N/A Y 16274 Task Status of Volume my-distributed-replication-volume ------------------------------------------------------------------------------ There are no active volume tasks [root@node101 ~]# [root@node101 ~]# gluster volume info my-distributed-replication-volume Volume Name: my-distributed-replication-volume Type: Distributed-Replicate Volume ID: 1c142bb6-0bdc-45ba-8de0-c6faadc871a1 Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6 Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6 Brick3: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7 Brick4: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off [root@node101 ~]#
3>.Mount the distributed replicated volume
[root@node101 ~]# mkdir /mnt/gluster10
[root@node101 ~]# 
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/my-distributed-replication-volume /mnt/gluster10
[root@node101 ~]# 
[root@node101 ~]# df -h | grep gluster
node101.yinzhengjie.org.cn:/test-volume                         23G  312M   22G   2% /mnt/gluster1
node101.yinzhengjie.org.cn:/replicated-volume                   12G  156M   11G   2% /mnt/gluster2
node101.yinzhengjie.org.cn:/raid0-volume                        23G  312M   22G   2% /mnt/gluster3
node101.yinzhengjie.org.cn:/my-distributed-replication-volume   12G  156M   11G   2% /mnt/gluster10          #This is the distributed replicated volume we just created
[root@node101 ~]# 
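Mounting through node101 only means node101 is asked for the volume definition at mount time; after that the client talks to every brick directly. To avoid depending on one node being up at mount time, a backup volfile server can be supplied (the exact option name may differ between FUSE client versions):

mount -t glusterfs -o backup-volfile-servers=node102.yinzhengjie.org.cn \
    node101.yinzhengjie.org.cn:/my-distributed-replication-volume /mnt/gluster10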
4>.Write test data to the distributed replicated volume
[root@node101 ~]# echo "大王叫我來巡山" > /mnt/gluster10/test1.log [root@node101 ~]# echo "大王叫我來巡山" > /mnt/gluster10/test2.log [root@node101 ~]# echo "大王叫我來巡山" > /mnt/gluster10/test3.log [root@node101 ~]# echo "大王叫我來巡山" > /mnt/gluster10/test4.log [root@node101 ~]# echo "大王叫我來巡山" > /mnt/gluster10/test5.log [root@node101 ~]# [root@node101 ~]# tree /home/yinzhengjie/glusterfs/ /home/yinzhengjie/glusterfs/ ├── file1 ├── file2 │ └── msg.log ├── file3 │ └── access.log ├── file4 ├── file5 ├── file6 │ ├── test1.log │ ├── test2.log │ └── test4.log └── file7 ├── test3.log └── test5.log 7 directories, 7 files [root@node101 ~]# [root@node101 ~]# tree /home/yinzhengjie/glusterfs/ /home/yinzhengjie/glusterfs/ ├── file1 ├── file2 │ └── msg.log ├── file3 │ └── access.log ├── file4 ├── file5 ├── file6 │ ├── test1.log │ ├── test2.log │ └── test4.log └── file7 ├── test3.log └── test5.log 7 directories, 7 files [root@node101 ~]# [root@node101 ~]# cat /home/yinzhengjie/glusterfs/file6/test1.log 大王叫我來巡山 [root@node101 ~]# cat /home/yinzhengjie/glusterfs/file6/test2.log 大王叫我來巡山 [root@node101 ~]# cat /home/yinzhengjie/glusterfs/file6/test4.log 大王叫我來巡山 [root@node101 ~]# [root@node101 ~]# cat /home/yinzhengjie/glusterfs/file7/test3.log 大王叫我來巡山 [root@node101 ~]# [root@node101 ~]# cat /home/yinzhengjie/glusterfs/file7/test5.log 大王叫我來巡山 [root@node101 ~]# [root@node101 ~]# [root@node101 ~]# ssh node102.yinzhengjie.org.cn Last login: Mon Feb 18 19:01:41 2019 from node101.yinzhengjie.org.cn [root@node102 ~]# [root@node102 ~]# [root@node102 ~]# tree /home/yinzhengjie/glusterfs/ /home/yinzhengjie/glusterfs/ ├── file1 │ └── blog.txt ├── file2 │ └── msg.log ├── file3 │ └── access.log ├── file4 ├── file5 ├── file6 │ ├── test1.log │ ├── test2.log │ └── test4.log └── file7 ├── test3.log └── test5.log 7 directories, 8 files [root@node102 ~]# [root@node102 ~]# [root@node102 ~]# cat /home/yinzhengjie/glusterfs/file6/test1.log 大王叫我來巡山 [root@node102 ~]# [root@node102 ~]# cat /home/yinzhengjie/glusterfs/file6/test2.log 大王叫我來巡山 [root@node102 ~]# cat /home/yinzhengjie/glusterfs/file6/test4.log 大王叫我來巡山 [root@node102 ~]# [root@node102 ~]# cat /home/yinzhengjie/glusterfs/file7/test3.log 大王叫我來巡山 [root@node102 ~]# [root@node102 ~]# cat /home/yinzhengjie/glusterfs/file7/test5.log 大王叫我來巡山 [root@node102 ~]# [root@node102 ~]# exit logout Connection to node102.yinzhengjie.org.cn closed. [root@node101 ~]# [root@node101 ~]#
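Day-to-day tuning of a volume is done with gluster volume set. For example, restricting which clients are allowed to mount it (the 172.30.1.* network is an assumption matching this lab's addressing):

gluster volume set my-distributed-replication-volume auth.allow 172.30.1.*
gluster volume get my-distributed-replication-volume auth.allow      # read a single option back (volume get is available in this release)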