Deploying a GlusterFS Cluster and Managing It with Heketi


I. Deploying GlusterFS Storage on CentOS 7

 

Since CentOS 7 only supports GlusterFS up to version 8, this walkthrough installs GlusterFS 8, using yum.

  • 1. Configure the yum repository
vim /etc/yum.repos.d/CentOS-Storage.repo

[gluster]
name=CentOS-$releasever - Gluster 8
baseurl=http://mirrors.aliyun.com/$contentdir/$releasever/storage/$basearch/gluster-8/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
 
[gluster-test]
name=CentOS-$releasever - Gluster 8 Testing
baseurl=http://mirrors.aliyun.com/$contentdir/$releasever/storage/$basearch/gluster-8/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
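
Before installing, it may help to refresh the yum metadata and confirm the new repo actually serves GlusterFS 8 (a quick sanity check; the exact 8.x point release will vary):

## Rebuild the yum cache so the new repo is picked up
yum clean all && yum makecache fast

## Confirm that glusterfs-server 8.x is available
yum list glusterfs-server --showduplicates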
  • 2. Install glusterfs-server
yum -y install glusterfs-server
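
A quick way to confirm the installed version on each node (the exact 8.x release shown may differ):

## Verify the installed package and the CLI version
rpm -q glusterfs-server
gluster --version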
  • 3. Configure /etc/hosts
vim /etc/hosts

192.168.1.1   node01
192.168.1.2   node02
192.168.1.3   node03
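
These entries must be present on all three nodes. A small check that the names resolve as intended:

## Confirm the hostnames resolve to the right addresses
getent hosts node01 node02 node03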
  • 4. Start the service, create the brick directory /data/gluster/brick1 on node01, node02, and node03, then run the commands on node01 to add node02 and node03 to the cluster (a verification sketch follows the commands below)
## Start and enable the glusterd service (on every node)
systemctl start glusterd && systemctl enable glusterd

## Create the brick directory (on every node)
mkdir -p /data/gluster/brick1

## Add the peers (run on node01)
gluster peer probe node02
gluster peer probe node03

## Create a replicated volume (replica 3); force is required because the bricks sit on the root filesystem
gluster volume create mydata replica 3 node01:/data/gluster/brick1 node02:/data/gluster/brick1 node03:/data/gluster/brick1 force

## Start the volume
gluster volume start mydata

## Stop the volume
gluster volume stop mydata

## Delete the volume (it must be stopped first)
gluster volume delete mydata
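
A minimal verification sketch for the steps above, confirming that the peers joined and the volume is healthy (run on node01; "Peer in Cluster (Connected)" is the expected state):

## Check peer membership after the probes
gluster peer status
gluster pool list

## Check brick and self-heal daemon status once mydata is started
gluster volume status mydata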
  • 5. View the created volumes (the sample output below comes from a separate two-node, replica-2 setup; for the mydata volume above you would see Number of Bricks: 1 x 3 = 3)
gluster volume info

Volume Name: mydata1
Type: Replicate
Volume ID: fa5e0872-0b9f-4447-9779-7c219b17b91b
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01:/data/gfsdata
Brick2: node02:/data/gfsdata
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
 
Volume Name: mydata2
Type: Replicate
Volume ID: c8385028-3635-40f3-bb23-aa8729488374
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01:/data/gfstest
Brick2: node02:/data/gfstest
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
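
To actually consume a volume, mount it from a client through the GlusterFS FUSE driver. A minimal sketch, assuming the client can resolve node01 and the mydata volume from step 4 is started (the mount point /mnt/mydata is an arbitrary choice):

## Install the FUSE client (on the client machine)
yum -y install glusterfs glusterfs-fuse

## Mount the replicated volume
mkdir -p /mnt/mydata
mount -t glusterfs node01:/mydata /mnt/mydata

## Optional: persist the mount across reboots
echo 'node01:/mydata /mnt/mydata glusterfs defaults,_netdev 0 0' >> /etc/fstab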

 


 

II. Managing GlusterFS Storage with Heketi

 

Points to note when managing a GlusterFS cluster with Heketi:

1. Every peer in the GlusterFS cluster must have a block device with no filesystem on it (i.e., the disk must be unformatted and raw); a quick device check follows this list.

2. Every peer needs a sufficient range of ports for its bricks; once the bricks on a peer exhaust the available port range, new volumes can no longer be created.

3. Do not assemble the GlusterFS peers into a cluster yourself; Heketi creates and manages the cluster composition on its own.
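
A minimal sketch of the device check from note 1, using standard util-linux tools (wipefs -a is destructive: it erases filesystem and LVM signatures, so run it only on a disk you intend to hand over to Heketi):

## Show any existing filesystem/LVM signatures on the device
lsblk -f /dev/vdb

## DESTRUCTIVE: wipe all signatures so the device is raw again
wipefs -a /dev/vdb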

 

  • 1. Install heketi
rpm -ivh https://mirrors.aliyun.com/centos/7/storage/x86_64/gluster-8/Packages/h/heketi-9.0.0-1.el7.x86_64.rpm
  • 2. Configure heketi

The configuration file is heketi.json under /etc/heketi. Heketi offers three executors: mock, ssh, and kubernetes. mock is recommended for test environments and ssh for production; kubernetes applies only when GlusterFS itself runs as containers on Kubernetes. Since GlusterFS and Heketi are deployed independently here, we use ssh.

## Generate an SSH key pair
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''

## Distribute the public key to every managed node (sshd listens on port 11022 here);
## Heketi manages node01 itself over SSH too, so the local node also needs the key
ssh-copy-id -i /etc/heketi/heketi_key.pub -p 11022 root@192.168.1.1

ssh-copy-id -i /etc/heketi/heketi_key.pub -p 11022 root@192.168.1.2

ssh-copy-id -i /etc/heketi/heketi_key.pub -p 11022 root@192.168.1.3

## Verify passwordless login
ssh -i /etc/heketi/heketi_key -p 11022 root@192.168.1.2

ssh -i /etc/heketi/heketi_key -p 11022 root@192.168.1.3
Note that the ## lines in the listing below are annotations for this article only; heketi.json must be strictly valid JSON, so they cannot appear in the real file.

vim /etc/heketi/heketi.json

{
  ## Change the port to 18080
  "_port_comment": "Heketi Server Port Number",
  "port": "18080",

  ## Set user authentication to false
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  ## Set the admin key and user key
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "adminkey"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "userkey"
    }
  },

  ## Use the ssh executor
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "ssh",
    
    ## Set the key file location, SSH user, and port
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "11022",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "warning"
  }
}
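
Since the ## annotations above must not appear in the real file, it is worth validating the final file before starting the service (a small sanity check using the Python bundled with CentOS 7):

## Fails with a parse error if heketi.json is not valid JSON
python -c 'import json; json.load(open("/etc/heketi/heketi.json"))' && echo OK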
  • 3. Adjust the unit file and start the service
vim /usr/lib/systemd/system/heketi.service

[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.env
User=root
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target

## Reload systemd, then start and enable the service
systemctl daemon-reload && systemctl start heketi && systemctl enable heketi
  • 4. Verify the service
curl http://192.168.1.1:18080/hello

Hello from Heketi
  • 5. Install heketi-client and load the GlusterFS cluster information into Heketi
## 1. Install heketi-client
rpm -ivh https://mirrors.aliyun.com/centos/7/storage/x86_64/gluster-8/Packages/h/heketi-client-9.0.0-1.el7.x86_64.rpm

## Create the topology.json file and load the cluster information
### Note 1: use IP addresses for manage and storage wherever possible; using hostnames can trigger the error `New Node doesn't have glusterd running`
vim /etc/heketi/topology.json

{
    "clusters": [
      {
        "nodes": [
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.1.1"
                ],
                "storage": [
                  "192.168.1.1"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/vdb"
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.1.2"
                ],
                "storage": [
                  "192.168.1.2"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/vdb"
            ]
          },
          {
            "node": {
              "hostnames": {
                "manage": [
                  "192.168.1.3"
                ],
                "storage": [
                  "192.168.1.3"
                ]
              },
              "zone": 1
            },
            "devices": [
              "/dev/vdb"
            ]
          }
        ]
      }
    ]
  }
  • 6. Initialize the storage on each GlusterFS node through the Heketi CLI
heketi-cli --user admin --secret adminkey --server http://192.168.1.1:18080 topology load --json=/etc/heketi/topology.json

Creating cluster ... ID: a9ca2cbc28c1194c59c5e26aac3ee307
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node 192.168.1.1 ... ID: 8ef15510fb3152ab4515a375474842e3
        Adding device /dev/vdb ... OK
    Creating node 192.168.1.2 ... ID: 013d0fbed34f01964243f91123347568
        Adding device /dev/vdb ... OK
    Creating node 192.168.1.3 ... ID: 4515a37547488ef1551042e3fb3152ab
        Adding device /dev/vdb ... OK    
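
To review everything Heketi recorded from the topology load (clusters, nodes, devices, and their free space), dump the full topology; the IDs shown will match the output above:

## Show the full cluster/node/device tree as seen by Heketi
heketi-cli --server http://192.168.1.1:18080 topology info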
  • 7. The cluster is created; query the data
## List clusters
[root@node-1 ~]# heketi-cli --server http://192.168.1.1:18080 cluster list
Clusters:
Id:a9ca2cbc28c1194c59c5e26aac3ee307 [file][block]

## Delete a cluster (reference only; run this when tearing the cluster down)
heketi-cli --server http://192.168.1.1:18080 cluster delete a9ca2cbc28c1194c59c5e26aac3ee307

## List nodes
[root@node-1 ~]# heketi-cli --server http://192.168.1.1:18080 node list
Id:013d0fbed34f01964243f91123347568 Cluster:a9ca2cbc28c1194c59c5e26aac3ee307
Id:8ef15510fb3152ab4515a375474842e3 Cluster:a9ca2cbc28c1194c59c5e26aac3ee307
Id:4515a37547488ef1551042e3fb3152ab Cluster:a9ca2cbc28c1194c59c5e26aac3ee307
  • 8. Create a Gluster volume through Heketi
## Create a 100 GB replicated volume with replica count 2 on cluster a9ca2cbc28c1194c59c5e26aac3ee307
[root@node-1 ~]# heketi-cli --server http://192.168.1.1:18080 volume create --size=100 --replica=2 --clusters=a9ca2cbc28c1194c59c5e26aac3ee307
Name: vol_244ebb5ee623b28a18ace5c39db721ab
Size: 100
Volume Id: 244ebb5ee623b28a18ace5c39db721ab
Cluster Id: a9ca2cbc28c1194c59c5e26aac3ee307
Mount: 192.168.1.1:vol_244ebb5ee623b28a18ace5c39db721ab
Mount Options: backup-volfile-servers=192.168.1.1
Block: false
Free Size: 0
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 2
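
The Mount: line above can be used directly from any GlusterFS client. A minimal sketch (the mount point /mnt/heketivol is an arbitrary choice):

## Mount the Heketi-managed volume using the reported Mount: value
mkdir -p /mnt/heketivol
mount -t glusterfs 192.168.1.1:vol_244ebb5ee623b28a18ace5c39db721ab /mnt/heketivol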
  • 9. View the volume on the GlusterFS cluster (note: this sample output shows a replica-3 layout with three bricks; a volume created with --replica=2 would show 1 x 2 = 2)
[root@node-1 diff]# gluster volume info
 
Volume Name: vol_244ebb5ee623b28a18ace5c39db721ab
Type: Replicate
Volume ID: e325399b-b458-4f88-b4d9-420c0082cf78
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.1.1:/var/lib/heketi/mounts/vg_10ff1dfd97b93c2f4a19bc51628d9581/brick_cf81dcf6916ec28c2ba8d837621c4a53/brick
Brick2: 192.168.1.2:/var/lib/heketi/mounts/vg_d78562d163b20e0b20083b5776f47df3/brick_bf99a18af00887c0e9879481848d5712/brick
Brick3: 192.168.1.3:/var/lib/heketi/mounts/vg_d78562d163b20e0b20083b5776f47df3/brick_bf99a18af00887c0e9879481848d5712/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
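
Volumes created by Heketi should also be removed through Heketi, so that its database and the underlying LVM bricks stay consistent; deleting them with gluster volume delete would leave Heketi's records stale. A short sketch:

## List the volumes Heketi knows about
heketi-cli --server http://192.168.1.1:18080 volume list

## Delete a volume by its Heketi volume ID (also cleans up the bricks)
heketi-cli --server http://192.168.1.1:18080 volume delete 244ebb5ee623b28a18ace5c39db721ab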

