GlusterFS in depth, and deploying heketi-glusterfs on Kubernetes


This article covers:

  • GlusterFS volume types in detail: creation and usage
  • deploying GlusterFS storage with gluster-kubernetes

Preface

In traditional operations, an administrator has to allocate space on the storage cluster by hand before it can be mounted into an application. Kubernetes' dynamic provisioning (generally available since v1.6) can provision many kinds of storage services on demand, which makes far more efficient use of the capacity in the storage environment and lets applications consume storage space as needed. This article introduces the dynamic provisioning feature and uses GlusterFS as an example to show how to connect a storage service to k8s.

Introduction

                ⚠️ Already familiar with these concepts? Feel free to skip ahead.

dynamic provisioning:
 Storage is a very important part of container orchestration. Since v1.2, Kubernetes has provided dynamic provisioning, a powerful feature that gives the cluster on-demand storage allocation and supports many cloud storage backends, including AWS EBS, GCE PD, OpenStack Cinder, Ceph, and GlusterFS. Storage without official support can still be integrated by writing a plugin.
  Before dynamic provisioning, a volume had to be allocated on the storage backend before a container could use it, and that step was usually done by hand by an administrator. With dynamic provisioning, Kubernetes calls the storage service's API and dynamically creates storage that matches the size the container requests.

Storageclass:
 Administrators can configure storageclasses to describe the kinds of storage on offer. Taking AWS EBS as an example, an administrator could define two storageclasses, slow and fast: slow backed by sc1 (spinning disks) and fast backed by gp2 (SSDs). Applications then pick whichever class fits their performance needs, as in the sketch below.
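
A minimal sketch of those two classes (the names and parameters here are illustrative, not from the original article):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: sc1   # cold HDD
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2   # general-purpose SSD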

Glusterfs:
 An open-source distributed filesystem with powerful horizontal scalability: it can grow to petabytes of capacity and serve thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages the data under a single global namespace.
⚠️ The most distinctive design decision in the GlusterFS architecture is that there is no metadata server component: there is no master/slave distinction, and every node can act as a server.

Heketi:
 Heketi (https://github.com/heketi/heketi) is a RESTful-API-based volume management framework for GlusterFS.
 Heketi integrates easily with cloud platforms and exposes a RESTful API that Kubernetes calls to manage volumes across multiple GlusterFS clusters. As a bonus, heketi makes sure that bricks and their replicas are spread evenly across the different availability zones of the cluster.
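
To get a feel for the API, here is a hedged sketch of typical interactions with a running Heketi server (the address and credentials are placeholders; the exact commands for this deployment appear later in the article):

export HEKETI_CLI_SERVER=http://heketi.example.com:8080
curl $HEKETI_CLI_SERVER/hello                              # liveness check
heketi-cli --user admin --secret adminkey cluster list     # list managed clusters
heketi-cli --user admin --secret adminkey volume create --size=1 --replica=3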

Deploying GlusterFS storage with gluster-kubernetes

The Heketi project recommends deploying through gluster-kubernetes, and in production you can use the scripts gluster-kubernetes provides directly, which keeps the complexity down. That is just my personal view; opinions will differ.

Environment

  • k8s 1.14.1
  • 4 nodes, each with a spare disk: /dev/vdb
  • 1 master

Note ⚠️

1. You need at least 3 Kubernetes worker nodes to deploy the GlusterFS cluster, and each of those nodes needs at least one empty disk.
2. Check whether the thin-provisioning kernel module is loaded with lsmod | grep thin, and run modprobe dm_thin_pool on every node of the Kubernetes cluster to load it; see the commands below.
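
The two commands from point 2, to be run on every node:

lsmod | grep thin      # check whether the thin-pool module is already loaded
modprobe dm_thin_pool  # load it if it is missing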

Download the script

git clone https://github.com/gluster/gluster-kubernetes.git
cd xxx/gluster-kubernetes/deploy

Modify topology.json

cp topology.json.sample topology.json
Update the corresponding hostnames (nodes), IPs, and data devices to match your environment:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.92"
              ],
              "storage": [
                "10.8.4.92"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.93"
              ],
              "storage": [
                "10.8.4.93"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.131"
              ],
              "storage": [
                "10.8.4.131"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.8.4.132"
              ],
              "storage": [
                "10.8.4.132"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb"
          ]
        }
      ]
    }
  ]
}

Modify heketi.json.template

{
        "_port_comment": "Heketi Server Port Number",
        "port" : "8080",

        "_use_auth": "Enable JWT authorization. Please enable for deployment",
        "use_auth" : true, #開啟用戶認證

        "_jwt" : "Private keys for access",
        "jwt" : {
                "_admin" : "Admin has access to all APIs",
                "admin" : {
                        "key" : "adminkey" #管理員密碼
                },
                "_user" : "User only has access to /volumes endpoint",
                "user" : {
                        "key" : "userkey" #用戶密碼
                }
        },

        "_glusterfs_comment": "GlusterFS Configuration",
        "glusterfs" : {

                "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
                "executor" : "${HEKETI_EXECUTOR}",#本文搭建為kubernete方式

                "_db_comment": "Database file name",
                "db" : "/var/lib/heketi/heketi.db", #heketi數據存儲

                "kubeexec" : {
                        "rebalance_on_expansion": true
                },

                "sshexec" : {
                        "rebalance_on_expansion": true,
                        "keyfile" : "/etc/heketi/private_key",
                        "port" : "${SSH_PORT}",
                        "user" : "${SSH_USER}",
                        "sudo" : ${SSH_SUDO}
                }
        },

        "backup_db_to_kube_secret": false
}

Overview of the gk-deploy script

Summary of ./gk-deploy -h:

-g, --deploy-gluster  # deploy GlusterFS itself as pods
-s, --ssh-keyfile     # manage GlusterFS over SSH, e.g. /root/.ssh/id_rsa.pub
--admin-key ADMIN_KEY # set the admin secret
--user-key USER_KEY   # set the user secret
--abort               # tear down the heketi resources

Opening the script (vi gk-deploy), its main steps are:

  • create the Kubernetes resources
  • add the GlusterFS device nodes (topology load)
  • set up and mount heketi's own backing storage

⚠️ To understand in depth what the script does, see https://www.kubernetes.org.cn/3893.html

# add the glusterfs device nodes (topology load)
heketi_cli="${CLI} exec -i ${heketi_pod} -- heketi-cli -s http://localhost:8080 --user admin --secret '${ADMIN_KEY}'"

  load_temp=$(mktemp)
  eval_output "${heketi_cli} topology load --json=/etc/heketi/topology.json 2>&1" | tee "${load_temp}"


Run the script

⚠️ The "Adding device" steps are slow; wait patiently.

kubectl create ns glusterfs
./gk-deploy -y -n glusterfs -g --user-key=userkey --admin-key=adminkey

Using namespace "glusterfs".
Checking that heketi pod is not running ... OK
serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
node "10.8.4.92" labeled
node "10.8.4.93" labeled
node "10.8.4.131" labeled
node "10.8.4.132" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 4cfe35ce3cdc64b8afb8dbc46cad0e09
Creating node 10.8.4.92 ... ID: 1d323ddf243fd4d8c7f0ed58eb0e2c0ab
Adding device /dev/vdb ... OK
Creating node 10.8.4.93 ... ID: 12df23f339dj4jf8jdk3oodd31ba9e12c52
Adding device /dev/vdb ... OK
Creating node 10.8.4.131 ... ID: 1c529sd3ewewed1286e29e260668a1
Adding device /dev/vdb ... OK
Creating node 10.8.4.132 ... ID: 12ff323cd1121232323fddf9e260668a1
Adding device /dev/vdb ... OK
heketi topology loaded.
Saving heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
deployment "deploy-heketi" deleted
secret "heketi-storage-secret" deleted
service "heketi" created
deployment "heketi" created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://10.10.23.148:8080/
Ready to create and provide GlusterFS volumes.

kubectl get po -o wide -n glusterfs

[root@k8s1-master1 deploy]# export HEKETI_CLI_SERVER=$(kubectl get svc/heketi -n glusterfs --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')
[root@k8s1-master1 deploy]# echo $HEKETI_CLI_SERVER
http://10.0.0.131:8080
[root@k8s1-master1 deploy]# curl $HEKETI_CLI_SERVER/hello
Hello from Heketi
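
With authentication enabled, most heketi-cli calls need the credentials as well. A hedged sketch (heketi-cli honors these environment variables; substitute the keys you chose at deploy time):

export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=adminkey
heketi-cli cluster list    # should print the cluster ID created above
heketi-cli topology info   # nodes, devices and bricks as heketi sees them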

Retrying after a failure

kubectl delete -f kube-templates/deploy-heketi-deployment.yaml
kubectl delete -f kube-templates/heketi-deployment.yaml
kubectl delete -f kube-templates/heketi-service-account.yaml
kubectl delete -f kube-templates/glusterfs-daemonset.yaml
# run on every node
rm -rf /var/lib/heketi
rm -rf /var/lib/glusterd

Problem: "Unable to add device". Try wiping /dev/vdb:

# run on every node
dd if=/dev/zero of=/dev/vdb bs=1k count=1
blockdev --rereadpt /dev/vdb
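
An alternative that achieves the same end (my suggestion, not from the original write-up) is to clear all signatures from the disk in one step:

wipefs -a /dev/vdb   # removes partition/LVM/filesystem signatures so heketi can claim the disk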

Other troubleshooting

Peers should be in the Connected state

[root@k8s1-master2 ~]# kubectl exec -ti glusterfs-sb7l9 -n glusterfs bash
[root@k8s1-master2 /]# gluster peer status

Number of Peers: 3

Hostname: 10.8.4.93
Uuid: 52824c41-2fce-468a-b9c9-7c3827ed7a34
State: Peer in Cluster (Connected)

Hostname: 10.8.4.131
Uuid: 6a27b31f-dbd9-4de5-aefd-73c1ac9b81c5
State: Peer in Cluster (Connected)

Hostname: 10.8.4.132
Uuid: 7b7b53ff-af7f-49aa-b371-29dd1e784ad1
State: Peer in Cluster (Connected)

Verify the heketi storage is mounted

[root@k8s1-master2 ~]# kubectl exec -ti glusterfs-sb7l9 -n glusterfs bash
[root@k8s1-master2 /]# gluster volume info

Volume Name: heketidbstorage
Type: Replicate
Volume ID: 02fd891f-dd43-4c1b-a2ba-87e1be7c706f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.8.4.132:/var/lib/heketi/mounts/vg_5634269dc08edd964032871801920f1e/brick_b980d3f5ce7b1b4314c4b57c8aaf35fa/brick
Brick2: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_b375443687051038234e50fe3cd5fe12/brick
Brick3: 10.8.4.92:/var/lib/heketi/mounts/vg_a5d145795d59c51d2335153880049760/brick_e8f9ec722a235448fbf6730c25d7441a/brick
Options Reconfigured:
user.heketi.id: dfed68e6dca82c7cd5911c8ddda7746b
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Set up the StorageClass

vi storageclass-dev-glusterfs.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: glusterfs
data:
  # base64 encoded password. E.g.: echo -n "adminkey" | base64
  key: YWRtaW5rZXk=
type: kubernetes.io/glusterfs
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.8.4.91:42951"
  clusterid: "364a0a72b3343c537c20db5576ffd46c"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "glusterfs"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "none"

Attribute overview

  • resturl: the heketi service address
  • clusterid: obtained by running heketi-cli --user admin --secret adminkey cluster list inside the heketi pod (heketi-549c999b6f-5l8sp); see the sketch after this list
  • restauthenabled: whether authentication is enabled
  • restuser: the user to authenticate as
  • secretName: the secret holding that user's key
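
A sketch of fetching the cluster ID (the pod name is from this deployment, so substitute your own; the server address is explicit because the command runs inside the pod):

kubectl exec -ti heketi-549c999b6f-5l8sp -n glusterfs -- \
  heketi-cli -s http://localhost:8080 --user admin --secret adminkey cluster list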

A closer look at volumetype

From the upstream documentation:

 volumetype: The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it's up to the provisioner to decide the volume type.
 For example:
Replica volume: volumetype: replicate:3 where '3' is the replica count.
Disperse/EC volume: volumetype: disperse:4:2 where '4' is data and '2' is the redundancy count.
Distribute volume: volumetype: none

  • volumetype: disperse:4:2

An erasure-coded (disperse) volume. disperse:4:2 really needs 6 servers and I only have 4, so I experimented with volumetype: disperse:4:1: the PV was not created automatically, but creating the volume by hand did succeed. You can run the following inside the pod glusterfs-5jzdh; note the Type field.

gluster volume create gv1 disperse 4 redundancy 1 10.8.4.92:/var/lib/heketi/mounts/gv1 10.8.4.93:/var/lib/heketi/mounts/gv1 10.8.4.131:/var/lib/heketi/mounts/gv1 10.8.4.132:/var/lib/heketi/mounts/gv1

gluster volume start gv1

gluster volume info

The output is as follows (the listing here happens to show a volume named gv2):

Volume Name: gv2
Type: Disperse
Volume ID: e072f9fa-6139-4471-a163-0e0dde0265ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: 10.8.4.92:/var/lib/heketi/mounts/gv2
Brick2: 10.8.4.93:/var/lib/heketi/mounts/gv2
Brick3: 10.8.4.131:/var/lib/heketi/mounts/gv2
Brick4: 10.8.4.132:/var/lib/heketi/mounts/gv2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
  • volumetype: replicate:3

Creates a replicated volume with 3 replicas. It consumes more resources, but the volume stays usable when a disk fails or a node goes down. gluster volume info shows the following; note the Type field.

Volume Name: vol_d78f449dbeab2286267c7e1842086a8f
Type: Replicate
Volume ID: 02fd891f-dd43-4c1b-a2ba-87e1be7c706f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.8.4.132:/var/lib/heketi/mounts/vg_5634269dc08edd964032871801920f1e/brick_b980d3f5ce7b1b4314c4b57c8aaf35fa/brick
Brick2: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_b375443687051038234e50fe3cd5fe12/brick
Brick3: 10.8.4.92:/var/lib/heketi/mounts/vg_a5d145795d59c51d2335153880049760/brick_e8f9ec722a235448fbf6730c25d7441a/brick
Options Reconfigured:
user.heketi.id: dfed68e6dca82c7cd5911c8ddda7746b
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
  • volumetype: none

A distributed volume: each file is hashed onto a single brick, so a failed disk or a downed node makes the data on it unavailable. gluster volume info shows the following; note the Type field.

Volume Name: vol_e1b27d580cbe18a96b0fdf7cbfe69cc2
Type: Distribute
Volume ID: cb4a7e4f-3850-4809-b159-fc8000527d71
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_8f62218753db589204b753295a318795/brick
Options Reconfigured:
user.heketi.id: e1b27d580cbe18a96b0fdf7cbfe69cc2
transport.address-family: inet
nfs.disable: on
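
Whichever volumetype you settle on, apply the StorageClass and confirm it registered (a quick sanity check, assuming the file name used above):

kubectl apply -f storageclass-dev-glusterfs.yaml
kubectl get sc                # the "glusterfs" class should be listed
kubectl describe sc glusterfs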

Create a PVC

vi glusterfs-pv.yaml

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
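
Apply the claim and watch it bind; dynamic provisioning should create the PV for you. The verification pod below is illustrative (its name and image are my own choices, not from the original):

kubectl apply -f glusterfs-pv.yaml
kubectl get pvc glusterfs   # STATUS should become Bound once the volume is provisioned

apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/data/test && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: glusterfs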

Dear reader, you should choose based on your own situation. To dig further into the volume types and how to use them, see "GlusterFS volume types analyzed, created, and used (in the context of a Kubernetes cluster)" (to be posted once it is formatted).
Typed by hand with care; if you run into problems you are welcome to reach out, and please leave a like! Likes are free!

