Persistent Storage in Kubernetes


I. Why k8s needs persistent storage

   The replica controllers in k8s ensure that Pods are always kept running, but they cannot preserve the data inside a Pod: as soon as a new Pod is started, the data in the previous Pod is lost along with its deleted containers.

II. The shared storage mechanism

   For stateful container applications, or applications whose data must be persisted, it is not enough to mount a container directory onto the host or onto a transient emptyDir volume; more reliable storage is needed to hold the important data the application produces, so that the application can still use that data after its container is rebuilt. k8s introduces two resource objects, PV and PVC, to implement this storage-management subsystem.
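For contrast, here is a minimal sketch of the transient emptyDir option mentioned above (the Pod name is a placeholder; the image reuses the private registry path that appears later in this post):

apiVersion: v1
kind: Pod
metadata:
  name: cache-demo   # hypothetical example, not part of this walkthrough
spec:
  containers:
    - name: app
      image: 192.168.0.212:5000/nginx:1.13
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}   # tied to the Pod's lifetime; the data vanishes when the Pod is deleted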

III. PV and PVC concepts

      PersistentVolume (PV for short): an abstraction of the underlying networked shared storage that defines the shared storage as a "resource". A PV is a storage description added by an administrator and is a cluster-wide (global) resource, recording the storage type, size, access modes, and so on. Its lifecycle is independent of any Pod; for example, destroying a Pod that uses a PV has no effect on the PV itself. A PV is tied directly to a concrete shared-storage implementation, such as GlusterFS, iSCSI, RBD, or the shared storage offered by public clouds like GCE/AWS, and connects to it through a plugin mechanism so that applications can access and use it.

     PersistentVolumeClaim (PVC for short): a user's "claim" on storage resources, and a namespaced resource that describes a request for a PV. Just as Pods "consume" Node resources, PVCs consume PV resources. A PVC can request a specific amount of storage and specific access modes.

IV. Creating a PV

     As a storage resource, a PV definition mainly covers key settings such as storage capacity, access modes, storage class, reclaim policy, and the backend storage type.

1. Install the NFS server and clients

Server: 192.168.0.212

Clients: 192.168.0.184 / 192.168.0.208

[root@kub_master ~]# yum install nfs-utils -y

On the server:

[root@kub_master ~]# vim /etc/exports
[root@kub_master ~]# cat /etc/exports
/data  192.168.0.0/24(rw,async,no_root_squash,no_all_squash)
[root@kub_master ~]# mkdir /data
[root@kub_master ~]# mkdir /data/k8s
[root@kub_master ~]# systemctl restart rpcbind
[root@kub_master ~]# systemctl restart nfs

Verify from the clients:

[root@kub_node1 ~]# showmount -e 192.168.0.212
Export list for 192.168.0.212:
/data 192.168.0.0/24
[root@kub_node2 ~]# showmount -e 192.168.0.212
Export list for 192.168.0.212:
/data 192.168.0.0/24

2. Create a PV

[root@kub_master ~]# cd k8s/
[root@kub_master k8s]# mkdir volume
[root@kub_master k8s]# cd volume/
[root@kub_master volume]# vim test-pv.yaml
[root@kub_master volume]# cat test-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi   # storage capacity
  accessModes:
    - ReadWriteMany  # access mode
  persistentVolumeReclaimPolicy: Recycle  # reclaim policy
  nfs:
    path:  "/data/k8s"
    server: 192.168.0.212
    readOnly: false
[root@kub_master volume]# kubectl create -f test-pv.yaml 
persistentvolume "test" created
[root@kub_master volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
test      10Gi       RWX           Recycle         Available                       11s

# Create another 5Gi PV

[root@kub_master volume]# vim test-pv.yaml 
[root@kub_master volume]# cat test-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test1
  labels:
    type: test
spec:
  capacity:
    storage: 5Gi 
  accessModes:
    - ReadWriteMany 
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path:  "/data/k8s"
    server: 192.168.0.212
    readOnly: false
[root@kub_master volume]# kubectl create -f test-pv.yaml 
persistentvolume "test1" created
[root@kub_master volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
test      10Gi       RWX           Recycle         Available                       2m
test1     5Gi        RWX           Recycle         Available                       5s

3. Key PV configuration parameters

1) Storage capacity (Capacity)

Describes the capabilities of the storage device. Currently only the storage size can be set (storage=xx); settings such as IOPS and throughput may be added in the future.

2) Access modes

Access modes describe the permissions a user application has on the storage resource. The supported modes are:

ReadWriteOnce (RWO): read-write, and the volume can be mounted by only a single Node

ReadOnlyMany (ROX): read-only, and the volume may be mounted by multiple Nodes

ReadWriteMany (RWX): read-write, and the volume may be mounted by multiple Nodes

3) Storage class (Class)

A PV can declare a storage class through the storageClassName field, which names a StorageClass resource object. A PV with a particular "class" can only be bound to a PVC that requests that class; a PV with no class set can only be bound to a PVC that requests no class.
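For illustration, a minimal sketch of a class binding a PV and a PVC together; the class name "slow" and the object names are hypothetical and are not created anywhere in this walkthrough:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-slow            # hypothetical example
spec:
  storageClassName: slow   # only PVCs that also request "slow" can bind
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/data/k8s"
    server: 192.168.0.212
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-slow         # hypothetical example
spec:
  storageClassName: slow   # requests the same class as the PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi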

4) Reclaim policy

Three reclaim policies are currently supported:

Retain: keep the data; it must be handled manually

Recycle: a simple scrub that just deletes the files on the volume

Delete: the backend storage connected to the PV performs the volume deletion.

Currently only the NFS and HostPath storage types support the "Recycle" policy; AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the "Delete" policy.

4. PV lifecycle phases (Phase)

At any point in its lifecycle, a PV is in one of the following four phases:

Available: free, not yet bound to any PVC

Bound: bound to a PVC

Released: the bound PVC has been deleted and the resource released, but the volume has not yet been reclaimed by the cluster

Failed: automatic reclamation failed

5. PV mount options

    Depending on the backend storage, extra mount options may be needed when a PV is mounted on a Node. They can currently be set through an annotation on the PV named "volume.beta.kubernetes.io/mount-options".
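A minimal sketch of that annotation on an NFS PV; the PV name and the option string are examples, not values used elsewhere in this post:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-with-options   # hypothetical example
  annotations:
    volume.beta.kubernetes.io/mount-options: "hard,nolock,nfsvers=3"   # example NFS options
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/data/k8s"
    server: 192.168.0.212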

V. Creating a PVC

1. A PVC expresses a user's demand for storage; its definition mainly covers the requested storage size, access modes, PV selection criteria, and storage class.

[root@kub_master volume]# vim test-pvc.yaml
[root@kub_master volume]# cat test-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi  # request 1Gi of storage
[root@kub_master volume]# kubectl create -f test-pvc.yaml 
persistentvolumeclaim "myclaim" created
[root@kub_master volume]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
myclaim   Bound     test1     5Gi        RWX           6s
[root@kub_master volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM             REASON    AGE
test      10Gi       RWX           Recycle         Available                               7m
test1     5Gi        RWX           Recycle         Bound       default/myclaim             5m

2. Key PVC configuration parameters:

Resource request: describes the storage being requested; currently only the storage size can be set, via resources.requests.storage.

Access modes: a PVC also sets access modes to describe the permissions the application needs on the storage. The three settable modes are the same as for a PV.

PV selection criteria: by setting a label selector, a PVC can filter the PVs that already exist in the system; the system then binds the PVC to a suitable PV selected by label. The criteria can be set with matchLabels and matchExpressions; if both fields are set, a PV must satisfy both groups of conditions at once to match.

Storage class: a PVC can name the backend storage "class" it needs (through the storageClassName field), reducing its dependence on the detailed characteristics of the backend storage. Only PVs carrying that class can be selected by the system and bound to the PVC.

     Note: a PVC is a namespaced resource, while a PV is cluster-wide (as noted in section III). A Pod is restricted by namespace when referencing a PVC: only a PVC in the same namespace as the Pod can be mounted into it.

     When both a selector and a class are set, only a PV that satisfies both conditions at once will be matched and bound. A claim combining both filters is sketched below.
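A minimal sketch of such a claim; the class name and the extra label are hypothetical, while type=test matches the label put on the PVs created earlier:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: filtered-claim      # hypothetical example
spec:
  storageClassName: slow    # hypothetical class; only PVs of this class are candidates
  selector:
    matchLabels:
      type: test            # matches the type=test label on the PVs above
    matchExpressions:
      - key: environment    # hypothetical label key
        operator: In
        values: ["dev"]
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi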

VI. Persistent storage in practice

1. Create the tomcat + mysql project

[root@kub_master ~]# cd k8s/tomcat_demo_volume/
[root@kub_master tomcat_demo_volume]# ll
total 16
-rw-r--r-- 1 root root 420 Oct  4 16:47 mysql-rc.yaml
-rw-r--r-- 1 root root 145 Oct  4 16:47 mysql-svc.yaml
-rw-r--r-- 1 root root 487 Oct  4 16:48 tomcat-rc.yaml
-rw-r--r-- 1 root root 162 Sep 26 17:03 tomcat-svc.yaml
[root@kub_master tomcat_demo_volume]# kubectl create -f .
replicationcontroller "mysql" created
service "mysql" created
replicationcontroller "myweb" created
service "myweb" created
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         6s
rc/myweb   2         2         2         6s

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         6s
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   5s

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-t8qmt   1/1       Running   0          6s
po/myweb-48t21   1/1       Running   0          5s
po/myweb-znzkb   1/1       Running   0          5s

Test access through the web app and insert some data (the original post shows these steps as screenshots, which are not reproduced here).

The data lives inside the Pod mysql-t8qmt. Delete that Pod and check whether the data still exists.
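One way to confirm the inserted rows from the command line, using the mysql service IP from the output above; a mysql client on the master and the root password 123456 are both assumptions, since the mysql-rc.yaml contents are not shown:

[root@kub_master tomcat_demo_volume]# mysql -h 192.168.213.147 -uroot -p123456 -e "show databases;"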

[root@kub_master tomcat_demo_volume]# kubectl delete pod mysql-t8qmt
pod "mysql-t8qmt" deleted
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         5m
rc/myweb   2         2         2         5m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         5m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   5m

NAME             READY     STATUS        RESTARTS   AGE
po/mysql-b6833   1/1       Running       0          2s
po/mysql-t8qmt   1/1       Terminating   0          5m
po/myweb-48t21   1/1       Running       0          5m
po/myweb-znzkb   1/1       Running       0          5m

After the deletion a new Pod was started, but the data inserted earlier is gone.

2. Create the corresponding PV and PVC

To make the data persistent, create a PV and a PVC.

[root@kub_master tomcat_demo_volume]# vim mysql-pv.yaml 
[root@kub_master tomcat_demo_volume]# cat mysql-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql
  labels:
    type: mysql
spec:
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany 
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path:  "/data/mysql"
    server: 192.168.0.212
    readOnly: false
[root@kub_master tomcat_demo_volume]# vim mysql-pvc.yaml 
[root@kub_master tomcat_demo_volume]# cat mysql-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
#Create the storage directory
[root@kub_master tomcat_demo_volume]# mkdir /data/mysql
[root@kub_master tomcat_demo_volume]# kubectl create -f mysql-pv.yaml 
persistentvolume "mysql" created
[root@kub_master tomcat_demo_volume]# kubectl create -f mysql-pvc.yaml 
persistentvolumeclaim "mysql" created
[root@kub_master tomcat_demo_volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM           REASON    AGE
mysql     10Gi       RWX           Recycle         Bound     default/mysql             9s
[root@kub_master tomcat_demo_volume]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
mysql     Bound     mysql     10Gi       RWX           8s

3. Use the PVC in the Pod
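The mysql-rc-pvc.yaml applied below is not shown in the original capture. A minimal sketch of what it plausibly contains, namely the mysql RC extended with a volume that references the mysql PVC; the image path, port, and password are assumptions:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 192.168.0.212:5000/mysql:5.7   # assumed registry path
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"                   # assumed credential
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql         # MySQL data directory
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql                    # the PVC created above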

[root@kub_master tomcat_demo_volume]# kubectl apply -f mysql-rc-pvc.yaml 
replicationcontroller "mysql" configured
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         23m
rc/myweb   2         2         2         23m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         23m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   23m

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-b6833   1/1       Running   0          18m
po/myweb-48t21   1/1       Running   0          23m
po/myweb-znzkb   1/1       Running   0          23m
[root@kub_master tomcat_demo_volume]# kubectl delete pod mysql-b6833
pod "mysql-b6833" deleted
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         23m
rc/myweb   2         2         2         23m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         23m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   23m

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-6rpwq   1/1       Running   0          2s
po/myweb-48t21   1/1       Running   0          23m
po/myweb-znzkb   1/1       Running   0          23m
#Check the PV and PVC usage
[root@kub_master tomcat_demo_volume]# kubectl get pvc -o wide
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
mysql     Bound     mysql     10Gi       RWX           8m
[root@kub_master tomcat_demo_volume]# kubectl get pv -o wide
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM           REASON    AGE
mysql     10Gi       RWX           Recycle         Bound     default/mysql             8m
[root@kub_master tomcat_demo_volume]# kubectl get pod mysql-6rpwq -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-6rpwq   1/1       Running   0          1m        172.16.46.3   192.168.0.184
#Check the mount on the node
[root@kub_node1 ~]# df -h |grep mysql
192.168.0.212:/data/mysql   99G   12G   83G  12% /var/lib/kubelet/pods/c31f3bdd-0621-11eb-8a8e-fa163e38ad0d/volumes/kubernetes.io~nfs/mysql
#Check the storage directory; MySQL has already written its files
[root@kub_master tomcat_demo_volume]# ll /data/mysql/
total 188448
-rw-r----- 1 polkitd input       56 Oct  4 17:12 auto.cnf
-rw-r----- 1 polkitd input     1329 Oct  4 17:12 ib_buffer_pool
-rw-r----- 1 polkitd input 79691776 Oct  4 17:12 ibdata1
-rw-r----- 1 polkitd input 50331648 Oct  4 17:12 ib_logfile0
-rw-r----- 1 polkitd input 50331648 Oct  4 17:12 ib_logfile1
-rw-r----- 1 polkitd input 12582912 Oct  4 17:13 ibtmp1
drwxr-x--- 2 polkitd input     4096 Oct  4 17:12 mysql
drwxr-x--- 2 polkitd input     4096 Oct  4 17:12 performance_schema
drwxr-x--- 2 polkitd input    12288 Oct  4 17:12 sys

4. Test access, add data, and check whether the data survives

#Delete the current pod
[root@kub_master tomcat_demo_volume]# kubectl delete pod mysql-6rpwq
pod "mysql-6rpwq" deleted
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         29m
rc/myweb   2         2         2         29m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         29m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   29m

NAME             READY     STATUS        RESTARTS   AGE
po/mysql-6rpwq   1/1       Terminating   0          6m
po/mysql-79gfq   1/1       Running       0          2s
po/myweb-48t21   1/1       Running       0          29m
po/myweb-znzkb   1/1       Running       0          29m
#A new pod has been started
[root@kub_master tomcat_demo_volume]# kubectl get all
NAME       DESIRED   CURRENT   READY     AGE
rc/mysql   1         1         1         29m
rc/myweb   2         2         2         29m

NAME             CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   192.168.0.1       <none>        443/TCP          13d
svc/mysql        192.168.213.147   <none>        3306/TCP         29m
svc/myweb        192.168.58.131    <nodes>       8080:30001/TCP   29m

NAME             READY     STATUS    RESTARTS   AGE
po/mysql-79gfq   1/1       Running   0          7s
po/myweb-48t21   1/1       Running   0          29m
po/myweb-znzkb   1/1       Running   0          29m

Check whether the inserted data still exists; the HPE_APP directory created by the application is now present in the storage directory:

[root@kub_master tomcat_demo_volume]# ll /data/mysql/
total 188452
-rw-r----- 1 polkitd input       56 Oct  4 17:12 auto.cnf
drwxr-x--- 2 polkitd input     4096 Oct  4 17:17 HPE_APP
-rw-r----- 1 polkitd input      698 Oct  4 17:18 ib_buffer_pool
-rw-r----- 1 polkitd input 79691776 Oct  4 17:18 ibdata1
-rw-r----- 1 polkitd input 50331648 Oct  4 17:18 ib_logfile0
-rw-r----- 1 polkitd input 50331648 Oct  4 17:12 ib_logfile1
-rw-r----- 1 polkitd input 12582912 Oct  4 17:19 ibtmp1
drwxr-x--- 2 polkitd input     4096 Oct  4 17:12 mysql
drwxr-x--- 2 polkitd input     4096 Oct  4 17:12 performance_schema
drwxr-x--- 2 polkitd input    12288 Oct  4 17:12 sys

VII. The GlusterFS distributed file system

GlusterFS is an open-source distributed file system with strong horizontal scalability; it can support petabytes of storage and thousands of clients, joining nodes over the network into a single parallel network file system. It is scalable, high-performance, and highly available.

1. Install GlusterFS (on all nodes)

[root@kub_master ~]# yum install  centos-release-gluster -y
[root@kub_master ~]# yum install glusterfs-server -y
[root@kub_master ~]# systemctl start glusterd.service
[root@kub_master ~]# systemctl enable glusterd.service
[root@kub_master ~]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-10-04 17:29:16 CST; 44s ago
     Docs: man:glusterd(8)
 Main PID: 22388 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─22388 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Oct 04 17:29:16 kub_master systemd[1]: Starting GlusterFS, a clustered file-system server...
Oct 04 17:29:16 kub_master systemd[1]: Started GlusterFS, a clustered file-system server.
#Create brick directories that will serve as the cluster's storage units
[root@kub_master ~]# mkdir -p /gfs/test1
[root@kub_master ~]# mkdir -p /gfs/test2

2. Add nodes to the storage resource pool

#On the master node
[root@kub_master ~]# gluster pool list
UUID                                    Hostname        State
165199a9-1e89-47f4-97f1-6c0a54376ba9    localhost       Connected
[root@kub_master ~]# gluster peer probe 192.168.0.184
peer probe: success.
[root@kub_master ~]# gluster peer probe 192.168.0.208
peer probe: success.
[root@kub_master ~]# gluster pool list
UUID                                    Hostname        State
2dd3e723-ec1b-404a-8ba3-eb78eabcf0cd    192.168.0.184   Connected
9e6c240c-0564-4aaf-861b-3ddfad6bf614    192.168.0.208   Connected
165199a9-1e89-47f4-97f1-6c0a54376ba9    localhost       Connected

3. GlusterFS volume management

1) Create a distributed replicated volume

[root@kub_master ~]# gluster volume create test replica 2 192.168.0.212:/gfs/test1 192.168.0.212:/gfs/test2 192.168.0.184:/gfs/test1 192.168.0.184:/gfs/test2 force
volume create: test: success: please start the volume to access data

2) Start the volume

[root@kub_master ~]# gluster volume start test
volume start: test: success

3) Inspect the volume

[root@kub_master ~]# gluster volume info test
 
Volume Name: test
Type: Distributed-Replicate
Volume ID: 879fd8dc-bb14-4231-9169-42440edcc950
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.0.212:/gfs/test1
Brick2: 192.168.0.212:/gfs/test2
Brick3: 192.168.0.184:/gfs/test1
Brick4: 192.168.0.184:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

4) Mount the volume

#Mount from any node
[root@kub_node1 ~]# mount -t glusterfs 192.168.0.212:/test /mnt
[root@kub_node1 ~]# df -h /mnt
Filesystem           Size  Used Avail Use% Mounted on
192.168.0.212:/test   99G  9.2G   86G  10% /mnt

4. Expand the distributed replicated volume

#Expansion command
[root@kub_master ~]# gluster volume add-brick test 192.168.0.208:/gfs/test1 192.168.0.208:/gfs/test2 force
volume add-brick: success
#Check the capacity after expansion; it is clearly larger
[root@kub_node1 ~]# df -h /mnt
Filesystem           Size  Used Avail Use% Mounted on
192.168.0.212:/test  148G   14G  130G  10% /mnt

5. Upload files to /mnt and check how the data is distributed

[root@kub_node1 ~]# cd /mnt
[root@kub_node1 mnt]# rz
[root@kub_node1 mnt]# ll
total 89
-rw-r--r-- 1 root root 91014 Oct  4 18:07 xiaoniaofeifei.zip
[root@kub_node1 mnt]# unzip xiaoniaofeifei.zip 
Archive:  xiaoniaofeifei.zip
  inflating: sound1.mp3              
   creating: img/
  inflating: img/bg1.jpg             
  inflating: img/bg2.jpg             
  inflating: img/number1.png         
  inflating: img/number2.png         
  inflating: img/s1.png              
  inflating: img/s2.png              
  inflating: 21.js                   
  inflating: 2000.png                
  inflating: icon.png                
  inflating: index.html              
[root@kub_node1 mnt]# tree /gfs/
/gfs/
├── test1
│   └── img
│       ├── bg1.jpg
│       └── s2.png
└── test2
    └── img
        ├── bg1.jpg
        └── s2.png

4 directories, 4 files
[root@kub_node2 ~]# tree /gfs/
/gfs/
├── test1
│   └── img
│       └── bg2.jpg
└── test2
    └── img
        └── bg2.jpg

4 directories, 2 files
[root@kub_master ~]# tree /gfs/
/gfs/
├── test1
│   ├── 2000.png
│   ├── 21.js
│   ├── icon.png
│   ├── img
│   │   ├── number1.png
│   │   ├── number2.png
│   │   └── s1.png
│   ├── index.html
│   ├── sound1.mp3
│   └── xiaoniaofeifei.zip
└── test2
    ├── 2000.png
    ├── 21.js
    ├── icon.png
    ├── img
    │   ├── number1.png
    │   ├── number2.png
    │   └── s1.png
    ├── index.html
    ├── sound1.mp3
    └── xiaoniaofeifei.zip

4 directories, 18 files

VIII. Using GlusterFS as k8s backend storage

1. Create the Endpoints object

[root@kub_master ~]# cd k8s/
[root@kub_master k8s]# mkdir glusterfs-volume
[root@kub_master k8s]# cd glusterfs-volume/
[root@kub_master glusterfs-volume]# vi  glusterfs-ep.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-ep.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
- addresses:
  - ip: 192.168.0.212
  - ip: 192.168.0.208
  - ip: 192.168.0.184
  ports:
  - port: 49152  # default brick port
    protocol: TCP
[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-ep.yaml 
endpoints "glusterfs" created
[root@kub_master glusterfs-volume]# kubectl get ep
NAME         ENDPOINTS                                                     AGE
glusterfs    192.168.0.184:49152,192.168.0.208:49152,192.168.0.212:49152   8s
kubernetes   192.168.0.212:6443                                            13d
mysql        172.16.46.5:3306                                              1h
myweb        172.16.46.4:8080,172.16.66.2:8080                             1h

2. Create the Service

[root@kub_master glusterfs-volume]# vi  glusterfs-svc.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: default
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-svc.yaml 
service "glusterfs" created
[root@kub_master glusterfs-volume]# kubectl get svc
NAME         CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
glusterfs    192.168.19.6      <none>        49152/TCP        6s
kubernetes   192.168.0.1       <none>        443/TCP          13d
mysql        192.168.213.147   <none>        3306/TCP         1h
myweb        192.168.58.131    <nodes>       8080:30001/TCP   1h
[root@kub_master glusterfs-volume]# kubectl describe svc glusterfs
Name:              glusterfs   # linked to the Endpoints object by name
Namespace:         default
Labels:            <none>
Selector:          <none>
Type:              ClusterIP
IP:                192.168.19.6
Port:              <unset>  49152/TCP
Endpoints:         192.168.0.184:49152,192.168.0.208:49152,192.168.0.212:49152
Session Affinity:  None
No events.

3. Create a GlusterFS-type PV

[root@kub_master glusterfs-volume]# gluster volume list
test
[root@kub_master glusterfs-volume]# vim glusterfs-pv.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs"
    path: "test"
    readOnly: false
[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-pv.yaml 
persistentvolume "gluster" created
[root@kub_master glusterfs-volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM           REASON    AGE
gluster   50Gi       RWX           Retain          Available                             14s
mysql     10Gi       RWX           Recycle         Bound       default/mysql             1h

4. Create a GlusterFS-type PVC

[root@kub_master glusterfs-volume]# vim glusterfs-pvc.yaml
[root@kub_master glusterfs-volume]# cat glusterfs-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi

[root@kub_master glusterfs-volume]# kubectl create -f glusterfs-pvc.yaml 
persistentvolumeclaim "gluster" created
[root@kub_master glusterfs-volume]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
gluster   Bound     gluster   50Gi       RWX           6s
mysql     Bound     mysql     10Gi       RWX           1h
[root@kub_master glusterfs-volume]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
gluster   50Gi       RWX           Retain          Bound     default/gluster             3m
mysql     10Gi       RWX           Recycle         Bound     default/mysql               1h

5. Use the GlusterFS volume in a Pod

[root@kub_master glusterfs-volume]# vim nginx-pod-gluster.yaml
[root@kub_master glusterfs-volume]# cat nginx-pod-gluster.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: test
      image: 192.168.0.212:5000/nginx:1.13
      ports:
        - containerPort: 80
      volumeMounts:
        - name: nfs-vol2
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-vol2
      persistentVolumeClaim:
        claimName: gluster
[root@kub_master glusterfs-volume]# kubectl create -f nginx-pod-gluster.yaml 
pod "test" created
[root@kub_master glusterfs-volume]# kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
mysql-79gfq   1/1       Running   0          1h
myweb-48t21   1/1       Running   0          1h
myweb-znzkb   1/1       Running   0          1h
test          1/1       Running   0          5s
[root@kub_master glusterfs-volume]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-79gfq   1/1       Running   0          1h        172.16.46.5   192.168.0.184
myweb-48t21   1/1       Running   0          1h        172.16.66.2   192.168.0.208
myweb-znzkb   1/1       Running   0          1h        172.16.46.4   192.168.0.184
test          1/1       Running   0          20s       172.16.46.3   192.168.0.184
[root@kub_master glusterfs-volume]# curl 172.16.46.3 
<!DOCTYPE HTML>
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <meta id="viewport" name="viewport" content="width=device-width,user-scalable=no" />
        <script type="text/javascript" src="21.js"></script>
        <title>小鳥飛飛飛-文章庫小游戲</title>
        <style type="text/css">
            body {
                margin:0px;
            }
        </style>
        <script language=javascript>
            var mebtnopenurl = 'http://www.wenzhangku.com/weixin/';
            window.shareData = {
                    "imgUrl": "http://www.wenzhangku.com/weixin/xiaoniaofeifei/icon.png",
                    "timeLineLink": "http://www.wenzhangku.com/weixin/xiaoniaofeifei/",
                    "tTitle": "小鳥飛飛飛-文章庫小游戲",
                    "tContent": "從前有一只鳥,飛着飛着就死了。"
            };
        document.addEventListener('WeixinJSBridgeReady', function onBridgeReady() {
            
            WeixinJSBridge.on('menu:share:appmessage', function(argv) {
                WeixinJSBridge.invoke('sendAppMessage', {
                    "img_url": window.shareData.imgUrl,
                    "link": window.shareData.timeLineLink,
                    "desc": window.shareData.tContent,
                    "title": window.shareData.tTitle
                }, function(res) {
                    document.location.href = mebtnopenurl;
                })
            });

            WeixinJSBridge.on('menu:share:timeline', function(argv) {
                WeixinJSBridge.invoke('shareTimeline', {
                    "img_url": window.shareData.imgUrl,
                    "img_width": "640",
                    "img_height": "640",
                    "link": window.shareData.timeLineLink,
                    "desc": window.shareData.tContent,
                    "title": window.shareData.tTitle
                }, function(res) {
                    document.location.href = mebtnopenurl;
                });
            });
        }, false);
        function dp_submitScore(a,b){
            if(a&&b>=a&&b>10){
                //alert("新紀錄哦!你過了"+b+"關!")
                dp_share(b)
            }
        }
            
        function dp_Ranking(){
            document.location.href = mebtnopenurl;
        }
        function dp_share(t){
            document.title = "我玩小鳥飛飛飛過了"+t+"關!你能超過灑家我嗎?";
            document.getElementById("share").style.display="";
            window.shareData.tTitle = document.title;
        }
        </script>
        </head>
    <body>
        <div style="text-align:center;">
            <canvas id="linkScreen">
                很遺憾,您的瀏覽器不支持HTML5,請使用支持HTML5的瀏覽器。
            </canvas>
        </div>
        <div id="mask_container" align="center" style="width: 100%; height: 100%; position: absolute; left: 0px; top: 0px; display: none; z-index: 100000; background-color: rgb(255, 255, 255);">
                <img id="p2l" src="img/p2l.jpg" style="position: absolute;left: 50%;top: 50%;-webkit-transform:translateX(-50%) translateY(-50%);transform:translateX(-50%) translateY(-50%)" >
        </div>
        <div id=share style="display:none">
        <img  width=100% src="2000.png"  style="position:absolute;top:0;left:0;display:" onclick="document.getElementById('share').style.display='none';">
        </div>
        
<div style="display:none;">
這里加統計
</div>
    </body>
</html>

Add a host port mapping so the page can be accessed from a browser.

[root@kub_master glusterfs-volume]# vim nginx-pod-gluster.yaml 
[root@kub_master glusterfs-volume]# cat nginx-pod-gluster.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 192.168.0.212:5000/nginx:1.13
      ports:
        - containerPort: 80
          hostPort: 81
      volumeMounts:
        - name: nfs-vol2
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-vol2
      persistentVolumeClaim:
         claimName: gluster
[root@kub_master glusterfs-volume]# kubectl create -f nginx-pod-gluster.yaml 
pod "nginx" created
[root@kub_master glusterfs-volume]# kubectl get pod
NAME          READY     STATUS    RESTARTS   AGE
mysql-79gfq   1/1       Running   0          1h
myweb-48t21   1/1       Running   0          2h
myweb-znzkb   1/1       Running   0          2h
nginx         1/1       Running   0          4s
test          1/1       Running   0          5m
[root@kub_master glusterfs-volume]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-79gfq   1/1       Running   0          1h        172.16.46.5   192.168.0.184
myweb-48t21   1/1       Running   0          2h        172.16.66.2   192.168.0.208
myweb-znzkb   1/1       Running   0          2h        172.16.46.4   192.168.0.184
nginx         1/1       Running   0          11s       172.16.81.3   192.168.0.212
test          1/1       Running   0          5m        172.16.46.3   192.168.0.184
#Test access
[root@kub_master glusterfs-volume]# curl 192.168.0.212:81
(The response is the same index.html shown above, now served through the hostPort mapping.)

