The k8s Journey (15): Kubernetes Shared Storage and StatefulSet for Stateful Applications


Shared Storage

Docker containers are stateless by default; stateful services need shared storage.

  • Why shared storage is needed:
    •   1. The most common case is a stateful service with local storage: some programs save files into a directory on the server, and those files are lost if the container is stopped and restarted.
    •   2. If a volume is used to mount a host directory into the container, backup and high availability become concerns: if the host fails, the data becomes unavailable.

Kubernetes provides shared storage through:

1. PV (PersistentVolume)

2. PVC (PersistentVolumeClaim)

 

PV

A PV defines:
the PV's capacity;
the PV's access mode (ReadWriteOnce: read-write, but can be mounted by only a single node, i.e. replicas of 1;
ReadOnlyMany: can be mounted read-only by multiple pods, i.e. replicas can be greater than 1;
ReadWriteMany: can be mounted read-write and shared by multiple pods, i.e. replicas can be greater than 1);
and the address of the storage backend the PV connects to.

A PV using the NFS type:

 

###Mount the NFS export to a local directory, then mount it into the pod.
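The manifest screenshot is missing from this extract; as a sketch of what an NFS-type PV can look like (the server address and export path below are placeholders, not values from the original):

```yaml
# Illustrative NFS-backed PV; server and path are assumed values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany              # NFS supports shared read-write mounts
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.155.20.110        # assumed NFS server IP
    path: /data/nfs              # assumed exported directory
```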

 

 

StorageClass: managing PVs and PVCs

Example: a StorageClass managing GlusterFS PVs:

Kubernetes can manage shared-storage PVs automatically. When the number of pods grows and the demand for shared storage gets large, creating PVs by hand does not scale, so StorageClass was introduced: it creates PVs for us automatically and saves us from manually creating and reclaiming them.

 

 

 

##A PVC binds to a PV through the PV's StorageClass name

The architecture diagram is as follows:

 

 

##Manual (static) provisioning: PVs are created in advance, and each PV can bind only one storage backend; the PV is bound when a PVC claims it.

##Dynamic provisioning: each backend corresponds to a StorageClass, and the PVC has a right-sized PV created for it through the StorageClass. The PVC and the pod are the user's responsibility: if the user's PVC matches nothing, the pod and the PVC stay in Pending; once a match is found, Kubernetes establishes the binding automatically.

##A PVC binds exactly one PV, and a PV binds exactly one storage backend; a PV binds a single PVC at a time, though the bound PVC can then be mounted by multiple pods.

Creating PVs with a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.155.20.120:30001"
  restauthenabled: "false"
glusterfs-storage-class.yaml

##Specifies the backend storage address and the StorageClass name

A PVC requesting storage through the StorageClass
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: glusterfs-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
glusterfs-pvc.yaml

###Specifies the StorageClass name, the access mode, and the requested size

Verifying the PVC

kubectl apply -f glusterfs-pvc.yaml

kubectl get pvc

kubectl get pv   # check whether they are bound; in the objects' YAML, the PVC's volumeName and the PV's claimRef should reference each other

 

Using the PVC in a pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: web-deploy
  replicas: 2
  template:
    metadata:
      labels:
        app: web-deploy
    spec:
      containers:
      - name: web-deploy
        image: hub.mooc.com/kubernetes/springboot-web:v1
        ports:
        - containerPort: 8080
        volumeMounts:
          - name: gluster-volume
            mountPath: "/mooc-data"
            readOnly: false
      volumes:
      - name: gluster-volume
        persistentVolumeClaim:
          claimName: glusterfs-pvc
pod-pvc.yaml

 

 

 

Deploying GlusterFS

GlusterFS deployment requirements:

  1. At least 3 nodes (so the data can be kept in three replicas)
  2. Each node needs a raw disk that has not been partitioned

1. Run on every node:
yum -y install glusterfs glusterfs-fuse

2. Check that the api-server and kubelet allow privileged containers:
ps -ef | grep apiserver | grep allow-pri   # needs --allow-privileged=true
GlusterFS installation

Run GlusterFS as a DaemonSet:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs: pod
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs #label the nodes that should run GlusterFS
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
        emptyDir: {}   # the original omitted the volume source; emptyDir assumed for /run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"
glusterfs-deamonset.yaml

Label the GlusterFS nodes and deploy:

kubectl label node node-2 storagenode=glusterfs

kubectl apply -f glusterfs-deamonset.yaml

kubectl get pods -o wide

To make operations easier, the heketi service is introduced.

Deploying heketi

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: heketi-clusterrole
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: default

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heketi-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/status
  - pods/exec
  verbs:
  - get
  - list
  - watch
  - create
Creating the heketi service account and RBAC
kind: Service
apiVersion: v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: heketi
  ports:
  - name: heketi
    port: 80
    targetPort: 8080

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "30001": default/heketi:80

---

kind: Deployment
apiVersion: apps/v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  selector:
    matchLabels:
      name: heketi
  template:
    metadata:
      name: heketi
      labels:
        name: heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi:dev
        imagePullPolicy: Always
        name: heketi
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_DB_PATH
          value: "/var/lib/heketi/heketi.db"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: /var/lib/heketi
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: /hello
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: /hello
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"
Deploying heketi (Service + Deployment)

Enter the heketi container and set its environment variable:

export HEKETI_CLI_SERVER=http://localhost:8080

Edit the topology file to specify the GlusterFS node IPs and the raw-disk device paths:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster-01"
              ],
              "storage": [
                "10.155.56.56"
              ] 
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            } 
          ] 
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster-02"
              ],
              "storage": [
                "10.155.56.57"
              ] 
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            } 
          ] 
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "gluster-03"
              ],
              "storage": [
                "10.155.56.102"
              ] 
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            } 
          ] 
        } 
      ] 
    } 
  ] 
}
topology.json

Copy the topology file into the heketi container and load it:

heketi-cli topology load --json=topology.json  # heketi reads the topology, finds the GlusterFS nodes, and initializes them

heketi-cli topology info  # view the current GlusterFS cluster topology

Enter a GlusterFS node to verify that it worked:
gluster peer status  # show peer status

 

 

 

PVC

A PVC describes the resources the pod requires and the access mode it needs.
PV-PVC binding
1. The PV must satisfy the PVC's request (storage size and access mode).
2. The PV's storageClassName must match the PVC's.
3. Based on the storageClassName field, Kubernetes binds the two automatically, filling in each other's volumeName.
#In essence, the PV's name is written into the PVC's resource object.
 

Using a PVC

#The principle: with the two layers of abstraction, PV and PVC, using shared storage from a pod is very simple. The pod declares the name of a PVC; the PVC describes the pod's storage requirements; the PVC binds a PV; and the PV describes the concrete storage backend, how to access it, and its specific parameters.

A quick summary:

1. A PV exists independently of any pod.
2. PVs can be created dynamically or statically. Dynamic PVs do not need to be created by hand; static PVs do.
3. Access modes: ReadWriteOnce: read-write, can be mounted on only one node. ReadOnlyMany: the PV can be mounted read-only on many nodes. ReadWriteMany: the PV can be mounted read-write on many nodes.
4. Reclaim policies: a PV supports Retain, Recycle, and Delete.
Retain: reclaimed by an administrator (delete with kubectl delete pv pv-name, recreate with kubectl apply -f pv-name.yaml). With Retain, after the PVC is deleted the PV enters the Released state and cannot be reused; to make it usable again, an administrator must delete the PV and recreate it. Deleting the PV does not delete the data on the storage backend, only the PV object. Use Retain if you want to keep the data.
Recycle: deleting the PVC automatically scrubs the data in the PV, equivalent to running rm -rf /thevolume/*. When the PVC is deleted, the PV's status goes from Bound back to Available, and it can be claimed by a new PVC. (Recycle is deprecated in newer Kubernetes releases in favor of dynamic provisioning.)
Delete: deletes the corresponding resource on the storage backend, e.g. AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volumes. NFS does not support the Delete policy.
5. storageClassName: when a PVC's requested size and access mode match a PV, binding additionally requires the storageClassName to match. This is commonly used when a PVC must bind to a particular PV. For example, when several PVs exist with the same size and access mode and neither the PVs nor the PVC set storageClassName, the PVC matches one of them at random; if storageClassName is set, all three conditions must match. A PVC can also be pinned to a specific PV in other ways, such as labels (shown in the previous post, so not repeated here).
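For reference, the reclaim policy is a single field in the PV spec; a fragment showing Retain (the name and the NFS backend are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name                          # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # Retain | Recycle | Delete
  nfs:                                   # assumed backend; note NFS cannot use Delete
    server: 10.155.20.110
    path: /data/pv-name
```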


StatefulSet --- the guardian of stateful applications

StatefulSet addresses the problem that the pod replicas of a stateful service are not interchangeable: each instance has its own identity and state.
Create a headless service (no cluster IP is allocated; the service maps to the backend pod IPs, which are resolved through DNS records):
apiVersion: v1
kind: Service
metadata:
  name: springboot-web-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  clusterIP: None
  selector:
    app: springboot-web
headless-service.yaml

Creating the StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: springboot-web
  namespace: dev
spec:
  serviceName: springboot-web-svc #declares which headless service is used to resolve the pods
  replicas: 2
  selector:
    matchLabels:
      app: springboot-web
  template:
    metadata:
      labels:
        app: springboot-web
    spec:
      containers:
      - name: springboot-web
        image: 172.17.166.217/kubenetes/springboot-web:v1
        ports:
        - containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 10
          failureThreshold: 3
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /hello?name=test
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 10
          failureThreshold: 1
          successThreshold: 1
          timeoutSeconds: 5
statefulset.yaml

Watch the creation process:

kubectl get pod -l app=springboot-web -w

#StatefulSet pods get stable, predictable names: the StatefulSet name plus a fixed ordinal suffix, e.g. springboot-web-0. The second pod is started only after the first one is READY. Pods can reach each other by hostname, e.g. ping springboot-web-0.springboot-web-svc.default.
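The stable DNS names can be checked from inside the cluster; these commands are illustrative and depend on a running cluster with the manifests above applied:

```shell
# Each StatefulSet pod gets a stable DNS name of the form
# <pod-name>.<serviceName>.<namespace>.svc.cluster.local
kubectl exec springboot-web-0 -- ping -c 1 springboot-web-1.springboot-web-svc.default
# The headless service name resolves to all pod IPs:
kubectl exec springboot-web-0 -- nslookup springboot-web-svc.default
```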

StatefulSet with per-pod volumes

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: springboot-web
spec:
  serviceName: springboot-web-svc
  replicas: 2
  selector:
    matchLabels:
      app: springboot-web
  template:
    metadata:
      labels:
        app: springboot-web
    spec:
      containers:
      - name: springboot-web
        image: hub.mooc.com/kubernetes/springboot-web:v1
        ports:
        - containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 10
          failureThreshold: 3
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /hello?name=test
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 10
          failureThreshold: 1
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - name: data
          mountPath: /mooc-data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: glusterfs-storage-class
      resources:
        requests:
          storage: 1Gi
statefulset-volume.yaml

#volumeClaimTemplates automatically creates a separately numbered PVC for each pod, matching the pod's name, so every pod is bound to its own PVC; this relies on the fixed ordinals that StatefulSet assigns to its pods.
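The PVC names that volumeClaimTemplates produces follow a fixed pattern, <template-name>-<statefulset-name>-<ordinal>; a small shell sketch of the names the manifest above would generate:

```shell
# StatefulSet derives each PVC name as <claim-template>-<statefulset-name>-<ordinal>
claim="data"; sts="springboot-web"
for ordinal in 0 1; do
  echo "${claim}-${sts}-${ordinal}"
done
```

This prints data-springboot-web-0 and data-springboot-web-1, which is what `kubectl get pvc` shows after the StatefulSet is created.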

 

Kubernetes API --- how to build a container-management platform on top of Kubernetes

apiserver path conventions:

 

Everything under /api is the core API, which has no API group; the core group uses only two path levels: one for the version and one for the core resource.

Under /apis are the non-core APIs; every API resource there is addressed with three levels: the first is the group, the second the version, and the third the concrete resource.

Grouping keeps the API clearer and tidier and makes it easy for users to tell where a resource comes from.

#Reference: https://kubernetes.io/docs/reference/generated/kubernetes-api/
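Both path styles can be exercised directly with kubectl's raw API access (the paths are standard; a running cluster is assumed):

```shell
# Core group, two levels after /api: <version>/<resource>
kubectl get --raw /api/v1/namespaces/default/pods
# Named group, three levels after /apis: <group>/<version>/<resource>
kubectl get --raw /apis/apps/v1/namespaces/default/deployments
```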

Commonly used clients (for many languages): https://github.com/kubernetes-client

The official Go client: https://github.com/kubernetes/client-go

 

 

 

 

 

 



