4. Pod Controllers in Detail


1. Introduction to Controllers

Controllers are used to manage pods. In production, pods are almost always created and managed through a controller rather than directly.

1.1 Types of controllers

  • ReplicaSet: a replica set; scales the pod count out and in
  • Deployment: a deployment controller; handles pod upgrades and rollbacks, and contains a ReplicaSet
  • StatefulSet: deploys stateful pod applications
  • DaemonSet: runs a daemon pod on every cluster node (including masters), e.g. Filebeat or node_exporter (the Prometheus agent that collects metrics on each node)
  • Job: one-off tasks
  • CronJob: periodic tasks

Note: ReplicationController (RC) is the predecessor of ReplicaSet and is no longer recommended. Use a Deployment plus ReplicaSet instead of RC; a Deployment contains a ReplicaSet controller.

1.2 Stateful vs. stateless applications

A stateless application (Stateless Application) does not save any client data from one session for use in the next. Every session is handled as if it were the first, and no response depends on data from earlier sessions.

A simple example is a plain web server: each HTTP request is independent of the previous ones and simply fetches the target URI resource. Once the content is delivered, the connection is closed and nothing is retained.

A stateful application (Stateful Application) saves client data during a session and uses that data to serve the client's subsequent requests.

A simple example is a web server with username/password login, which tracks your login state with a cookie. When the server responds to a request, it pushes a cookie recording some server-side session information to the client. The client sends this cookie with subsequent requests, so you do not have to re-enter your credentials on every visit.

2. Deployment

2.1 Deployment overview

What the ReplicaSet controller does:
It controls the number of pod replicas, scaling pods out and in.

What the Deployment controller does:
A Deployment integrates rollout, rolling upgrade, replica creation, and rollback.
A Deployment contains and uses a ReplicaSet controller.
Deployments are used for stateless applications, such as web microservices.

2.2 Creating a Deployment

1. View help

kubectl create -h

View the deployment documentation:

# view deployment documentation
kubectl explain deployment

2. Create a deployment with kubectl

1. Create a deployment named nginx

# dry run: simulate creating a deployment without actually creating it
kubectl create deployment nginx --image=nginx:1.15-alpine --port=80 --replicas=2 --dry-run=client

# the previous command reported no errors, so create it for real
kubectl create deployment nginx --image=nginx:1.15-alpine --port=80 --replicas=2

Notes:

  • --port=80 is the container port to expose, similar to an exposed port in Docker
  • --replicas=2 sets the replica count (default 1)
  • --dry-run=client is a test mode: the request is validated without actually creating anything; combined with -o yaml it can also be used to generate a manifest file

2. Verify

kubectl get deployment
kubectl get pod -o wide

Here nginx is the name of the deployment and the replica count is 2, so there are two pods. The pod names have the form nginx-6d9d558bb6-xxxxx (deployment name, ReplicaSet hash, random suffix), and each pod runs a single container.

3. View deployment and pod details

kubectl describe deployment nginx
kubectl describe pod nginx-6d9d558bb6-xxxxx

4. Delete the deployment

kubectl delete deployment nginx

3. Create a deployment from a YAML file

The controller created earlier can be exported as a YAML file:

kubectl get deployment nginx -o yaml > deployment.yml

1. Prepare the YAML file

A Deployment contains a replica set, which is defined in the spec section.

[root@k8s-master01 ~]# vim nginx-deployment.yml
apiVersion: apps/v1  # check the apiVersion with: kubectl explain deployment
kind: Deployment
metadata:
  name: nginx # name of the deployment
  labels:
    app: nginx-dep # label on the deployment itself

# the deployment manages a replica set
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dep  # note: selects pods labeled app=nginx-dep

  template:                    # pod template
    metadata:
      name: nginx               # pod name (arbitrary)
      labels:
        app: nginx-dep         # note: pod label; must match matchLabels above, this is how they are linked
    spec:
      containers:
      - name: nginx              # container name (arbitrary, but unique within the pod)
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Note that the labels in spec.selector.matchLabels must match the pod labels defined in the template; this is what links the Deployment to its pods.

2. Apply the YAML file to create the deployment

 kubectl apply -f nginx-deployment.yml

3. Verify

 kubectl get deployment
 kubectl get rs
 kubectl get pods


Addendum: creating a ReplicaSet on its own from YAML

Note that replicas and template are defined together under spec.

1. Write the YAML file

[root@k8s-master01 ~]# vim replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs  # name of the ReplicaSet
  namespace: default
spec:
  replicas: 2            # replica count
  selector:              # label selector; matches the pod labels below
    matchLabels:
      app: nginx         # label to match
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx # must match the selector defined above
    spec:
       containers:
       - name: nginx
         image: nginx:1.15-alpine
         ports:
         - name: http
           containerPort: 80

Again, the labels specified in the selector must match the labels in the template; this is how they are associated.

2. Apply the YAML file

kubectl apply -f replicaset.yml

3. Verify

kubectl get rs
kubectl get pods
kubectl get deployment

No deployment is found, which confirms that creating a ReplicaSet directly does not create a Deployment.


2.3 Deleting a deployment

1. Delete the deployment

Deleting a deployment with either command below also automatically deletes its ReplicaSet and pods:

kubectl delete deployment nginx-deployment
kubectl delete -f nginx-deployment.yml

2. Delete a pod owned by the deployment

Before deleting, recall what the ReplicaSet controller does.

1. Delete an nginx pod

kubectl get pod
kubectl delete pod nginx-7d9b8757cf-xxxx

2. Check again: a new pod has been started, and its IP has changed

kubectl get pods -o wide

  • The deployment defines a desired replica count; whenever the number of pods does not match it, pods are started or removed until the desired count is met.

  • A pod's IP is not fixed: when a pod is recreated after a failure, its IP changes.

  • To reach pods through a fixed address you need a Service (covered later), which acts like a VIP in front of the pods.
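As a preview of how a Service fronts these pods, a minimal sketch might look like this (the Service name and port here are illustrative assumptions; the selector must match the pod labels used by the deployment YAML above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        # hypothetical Service name
spec:
  selector:
    app: nginx-dep       # matches the pod labels managed by the deployment
  ports:
  - port: 8000           # stable port the Service exposes (assumed)
    targetPort: 80       # containerPort of the nginx pods
```

Clients then reach the pods through the Service's stable ClusterIP regardless of pod restarts.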

2.4 Upgrading the pod version

View help:

kubectl set image -h

1. Check the nginx version before upgrading

kubectl get pod
kubectl describe pod nginx-6bb55459d-j7bbg |grep Image:
kubectl exec nginx-6bb55459d-j7bbg -- nginx -v

2. Upgrade to version 1.16

# first find the container name inside the pod
kubectl get pods nginx-6bb55459d-j7bbg -o jsonpath={.spec.containers[*].name}

# upgrade the pod image
kubectl set image deployment nginx nginx=nginx:1.16-alpine --record

Notes:

  • deployment nginx refers to the deployment named nginx

  • in nginx=nginx:1.16-alpine, the first nginx is the container name

  • --record stores the command in the rollout history; without it the change cause shows as <none> during later rollbacks (explained below)

Other ways to find the container name:

  • kubectl describe pod <pod名>
  • kubectl edit deployment <deployment名>
  • kubectl get deployment <deployment名> -o yaml

3. Verify

kubectl get pod # the pod names have changed
kubectl describe pod nginx-684c89cf5c-2q9gq |grep Image:
kubectl exec nginx-684c89cf5c-2q9gq -- nginx -v

# check the rollout status
kubectl rollout status deployment nginx
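How pods are replaced during such an upgrade is governed by the deployment's update strategy; a sketch of the relevant spec fields (the values shown are the Kubernetes defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate     # default; Recreate would stop all old pods before starting new ones
    rollingUpdate:
      maxSurge: 25%         # extra pods allowed above the desired replica count during the update
      maxUnavailable: 25%   # pods allowed to be unavailable during the update
```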

2.5 Rolling back the pod version

1. View the rollout history

Remember the --record option used during the upgrade? It is what records each revision's change cause.

kubectl rollout history deployment nginx

If --record was omitted, the CHANGE-CAUSE column shows <none>, so always add --record when updating.
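Note that newer kubectl releases mark --record as deprecated. If you prefer not to rely on it, the same information can be recorded with the kubernetes.io/change-cause annotation on the deployment (the annotation value here is just an example):

```yaml
metadata:
  annotations:
    kubernetes.io/change-cause: "upgrade nginx to 1.16-alpine"  # shown in the CHANGE-CAUSE column
```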

2. Choose the revision to roll back to

# --revision=1 refers to the revision number listed above; this displays that revision's details
kubectl rollout history deployment nginx --revision=1 


3. Perform the rollback (nothing changes until this runs)

kubectl rollout undo deployment nginx --to-revision=1

4. Verify

kubectl rollout history deployment nginx
kubectl get pods
kubectl describe pod nginx-6bb55459d-6nfgr |grep Image:

2.6 Scaling out

View help:

 kubectl scale -h

1. Scale out to 2 replicas

kubectl scale deployment nginx --replicas=2

2. Check

kubectl get pods -o wide

3. Scale out further

kubectl scale deployment nginx --replicas=4
kubectl get pods -o wide

2.7 Scaling in

1. Scale in by setting the replica count to 1

 kubectl scale deployment nginx --replicas=1

2. Verify

kubectl get pods -o wide
kubectl get pods --no-headers | wc -l # count pods (--no-headers excludes the header line from the count)

3. DaemonSet

3.1 DaemonSet overview

  • A DaemonSet ensures that all (or selected) nodes run a copy of a pod. When a new node joins the K8S cluster, the pod is automatically scheduled onto it by the DaemonSet; when a node is removed from the cluster, its DaemonSet pod is removed as well.

  • Deleting a DaemonSet deletes all the pods it created.

  • If a DaemonSet pod is killed, stopped, or crashes, the DaemonSet recreates a replacement on that node.

  • DaemonSets are typically used for log collection, metrics collection, distributed storage daemons, and similar per-node agents.

3.2 Creating a DaemonSet

1. Write the YAML file

[root@k8s-master01 ~]# vim nginx-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
   name: nginx-daemonset
spec:
   selector:
      matchLabels:  # the DaemonSet's label selector; must match the pod labels below
         name: nginx-test
   template:
      metadata:
         labels:
            name: nginx-test # pod label; must match the selector
      spec:
         tolerations:    # tolerations let the pod tolerate node taints
         - key: node-role.kubernetes.io/k8s-master01  # taint key to tolerate
           effect: NoSchedule   # taint effect to tolerate; see kubectl explain pod.spec.tolerations
         containers:
         - name: nginx
           image: nginx:1.15-alpine
           imagePullPolicy: IfNotPresent
           resources:      # resource limits
              limits:
                 memory: 100Mi
              requests:
                  memory: 100Mi

2. Apply the YAML file

 kubectl apply -f nginx-daemonset.yml

3. Verify

kubectl get daemonset
kubectl get pods |grep nginx-daemonset


You can see that the DaemonSet created a pod on every node.

4. Job

4.1 Job overview

  • A ReplicaSet keeps the expected number of pods running persistently; unless the user explicitly deletes them, they stay in the system. It targets durable workloads, such as web services.

  • For non-durable tasks, such as compressing a file, the pod should exit once the task completes rather than stay in the system; that is what a Job is for.

  • A Job handles short-lived one-off tasks, i.e. tasks that run to completion exactly once, and guarantees that one or more pods of the batch task finish successfully.

4.2 Example 1: compute π to 2000 digits

1. Write the YAML file

[root@k8s-master01 ~]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
   name: pi          # job name
spec:
   template:
      metadata:
         name: pi      # pod name
      spec:
         containers:
         - name: pi    # container name
           image: perl # this image is over 800 MB; consider pre-pulling it on all nodes, or importing it on one node and scheduling the pod there
           imagePullPolicy: IfNotPresent
           command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
         restartPolicy: Never    # do not restart after completion

2. Apply the YAML file to create the job

kubectl apply -f job.yml

3. Verify

kubectl get jobs
kubectl get pods
kubectl logs pi--1-lb75l # view the result, which is written to the pod log
kubectl logs pi--1-lb75l | wc -c # count how many characters in total
kubectl logs pi--1-lb75l | wc -L # length of the longest line


Once STATUS shows Completed, the computation has finished and the result can be read.
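For jobs that can fail or hang, two optional spec fields are worth knowing (the values here are illustrative):

```yaml
spec:
  backoffLimit: 4              # retries before the Job is marked Failed (default 6)
  activeDeadlineSeconds: 600   # terminate the Job if it runs longer than this many seconds
```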

4.3 Example 2: a job that runs a fixed number of times

1. Write the YAML file

[root@k8s-master01 ~]# vim job2.yml
apiVersion: batch/v1
kind: Job
metadata:
   name: busybox-job
spec:
   completions: 10     # number of successful completions required (run the job 10 times)
   parallelism: 1      # number of pods run concurrently; here one at a time, ten in total
   template:
      metadata:
         name: busybox-job-pod
      spec:
         containers:
         - name: busybox
           image: busybox
           imagePullPolicy: IfNotPresent
           command: ["echo", "hello"]
         restartPolicy: Never

2. Apply the YAML file to create the job

 kubectl apply -f job2.yml

3. Verify

 kubectl get job
 kubectl get pods | grep busybox
 kubectl logs busybox-job--1-4w6mz

You can see that 10 pods were started, each printing hello.

5. CronJob

Similar to crontab on Linux, a CronJob runs a job on a recurring schedule, i.e. a scheduled task.

1. Write the YAML file

[root@k8s-master01 ~]# vim cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
   name: cronjob1
spec:
   schedule: "* * * * *"     # minute hour day-of-month month day-of-week; all asterisks means every minute
   jobTemplate:
      spec:
         template:
            spec:
               containers:
               - name: hello
                 image: busybox
                 args:
                 - /bin/sh
                 - -c
                 - date; echo hello kubernetes
                 imagePullPolicy: IfNotPresent
               restartPolicy: OnFailure # restart only on failure

2. Apply the YAML file to create the cronjob

kubectl apply -f cronjob.yml

3. Verify

kubectl get cronjob
kubectl get pods | grep cron # it runs every minute, so a new Completed pod appears each minute
# view the output
kubectl logs cronjob1-27288555--1-2bzzg

Only the 3 most recent completed pods are kept.
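That retention behavior comes from the CronJob's history limits, which can be tuned in its spec (the values shown are the defaults):

```yaml
spec:
  successfulJobsHistoryLimit: 3   # finished successful jobs to keep (default 3)
  failedJobsHistoryLimit: 1       # failed jobs to keep (default 1)
```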

6. Ingress

6.1 Ingress overview

The Service covered earlier provides layer-4 load balancing based on IP and port; Ingress provides layer-7 load balancing and can route traffic by domain name.

For a fuller introduction see the linked article; here we go straight to installation. Two methods are covered: deploying from a YAML file, and using the helm package manager.

Important: my cluster was installed from binaries and runs v1.22.2, so use the newest possible Ingress-nginx version or it will fail; see the version compatibility table.

6.2 Deploying Ingress from YAML

1. Create the ingress-control.yaml file
[root@k8s-master01 ~]# cat ingress-control.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-beijing.aliyuncs.com/kole_chang/controller:v1.0.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --watch-ingress-without-class=true
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

This manifest uses two images. Since my k8s is v1.22.2, the image versions should be v1.0 or above or it will fail. The images are mirrored on Alibaba Cloud and are reachable from inside China.


2. Apply the yaml file

kubectl apply -f ingress-control.yaml

3. Check

kubectl get pod -n ingress-nginx


The two pods with status Completed were created by Job controllers and set up the admission webhook configuration; this state indicates a successful deployment.

Reference: deploying Ingress from YAML

6.3 Deploying Ingress with Helm

Helm is the package manager for Kubernetes. Download it from the official Helm site, then install Ingress.

1. Extract

tar -zxvf helm-v3.7.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

2. Download the Ingress chart

# add the ingress-nginx repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# list the configured repos
helm repo list

# search for ingress-nginx versions; we install the latest
helm search hub ingress-nginx
helm search repo ingress-nginx

# the prefix is the repo name, the suffix is the chart name
helm pull ingress-nginx/ingress-nginx

# extract the downloaded tgz
tar xf ingress-nginx-4.0.10.tgz
cd ingress-nginx

# a few values need to be changed here
vim values.yaml

① Change the image registries

values.yaml references three images. With k8s v1.22.2 the image versions must be v1.0 or above; point all three images at registries reachable from inside China.

Lines 13, 14, and 18; comment out the digest check on line 19:

registry: registry.cn-beijing.aliyuncs.com
image: kole_chang/controller


Lines 596, 597, and 601; comment out the digest check on line 602:

registry: registry.cn-beijing.aliyuncs.com
image: kole_chang/kube-webhook-certgen


Lines 721, 722, and 726; change them as follows:

registry: mirrorgooglecontainers
image: defaultbackend-amd64


② Change parameters

Line 58: change the DNS policy to ClusterFirstWithHostNet

dnsPolicy: ClusterFirstWithHostNet

Line 81: switch to host network mode

hostNetwork: true

Line 183: run the controller as a DaemonSet

kind: DaemonSet

Add at line 283: a node-selector label so that Ingress is installed only on nodes labeled ingress=true; keep the indentation aligned with the line above

ingress: "true"

Line 483: change the Service type to ClusterIP

type: ClusterIP
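Putting the edits above together, the affected values.yaml fragment looks roughly like this (exact line positions and nesting vary between chart versions; the ingress: "true" selector is our custom label):

```yaml
controller:
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  kind: DaemonSet
  nodeSelector:
    kubernetes.io/os: linux
    ingress: "true"       # install only on nodes labeled ingress=true
  service:
    type: ClusterIP
```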

3. Install Ingress

# create the namespace
kubectl create ns ingress-nginx

# label master01 so that ingress is installed on it
kubectl label node k8s-master01 ingress=true
kubectl get nodes -L ingress

# install
helm install ingress-nginx -n ingress-nginx .

# check
kubectl get pod -n ingress-nginx
helm list -a -ningress-nginx

# troubleshoot errors
kubectl describe pod -n ingress-nginx ingress-nginx-admission-patch--1-6zvvt
kubectl logs -n ingress-nginx ingress-nginx-admission-patch--1-6zvvt

# delete the helm release
helm delete ingress-nginx -n ingress-nginx

Since only master01 was labeled, ingress-nginx was installed only on master01.


When the installation succeeds, the controller pod shows Running.


References:

Helm official documentation

Ingress official documentation

Tencent Classroom

6.4 Scaling Ingress out and in

The values file above configures Ingress to install on nodes labeled ingress=true, so scaling out is just a matter of labeling more nodes:

kubectl label node k8s-node01 ingress=true
kubectl get pod -n ingress-nginx -owide

To scale in, remove the label:

kubectl label node k8s-master01 ingress-

6.5 Single-domain Ingress

Under the hood, Ingress-nginx still uses Nginx for load balancing, so it opens port 80 for the Nginx service on each node it runs on:

# host network mode is used, so nginx binds port 80 on the host
netstat -lntp | grep 80


1. Deploy an ingress

The Ingress resource format differs between old and new versions; check which ingress-nginx version you are running.

An Ingress must be in the same namespace as the Service it references.

[root@k8s-master01 ~]# vim ingress-tz.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tz
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx" # more than one type of ingress may exist in the cluster; this selects the nginx one (matches line 98 of values.yaml)
spec:
  rules: 
  - host: tzlinux.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-dep  # name of the Service to route to
            port: 
              number: 8000   # port exposed by the Service

If pasting into vim mangles the formatting, run :set paste in vim first.

2. Apply the yaml file

kubectl apply -f ingress-tz.yaml
kubectl get ingress

3. Edit the hosts file on the Windows host

10.154.0.112 is the k8s-node01 node where Ingress-nginx is installed. Note that only nodes running Ingress-nginx can serve the load-balanced traffic.


4. Visit tzlinux.com


Inside the ingress pod you can see that our ingress-tz.yaml has been rendered into the nginx.conf configuration:

kubectl exec -it ingress-nginx-controller-rmcfn -n ingress-nginx -- sh
grep "tzlinux.com" nginx.conf -A 20 # show the 20 lines after the match

6.6 Multi-domain Ingress

1. Write the yaml file

[root@k8s-master01 ~]# vim ingress-mulDomain.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tz
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx" # more than one type of ingress may exist in the cluster; this selects the nginx one (matches line 98 of values.yaml)
spec:
  rules: 
  - host: docker.tzlinux.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-dep  # name of the Service to route to
            port: 
              number: 8000   # port exposed by the Service
  - host: k8s.tzlinux.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc  # name of the Service to route to
            port: 
              number: 8000   # port exposed by the Service

2. Apply the yaml file

kubectl apply -f ingress-mulDomain.yaml

# after editing the yaml file, apply the changes with:
kubectl replace -f ingress-mulDomain.yaml

3. Edit the hosts file


4. Open in a browser

http://docker.tzlinux.com

http://k8s.tzlinux.com

A minor error I ran into

[root@k8s-master01 ingress-nginx]# helm install ingress-nginx -n ingress-nginx .
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "ingress-nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ingress-nginx"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ingress-nginx"

Fix: delete the leftover ClusterRole (ClusterRoles are cluster-scoped, so no namespace flag is needed):

kubectl get clusterroles | grep ingress
kubectl delete clusterrole ingress-nginx

Reference: https://help.aliyun.com/document_detail/279475.html

7. References

Heima Linux k8s course, day 3 video

Tencent Classroom

