Kubernetes Series (10): Layer-7 Proxying with Ingress


1. Getting Started with Ingress

1.1 What Ingress Is

The Service concept built into Kubernetes only provides layer-4 proxying, i.e. it is addressed as IP:Port.

To get layer-7 proxying, i.e. to bind services to a domain name, you need something more: the Ingress API.

  • Kubernetes introduced the Ingress API upstream to achieve layer-7 proxying
  • An Ingress must be bound to a domain name
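As a first taste, a minimal Ingress binding a domain name to an in-cluster Service looks like this (a sketch using the extensions/v1beta1 API version used throughout this article; the host and Service name are placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  rules:
    - host: www.example.com       # the bound domain name
      http:
        paths:
          - path: /
            backend:
              serviceName: my-svc # an existing ClusterIP Service
              servicePort: 80
```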

1.2 How It Works and What It Consists Of

Ingress can be thought of as the "Service of Services". It consists of two parts:

  1. Ingress Controller
    • This is a standard with many implementations, of which ingress-nginx is the most commonly used
    • It runs as a pod
  2. Ingress policies
    • A set of declarative rules, written as YAML manifests
    • The ingress-controller dynamically generates its configuration file (e.g. nginx.conf) from these policies
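As an illustration (a rough sketch, not actual generated output), for an Ingress rule binding a host to a Service, ingress-nginx renders nginx configuration roughly of this shape:

```nginx
server {
    listen 80;
    server_name toc.codepie.fun;   # from the Ingress rule's host

    location / {                   # from the Ingress rule's path
        # requests are proxied to the endpoints of the backing Service
        proxy_pass http://upstream_balancer;
    }
}
```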

1.3 References

  1. Ingress-Nginx GitHub repo

https://github.com/kubernetes/ingress-nginx

  2. Ingress-Nginx official website

https://kubernetes.github.io/ingress-nginx

2. Ways to Deploy Ingress

2.1 Overview

Deploying ingress requires thinking about two questions:

  1. The ingress-controller runs as a pod, so what is the best way to deploy it?
  2. Ingress solves routing requests into the cluster, but how should the controller itself be exposed to the outside?

Below are some common deployment and exposure patterns; which one to use depends on your actual requirements.


2.2 Deployment + LoadBalancer Service

If you are deploying ingress on a public cloud, this is a good choice. Deploy the ingress-controller with a Deployment, and create a Service of type LoadBalancer selecting those pods. Most public clouds automatically provision a load balancer for a LoadBalancer Service, usually with a public IP attached. Point your domain's DNS at that address, and your cluster services are exposed externally.

Note: this requires purchasing an additional load balancer from your cloud provider.
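A sketch of such a Service (the name, namespace, and selector label follow ingress-nginx conventions and are assumptions; match them to your actual controller pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer              # the cloud provider provisions an LB and public IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```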


2.3 Deployment + NodePort Service

Again deploy the ingress-controller with a Deployment and create a matching Service, but with type NodePort. The ingress is then exposed on a specific port of each cluster node's IP. Because the NodePort is randomly assigned, a load balancer is usually placed in front to forward requests. This approach suits environments where the hosts are relatively fixed and their IP addresses do not change.

Drawbacks

  • Exposing ingress via NodePort is simple and convenient, but NodePort adds an extra layer of NAT, which can hurt performance when request volume is high.
  • Requests look like https://www.xx.com:30076, where 30076 is the NodePort exposed by the Service shown in kubectl get svc -n ingress-nginx.
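If the randomly assigned port is a problem, the NodePort can be pinned to a fixed value in the 30000-32767 range (a sketch; the name, namespace, selector, and chosen port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443             # fixed instead of randomly assigned
```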


2.4 DaemonSet + HostNetwork + nodeSelector (recommended)

Use a DaemonSet combined with a nodeSelector to deploy the ingress-controller onto specific nodes, then use hostNetwork to attach the pod directly to the node's network, so the service is reachable on the host's ports 80/443. The nodes running the ingress-controller then act much like the edge nodes of a traditional architecture, e.g. the nginx servers at a data center's entrance.

Advantages

  • This gives the simplest request path of all the options, with better performance than the NodePort mode.

Drawbacks

  • Because it uses the host's network and ports directly, only one ingress-controller pod can run per node.

3. Deployment + NodePort Mode

3.1 Download the official YAML and install ingress-nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml

3.2 Create the Deployment and Service

  • Here the Service is of type ClusterIP (the default)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tocgenerator-deploy
  namespace: default
  labels:
    app: tocgenerator-deploy
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: tocgenerator-server
  template:
    metadata:
      labels:
        app: tocgenerator-server
    spec:
      containers:        
        - name: tocgenerator
          image: lzw5399/tocgenerator:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: tocgenerator-svc
spec:
  selector:
    app: tocgenerator-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

3.3 Create a Secret holding the HTTPS certificate

  • Option 1: create directly from files
kubectl create secret tls mywebsite-secret --key tls.key --cert tls.crt
  • Option 2: create from a YAML manifest
apiVersion: v1
kind: Secret
metadata:
  name: mywebsite-secret
type: kubernetes.io/tls
data:
  tls.crt: **************************
  tls.key: **************************
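The asterisks above stand in for the real base64-encoded certificate and key. For local testing you can generate a self-signed pair with openssl and encode it yourself (a sketch; the CN toc.codepie.fun matches the domain used later in this article, so substitute your own):

```shell
# Generate a self-signed certificate/key pair (for testing only)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=toc.codepie.fun"

# The data: values in the Secret manifest are the base64-encoded
# (single-line) contents of these files
base64 -w0 tls.crt > tls.crt.b64
base64 -w0 tls.key > tls.key.b64
```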

3.4 Create the Ingress policy

  • The Ingress must be in the same namespace as the Service it routes to
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tocgenerator-ingress
spec:
  tls:
    - hosts:
      - toc.codepie.fun
      secretName: mywebsite-secret
  rules:
    - host: toc.codepie.fun
      http:
        paths:
          - path: /
            backend:
              serviceName: tocgenerator-svc
              servicePort: 80

3.5 Find the ingress-controller's NodePort and access the service

  1. Find the NodePort of the ingress-controller
$ kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.104.80.142   <none>        80:30122/TCP,443:30577/TCP   21h
  2. Access it
https://toc.codepie.fun:30577

4. DaemonSet + HostNetwork + nodeSelector Mode (recommended)

4.1 Overview

To make ingress in Kubernetes highly available while exposing only a single access point outside the cluster, we use keepalived to eliminate the single point of failure, and deploy the ingress-controller onto edge nodes as a DaemonSet.

4.2 Edge Nodes

First, what is an edge node (Edge Node)? An edge node is a node inside the cluster used to expose the cluster's services to the outside; services outside the cluster call in-cluster services through it. The edge node is the endpoint where the inside and outside of the cluster meet.

Edge nodes must address two concerns:

  • High availability: there must be no single point of failure, otherwise the whole Kubernetes cluster becomes unreachable from outside
  • A consistent external entry point, i.e. a single external IP and port

4.3 Architecture

To satisfy these edge-node requirements, we use keepalived.

After adding an Ingress in Kubernetes, add an A record in DNS whose name is the host in your Ingress and whose IP is keepalived's VIP. External clients can then reach your services by domain name, and the single point of failure is eliminated.

Pick Kubernetes nodes to act as edge nodes, and install keepalived on them.

4.4 Install the keepalived service

Note: keepalived must be installed on every machine that will act as an edge node, usually the worker nodes.

  1. Install keepalived
yum install -y keepalived
  2. Edit the keepalived configuration, changing the value shown in the original article's screenshot (not reproduced here) to edgenode
  3. Start keepalived and enable it at boot
systemctl start keepalived
systemctl enable keepalived
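The steps above refer to a configuration screenshot that is missing here. A minimal sketch of /etc/keepalived/keepalived.conf for two edge nodes sharing a VIP might look like the following (the interface name eth0, the VIP 10.40.0.200, and the auth password are all assumptions; adjust to your environment, and use state BACKUP with a lower priority on the second node):

```conf
! Configuration File for keepalived
global_defs {
    router_id edgenode
}

vrrp_instance VI_1 {
    state MASTER               # BACKUP on the other edge node
    interface eth0             # NIC that carries the VIP
    virtual_router_id 51
    priority 100               # lower (e.g. 90) on the BACKUP node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.200            # the VIP your DNS A records point to
    }
}
```

Whichever node holds MASTER state owns the VIP; if it fails, keepalived fails the VIP over to the BACKUP node, so the external entry point stays fixed.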

4.5 Install ingress-nginx-controller

  1. Label the edge nodes
kubectl label nodes k8s-node01 edgenode=true
kubectl label nodes k8s-node02 edgenode=true
  2. Install ingress-nginx-controller as a DaemonSet
  • Note: the manifest below is long, but it can be copied and used as-is; it creates the full set of resources the ingress-nginx-controller needs. The Namespace object appears partway through the manifest, so if the first apply fails because the ingress-nginx namespace does not exist yet, simply run the apply again (or create the namespace first).

  • Save the manifest as ingress-nginx.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend 
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
 
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        edgenode: 'true'
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          # livenessProbe:
          #   failureThreshold: 3
          #   httpGet:
          #     path: /healthz
          #     port: 10254
          #     scheme: HTTP
          #   initialDelaySeconds: 10
          #   periodSeconds: 10
          #   successThreshold: 1
          #   timeoutSeconds: 1
          # readinessProbe:
          #   failureThreshold: 3
          #   httpGet:
          #     path: /healthz
          #     port: 10254
          #     scheme: HTTP
          #   periodSeconds: 10
          #   successThreshold: 1
          #   timeoutSeconds: 1
---

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
 
---
 
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
 
---
 
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
 
---
 
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
 
---
 
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx

  3. Apply the manifest
kubectl apply -f ingress-nginx.yaml

4.6 Verify the installation

[root@k8s-master01 ingress]# kubectl get ds -n ingress-nginx
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nginx-ingress-controller   2         2         2       2            2           edgenode=true   57m
[root@k8s-master01 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE
default-http-backend-86569b9d95-x4bsn   1/1     Running   12         24d   172.17.65.6   10.40.0.105   <none>
nginx-ingress-controller-5b7xg          1/1     Running   0          58m   10.40.0.105   10.40.0.105   <none>
nginx-ingress-controller-b5mxc          1/1     Running   0          58m   10.40.0.106   10.40.0.106   <none>

