k8s Layer-7 Load Balancing: Ingress and Ingress Controller

1. Problems with Layer-4 Load Balancing via Service

1.1 The Pod drift problem

Kubernetes has powerful replica management: whenever a replica (Pod) dies, a new one is automatically started on another machine, and workloads can be scaled dynamically. In plain terms, a Pod may appear on any node at any moment and may just as well die on any node at any moment, so Pod IPs inevitably change as Pods are created and destroyed. How, then, do we expose these ever-changing Pod IPs? This is where the Kubernetes Service comes in: a Service selects a group of Pods by label, tracks their Pod IPs, and load-balances across them, so we only need to expose the Service IP. This is the NodePort mode: a port is opened on every node and traffic arriving on it is forwarded to the Pod IPs behind the Service, as shown in the figure below:

The access path is then http://nodeip:nodeport/ , i.e. the packet flow is: client request --> node IP:port --> Service IP:port --> Pod IP:port

[Figure: NodePort access flow -- client -> NodeIP:NodePort -> Service -> Pod]
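
For orientation, a minimal NodePort Service might look like the sketch below. This is only an illustration: the app name eshop and the port numbers are hypothetical, not taken from any cluster in this article.

apiVersion: v1
kind: Service
metadata:
  name: eshop                # hypothetical name, for illustration only
spec:
  type: NodePort
  selector:
    app: eshop               # selects Pods carrying this label
  ports:
  - port: 80                 # Service (ClusterIP) port
    targetPort: 8080         # container port inside the Pod
    nodePort: 30080          # port opened on every node (default range 30000-32767)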

1.2 The port management problem

The problem with exposing services via NodePort is that once the number of services grows, the NodePorts opened on every node become extremely numerous and hard to maintain. Could we instead put a single Nginx in front and have it forward traffic inward? Pods can communicate with each other, and a Pod can share the host's network namespace, which means that when it does, whatever the Pod listens on is a port on the Node itself. How can this be implemented? A simple approach is a DaemonSet that listens on port 80 of every Node with the proper forwarding rules: since this Nginx is bound to the host's port 80 (much like NodePort) yet lives inside the cluster, it can forward directly to the corresponding Service IPs, as shown below:

[Figure: per-node Nginx (DaemonSet) forwarding to Service IPs]
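
A rough sketch of this idea, assuming a stock nginx image and hostNetwork so that the container binds port 80 on every node (the names and image tag below are illustrative, not from this cluster):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-nginx           # hypothetical name, for illustration only
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: edge-nginx
  template:
    metadata:
      labels:
        app: edge-nginx
    spec:
      hostNetwork: true      # nginx listens directly on port 80 of every node
      containers:
      - name: nginx
        image: nginx:1.20    # any plain nginx image would do
        ports:
        - containerPort: 80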

1.3 The problem of domain assignment and dynamic updates

With the approach above, the Nginx Pod seems to solve the problem, but there is a major flaw: what happens every time a new service is added and the Nginx configuration has to change? With Nginx we normally distinguish services by virtual-host domain name, define a separate load-balancing pool for each service with an upstream block, and reverse-proxy to it from a location block; in day-to-day use this only requires editing nginx.conf. How do we achieve this kind of routing in Kubernetes? Suppose the backend initially has only an eshop service, and bbs and member services are added later: how do we get those two services into the Nginx Pod's routing? We certainly cannot edit the config by hand or rolling-update the front-end Nginx Pod every time. This is where Ingress comes in. Leaving the Nginx above aside, Ingress consists of two components: the Ingress Controller and the Ingress.

2. Ingress and Ingress Controller

2.1 What is Ingress

The official definition: an Ingress forwards requests entering the cluster to services inside it, thereby exposing those services to the outside. Ingress can turn in-cluster Services into externally reachable URLs, load-balance traffic, and provide name-based virtual hosting, among other things.

Put simply: where you previously had to edit the Nginx configuration to map each domain to a Service, that action is now abstracted into an Ingress object. You define it in YAML and, instead of touching Nginx, you just edit the YAML and create or update the object. Which raises the question: "who takes care of Nginx?"

The Ingress Controller is exactly what answers "who takes care of Nginx". It talks to the Kubernetes API, dynamically watches for changes to Ingress rules in the cluster, reads them, renders a piece of Nginx configuration from its own template, writes it into the Nginx Pod, and finally reloads Nginx. The workflow is shown below:

[Figure: Ingress Controller workflow -- watch the API, render the template, write nginx.conf, reload]

Ingress is in fact one of the standard Kubernetes API resource types: it is simply a set of rules that forward requests to specified Service resources based on DNS names (host) or URL paths, and it is used to publish services inside the cluster to request traffic coming from outside. Keep in mind that an Ingress resource cannot move traffic by itself; it is only a collection of rules. Those rules need another component to act on them, one that listens on a socket, matches requests against the rules, and routes them accordingly. The component that listens on a socket on behalf of Ingress resources and forwards the traffic is the Ingress Controller.

Note: unlike the Deployment controller, the Ingress controller does not run as part of kube-controller-manager. It is merely an add-on to the Kubernetes cluster, similar to CoreDNS, and has to be deployed on the cluster separately.

2.2 What is an Ingress Controller

The Ingress Controller is a layer-7 load balancer. Client requests first reach this layer-7 load balancer, which then reverse-proxies them to the backend Pods. Common layer-7 load balancers include nginx and traefik. Taking the familiar nginx as an example: when a request arrives at nginx, it is reverse-proxied via an upstream to a backend Pod. But Pod IPs change constantly, so a Service is placed in front of the backend Pods; this Service only serves to group them, and the upstream then only needs to reference the Service address.

[Figure: Ingress Controller proxying to backend Pods through a Service]
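
To make the idea concrete, a hand-written nginx equivalent might look like the sketch below: the upstream references the stable Service DNS name instead of the ever-changing Pod IPs. This is only an illustration of the principle, not the configuration the controller actually renders (that is shown in section 3.4), and the names are assumed.

upstream tomcat-svc {
    # the Service DNS name stays stable while Pod IPs come and go
    server tomcat.default.svc.cluster.local:8080;
}

server {
    listen 80;
    server_name tomcat.kubeprom.com;

    location / {
        proxy_pass http://tomcat-svc;
    }
}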

2.3 Ingress and Ingress Controller in a nutshell

Ingress Controller: think of it as a controller. It keeps talking to the Kubernetes API and watches backend Services and Pods for changes such as additions and deletions in real time; combined with the rules defined in Ingress objects, it generates configuration, updates the Nginx or traefik load balancer above it accordingly, and reloads it so the configuration takes effect, thereby achieving automatic service discovery.

Ingress: defines the rules, i.e. which Service in the cluster a request for a given domain should be forwarded to. It is defined in a YAML file, and one or more Ingress rules can be defined for one or more Services.

2.4 How an Ingress Controller proxies applications inside the cluster

(1) Deploy the Ingress controller; here we use nginx as the ingress controller
(2) Create a Service to group the Pods
(3) Create the Pod application, e.g. via a workload controller
(4) Create an Ingress HTTP rule and test access to the application over HTTP (by domain name or IP:port)
(5) Create an Ingress HTTPS rule and test access to the application over HTTPS (by domain name or IP:port)

3. Testing Ingress HTTP proxying to Tomcat

3.1 Install the nginx ingress controller

1) Create the default-http-backend

[root@k8s-master1 ingress]# cat default-backend.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/hachikou/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz   # the defaultbackend image itself serves 200 on /healthz (see the comment above)
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30   # wait 30s before the first /healthz probe
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
#        resources:
#          limits:
#            cpu: 10m
#            memory: 20Mi
#          requests:
#            cpu: 10m
#            memory: 20Mi
      nodeName: k8s-node1
---
apiVersion: v1
kind: Service     # a Service for the default backend
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend

[root@k8s-master1 ingress]# kubectl apply -f default-backend.yaml
[root@k8s-master1 ingress]# kubectl get pods -n kube-system |grep default
default-http-backend-bb5c9474-9x746        1/1     Running   0          41s
[root@k8s-master1 ingress]# kubectl get svc -n kube-system 
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default-http-backend   ClusterIP   10.100.84.60   <none>        80/TCP                   52s
kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   6d19h
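
As an optional sanity check, the default backend can be probed directly through its ClusterIP from a cluster node; per the comments in the manifest it should answer 404 on / and 200 on /healthz. The IP below is the one shown above and will differ in your cluster.

curl -s -o /dev/null -w "%{http_code}\n" http://10.100.84.60/         # expect 404
curl -s -o /dev/null -w "%{http_code}\n" http://10.100.84.60/healthz  # expect 200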

2) Create nginx-ingress-controller-rbac.yml

[root@k8s-master1 ingress]# cat nginx-ingress-controller-rbac.yml 
---
apiVersion: v1
kind: ServiceAccount    
metadata:
  name: nginx-ingress-serviceaccount # create a ServiceAccount
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole   # the ClusterRole that the ServiceAccount above will be bound to
rules:
  - apiGroups:
      - "" 
    resources:    # API resources this ClusterRole is allowed to operate on
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:         
  name: nginx-ingress-role  # a namespaced Role, not a ClusterRole
  namespace: kube-system
rules:  # permissions granted by this Role
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding       # bind the Role to the ServiceAccount
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount # bound to this ServiceAccount
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding      # cluster-wide binding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount   # the ClusterRole is bound to this ServiceAccount
    namespace: kube-system   # a ClusterRole spans namespaces, but the subject ServiceAccount lives in this namespace

[root@k8s-master1 ingress]# kubectl apply -f nginx-ingress-controller-rbac.yml
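
Optionally, confirm that the RBAC objects exist before moving on (a quick check, not part of the original steps):

kubectl get serviceaccount nginx-ingress-serviceaccount -n kube-system
kubectl get clusterrole nginx-ingress-clusterrole
kubectl get clusterrolebinding nginx-ingress-clusterrole-nisa-binding
kubectl get role,rolebinding -n kube-system | grep nginx-ingress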

3) Create nginx-ingress-controller.yaml

[root@k8s-master1 ingress]# cat nginx-ingress-controller.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
       k8s-app: nginx-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true  # leaving this commented out would mean not binding the host's port 80
      terminationGracePeriodSeconds: 60
      hostNetwork: true  # the container shares the host's network namespace
      serviceAccountName: nginx-ingress-serviceaccount # reference the ServiceAccount created earlier
      containers:   
      - image: registry.cn-hangzhou.aliyuncs.com/peter1009/nginx-ingress-controller:0.20.0      # container image
        name: nginx-ingress-controller  # container name
        readinessProbe:   # readiness check on /healthz; port 10254 is listened on by the node running the controller
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10  # wait 10s before the first health check
          timeoutSeconds: 1
        ports:
        - containerPort: 80  
          hostPort: 80    # map container port 80 to host port 80
#        - containerPort: 443
#          hostPort: 443
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
#        - --default-ssl-certificate=$(POD_NAMESPACE)/ingress-secret    # used when enabling HTTPS
#      nodeSelector:  # pin the node it runs on; it must be the same node as the default backend
#        kubernetes.io/hostname: 10.3.1.17   # hostPort 80 is mapped above, so make sure ports 80 and 443 on that node are free
# 
      nodeName: k8s-node1
[root@k8s-master1 ingress]# kubectl apply -f nginx-ingress-controller.yaml
[root@k8s-master1 ingress]# kubectl get pods -n kube-system 
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-bb5c9474-9x746         1/1     Running   0          4m23s
nginx-ingress-controller-86d8667cb7-ps677   1/1     Running   0          87s

Note: default-backend.yaml and nginx-ingress-controller.yaml both specify nodeName: k8s-node1, which means the default backend and nginx-ingress-controller are scheduled onto node1. If your node's hostname is not k8s-node1, change the value to your own hostname, otherwise the Pods will not be scheduled. The default-http-backend and nginx-ingress-controller Pods must end up on the same node.
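
Because the controller runs with hostNetwork on k8s-node1, it should already be answering on that node's port 80. With no Ingress rules defined yet, every request falls through to the default backend and returns 404. Assuming k8s-node1 has the IP 192.168.40.181 (the address used later for the hosts entry), a quick check would be:

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.40.181/                # expect 404 from default-http-backend
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.40.181:10254/healthz   # controller health endpoint, expect 200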

3.2 Deploy the backend Tomcat

[root@k8s-master1 ingress-http-tomcat]# cat demo-tomcat.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5.34-jre8-alpine   
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009
          
[root@k8s-master1 ingress-http-tomcat]# kubectl apply -f demo-tomcat.yaml
[root@k8s-master1 ingress-http-tomcat]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
tomcat-deploy-66b67fcf7b-qqc9l   1/1     Running   0          6s
tomcat-deploy-66b67fcf7b-w6dw9   1/1     Running   0          6s
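
Before wiring up the Ingress, it is worth checking (optional) that the Service actually selected the two Tomcat Pods:

kubectl get svc tomcat
kubectl get endpoints tomcat   # should list both Pod IPs on ports 8080 and 8009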

3.3 Create the Ingress

[root@k8s-master1 ingress-http-tomcat]# cat ingress-myapp.yaml 
apiVersion: extensions/v1beta1          # API version
kind: Ingress           # resource kind
metadata:                # metadata
  name: ingress-myapp    # name of the Ingress
  namespace: default     # namespace it belongs to
  annotations:           # annotations
    kubernetes.io/ingress.class: "nginx" # hand these rules to the nginx config inside the ingress controller
spec:      # spec
  rules:   # backend forwarding rules
  - host: tomcat.kubeprom.com    # route by domain name
    http:
      paths:
      - path:       # access path; set it to route by URL path; left empty it defaults to "/"
        backend:    # backend service
          serviceName: tomcat
          servicePort: 8080

[root@k8s-master1 ingress-http-tomcat]# kubectl apply -f ingress-myapp.yaml
[root@k8s-master1 ingress-http-tomcat]# kubectl get ingress
NAME            CLASS    HOSTS                 ADDRESS   PORTS   AGE
ingress-myapp   <none>   tomcat.kubeprom.com             80      5s
[root@k8s-master1 ingress-http-tomcat]# kubectl describe ingress ingress-myapp 
Name:             ingress-myapp
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (10.244.36.74:8080)
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  tomcat.kubeprom.com  
                          tomcat:8080 (10.244.36.77:8080,10.244.36.78:8080)
Annotations:           kubernetes.io/ingress.class: nginx
Events:                <none>

3.4 Inspect the generated nginx configuration inside the ingress controller

[root@k8s-master1 ingress-http-tomcat]# kubectl exec -it nginx-ingress-controller-86d8667cb7-ps677 -n kube-system -- cat nginx.conf
...
	server {
		server_name tomcat.kubeprom.com ;
		
		listen 80;
		
		listen [::]:80;
		
		set $proxy_upstream_name "-";
		
		location / {
			
			set $namespace      "default";
			set $ingress_name   "ingress-myapp";
			set $service_name   "tomcat";
			set $service_port   "8080";
			set $location_path  "/";
			
			rewrite_by_lua_block {
				
				balancer.rewrite()
				
			}
			
			log_by_lua_block {
				
				balancer.log()
				
				monitor.call()
			}
			
			port_in_redirect off;
			
			set $proxy_upstream_name "default-tomcat-8080";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $the_real_ip;
			
			proxy_set_header X-Forwarded-For        $the_real_ip;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Original-URI         $request_uri;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;
			proxy_request_buffering                 on;
			
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
	}
...

3.5 Add a hosts entry and access the site

192.168.40.181 tomcat.kubeprom.com

[Screenshot: accessing http://tomcat.kubeprom.com in a browser]
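
If editing /etc/hosts is inconvenient, the same rule can be exercised with curl by supplying the Host header explicitly; the IP is the node address from the hosts entry above, and a 200 with the Tomcat welcome page is expected:

curl -s -o /dev/null -w "%{http_code}\n" -H "Host: tomcat.kubeprom.com" http://192.168.40.181/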

4. Testing Ingress HTTPS proxying to Tomcat

4.1 Build a TLS site

# Prepare the certificate; run on the k8s master1 node
[root@k8s-master1 ingress-https-tomcat]# openssl genrsa -out tls.key 2048
[root@k8s-master1 ingress-https-tomcat]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=tomcat.kubeprom.com
[root@k8s-master1 ingress-https-tomcat]# ls
tls.crt  tls.key

# Create the secret; run on the k8s master1 node
[root@k8s-master1 ingress-https-tomcat]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key

# List secrets
[root@k8s-master1 ingress-https-tomcat]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-cm4mx     kubernetes.io/service-account-token   3      6d19h
tomcat-ingress-secret   kubernetes.io/tls                     2      14s

# Show details of tomcat-ingress-secret
[root@k8s-master1 ingress-https-tomcat]# kubectl describe secret tomcat-ingress-secret
Name:         tomcat-ingress-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1302 bytes
tls.key:  1679 bytes
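
The self-signed certificate can also be inspected (optional) to confirm that the subject CN matches the Ingress host and to see its validity period:

openssl x509 -in tls.crt -noout -subject -dates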

4.2 Create the Ingress

[root@k8s-master1 ingress-https-tomcat]# cat ingress-tomcat-tls.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"  # hand these rules to the nginx config inside the ingress controller
spec:
  tls:
  - hosts:
    - tomcat.kubeprom.com
    secretName: tomcat-ingress-secret
  rules:
  - host: tomcat.kubeprom.com
    http:
      paths:
      - path:
        backend:
          serviceName: tomcat
          servicePort: 8080

[root@k8s-master1 ingress-https-tomcat]# kubectl apply -f ingress-tomcat-tls.yaml

Access https://tomcat.kubeprom.com in a browser

[Screenshot: accessing https://tomcat.kubeprom.com in a browser]
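
The same check can be made from the command line: --resolve pins tomcat.kubeprom.com to the node IP, and -k is needed because the certificate is self-signed; a 200 is expected.

curl -k -s -o /dev/null -w "%{http_code}\n" --resolve tomcat.kubeprom.com:443:192.168.40.181 https://tomcat.kubeprom.com/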

