How Ingress Works and How to Use It


Preface


 

We know that the pods in the back end are what actually serve traffic. For load balancing, for domain names, for..., the Service was born; later still, the Ingress was born. So why do we need an Ingress at all? First, see what the official docs say:

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.

So an Ingress serves four main purposes:

1) Give a Service inside the cluster an externally reachable URL, i.e. let clients outside the cluster reach it. (wxy: a NodePort-type Service can also do this; more on that later.)

2) Do proper load balancing; after all, the Service's own load balancing is fairly rudimentary.

3) Terminate SSL/TLS. That is, for applications that do not themselves speak HTTPS, a dedicated component can handle security on their behalf, so the application only needs to focus on business logic; you could say it "strips off" SSL/TLS.

4) Name-based virtual hosting. As I understand it, this is why we say an Ingress operates at the application layer: an Ingress is not tied to a single application/Service, but can distinguish different "hosts" by name.... (wxy: this should become clearer as we read on.)

 

So, are those four functions implemented by the Ingress itself? Actually no (which is why calling them the Ingress's functions is not quite accurate). An Ingress Controller is needed to implement them. The Ingress merely acts as the "spokesperson" of the cluster's Services toward the Ingress Controller: it effectively registers with the IC, telling it which rules to forward traffic by, i.e. how to expose my application Service in the four ways above.
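To make that division of labor concrete, here is a minimal sketch of an Ingress: nothing but rules, with no traffic-handling capability of its own. All names (my-ingress, my-service, example.my.org) are hypothetical placeholders:

```yaml
# A minimal Ingress (networking.k8s.io/v1beta1, matching the cluster version
# used in this article). It only declares rules; an Ingress Controller must
# exist in the cluster to act on them.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress          # hypothetical name
spec:
  rules:
  - host: example.my.org    # hypothetical domain
    http:
      paths:
      - backend:
          serviceName: my-service   # the Service this Ingress "speaks for"
          servicePort: 80
```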

 

 

All right then, let's see how to actually put an Ingress to work.

 

 

Part One: Create the application's pod/Service

 


1. The application pod's basic information:

 

# kubectl get pods -ncattle-system-my -oyaml rancher-57f75c44f4-2mrz6
...
containers:
- args:
  - --http-listen-port=80
  - --https-listen-port=443
  - --add-local=auto
  ...
  name: rancher
  ports:
  - containerPort: 80   # the application container exposes port 80
    protocol: TCP
...

 

The actual ports used inside the container:

sh-4.4# cat /usr/bin/entrypoint.sh
#!/bin/bash
set -e
exec tini -- rancher --http-listen-port=80 --https-listen-port=443 --audit-log-path=${AUDIT_LOG_PATH} --audit-level=${AUDIT_LEVEL} --audit-log-maxage=${AUDIT_LOG_MAXAGE} --audit-log-maxbackup=${AUDIT_LOG_MAXBACKUP} --audit-log-maxsize=${AUDIT_LOG_MAXSIZE} "${@}"

 

2. At this point the application's Service looks like this:

 

# kubectl get svc -ncattle-system-my -oyaml
...
  spec:
    clusterIP: 10.105.53.47
    ports:
    - name: http
      port: 80  # it only needs to serve plain HTTP on port 80, because the Ingress will terminate HTTPS for us
      protocol: TCP
      targetPort: 80
    selector:
      app: rancher
    type: ClusterIP   # note: to use an Ingress, type ClusterIP is generally enough for the Service; see the summary at the end
  status:
    loadBalancer: {}  # the status so far: no load balancing in place yet

 

 

 

Part Two: Create the Ingress and configure rules for the application Service

 


 

# kubectl get ingress -ncattle-system-my -oyaml
...
spec:
  rules:
  - host: rancher.my.test.org      # rule 1: applies to this host (domain); it targets the
    http:                          #   "rancher" Service created above, on port 80
      paths:
      - path: /example             # may be omitted (meaning "/")
        backend:
          serviceName: rancher
          servicePort: 80
  - host: bar.foo.com              # rule 2: here to illustrate "name based virtual hosting"
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
  tls:                             # for HTTPS; the certificate lives in the secret named tls-rancher-ingress
  - hosts:
    - rancher.my.test.org
    secretName: tls-rancher-ingress

 

0. First, how the official docs describe the Ingress fields:

The Ingress spec has all the information needed to configure a load balancer or proxy server.

The Ingress resource only supports rules for directing HTTP traffic.

That is: this information is consumed by the real load balancer/proxy, and it currently only covers forwarding of HTTP traffic; HTTPS is HTTP + TLS, so it is HTTP-based too.

A rule has two parts: an http rule, and a tls rule (only needed for HTTPS).

 

1. Each http rule carries three pieces of information:

  • host (optional): which host this rule applies to. If unset, the rule applies to all inbound HTTP traffic through the IP address specified.
  • A list of paths: each path is paired with a serviceName and servicePort. When the LB receives incoming traffic, it is forwarded to the backend Service only if the request matches both the host and the path. The "path" field may be omitted, which means the root path.
  • backend: the actual backend Service, i.e. the one that serviceName and servicePort point to.

A note on "Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address":

In other words, one proxy machine can forward traffic for multiple services, as long as their hosts differ; "host" here can be read as "domain name".

 

2. The tls rule

A few things to know about Ingress TLS:
1) It only supports a single TLS port, 443, and assumes TLS termination.
2) The rules in tls can also specify hosts. If these differ from the hosts in the http rules, they are multiplexed on the same port according to the hostname specified through the SNI TLS extension (assuming the Ingress controller supports SNI). (What exactly SNI is, I'll dig into another time.)
3) The secret referenced in tls must contain keys named tls.crt and tls.key that hold the certificate and private key to use for TLS. The certificate must contain a CN that matches the hosts configured in the Ingress.
4) The certificate in the Ingress's secret is for the controller's use: it tells the controller to use the certificate I specify when doing the TLS handshake with clients.
5) On load balancing:
An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
Interpretation: an IC may load some load-balancing policy at startup, and that policy applies to all the Ingresses behind it. But you cannot currently configure the IC's load-balancing policy through the Ingress; instead, you can configure more elaborate load balancing on the Service to meet your needs.

wxy: so having the ingress controller do sophisticated load balancing for the application via the Ingress probably won't fly, but you can let the application's own Service take care of it.
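As a sketch of point 3): the secret referenced by the Ingress's tls section must be of type kubernetes.io/tls and carry exactly these two keys (values are base64-encoded; the names below match the example Ingress above):

```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: tls-rancher-ingress
  namespace: cattle-system-my
data:
  tls.crt: <base64-encoded certificate>   # its CN/SAN must match the host in the Ingress rules
  tls.key: <base64-encoded private key>
```

Running `kubectl create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key -n cattle-system-my` produces exactly this object.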
 
 

 

Part Three: Create an Ingress Controller to implement the Ingress


 

0. The official docs explain the Ingress Controller as follows:

You may deploy any number of ingress controllers within a cluster. When you create an ingress, you should 
annotate each ingress with the appropriate ingress.class to indicate which ingress controller should be used 
if more than one exists within your cluster.
If you do not define a class, your cloud provider may use a default ingress controller.
Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers operate slightly differently.
Note: Make sure you review your ingress controller’s documentation to understand the caveats of choosing it.
An admin can deploy an Ingress controller such that it only satisfies Ingress from a given namespace, 
but by default, controllers will watch the entire Kubernetes cluster for unsatisfied Ingress.

Interpretation:

To actually make an Ingress work, the cluster needs an ingress controller. (This kind of controller is unlike the controllers run by kube-controller-manager: the way it manages pods is user-defined. Commonly used ingress controllers are the GCE and nginx controllers, and such a controller is itself deployed as a Deployment.) A cluster may also run several ingress controllers, in which case your Ingress must use the ingress.class annotation to say which IC it wants; if you do not define a class, your cloud provider may use a default IC. Note that newer versions of Kubernetes have replaced the annotation with the ingressClassName field.

Although an admin can deploy an IC that only serves a given namespace, by default a controller watches the entire Kubernetes cluster for unsatisfied Ingresses.
 
wxy: ingress.class is covered in detail in a later part; since there is only one ingress controller here, ignore it for now.
In my hands-on experience, the Ingress and the ingress controller work together like this:
First, the Ingress is created, with its http rule (the paths) and tls rule (the certificate) ready.
Then an ingress controller of some type, e.g. nginx, is created; assume it is configured to cover Ingresses across the whole cluster (the default configuration from the nginx site does exactly that).
The ingress controller then watches Ingresses across the whole cluster, reads each Ingress's information into its own proxy rules, and updates the relevant fields of the Ingress.
The detailed experiment goes as follows:
 
0. The official nginx controller describes itself like this:
ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
Nginx is configured to automatically discover all ingress with the kubernetes.io/ingress.class: "nginx" annotation.
Please note that the ingress resource should be placed inside the same namespace of the backend resource.

Interpretation: the IC automatically discovers every Ingress annotated with "nginx"; the docs also ask that the Ingress resource be placed in the same namespace as the backend resource (the Service).

wxy: the IC and the Ingress need not share a namespace; an IC is generally cluster-wide.....

 

1. Create the Ingress Controller and its related resources

    #kubectl apply -f ./mandatory.yaml

The main contents of mandatory.yaml:

[root@node213 wxy]# cat mandatory.yaml 
---
#0. First create a dedicated namespace
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
#1. Create three ConfigMaps for the controller (nginx) to consume; by default all three are empty
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  ...
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  ...


---
#2. Create a ServiceAccount for the controller and grant it permissions; for the Ingress resource it gets watch, and update on status
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx


apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
    ...
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch   # can watch Ingresses across the entire cluster
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs: 
      - update  # and can update an Ingress's status

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx



apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding

roleRef:
  name: nginx-ingress-role
subjects:
    name: nginx-ingress-serviceaccount



apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding

roleRef:
  name: nginx-ingress-clusterrole
subjects:
    namespace: ingress-nginx

---
#3. Create the ingress controller itself (as a Deployment); the Deployment creates the corresponding pod instances
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ...
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
        ...
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443


---

 

2. Create the ingress controller's Service

   #kubectl apply -f ./service-nodeport.yaml

The IC is itself just a pod, so as the front-door proxy it also needs to expose itself; and precisely because it is the front proxy, its Service is usually of type NodePort or something with a wider reach.
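The service-nodeport.yaml applied above is roughly of this shape (a sketch based on the standard ingress-nginx manifests; the labels/selectors in your copy may differ):

```yaml
# Sketch of a NodePort Service exposing the nginx ingress controller pods.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort              # reachable on every node's IP at an allocated nodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:                   # must match the controller Deployment's pod labels
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```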

A side note: since this is an internal network, the needed files and images have to be downloaded in advance; being behind the firewall, I fetched the files from GitHub:

1) Deployment files

https://github.com/nginxinc/kubernetes-ingress

 

2) Image used

quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1

 

 

3. The Ingress gets implemented

The ingress controller automatically finds the Ingresses that need to be satisfied, reads their content into its own rules, and updates the Ingress's information.

1) The ingress controller's Service:

# kubectl get svc -ningress-nginx

NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE

ingress-nginx   NodePort   10.106.118.8   <none>        80:56901/TCP,443:25064/TCP   26h

 

2) The ingress controller's logs as it watches the Ingress:

status.go:287] updating Ingress wxy-test/rancher status from [] to [{10.106.118.8 }]
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"my-cattle-system", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"937884", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress my-cattle-system/rancher
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"my-cattle-system", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"937885", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress my-cattle-system/rancher

 

3) The Ingress's information gets refreshed:

# kubectl get ingress -nmy-cattle-system -oyaml
annotations:
  # added once the Ingress is implemented:
  field.cattle.io/publicEndpoints: '[{"addresses":["10.100.126.179"],"port":443,"protocol":"HTTPS","serviceName":"wxy-test:rancher","ingressName":"wxy-test:rancher","hostname":"rancher.test.org","allNodes":false}]'
  # annotations present since the Ingress was created; is it because their prefix matches
  # the nginx ingress controller's that nginx "claimed" this Ingress?
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
...
status:
  loadBalancer:
    ingress:            # newly added: the address of the ingress controller's Service
    - ip: 10.106.118.8

 

 

4. A closer look at nginx's forwarding rules

 

Enter the nginx instance and see how the Ingress's forwarding rule and certificate are applied to nginx:

# kubectl exec -ti -ningress-nginx nginx-ingress-controller-744b8ccf4c-mdnws /bin/sh
$ cat ./nginx.conf
...
## start server rancher.my.test.org
        server {
                 server_name rancher.my.test.org ;
                 listen 80  ;
                 listen [::]:80  ;
                 listen 443  ssl http2 ;
                 listen [::]:443  ssl http2 ;
                 set $proxy_upstream_name "-";
                 ssl_certificate_by_lua_block {
                         certificate.call()
                 }
             location / {
                    
                     set $namespace      "my-cattle-system ";
                     set $ingress_name   "rancher";
                     set $service_name   "rancher";
                     set $service_port   "80";
                     set $location_path  "/";
...

 

 

Part Four: Accessing the application through the Ingress


 

Verification method 1: hit the API with curl

# IC_HTTPS_PORT=25064   ---the nodePort exposed by the nginx controller's Service

# IC_IP=192.168.48.213  ---the nginx controller's service IP; since this is a NodePort service, any node's IP will do

# curl --resolve rancher.test.org:$IC_HTTPS_PORT:$IC_IP https://rancher.test.org:$IC_HTTPS_PORT --insecure

(Explanation:

--resolve HOST:PORT:ADDRESS forces rancher.test.org:25064 to resolve to 192.168.48.213,

so accessing https://rancher.test.org:25064 becomes accessing the nginx controller's service; nginx can then tell from the Host carried in the request which backend is actually being addressed.

)

Result:

{"type":"collection","links":{"self":"https://rancher.test.org:25064/"},"actions":{},"pagination":{"limit":1000,"total":4},"sort":{"order":"asc","reverse":"https://rancher.test.org:25064/?order=desc"},"resourceType":"apiRoot","data":[{"apiVersion":{"group":"meta.cattle.io","path":"/meta","version":"v1"},"baseType":"apiRoot","links":{"apiRoots":"https://rancher.test.org:25064/meta/apiroots","root":"h

...finally, success.

 

Alternatively, on the client machine:

# vi /etc/hosts

192.168.48.213 rancher.test.org   ---add this line

after which you can access it with

# curl https://rancher.test.org:25064 -k

 

Verification method 2: from a browser

1) Since this custom domain is unknown to public DNS, add the resolution directly on the client machine, i.e. append to C:\Windows\System32\drivers\etc\hosts:

192.168.48.214     rancher.test.org          # source server

2) Then open in the browser:

  https://rancher.test.org:25064

 

Note:

Be sure to access it via the domain name; otherwise:

# curl https://192.168.48.213:42060 -k
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>

That is, this form of access goes straight to nginx by IP, and nginx then has no way to know which backend you actually want.

 

Part Five: More on Ingress Class


0. What the official docs say

1) Before Kubernetes 1.18: the ingress.class annotation (information from the nginx docs)

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE, 
you need to specify the annotation kubernetes.io/ingress.class: "nginx" in all ingresses that you would like the ingress-nginx controller to claim.

Interpretation: this annotation is aimed at nginx; an nginx-type ingress controller watches for the Ingresses that "want me" and takes them under its management.

 

2) From Kubernetes 1.18 on: the IngressClass object and the ingressClassName field

An Ingress can be implemented by different controllers, and different controllers take different configuration. How is that done? Through the IngressClass resource. Concretely:
In an Ingress, you can specify a class, i.e. the IngressClass resource this Ingress corresponds to.
The IngressClass carries two parameters:
1) controller: which ingress controller this class maps to; officially, "the name of the controller that should implement the class".
2) parameters (optional): a TypedLocalObjectReference, which "is a link to a custom resource containing additional configuration for the controller".
   In other words, it is for the controller's use: if the controller needs extra configuration, these three elements locate the object holding it.
Example:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb                            # the class's name
spec:
  controller: example.com/ingress-controller   # requires the cluster to use this ingress controller
  parameters:                                  # when using this controller, also reference a CRD called
    apiGroup: k8s.example.com/v1alpha          # IngressParameters; this instance is named external-lb
    kind: IngressParameters
    name: external-lb

 

1. Manually add the ingress.class annotation to an Ingress and watch the nginx ingress controller react

With no class explicitly created, setting
kubernetes.io/ingress.class: "nginx-1" on the Ingress removes it from the default IC;
changing it back to kubernetes.io/ingress.class: "nginx" adds it back again. The detailed logs:

# kubectl logs -f -ningress-nginx nginx-ingress-controller-744b8ccf4c-8wnkn 
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       0.26.1
  Build:         git-2de5a893a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: openresty/1.15.8.2
-------------------------------------------------------------------------------
0. Preparation: load some configuration, initialize the URL I am reached at (https port defaults to 443), and also create a fake certificate (what for?)
flags.go:243] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
main.go:182] Creating API client for https://10.96.0.1:443
main.go:226] Running in Kubernetes cluster version v1.12 (v1.12.1) - git (clean) commit 4ed3216f3ec431b140b1d899130a69fc671678f4 - platform linux/amd64
main.go:101] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
main.go:105] Using deprecated "k8s.io/api/extensions/v1beta1" package because Kubernetes version is < v1.14.0

1. Start the nginx controller. A controller, as the name implies, oversees everything: it keeps watching resources, generates events, and handles them.
  Here it watches and finds an ingress named rancher in the cattle-system-my namespace:
nginx.go:263] Starting NGINX Ingress controller
event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"84608cf7-8924-11ea-a935-286ed488c73f", APIVersion:"v1", ResourceVersion:"4772030", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"84530b64-8924-11ea-a935-286ed488c73f", APIVersion:"v1", ResourceVersion:"4772026", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"844a2d10-8924-11ea-a935-286ed488c73f", APIVersion:"v1", ResourceVersion:"4851485", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"cattle-system-my", Name:"rancher", UID:"4acafd23-89f5-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5065141", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress cattle-system-my/rancher

2. While processing the rancher Ingress, its "tls" field says the certificate lives in the secret "cattle-system-my/tls-rancher-ingress",
  but that secret cannot be found, so it reports an error.
  (This stage also includes leader election among multiple pod instances, which seems unrelated to the certificate.)
backend_ssl.go:46] Error obtaining X.509 certificate: key 'tls.crt' missing from Secret "cattle-system-my/tls-rancher-ingress"
nginx.go:307] Starting NGINX process
leaderelection.go:241] attempting to acquire leader lease  ingress-nginx/ingress-controller-leader-nginx...
controller.go:1125] Error getting SSL certificate "cattle-system-my/tls-rancher-ingress": local SSL certificate cattle-system-my/tls-rancher-ingress was not found. Using default certificate
controller.go:134] Configuration changes detected, backend reload required.
status.go:86] new leader elected: nginx-ingress-controller-744b8ccf4c-mdnws
controller.go:150] Backend successfully reloaded.
controller.go:159] Initial sync, sleeping for 1 second.
controller.go:1125] Error getting SSL certificate "cattle-system-my/tls-rancher-ingress": local SSL certificate cattle-system-my/tls-rancher-ingress was not found. Using default certificate
...

3. Action 1: I grabbed some certificate material in this namespace and created the corresponding secret by hand:
  -----
  kubectl create secret tls wxy-test --key tls.key --cert tls.crt
  -----
  But after nginx parsed it, the CN (domain) in the certificate did not match the domain in the Ingress's "rule", so it still errors
  and falls back to nginx's default certificate:
store.go:446] secret cattle-system-my/tls-rancher-ingress was updated and it is used in ingress annotations. Parsing...
backend_ssl.go:66] Adding Secret "cattle-system-my/tls-rancher-ingress" to the local store
controller.go:1131] Unexpected error validating SSL certificate "cattle-system-my/tls-rancher-ingress" for server "rancher.my.test.org": x509: certificate is valid for rancher.test.org, not rancher.my.test.org
controller.go:1132] Validating certificate against DNS names. This will be deprecated in a future version.
controller.go:1137] SSL certificate "cattle-system-my/tls-rancher-ingress" does not contain a Common Name or Subject Alternative Name for server "rancher.my.test.org": x509: certificate is valid for rancher.test.org, not rancher.my.test.org
controller.go:1139] Using default certificate


4. Action 2: regenerate the certificate material and update the secret:
  -----
  openssl req -x509 -nodes -days 2920 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=rancher.my.test.org/O=nginxsvc"
  kubectl create secret tls wxy-test --key tls.key --cert tls.crt
  -----
  This time the certificate loads correctly:
store.go:446] secret cattle-system-my/tls-rancher-ingress was updated and it is used in ingress annotations. Parsing...
backend_ssl.go:58] Updating Secret "cattle-system-my/tls-rancher-ingress" in the local store

5. Action 3: add another Ingress; the controller watches this Ingress in the default namespace, reads it in and parses it:
  -----
  # kubectl apply -f ./test_ingres  --validate=false
  -----
  1) It first looks for the Service named in the configured "rule", but there is no service called test in the default namespace, so it errors.
  2) It appears to fall back to a Service in the controller's own namespace as a default, since 10.106.118.8 is the IC's Service:
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-ingress", UID:"1fc6a4ce-8ea8-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"6913281", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/test-ingress
controller.go:811] Error obtaining Endpoints for Service "default/test": no object matching key "default/test" in local store
controller.go:134] Configuration changes detected, backend reload required.
controller.go:150] Backend successfully reloaded
status.go:287] updating Ingress default/test-ingress status from [] to [{10.106.118.8 }]
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-ingress", UID:"1fc6a4ce-8ea8-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"6913380", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/test-ingress
controller.go:811] Error obtaining Endpoints for Service "default/test": no object matching key "default/test" in local store



6. Action 4: add a class annotation to an Ingress, stating that the IC implementing it is decided by the class nginx-1.
   Apparently ICs belong to the class nginx by default (the default deployment creates no corresponding IngressClass resource),
   so nginx removes this Ingress's rules from its own configuration;
   setting it back to nginx adds them back again:
   -----
    # kubectl edit ingress rancher -ncattle-system-my
           kubernetes.io/ingress.class: "nginx-1"
           kubernetes.io/ingress.class: "nginx"
   -----
store.go:381] removing ingress rancher based on annotation kubernetes.io/ingress.class
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"cattle-system-my", Name:"rancher", UID:"4acafd23-89f5-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5065141", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress cattle-system-my/rancher
controller.go:134] Configuration changes detected, backend reload required.
controller.go:150] Backend successfully reloaded.


store.go:378] creating ingress rancher based on annotation kubernetes.io/ingress.class
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"cattle-system-my", Name:"rancher", UID:"4acafd23-89f5-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"6963589", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress cattle-system-my/rancher
controller.go:134] Configuration changes detected, backend reload required.
controller.go:150] Backend successfully reloaded.

 

2. On the nginx.ingress.kubernetes.io/ingress.class annotation
If it is set to
nginx.ingress.kubernetes.io/ingress.class:          nginx-1

then the Ingress is not removed, and the IC side logs the following:
I0628 02:06:58.235693       9 event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"wxy-test", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1058054", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress wxy-test/rancher 
 
This shows that the nginx controller recognizes any annotation with an nginx prefix as within its own domain, so whatever the value is set to, the nginx controller still watches the Ingress.

wxy's musings:
This works just like the class in pv/pvc; every kind of class seems to play the same role: "matchmaker".
An Ingress needs a controller to implement the rules it defines, so it lets the class announce: I, the Ingress, need such-and-such a controller; if you are that controller, come straight to my bowl!

Appendix:
typedLocalObjectReference:
contains enough information to let you locate the typed referenced object inside the same namespace.
Interpretation: it provides enough information to locate the referenced object of the given type within the same namespace.

 

Part Six: Ingress annotations for more powerful behavior


1. nginx.ingress.kubernetes.io/ssl-redirect

Scenario: you hit a 308 or 301 redirect error.

First, curl http://stickyingress.example.com:28217 works.

Then, the ingress is modified to add a tls section, and the corresponding https URL is accessed too.

Finally, accessing the http address again fails:

HTTP/1.1 308 Permanent Redirect
Server: openresty/1.15.8.2
Date: Sun, 28 Jun 2020 10:44:30 GMT
Content-Type: text/html
Content-Length: 177
Connection: keep-alive
Location: https://stickyingress.example.com/   ---that is, it demands you use the https address instead

The fix:

nginx.ingress.kubernetes.io/ssl-redirect: "false"
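For clarity, a sketch of where the annotation lives (on the Ingress's own metadata):

```yaml
metadata:
  annotations:
    # keep serving plain HTTP even though a tls section is configured
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
```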

 

2. Session and Cookie

nginx.ingress.kubernetes.io/affinity

Currently the only supported value is cookie.
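A sketch of the cookie-affinity annotations (annotation names as in the ingress-nginx docs; the cookie name "route" is an arbitrary choice):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"      # cookie that pins a client to one backend pod
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"  # two days, in seconds
```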

 

 

 

 

Part Seven: Summary


 

1. On Service types when using an Ingress

With an Ingress, the Service is generally of type ClusterIP, because as the official docs put it, if you don't want to use the Ingress:

You can expose a Service in multiple ways that don't directly involve the Ingress resource:

Use Service.Type=LoadBalancer
Use Service.Type=NodePort

That makes sense: if the Service already exposes itself externally, what would you still need an Ingress for? Of course, as we all know those two Service types are "supersets" of ClusterIP, so an Ingress can still be used with them; it is just unnecessary.

Also, don't fall into the trap of thinking a bare Service cannot do HTTPS; it can. If the application supports HTTPS, it will certainly arrange the certificates TLS needs by itself.

wxy: I suspect another reason to use an Ingress is to avoid exposing the application's own certificate, since it is often signed by a private CA and may not be trusted out in the wider world.....

2. How the pieces fit together

First, an Ingress cannot receive requests by itself; it is just a bundle of rules. To actually receive requests you must create an ingress-controller; here we chose nginx.

Then, for the nginx component to serve traffic, it needs a Service of its own to expose it, so you also deploy a Service for the ingress-controller, of at least type NodePort.

Finally, nginx has thought all of this through for us: just download the corresponding manifests and tweak the parameters as needed.

 
 
 
  
wxy's musings:
Why do we need nginx to do our forwarding; wouldn't a Service do?
Answer: this is the age-old question: a Service forwards at layer 4, so it cannot help if the application layer wants HTTPS. So we let nginx face the outside world as the HTTPS server while it talks plain HTTP to the internal application Service; combine that with the official definition of Ingress quoted earlier and it clicks, even though the actual work is done by the ingress controller.

