Official Ingress documentation: http://docs.kubernetes.org.cn/ and https://feisky.gitbooks.io/kubernetes/content/plugins/ingress.html
What is an Ingress?
Normally, Service and Pod IPs are only reachable from inside the cluster. A request from outside the cluster must be forwarded by a load balancer to a NodePort that the Service exposes on a Node, and kube-proxy then forwards it to the relevant Pods.
An Ingress is a collection of routing rules for requests entering the cluster, as the diagram below shows:
```
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
```
An Ingress can give a Service an externally reachable URL, load balancing, SSL termination, HTTP routing, and more. For these Ingress rules to take effect, the cluster administrator must deploy an Ingress controller, which watches Ingress and Service objects and configures the load balancer and access entry point according to the rules.
New-style syntax (networking.k8s.io/v1)

```yaml
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-ssl
  namespace: default
spec:
  tls:
  - hosts:
    - csk8s.mingcloud.net
    secretName: zs-tls
  rules:
  - host: csk8s.mingcloud.net
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: springboot-ssl
            port:
              number: 80
```
Ingress format
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```
Every Ingress must define rules; currently Kubernetes only supports HTTP rules. The example above forwards requests for /testpath to port 80 of the service test.
Depending on how the Ingress spec is configured, Ingresses fall into the following types:
Note: a single service can also be exposed externally by setting Service.Type=NodePort or Service.Type=LoadBalancer.
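As a point of comparison, exposing a single service without an Ingress can look like the following NodePort Service (a minimal sketch; the service name, selector, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test            # hypothetical service name
spec:
  type: NodePort        # exposes the service on a port of every node
  selector:
    app: test
  ports:
  - port: 80            # cluster-internal port
    targetPort: 8080    # container port
    nodePort: 30080     # externally reachable node port (30000-32767 by default)
```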
Ingress routing to multiple services
An Ingress that routes to multiple services forwards requests to different backend services based on the request path, for example:
```
foo.bar.com -> 178.91.123.132 -> / foo    s1:80
                                 / bar    s2:80
```
This can be defined with the following Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
```
After creating the Ingress with kubectl create -f:
```
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -
          foo.bar.com
          /foo          s1:80
          /bar          s2:80
```
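On Kubernetes 1.19+ the same fanout is written with the networking.k8s.io/v1 schema; a sketch using the same s1/s2 services:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix      # v1 requires an explicit pathType
        backend:
          service:
            name: s1
            port:
              number: 80
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: s2
            port:
              number: 80
```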
Virtual-host Ingress
A virtual-host Ingress routes to different backend services based on host name, while all hosts share the same IP address, as shown below:
```
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
```
Below is an Ingress that routes requests based on the Host header:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
```
Note: a backend service with no rules defined is called the default backend; it is a convenient place to serve 404 pages.
TLS Ingress
A TLS Ingress performs TLS termination using a private key and certificate (named tls.key and tls.crt) obtained from a Secret. If the TLS section of the Ingress specifies different hosts, they are multiplexed on the same port by host name via the SNI TLS extension (provided the Ingress controller supports SNI).
Define a Secret containing tls.crt and tls.key:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
```
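The data values above must be the base64-encoded file contents. A minimal sketch with a dummy value standing in for a real certificate file:

```shell
# Secret data fields expect base64; in practice: base64 -w0 tls.crt
printf 'dummy' | base64
```

(`kubectl create secret tls`, shown later, does this encoding for you.)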
Reference the Secret in the Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```
Updating an Ingress
An Ingress can be updated with kubectl edit ing <name>:
```
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
$ kubectl edit ing test
```
This opens an editor with the existing Ingress spec in yaml form. Modifying and saving it pushes the update to the Kubernetes API server, which in turn triggers the Ingress controller to reconfigure the load balancer:
```yaml
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
        path: /foo
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
        path: /foo
..
```
After the update:
```
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
          bar.baz.com
          /foo          s2:80
```
Alternatively, update with kubectl replace -f new-ingress.yaml, where new-ingress.yaml is the modified Ingress yaml.
New-version syntax

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-monitoring-service
  namespace: monitorin
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: prometheus.msinikube.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prom-prometheus-operator-prometheus
            port:
              number: 9090
  - host: alertmanager.csminikube.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prom-prometheus-operator-alertmanager
            port:
              number: 9093
  - host: grafana.csminikube.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prom-grafana
            port:
              number: 80
```
```shell
kubectl create secret tls zs-tls --key SSL.key --cert FullSSL.crt
# or with the namespace given explicitly:
kubectl create secret tls zs-tls --key SSL.key --cert FullSSL.crt -n default
```

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-demo
  namespace: dev
spec:
  rules:
  - host: web-dev.mooc.com
    http:
      paths:
      - backend:
          serviceName: web-demo
          servicePort: 80
        path: /
  tls:
  - hosts:
    - web-dev.mooc.com
    secretName: mooc-tls
```
Installing ingress-nginx
Installation documentation: https://kubernetes.github.io/ingress-nginx/deploy/

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        # kubernetes.io/os: linux
        app: ingress
      containers:
        - name: nginx-ingress-controller
          image: 172.17.166.172/kubenetes/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --default-ssl-certificate=default/zs-tls
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container
```
Notes:
On Kubernetes 1.20+ the API version must be changed (you can batch-replace in your editor with s/old-value/new-value/g).
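A sketch of that batch replacement with sed (the file name /tmp/old-ingress.yaml is hypothetical):

```shell
# write a minimal old-style manifest, then migrate its apiVersion in place
cat > /tmp/old-ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
EOF
sed -i 's#extensions/v1beta1#networking.k8s.io/v1#g' /tmp/old-ingress.yaml
head -n1 /tmp/old-ingress.yaml
```

Note that besides apiVersion, the v1 backend schema also differs (serviceName/servicePort become service.name/service.port.number), so a sed replace alone is not always sufficient.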
Pull the controller image, push it to your own registry, and update the image address in the manifest.
Set replicas to however many instances you need for high availability.
Change the controller's network mode to hostNetwork (the default exposure is NodePort) and change the scheduling policy to target specific nodes.
Label the chosen nodes, then deploy the controller:
```shell
kubectl label node nodename app=ingress
```
A deeper look at ingress-nginx
- 1. Deployment
- 2. Layer-4 proxying
- 3. Custom configuration
- 4. HTTPS
- 5. Session affinity
- 6. Traffic control
1. Converting the Deployment to a DaemonSet
Export the Deployment's yaml:
```shell
kubectl get deploy -n ingress-nginx nginx-ingress-controller -o yaml > nginx-ingress-controller.yaml
```
Edit the file:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
  template:
    metadata:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      containers:
        - name: nginx-ingress-controller
          image: 172.17.166.172/kubenetes/nginx-ingress-controller:0.30.0
          imagePullPolicy: IfNotPresent
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --default-ssl-certificate=default/zs-tls
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          ports:
            - containerPort: 80
              hostPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              hostPort: 443
              name: https
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 101
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        app: ingress
      restartPolicy: Always
      serviceAccountName: nginx-ingress-serviceaccount
      terminationGracePeriodSeconds: 300
```
# Fields in the exported Deployment that a DaemonSet does not support (replicas, strategy) must be deleted, along with the server-generated residue (status, managedFields, resourceVersion, uid, creationTimestamp, and the last-applied-configuration / revision annotations).
Verify the installation:
```shell
kubectl describe ingress --all-namespaces
kubectl get daemonsets.apps -n ingress-nginx nginx-ingress-controller
kubectl get pods -n ingress-nginx -l app=ingress
```
To scale out, just label another node; the DaemonSet will schedule a controller pod there automatically.
```shell
kubectl label node node-2 app=ingress
# to take ingress off a node, just remove the label:
kubectl label node nodename app-
```
2. Layer-4 proxying (service discovery)
List the configmaps in the ingress-nginx namespace:
```shell
kubectl get cm -n ingress-nginx
```
Export the tcp configmap:
```shell
kubectl get cm -n ingress-nginx tcp-services -o yaml > tcp-service.yaml
```
Edit the file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pr-services
  namespace: monitorin
data:
  "30000": monitorin/prometheus-operator-prometheus
```
## each data entry maps an exposed port to the service in some namespace that TCP traffic should be forwarded to
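For reference, the upstream tcp-services ConfigMap format maps the exposed port to "<namespace>/<service>:<service-port>"; a sketch with a hypothetical example-go service:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services       # the name the controller's --tcp-services-configmap flag points at
  namespace: ingress-nginx
data:
  # expose port 9000 and forward TCP traffic to port 8080 of example-go in default
  "9000": "default/example-go:8080"
```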
3. Custom configuration
Exec into the controller container and inspect the nginx.conf file:
```shell
kubectl exec -it -n ingress-nginx nginx-ingress-controller-697b7b8655-4zkj7 -- /bin/bash
```
## Newer versions use a Lua module so nginx does not need frequent reloads; the Lua scripts and directives can pass parameters into the conf file dynamically.
- Create a config ConfigMap to change the default configuration:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "64m"
  proxy-read-timeout: "180"
  proxy-send-timeout: "180"
```
- Define and add some custom headers:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Different-Name: "true"
  X-Request-Start: t=${msec}
  X-Using-Nginx-Controller: "true"
```
# nginx-configuration references ingress-nginx/custom-headers; new headers are added under the latter.
Adding headers only for a specific host:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";
  name: web-demo
  namespace: dev
spec:
  rules:
  - host: web-dev.mooc.com
    http:
      paths:
      - backend:
          serviceName: web-demo
          servicePort: 80
        path: /
```
- Custom template file
# mount the template file into the container
- Create the ConfigMap
Copy the config template out of the container:
```shell
kubectl exec -n ingress-nginx nginx-ingress-controller-697b7b8655-zcpxq -- tar cf - template/nginx.tmpl | tar xf - -C .
# or copy the file out with kubectl cp:
kubectl cp ingress-nginx/nginx-ingress-controller-697b7b8655-4zkj7:template/nginx.tmpl nginx.tmpl
```
Copy the file back in:
```shell
kubectl cp nginx.tmpl ingress-nginx/nginx-ingress-controller-697b7b8655-4zkj7:template/
```
### kubectl cp works over a tar pipe and uses relative paths.
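A local sketch of the tar pipe that kubectl cp performs under the hood (the /tmp paths are hypothetical):

```shell
# simulate copying a file "out of a container": pack on one side, unpack on the other
mkdir -p /tmp/src /tmp/dst
printf 'template' > /tmp/src/nginx.tmpl
(cd /tmp/src && tar cf - nginx.tmpl) | tar xf - -C /tmp/dst
cat /tmp/dst/nginx.tmpl
```

Because the archive stores the relative path nginx.tmpl, the file lands wherever tar's -C flag (or the current directory) points.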
Create the ConfigMap:
```shell
kubectl create cm nginx-template --from-file nginx.tmpl
```
Delete the previous one:
```shell
kubectl delete cm nginx-template
```
Edit nginx-template:
```shell
kubectl edit cm -n ingress-nginx nginx-template
```
4. nginx TLS
Create the secret:
```shell
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mooc.key -out mooc.crt -subj "/CN=*.mooc.com/O=*.mooc.com"
kubectl create secret tls mooc-tls --key mooc.key --cert mooc.crt
```
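Before creating the secret, the generated certificate's subject can be sanity-checked (a sketch writing throwaway files to /tmp):

```shell
# generate a throwaway self-signed cert, then print its subject
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/mooc.key -out /tmp/mooc.crt \
  -subj "/CN=*.mooc.com/O=*.mooc.com" 2>/dev/null
openssl x509 -in /tmp/mooc.crt -noout -subject
```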
Edit the controller manifest:
# add the certificate secret's namespace and name (the --default-ssl-certificate argument)
Ingress yaml with TLS enabled:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-demo
  namespace: dev
spec:
  rules:
  - host: web-dev.mooc.com
    http:
      paths:
      - backend:
          serviceName: web-demo
          servicePort: 80
        path: /
  tls:
  - hosts:
    - web-dev.mooc.com
    secretName: mooc-tls
```
5. Session affinity

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie            # cookie-based session affinity
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1   # hash algorithm: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: route  # session cookie name
  name: springboot-ssl
  namespace: default
spec:
  rules:
  - host: csk8s.mingcloud.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: springboot-ssl
            port:
              number: 80
```
6. Traffic control
The Ingresses must point at the same host; ingress-nginx then directs that host's traffic to the two services.
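The canary annotations below only take effect alongside a regular (non-canary) Ingress for the same host. A sketch of that primary Ingress, assuming a stable web-canary-a service exists:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary-a       # primary ingress: no canary annotations
  namespace: canary
spec:
  rules:
  - host: canary.mooc.com  # same host as the canary ingress
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-canary-a   # hypothetical stable service
            port:
              number: 80
```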
- Weight

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "90"
spec:
  rules:
  - host: canary.mooc.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-canary-b
            port:
              number: 80
```
- Cookie-based traffic steering

```yaml
# ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "web-canary"
spec:
  rules:
  - host: canary.mooc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-canary-b
          servicePort: 80
```
Access with the cookie set: requests carrying the cookie web-canary with value "always" are routed to the canary, "never" routes to the primary.
- Header-based traffic steering

```yaml
# ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "web-canary"
spec:
  rules:
  - host: canary.mooc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-canary-b
          servicePort: 80
```
Access with the custom header set: sending web-canary: always routes the request to the canary.
- Combining the rules

```yaml
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "web-canary"
    nginx.ingress.kubernetes.io/canary-by-cookie: "web-canary"
    nginx.ingress.kubernetes.io/canary-weight: "90"
spec:
  rules:
  - host: canary.mooc.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: web-canary-b
            port:
              number: 80
```
Precedence when combined: the header is evaluated first, then the cookie, and finally the weight.
### Multiple services can be pointed at the same host by defining multiple Ingresses:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cs-c
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: cs.igs.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-c
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-monitoring-service
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: cs.igs.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-b
            port:
              number: 80
```
### The weighting and session-affinity setups above all work the same way: define multiple Ingress manifests with different names that target the same host.