A Deep Dive into Ingress


Ingress was created to make up for the shortcomings of NodePort.

Some shortcomings of NodePort:

• A port can only be used by one Service, so ports must be planned in advance

• Only Layer 4 load balancing is supported

nginx dynamically detects changes in Pod IPs, updates its upstream configuration to match, and load-balances across the Pods.

The Ingress controller keeps refreshing the list of Pod IPs and writes it into the nginx configuration.
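For illustration, the Pod IP list the controller consumes is simply the Service's Endpoints object; assuming a Service named web already exists, it can be inspected like this (output is illustrative):

kubectl get endpoints web
# NAME   ENDPOINTS                                             AGE
# web    10.244.1.10:8080,10.244.2.11:8080,10.244.2.12:8080    5m
# each Pod IP:port pair above becomes one backend behind the controller's nginx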

 

The relationship between Pods and Ingress

They are associated through a Service.

The Ingress Controller load-balances traffic to the Pods, supporting Layer 4 (TCP/UDP) and Layer 7 (HTTP).


 

Ingress Controller


 

1. Deploying the Ingress Controller

 

Nginx: the officially maintained Ingress Controller

Deployment documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

Notes:

• Change the image address to a domestic (China) mirror: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1

• Use the host network: hostNetwork: true

 

 

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

mandatory.yaml contains three ConfigMaps in total: one for Layer 7 configuration and two for Layer 4, corresponding to TCP and UDP respectively.

The controller dynamically fetches Endpoints from the apiserver. Accessing the apiserver requires authorization, so a ServiceAccount must be created; since it only needs to read the IP list, read-only permissions (get/list/watch) are sufficient.
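As a sanity check (run with cluster-admin rights after applying the RBAC objects from mandatory.yaml shown further below), kubectl can impersonate the ServiceAccount to confirm it only has read access:

kubectl auth can-i list endpoints --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
# expected: yes
kubectl auth can-i delete pods --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
# expected: no, since only read permissions are granted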


Images hosted abroad often cannot be pulled reliably.

So replace the default image address in the manifest, for example with registry.cn-hangzhou.aliyuncs.com/benjamin-learn/nginx-ingress-controller:0.20.0.


The hostNetwork field sits at the same level as the containers field:


hostNetwork: true
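A minimal sketch of where this lands inside the Deployment's Pod template (only the relevant lines are shown; the full manifest follows below):

spec:
  template:
    spec:
      hostNetwork: true        # same indentation level as serviceAccountName and containers
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1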

 

After creating the ingress-controller, the Pod fails to start.

Error message 1:

Error generating self-signed certificate: could not create temp pem file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem797363033: permission denied

Solution:

Cause: as versions advance, security restrictions get tighter and permission management more fine-grained.

On each node, run chmod -R 777 /var/lib/docker, which grants any user full permissions on Docker's temporary files.

 

 

The complete ingress-controller YAML

mandatory.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---

Create ingress-service.yaml

Because the default NodePort range is 30000-32767, exposing ports 80 and 443 via NodePort requires modifying the kube-apiserver configuration and restarting the kube-apiserver service.
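Concretely, this means widening the --service-node-port-range flag on kube-apiserver; a sketch assuming a kubeadm-style static Pod (the exact file or systemd unit depends on how the cluster was installed):

# e.g. in /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm) or the kube-apiserver options file
- --service-node-port-range=1-65535
# kubeadm static pods restart automatically when the manifest changes; otherwise restart kube-apiserver manually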


apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 80    # HTTP requests are exposed on port 80
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 443   # HTTPS requests are exposed on port 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---

All resources in the ingress-nginx namespace are ready.


Check whether ports 443 and 80 are listening normally on the node where the Pod runs.
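For example, on that node (either command works; availability varies by distro):

ss -lntp | grep nginx
# or: netstat -lntp | grep -E ':80 |:443 '
# with hostNetwork: true, the controller's nginx listens directly on the node's ports 80 and 443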


As expected, ports 80 and 443 are listening.


2. Creating Ingress Rules

Creating an Ingress rule effectively means configuring an nginx server block: the rules defined in YAML are automatically rendered into the nginx configuration inside the controller Pod.

 

Create a Secret for the private registry, used to pull my private Aliyun image:

 

kubectl create secret docker-registry myregistry --docker-server=registry.cn-hangzhou.aliyuncs.com --docker-username=benjamin7788 --docker-password=a7260488


web-deployment.yaml creates a Deployment with 3 replicas of the Java image, pulled from my private Aliyun registry.

Add imagePullSecrets at the same level as containers:

imagePullSecrets:
  - name: myregistry

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/benjamin-learn/java-demo:lst
          imagePullPolicy: Always
          name: java
          resources: {}
      restartPolicy: Always

web-service.yaml exposes this Deployment as a NodePort Service.

 

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: web
  type: NodePort
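Typical usage, assuming both files are in the current directory:

kubectl apply -f web-deployment.yaml -f web-service.yaml
kubectl get pods,svc -l app=web    # wait until all 3 Pods are Running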


 

ingress.yaml

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80

# Production environment

The example.ctnrs.com domain would normally be resolved by a DNS provider, with an A record pointing to the public IP of the node running the ingress-controller.

# Test environment

Bind the domain in the local hosts file to simulate access.

On a Windows machine, press Win+R, type drivers, open the etc directory, and add a record to the hosts file:

192.168.31.65 example.ctnrs.com 
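If editing the hosts file is inconvenient, the same rule can be exercised by sending the Host header directly to the node running the controller (IP taken from this example):

curl -H 'Host: example.ctnrs.com' http://192.168.31.65/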


View the rules for the domain:


After the Ingress rule is created, the ingress-controller container logs show that an Ingress rule was successfully created.
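For example (the Pod name is a placeholder; look it up with kubectl get pods -n ingress-nginx):

kubectl -n ingress-nginx logs nginx-ingress-controller-xxxxx | grep example-ingress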


Entering the ingress-controller container, you can see that a server block has been added to nginx.conf automatically.
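One way to see it (same placeholder Pod name as above):

kubectl -n ingress-nginx exec -it nginx-ingress-controller-xxxxx -- grep -A 5 'server_name example.ctnrs.com' /etc/nginx/nginx.conf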


The Java application is accessed successfully.


 

Request path:

HTTP request ---> node:80 ---> (upstream) Pod IP ---> container:8080

 

Detail:

nginx normally needs a reload for configuration changes to take effect, but the nginx inside the ingress controller uses a Lua script to apply upstream (Pod IP) changes in memory, so those updates take effect dynamically without a reload.

 

Administrator --> Ingress YAML --> master (apiserver) ---> Service --- master (apiserver) <--- ingress controller (Pod IPs) ---> nginx.conf (Lua script) ---> upstream (Pod IPs) ---> container

 

3. Routing to Multiple Services Based on URL

www.ctnrs.com/a routes to server group a

www.ctnrs.com/b routes to server group b

ingress-url.yaml

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /a
            backend:
              serviceName: web
              servicePort: 80
    - host: example.ctnrs.com
      http:
        paths:
          - path: /b
            backend:
              serviceName: web2
              servicePort: 80

Create the web2 application:

kubectl create deploy web2 --image=nginx
kubectl expose deploy web2 --port=80 --target-port=80
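A quick check of both paths (assuming the hosts entry from earlier, or the curl Host-header trick):

curl http://example.ctnrs.com/a    # should be answered by the web (java-demo) backend
curl http://example.ctnrs.com/b    # should be answered by the web2 (nginx) backend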

 


 

4. Name-Based Virtual Hosts

ingress-vhost.yaml 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example1.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
    - host: example2.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web2
              servicePort: 80

Bind both names in the local hosts file: 192.168.31.65  example1.ctnrs.com  example2.ctnrs.com

Access example1.ctnrs.com


Access example2.ctnrs.com


5. Configuring HTTPS for Ingress

ingress-https.yaml

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
spec:
  tls:
    - hosts:
        - example.ctnrs.com
      secretName: secret-tls
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80

 

Accessing example.ctnrs.com redirects straight to https://example.ctnrs.com.

After choosing to trust it, the certificate being served is the ingress-controller's built-in default (fake) certificate.


 

Generate a self-signed certificate

cert.sh

 

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > example.ctnrs.com-csr.json <<EOF
{
  "CN": "example.ctnrs.com",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes example.ctnrs.com-csr.json | cfssljson -bare example.ctnrs.com

#kubectl create secret tls example-ctnrs-com --cert=example.ctnrs.com.pem --key=example.ctnrs.com-key.pem

Run the certificate generation script to produce the domain certificate files example.ctnrs.com-key.pem and example.ctnrs.com.pem.

 

Create a TLS-type Secret:

kubectl create secret tls secret-tls --cert=/root/learn/ssl/example.ctnrs.com.pem --key=/root/learn/ssl/example.ctnrs.com-key.pem
secret/secret-tls created

Checking the certificate again, the self-signed domain certificate is now in use.
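One way to verify from the command line (IP is the node from this example):

openssl s_client -connect 192.168.31.65:443 -servername example.ctnrs.com </dev/null 2>/dev/null | openssl x509 -noout -subject
# the subject should now show CN = example.ctnrs.com instead of the controller's fake certificate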


6. Customizing Ingress with Annotations

The ingress-controller's default proxy timeout settings:


ingress-annotations.yaml 

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80

These annotations change the ingress-controller's default proxy timeouts.
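To confirm the rendered nginx config picked these values up (placeholder Pod name again):

kubectl -n ingress-nginx exec -it nginx-ingress-controller-xxxxx -- grep -E 'proxy_(connect|send|read)_timeout|client_max_body_size' /etc/nginx/nginx.conf | sort -u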


 

Official annotation reference: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md

 

Ingress Controller High-Availability Options

Option 1: an active/standby setup using keepalived with a VIP, deploying one ingress controller on every node with a DaemonSet.

Drawback: only one ingress controller serves traffic at a time, so performance becomes a bottleneck as traffic grows.


 

Option 2: a load balancer reverse-proxies multiple ingress-controllers, balancing requests across them; users hit the LB's IP, which forwards to the backend ingress controllers.

The deployment is built from nodeSelector + DaemonSet + LB (see the sketch after the pros and cons below).

Advantage: this makes up for the shortcoming of the first option; multiple ingress controllers serve traffic at the same time, so a much larger request volume can be handled.

Drawback: the LB itself becomes a single point of failure, so it needs its own high availability, which costs extra resources.
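A minimal sketch of the DaemonSet + nodeSelector part of this setup, assuming some nodes have been labeled as dedicated ingress nodes (the label key/value is illustrative; the container spec mirrors the Deployment from mandatory.yaml, with probes and securityContext omitted for brevity):

# label the nodes that should run an ingress controller (hypothetical label):
#   kubectl label node node1 node2 ingress=true
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true
      nodeSelector:
        ingress: "true"            # only schedule onto the labeled ingress nodes
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

The LB in front (nginx, LVS, HAProxy, or a cloud load balancer) would then list port 80/443 of the labeled nodes as its backends.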


Other mainstream controllers:

Traefik: an HTTP reverse proxy and load-balancing tool

Istio: service governance, controlling ingress traffic

 

