Understanding Ingress in Depth


Ingress was created to make up for the shortcomings of NodePort.

Some limitations of NodePort:

• Each port can be used by only one Service, so ports must be planned in advance

• Only layer-4 load balancing is supported
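To make the first limitation concrete: with NodePort, every Service must claim its own node-wide port up front. A minimal sketch (the Service names and ports here are illustrative, not from this tutorial):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-a            # illustrative
spec:
  type: NodePort
  selector:
    app: app-a
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # must be unique across the whole cluster
---
apiVersion: v1
kind: Service
metadata:
  name: app-b            # illustrative
spec:
  type: NodePort
  selector:
    app: app-b
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30081    # a second service needs a second reserved port
```

Ingress avoids this by multiplexing many hosts and paths behind the controller's single 80/443.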

nginx dynamically senses changes in Pod IPs, updates its upstream configuration accordingly, and load-balances across them.

The Ingress controller continuously refreshes the Pod IP list and writes it into the nginx configuration file.

 

The relationship between Pods and Ingress

They are associated through a Service.

The Ingress controller load-balances traffic to the Pods, supporting both layer-4 TCP/UDP and layer-7 HTTP.
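Layer-4 forwarding in the nginx ingress controller is configured through its tcp-services / udp-services ConfigMaps (shown later in mandatory.yaml) rather than Ingress rules. A hypothetical entry exposing a MySQL Service on port 3306 would look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: "<external port>": "<namespace>/<service>:<service port>"
  "3306": default/mysql:3306   # hypothetical Service, for illustration only
```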


 

Ingress Controller


 

1. Deploy the Ingress Controller

 

Nginx: the officially maintained Ingress controller

Deployment docs: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

Notes:

• Change the image address to a domestic (China) mirror: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1

• Use the host network: hostNetwork: true

 

 

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

mandatory.yaml contains three ConfigMaps in total: one for layer-7 configuration (nginx-configuration) and two for layer-4 configuration, one each for TCP and UDP (tcp-services and udp-services).

The controller fetches Endpoints dynamically from the apiserver. Accessing the apiserver requires authorization, so a ServiceAccount must be created; since the controller mainly just reads the IP list, read-only (view) permissions are enough for that part.
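The read-only part of that authorization amounts to RBAC rules like this simplified sketch (the real ClusterRole/Role ship in mandatory.yaml below; the name here is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: endpoints-reader   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]   # read-only: enough to track Pod IPs
```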


Because images hosted abroad often cannot be pulled reliably,

change the default image address in the manifest to registry.cn-hangzhou.aliyuncs.com/benjamin-learn/nginx-ingress-controller:0.20.0


The hostNetwork field goes at the same level as containers:


hostNetwork: true

 

After creating the ingress controller, the Pod fails to start.

Error message 1:

Error generating self-signed certificate: could not create temp pem file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem797363033: permission denied

Solution:

Cause: as versions advance, security restrictions get stricter and permission management more fine-grained.

On each node, run chmod -R 777 /var/lib/docker, granting every user full access to Docker's temporary files (a blunt workaround; far too permissive for anything but a lab).

 

 

The complete ingress-controller YAML

mandatory.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---

Create ingress-service.yaml

The default NodePort range is 30000-32767, so to expose ports 80 and 443 through NodePort you must change the kube-apiserver configuration and restart the kube-apiserver service.
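On a kubeadm-installed cluster, for example, that means editing the kube-apiserver static Pod manifest (the path and range below are an example; a binary install would add the same flag to the service unit instead):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-apiserver
        # widen the NodePort range so 80 and 443 become legal nodePort values
        - --service-node-port-range=1-65535
        # ... existing flags unchanged ...
```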


apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 80    # map HTTP to node port 80
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 443   # map HTTPS to node port 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---

All resources in the ingress-nginx namespace are Ready.


Check whether ports 443 and 80 are listening normally on the node where the Pod runs.


As expected, ports 80 and 443 are listening.


2. Create Ingress rules

Creating an Ingress rule really means configuring an nginx server block: the rules defined in YAML are automatically rendered into the nginx configuration inside the controller Pod.

 

Create a private-registry secret for pulling my private Aliyun images:

 

kubectl create secret docker-registry myregistry --docker-server=registry.cn-hangzhou.aliyuncs.com --docker-username=benjamin7788 --docker-password=a7260488


web-deployment.yaml creates a 3-replica Deployment from a Java image in my private Aliyun registry.

Add imagePullSecrets at the same level as containers:

imagePullSecrets:
  - name: myregistry

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        - image: registry.cn-hangzhou.aliyuncs.com/benjamin-learn/java-demo:lst
          imagePullPolicy: Always
          name: java
          resources: {}
      restartPolicy: Always

web-service.yaml exposes this Deployment as a NodePort Service.

 

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: web
  type: NodePort


 

ingress.yaml

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80

# Production

The example.ctnrs.com domain would normally be resolved by a DNS provider, with its A record pointing at the public IP of the node running the ingress controller.

# Test environment

Bind the domain in the local hosts file to simulate access.

On a Windows machine, press Win+R, type drivers, go into the etc directory, and add a record to the hosts file:

192.168.31.65 example.ctnrs.com 


Check the rules bound to the domain:


Once the Ingress rule is created, the ingress-controller container logs show that an Ingress rule was successfully created.


Entering the ingress-controller container, nginx.conf has automatically gained a new server block.


The Java application is reachable.


 

Request path:

HTTP request ---> node:80 ---> (upstream) Pod IP ---> container:8080

 

Detail:

We all know nginx normally has to reload for configuration changes to take effect. The nginx inside the ingress controller instead uses a Lua script to apply upstream changes in memory, so endpoints can be updated dynamically without a reload.

 

Admin --> Ingress YAML --> master (apiserver) ---> Service; master (apiserver) <--- ingress controller (watches Pod IPs) ---> nginx.conf (Lua script) ---> upstream (Pod IPs) ---> container

 

3. Route to multiple services by URL path

www.ctnrs.com/a routes requests to server group a

www.ctnrs.com/b routes requests to server group b

ingress-url.yaml

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /a
            backend:
              serviceName: web
              servicePort: 80
    - host: example.ctnrs.com
      http:
        paths:
          - path: /b
            backend:
              serviceName: web2
              servicePort: 80

Create the web2 application:

kubectl create deploy web2 --image=nginx
kubectl expose deploy web2 --port=80 --target-port=80

 


 

4. Name-based virtual hosting

ingress-vhost.yaml 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example1.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
    - host: example2.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web2
              servicePort: 80

Bind both names in the local hosts file: 192.168.31.65 example1.ctnrs.com example2.ctnrs.com

Visit example1.ctnrs.com


Visit example2.ctnrs.com


5. Configure HTTPS on Ingress

ingress-https.yaml

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
spec:
  tls:
    - hosts:
        - example.ctnrs.com
      secretName: secret-tls
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80

 

Visiting example.ctnrs.com redirects straight to https://example.ctnrs.com

After choosing to trust the site, the certificate presented is the default one bundled inside the ingress controller.


 

Generate a self-signed certificate

cert.sh

 

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > example.ctnrs.com-csr.json <<EOF
{
  "CN": "example.ctnrs.com",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes example.ctnrs.com-csr.json | cfssljson -bare example.ctnrs.com

#kubectl create secret tls example-ctnrs-com --cert=example.ctnrs.com.pem --key=example.ctnrs.com-key.pem

Run the certificate script to generate the domain certificate files example.ctnrs.com-key.pem and example.ctnrs.com.pem.

 

Create a TLS-type secret:

kubectl create secret tls secret-tls --cert=/root/learn/ssl/example.ctnrs.com.pem --key=/root/learn/ssl/example.ctnrs.com-key.pem
secret/secret-tls created
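For reference, the command above is equivalent to applying a manifest like this (the data values are placeholders standing in for the base64-encoded PEM files):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64 of example.ctnrs.com.pem>       # placeholder
  tls.key: <base64 of example.ctnrs.com-key.pem>   # placeholder
```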

Inspecting the certificate again shows the self-signed domain certificate is now in use.


6. Per-Ingress customization with annotations

The ingress controller's default proxy timeout settings:


ingress-annotations.yaml 

 

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  rules:
    - host: example.ctnrs.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80

The annotations override the ingress controller's default proxy timeouts.


 

Official annotation reference: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md

 

Ingress Controller high-availability options

Option 1: active/standby HA using a keepalived VIP, with a DaemonSet deploying one ingress controller on every node.

Drawback: only one ingress controller serves traffic at a time, so it becomes a performance bottleneck as traffic grows.


 

Option 2: put a load balancer in front of multiple ingress controllers to balance across them; users request the LB's IP, which forwards to the backend ingress controllers.

Deployment recipe: nodeSelector + DaemonSet + LB.

Advantage: this remedies option 1's weakness nicely; several ingress controllers serve simultaneously and can absorb far more traffic.

Drawback: the LB itself now becomes a single point of failure, so it needs its own HA, which consumes extra resources.
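A minimal sketch of the nodeSelector + DaemonSet piece of this scheme (the node label is illustrative, and the container spec is the same as in the Deployment above, so it is elided):

```yaml
# First label the nodes that should run a controller, e.g.:
#   kubectl label node node1 ingress=true
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true            # each labeled node listens on 80/443 directly
      nodeSelector:
        ingress: "true"            # illustrative label
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
          # args, env, ports and probes as in the Deployment above
```

The external load balancer then fronts the labeled nodes' IPs on ports 80 and 443.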


Other mainstream controllers:

Traefik: an HTTP reverse proxy and load-balancing tool

Istio: service governance, including control of ingress traffic

 

