Overview:
The official manifest uses a Deployment with replicas: 1, which starts a single nginx-ingress-controller pod on some node; external traffic reaches that node and is load-balanced from there to the internal Services. To remove this single point of failure, change the Deployment to a DaemonSet, drop the replicas field, and use node scheduling constraints so the nginx-ingress-controller pod runs on a set of designated nodes, guaranteeing that multiple nodes run the controller. In production it is advisable to taint the ingress nodes so that business pods cannot be scheduled onto them, avoiding resource contention between applications and the ingress service. These nodes are then added to an external hardware load-balancer pool for high availability.
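For example, a minimal sketch of tainting the dedicated ingress nodes (the taint key node-role=ingress is an assumption chosen for illustration; the node names match those labeled later in this walkthrough):

kubectl taint nodes k8s-node01 node-role=ingress:NoSchedule
kubectl taint nodes k8s-node02 node-role=ingress:NoSchedule
kubectl taint nodes k8s-node03 node-role=ingress:NoSchedule

The nginx-ingress-controller DaemonSet pod spec would then need a matching toleration, e.g.:

tolerations:
- key: node-role        # must match the taint key above (assumption)
  operator: Equal
  value: ingress
  effect: NoSchedule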
Cloud-server approach:
1. Deploy the ingress-controller as a DaemonSet on the chosen nodes, normally the k8s worker nodes; for cluster stability it is not recommended to deploy it on master nodes.
2. Provision an SLB high-availability IP, resolve the access domain to it, and point the SLB at the three ingress node hosts (bind them into the backend server group) to achieve high availability.
Rough architecture diagram (omitted).
Self-hosted server approach
The following targets a high-availability deployment on self-built enterprise servers (a cloud-server deployment is much the same).
General structure:
After ingress is added to Kubernetes, public DNS is set up as follows: the domain resolves to the server holding the data-center public IP (the nginx server), and nginx is configured to forward to the keepalived VIP.
External clients can then reach your services by domain name, and the single point of failure is eliminated.
Note: this walkthrough targets a production environment in which the ingress servers have no public IPs; the registered domain resolves to the nginx server, so no domain is configured on the ingress itself.
spec.rules.host is therefore not used; you can skip the domain entirely and have nginx reverse-proxy to the IPs of the ingress hosts.
Alternatively, use a public IP as the VIP, resolve the domain to the VIP, and access the service by domain directly (no front nginx needed).
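A minimal sketch of the front nginx forwarding to the keepalived VIP (the VIP 192.168.3.100 and the domain www.test.com are assumptions for illustration):

# /etc/nginx/conf.d/ingress-proxy.conf (on the public-facing nginx server)
upstream ingress_vip {
    server 192.168.3.100:80;   # keepalived VIP held by one of the ingress nodes
}

server {
    listen 80;
    server_name www.test.com;   # the domain that resolves to this nginx server

    location / {
        proxy_pass http://ingress_vip;
        # preserve the original Host header so any ingress host rules still match
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}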
Pick the three Kubernetes nodes where ingress is deployed and install keepalived on all of them.
Edit the configuration file /etc/keepalived/keepalived.conf.
Apart from the priority value, the configuration is identical on all three nodes.
Note: modify it as shown below.
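A minimal sketch of keepalived.conf (the interface name ens33, virtual_router_id 51, and the VIP 192.168.3.100 are assumptions for illustration and must be adapted to your environment):

! Configuration File for keepalived

global_defs {
    router_id ingress_ha
}

vrrp_instance VI_1 {
    state BACKUP              # all three nodes start as BACKUP; the highest
                              # priority wins the election and holds the VIP
    interface ens33           # NIC that carries the VIP (assumption)
    virtual_router_id 51      # must be identical on all three nodes
    priority 100              # e.g. 100 / 90 / 80 -- the only value that differs
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.3.100         # the VIP (assumption)
    }
}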
Start keepalived
systemctl start keepalived
systemctl enable keepalived
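To confirm which node currently holds the VIP (the interface name and VIP are the assumptions used in the sketch above):

ip addr show ens33 | grep 192.168.3.100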
Create a Deployment and Service
(used to test that ingress can reach the backend business pods)
$ vim deployment.yaml

apiVersion: extensions/v1beta1   # removed in Kubernetes 1.16; use apps/v1 (with a selector) on newer clusters
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: nginx   # matches pods labeled name=nginx

$ kubectl apply -f deployment.yaml
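A quick check that the test workload is up (svc-1 and the name=nginx label come from the manifest above; fill in the ClusterIP placeholder from the kubectl output):

kubectl get pods -l name=nginx -o wide
kubectl get svc svc-1
# from any cluster node, expect the myapp:v3 default page
curl http://<svc-1-cluster-ip>/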
Install ingress-nginx-controller
Official installation manifest:
https://github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/baremetal/deploy.yaml
1. Label the nodes that will run ingress
By default the scheduler may place the nginx-ingress-controller pod on any node, so we need to pin it to specific nodes.
First, label the nodes that should run nginx-ingress-controller:
kubectl label nodes k8s-node01 edgenode=true
kubectl label nodes k8s-node02 edgenode=true
kubectl label nodes k8s-node03 edgenode=true
Check the node labels
kubectl get node --show-labels
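To list only the nodes carrying the label:

kubectl get nodes -l edgenode=true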
2. Install ingress-nginx-controller as a DaemonSet (modify the original ingress deployment YAML; the key changes are the items listed below)
- Change the Deployment to a DaemonSet
- Comment out replicas # a DaemonSet does not use this field
- Add hostNetwork: true # the pod uses the host's network and exposes port 80 directly on the node; note that port 80 on the host must not already be in use
- Add dnsPolicy: ClusterFirstWithHostNet # with hostNetwork alone the container would inherit the host's DNS and fail to resolve internal Services; this policy lets the pod keep the host network while still using kube-dns as its default DNS
- Add node scheduling constraints (here a nodeSelector on the edgenode label) so the pods land only on the labeled nodes
apiVersion: v1
kind: Namespace   # created first so the namespaced resources below apply in one pass
metadata:
  name: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        edgenode: 'true'
      containers:
      - name: nginx-ingress-controller
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        # livenessProbe:
        #   failureThreshold: 3
        #   httpGet:
        #     path: /healthz
        #     port: 10254
        #     scheme: HTTP
        #   initialDelaySeconds: 10
        #   periodSeconds: 10
        #   successThreshold: 1
        #   timeoutSeconds: 1
        # readinessProbe:
        #   failureThreshold: 3
        #   httpGet:
        #     path: /healthz
        #     port: 10254
        #     scheme: HTTP
        #   periodSeconds: 10
        #   successThreshold: 1
        #   timeoutSeconds: 1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-nginx"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
Apply the resource manifest
kubectl apply -f ingress-nginx.yaml
Verify that the installation succeeded
kubectl get ds -n ingress-nginx
kubectl get pods -n ingress-nginx -o wide

[root@master ingress]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-3sfom   1/1     Running   0          13m   192.168.3.1   node1   <none>           <none>
nginx-ingress-controller-5jdeq   1/1     Running   0          13m   192.168.3.2   node2   <none>           <none>
nginx-ingress-controller-1hdkr   1/1     Running   0          13m   192.168.3.3   node3   <none>           <none>
As expected, the three ingress-controller pods are running on the three nodes we selected, using the host network.
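Because the pods use hostNetwork, the controller should be listening on ports 80/443 directly on each labeled node. A quick sanity check on one node (a sketch; a 404 from the default backend is the expected response when no Host rule matches):

ss -lntp | grep -E ':(80|443) '
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.3.1/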
Ingress HTTPS proxy access
Create the secret holding the HTTPS certificate
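If no certificate is at hand, a self-signed one for the test domain www.test.com can be generated first (a sketch, for testing only):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=www.test.com"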
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
Create the Ingress rule
$ vim https_ingress.yaml
apiVersion: extensions/v1beta1   # removed in Kubernetes 1.22; networking.k8s.io/v1 replaces it on newer clusters
kind: Ingress
metadata:
  name: https
spec:
  tls:
  - hosts:
    - www.test.com
    secretName: tls-secret   # the Secret created above
  rules:
  - host: www.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-1
          servicePort: 80
$ kubectl apply -f https_ingress.yaml
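To verify the rule and test HTTPS access against one of the ingress nodes (the node IP 192.168.3.1 comes from the earlier pod listing; -k skips verification because the certificate is self-signed):

kubectl get ingress https
# resolve www.test.com to an ingress node just for this test
curl -k --resolve www.test.com:443:192.168.3.1 https://www.test.com/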
Next, deploy keepalived on the three ingress servers and configure the VIP, as described earlier (omitted).
Finally, deploy the front nginx and forward business traffic to the VIP (omitted).
Test
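A simple failover test (the VIP and domain are the assumptions used above): send a request through the VIP, stop keepalived on the node currently holding it, and confirm the VIP moves and requests keep succeeding:

# from a client that can reach the VIP
curl -k --resolve www.test.com:443:192.168.3.100 https://www.test.com/

# on the ingress node currently holding the VIP
systemctl stop keepalived

# the VIP should move to another node within a few seconds; repeat the request
curl -k --resolve www.test.com:443:192.168.3.100 https://www.test.com/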