K8s From Beginner to Giving Up (15): Deploying Ingress for a Kubernetes Cluster


Ingress is one way a Kubernetes cluster exposes services to the outside world. Deploying it is relatively simple: the official project bundles all the required resource definitions into a single manifest (mandatory.yaml), and the image is pulled from quay.io.

1. Deployment

Official repository: https://github.com/kubernetes/ingress-nginx

 1.1 Download the deployment files:

## mandatory.yaml bundles all of the Ingress resource manifests into one file
### To deploy piece by piece instead, download configmap.yaml, namespace.yaml, rbac.yaml, service-nodeport.yaml and with-rbac.yaml separately
[root@k8s-master01 ingress-master]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
### service-nodeport.yaml exposes the ingress externally through a NodePort Service; note that the nodePort is random by default, edit the file to pin a specific port
[root@k8s-master01 ingress-master]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
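If you want fixed ports instead of random ones, the downloaded Service can be edited along the following lines. This is a hedged sketch, not the verbatim upstream file: the nodePort values 30080/30443 are illustrative choices, and must fall inside the cluster's NodePort range (30000-32767 by default).

```yaml
# Sketch of a customized service-nodeport.yaml with pinned NodePorts.
# 30080/30443 are example values, not from the official file.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # pin HTTP instead of letting k8s pick a random port
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443   # pin HTTPS
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```

With pinned ports, the URLs used later in this article would use :30080 instead of a random port such as :33848.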

 1.2 Apply the manifests to create the Ingress resources

[root@k8s-master01 ingress-master]# kubectl apply -f mandatory.yaml 
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
[root@k8s-master01 ingress-master]# kubectl apply -f service-nodeport.yaml
service/ingress-nginx created

 1.3 Check the created resources

[root@k8s-master01 ingress-master]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-controller-86449c74bb-cbkgp   1/1     Running   0          19s   10.254.88.48   k8s-node02   <none>           <none>
### The created svc shows that the ingress-nginx Service is mapped onto the host at ports 33848 (HTTP) and 45891 (HTTPS)
[root@k8s-master01 ingress-master]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.254.102.184   <none>        80:33848/TCP,443:45891/TCP   43s

Note: 
  The Ingress controller talks to the Kubernetes API to dynamically watch for changes to the Ingress rules in the cluster. When a rule changes, it reads it (a rule simply states which hostname maps to which Service) and renders a corresponding piece of Nginx configuration. The nginx-ingress-controller Pod runs an nginx process; the controller writes the generated configuration into /etc/nginx/nginx.conf inside that Pod and then reloads nginx to make it take effect. This is how hostname-based routing is assigned and kept up to date dynamically.
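As an illustration of what the controller renders, one Ingress rule roughly becomes a server block like the one below. This is a simplified, hypothetical sketch; the real generated nginx.conf is far longer and recent controller versions balance upstreams in Lua rather than with static upstream blocks.

```nginx
# Simplified sketch of the configuration the controller might render
# for the www.tchua.top rule used later in this article. Hypothetical.
server {
    listen 80;
    server_name www.tchua.top;      # host: field of the Ingress rule

    location / {
        # requests are proxied to the endpoints behind the matched Service
        proxy_set_header Host $host;
        proxy_pass http://upstream_balancer;
    }
}
```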

2. Verification

 2.1 Create the Service and its backend Deployment

[root@k8s-master01 ingress-master]# cat test-ingress-pods.yml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    env: test
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: myapp-test
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: myapp
      env: test
  template:
    metadata:
      labels:
        app: myapp
        env: test
    spec:
      containers:
      - name: myapp
        image: nginx:1.15-alpine 
        ports:
        - name: httpd
          containerPort: 80
## Check the pod deployment
[root@k8s-master01 ingress-master]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
myapp-test-66cf5bf7d5-5cnjv      1/1     Running   0          3m39s
myapp-test-66cf5bf7d5-vdkml      1/1     Running   0          3m39s
## Check the svc
[root@k8s-master01 ingress-master]# kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
myapp-svc      ClusterIP   10.254.155.238   <none>        80/TCP           4m40s

  2.2 Create the Ingress rule

## The Ingress rule must reference the name of the Service it exposes
[root@k8s-master01 ingress-master]# cat test-ingress-myapp.yml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.tchua.top
    http:
      paths:
      - path:
        backend:
          serviceName: myapp-svc
          servicePort: 80
[root@k8s-master01 ingress-master]# kubectl apply -f test-ingress-myapp.yml
[root@k8s-master01 ingress-master]# kubectl get ingress
NAME             HOSTS           ADDRESS   PORTS   AGE
ingress-myapp    www.tchua.top             80      13s

 2.3 Add a hosts entry on the Windows workstation

## Resolving the hostname to any one of the node IPs works

172.16.11.123 www.tchua.top

Then browse to http://www.tchua.top:33848 from that host; the randomly assigned nodePort of the Service must be appended to the URL.

 

Summary:

  1. We created an nginx Deployment with 2 Pods;

  2. We exposed the nginx Pods through a Service named myapp-svc;

  3. We exposed nginx to the outside world through an Ingress.

Although we created a Service for nginx, in actual routing the traffic goes from the ingress straight to the backend Pods without passing through the Service; the Service only serves to collect the Pods behind it (endpoint discovery).

3. Ingress High Availability

  So far we have only given the cluster a way to serve external traffic; the ingress itself is not highly available yet. We could achieve that simply by raising the Deployment's replica count, but since the ingress carries all inbound traffic for the cluster, in production it is recommended to deploy the controller as a DaemonSet on dedicated nodes, taint those nodes so that business Pods cannot be scheduled there (avoiding resource contention with the ingress service), and then add the ingress nodes as backend servers of an SLB (load balancer) that forwards the traffic.
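The SLB in front of the ingress nodes can be any L4 load balancer. As a hedged sketch (plain nginx with the stream module standing in for the SLB; the backend IPs are the two master nodes used later in this article):

```nginx
# Sketch: an external nginx acting as the SLB, forwarding TCP 80/443
# to the two ingress nodes. IPs match this cluster's masters; adjust to yours.
stream {
    upstream ingress_http {
        server 172.16.11.121:80;   # k8s-master02
        server 172.16.11.122:80;   # k8s-master03
    }
    server {
        listen 80;
        proxy_pass ingress_http;
    }

    upstream ingress_https {
        server 172.16.11.121:443;
        server 172.16.11.122:443;
    }
    server {
        listen 443;
        proxy_pass ingress_https;
    }
}
```

Balancing at L4 keeps TLS termination inside the ingress controller, so certificates only need to be managed in the cluster.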

## Modify mandatory.yaml
### Mainly the Pod-related section changes
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        vanje/ingress-controller-ready: "true"
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

 The changed parameters are:

  kind: Deployment #change to DaemonSet
  replicas: 1 #delete this line; a DaemonSet does not take it
  hostNetwork: true #added so the container uses the host network and exposes the service port (80) directly on the node; make sure port 80 is not already in use on the host
  dnsPolicy: ClusterFirstWithHostNet #with hostNetwork the container would otherwise use the host's DNS and fail to resolve in-cluster Services; this policy keeps it on the cluster DNS
  nodeSelector: vanje/ingress-controller-ready: "true" #schedule only onto nodes carrying this label
  tolerations: #added so the Pod tolerates the taint on the target nodes

Here I deploy onto 2 master nodes (in production do not use the masters; deploy onto dedicated nodes instead). Because we use a DaemonSet with a nodeSelector, the 2 nodes need the matching label, and since they are tainted the Pod needs the corresponding toleration.

## Label the nodes
[root@k8s-master01 ingress-master]# kubectl label nodes k8s-master02 vanje/ingress-controller-ready=true
[root@k8s-master01 ingress-master]# kubectl label nodes k8s-master03 vanje/ingress-controller-ready=true
## Taint the nodes
### My master nodes were already tainted earlier; if yours are not, run the 2 commands below. The taint key must match the toleration declared for the Pod in the yaml file
[root@k8s-master02 ~]# kubectl taint nodes k8s-master02 node-role.kubernetes.io/master=:NoSchedule
[root@k8s-master03 ~]# kubectl taint nodes k8s-master03 node-role.kubernetes.io/master=:NoSchedule

  3.2) Create the resources

[root@k8s-master01 ingress-master]# kubectl apply -f mandatory.yaml
## Check where the Pods were scheduled
### The two ingress-controllers are running on the 2 master nodes we selected
[root@k8s-master01 ingress-master]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES
nginx-ingress-controller-298dq   1/1     Running   0          134m   172.16.11.122   k8s-master03   <none>           <none>
nginx-ingress-controller-sh9h2   1/1     Running   0          134m   172.16.11.121   k8s-master02   <none>           <none>

 3.3) Test

Reuse the Pods and Service created above for this test. Also note that because this ingress-controller runs in hostNetwork mode, there is no need to create the NodePort Service (ingress-svc) to map ports onto the node hosts.

## Create the pod and svc
[root@k8s-master01 ingress-master]# kubectl apply -f test-ingress-pods.yml
## Create the ingress rule
[root@k8s-master01 ingress-master]# kubectl apply -f test-ingress-myapp.yml

 On the Windows host, point the hostname at the IP of any node (k8s-master02 or k8s-master03); this time no port needs to be appended when accessing it.

 

