Installing and configuring ingress-nginx with HTTPS support


Notes:

1. k8s version: v1.23;

2. Internal test environment with 1 master and 2 worker nodes; ingress-nginx is deployed with DaemonSet + HostNetwork + nodeSelector onto node02, which is labelled as the edge node;

3. HTTPS configuration was tested;

Ingress overview:

Ingress is a resource object introduced in Kubernetes 1.1. It forwards requests for different URLs to different backend Services, providing HTTP-layer (layer 7) routing. Put simply, Ingress is a set of HTTP-level service exposure rules; you can also think of it as a Service for Services. An Ingress must be bound to a domain name.

It consists of two parts:

  1. Ingress Controller:

    • the entry gateway for Services; there are many implementations, the most common being Ingress-Nginx;
    • it runs as pods;
  2. Ingress policies (the ingress resource in k8s):

    • a set of declarative rules expressed as YAML (viewable with kubectl get ingress -n <namespace>)

Ways to deploy Ingress:

  1. Deployment + LoadBalancer-type Service

    This is the option to choose if ingress is deployed on a public cloud. Deploy the ingress-controller with a Deployment and create a Service of type LoadBalancer that selects those pods. Most public clouds automatically create a load balancer for a LoadBalancer Service, usually with a public address bound to it. Point your DNS records at that address and the cluster's services are exposed to the outside.

    Drawback: a public-cloud load-balancer service has to be purchased separately, so this does not work in environments without a load balancer.

  2. Deployment + NodePort-type Service

    Again the ingress-controller is deployed with a Deployment and a matching Service is created, but with type NodePort. The ingress is then exposed on a specific port of the cluster nodes' IPs. Since NodePort assigns a random port (above 30000), a load balancer is usually placed in front to forward requests. This approach is generally used where the hosts are relatively fixed and their IP addresses do not change.

    Drawbacks:

    • Exposing ingress through NodePort is simple, but NodePort adds an extra layer of NAT, which can affect performance under heavy load.
    • Requests look like https://www.xx.com:30076, where 30076 is the NodePort exposed by the svc (see kubectl get svc -n ingress-nginx).

  3. DaemonSet + HostNetwork + nodeSelector (recommended)

    A DaemonSet combined with a nodeSelector deploys the ingress-controller onto specific nodes (edge nodes), and HostNetwork connects the pod directly to the host node's network, so the service is reachable on the host's ports 80/443. The node running the ingress-controller then acts much like an edge node in a traditional architecture, for example an nginx server at the entrance of a data center.

    Advantage:

    • This gives the simplest request path of the three and better performance than the NodePort approach.

    Drawback:

    • Because the host node's network and ports are used directly, only one ingress-controller pod can run per node.

Since this is an internal test environment, the third method is used for the deployment and test below.


Deploying ingress-nginx with Helm using DaemonSet + HostNetwork + nodeSelector

ingress-nginx official docs: https://kubernetes.github.io/ingress-nginx/

The existing test environment is 1 master + 2 nodes. We choose node02 as the edge node and label it accordingly, so the deployed ingress-controller pod will run only on node02. (In a production environment you could pick 2 nodes as edge nodes and run keepalived on them to avoid a single point of failure.)

#label node02 as the edge node
kubectl label nodes node02 edgenode=true

#check the labels on each node
kubectl get node --show-labels
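
To list only the node(s) that carry the new label (an optional check):

#show nodes filtered by the edgenode label
kubectl get nodes -l edgenode=true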

Add the helm repo and pull the chart:

#add the helm repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

#update the repo
helm repo update

#pull the chart so values.yaml can be modified
helm pull ingress-nginx/ingress-nginx
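
The pull command saves the chart as a .tgz archive in the current directory; one way to unpack it and reach values.yaml (a sketch, the version in the file name may differ):

#unpack the downloaded chart archive and enter the chart directory
tar -zxvf ingress-nginx-*.tgz
cd ingress-nginx

#values.yaml sits in the chart root
ls values.yaml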

Edit values.yaml:

commonLabels: {}
controller:
  name: controller
  image:
    registry: k8s.gcr.io		# can be swapped for a mirror registry (e.g. Aliyun) if k8s.gcr.io is unreachable
    image: ingress-nginx/controller
    tag: "v1.1.1"
    digest: sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
    pullPolicy: IfNotPresent
    runAsUser: 101
    allowPrivilegeEscalation: true
  existingPsp: ""
  containerName: controller
  containerPort:
    http: 80
    https: 443
  config: {}
  configAnnotations: {}
  proxySetHeaders: {}
  addHeaders: {}
  dnsConfig: {}
  hostname: {}
  dnsPolicy: ClusterFirst
  reportNodeInternalIp: false
  watchIngressWithoutClass: false
  ingressClassByName: false
  allowSnippetAnnotations: true
  hostNetwork: true			# changed to true
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443
  electionID: ingress-controller-leader
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
    parameters: {}
  ingressClass: nginx
  podLabels: {}
  podSecurityContext: {}
  sysctls: {}
  publishService:
    enabled: true
    pathOverride: ""
  scope:
    enabled: false
    namespace: ""
    namespaceSelector: ""
  configMapNamespace: ""
  tcp:
    configMapNamespace: ""
    annotations: {}
  udp:
    configMapNamespace: ""
    annotations: {}
  maxmindLicenseKey: ""
  extraArgs: {}
  extraEnvs: []
  kind: DaemonSet			# changed to DaemonSet so the controller runs as a DaemonSet on the selected node(s)
  annotations: {}
  labels: {}
  updateStrategy: {}
  minReadySeconds: 0
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  terminationGracePeriodSeconds: 300
  nodeSelector:
    kubernetes.io/os: linux
    edgenode: 'true'			# add the label we just gave node02, so the controller runs on node02
  livenessProbe:
    httpGet:
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    httpGet:
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  healthCheckPath: "/healthz"
  healthCheckHost: ""
  podAnnotations: {}
  replicaCount: 1
  minAvailable: 1
  resources:
    requests:
      cpu: 100m
      memory: 90Mi
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
    behavior: {}
  autoscalingTemplate: []
  keda:
    apiVersion: "keda.sh/v1alpha1"
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    pollingInterval: 30
    cooldownPeriod: 300
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
    triggers: []
    behavior: {}
  enableMimalloc: true
  customTemplate:
    configMapName: ""
    configMapKey: ""
  service:
    enabled: true
    appProtocol: true
    annotations: {}
    labels: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    enableHttp: true
    enableHttps: true
    ipFamilyPolicy: "SingleStack"
    ipFamilies:
      - IPv4
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http
      https: https
    type: ClusterIP			# changed to ClusterIP (the default is LoadBalancer)
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}
    external:
      enabled: true
    internal:
      enabled: false
      annotations: {}
      loadBalancerSourceRanges: []
  extraContainers: []
  extraVolumeMounts: []
  extraVolumes: []
  extraInitContainers: []
  extraModules: []
  admissionWebhooks:
    annotations: {}
    enabled: true
    failurePolicy: Fail
    port: 8443
    certificate: "/usr/local/certificates/cert"
    key: "/usr/local/certificates/key"
    namespaceSelector: {}
    objectSelector: {}
    labels: {}
    existingPsp: ""
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP
    createSecretJob:
      resources: {}
    patchWebhookJob:
      resources: {}
    patch:
      enabled: true
      image:
        registry: k8s.gcr.io
        image: ingress-nginx/kube-webhook-certgen
        tag: v1.1.1
        digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
        pullPolicy: IfNotPresent
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector:
        kubernetes.io/os: linux
      tolerations: []
      labels: {}
      runAsUser: 2000
  metrics:
    port: 10254
    enabled: false
    service:
      annotations: {}
      externalIPs: []
      loadBalancerSourceRanges: []
      servicePort: 10254
      type: ClusterIP
    serviceMonitor:
      enabled: false
      additionalLabels: {}
      namespace: ""
      namespaceSelector: {}
      scrapeInterval: 30s
      targetLabels: []
      relabelings: []
      metricRelabelings: []
    prometheusRule:
      enabled: false
      additionalLabels: {}
      rules: []
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  priorityClassName: ""
revisionHistoryLimit: 10
defaultBackend:
  enabled: true			# changed to true: create a default backend whose page is returned for requests that match no rule
  name: defaultbackend
  image:
    registry: k8s.gcr.io
    image: defaultbackend-amd64
    tag: "1.5"
    pullPolicy: IfNotPresent
    runAsUser: 65534
    runAsNonRoot: true
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
  existingPsp: ""
  extraArgs: {}
  serviceAccount:
    create: true
    name: ""
    automountServiceAccountToken: true
  extraEnvs: []
  port: 8080
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5
  tolerations: []
  affinity: {}
  podSecurityContext: {}
  containerSecurityContext: {}
  podLabels: {}
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  replicaCount: 1
  minAvailable: 1
  resources: {}
  extraVolumeMounts: []
  extraVolumes: []
  autoscaling:
    annotations: {}
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50
  service:
    annotations: {}
    externalIPs: []
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP
  priorityClassName: ""
  labels: {}
rbac:
  create: true
  scope: false
podSecurityPolicy:
  enabled: false
serviceAccount:
  create: true
  name: ""
  automountServiceAccountToken: true
  annotations: {}
imagePullSecrets: []
tcp: {}
udp: {}
dhParam:

Install with helm:

#create the ingress-nginx namespace
kubectl create ns ingress-nginx

#run the helm install
helm install ingress-nginx ingress-nginx/ingress-nginx -f values.yaml -n ingress-nginx
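
As an alternative to maintaining a full values.yaml, the same overrides could likely be passed as flags at install time (a sketch based on the keys changed above; the values.yaml approach is what was actually used here):

#equivalent install using flag overrides instead of values.yaml
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx \
  --set controller.kind=DaemonSet \
  --set controller.hostNetwork=true \
  --set-string controller.nodeSelector.edgenode=true \
  --set controller.service.type=ClusterIP \
  --set defaultBackend.enabled=true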

Check the created resources:

kubectl get all -n ingress-nginx

Two pods are running: one is the ingress-controller and the other is the default backend, defaultbackend.

NAME                                                READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-kqqgj                  1/1     Running   0          21m
pod/ingress-nginx-defaultbackend-7df596dbc9-9c6ws   1/1     Running   0          21m

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/ingress-nginx-controller             ClusterIP   10.106.80.36    <none>        80/TCP,443/TCP   21m
service/ingress-nginx-controller-admission   ClusterIP   10.111.63.107   <none>        443/TCP          21m
service/ingress-nginx-defaultbackend         ClusterIP   10.96.124.173   <none>        80/TCP           21m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                          AGE
daemonset.apps/ingress-nginx-controller   1         1         1       1            1           edgenode=true,kubernetes.io/os=linux   21m

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-defaultbackend   1/1     1            1           21m

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-defaultbackend-7df596dbc9   1         1         1       21m
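
To confirm the controller pod really landed on node02 and bound the host ports (the second check assumes shell access to node02):

#show which node each pod is scheduled on
kubectl get pods -n ingress-nginx -o wide

#on node02: with hostNetwork enabled, ports 80/443 should be listening on the host itself
ss -lntp | grep -E ':(80|443) '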

With the ingress-controller in place, we deploy a test backend using the nginx image, and then create an ingress resource to expose the test backend's service through the ingress.

We create test-nginx.yaml, which defines an nginx pod and its corresponding service; the service exposes port 80 as a ClusterIP.

#test-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-service
  namespace: nginx-test
spec:
  selector:
    app: nginx-test
  ports:
  - name: http
    port: 80	# port exposed by the Service (referenced by the Ingress below)
    targetPort: 80	# port on the backend Pod
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: nginx-test-deployment
  namespace: nginx-test
spec:
  replicas: 1
  selector: 
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: nginx:1.15-alpine 
        imagePullPolicy: IfNotPresent
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh","-c","echo nginx-test.wdyxgames.com > /usr/share/nginx/html/index.html"] 
        ports:
        - name: httpd
          containerPort: 80		# port the pod exposes (the nginx container listens on 80)

Create the test backend pod and service:

#create the nginx-test namespace
kubectl create ns nginx-test

#create the pod and svc from the yaml
kubectl apply -f test-nginx.yaml

#check the created resources
kubectl get all -n nginx-test

#####
NAME                                       READY   STATUS    RESTARTS   AGE
pod/nginx-test-deployment-fdf785bb-k6xxl   1/1     Running   0          24s

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx-test-service   ClusterIP   10.97.180.229   <none>        80/TCP    24s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-test-deployment   1/1     1            1           24s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-test-deployment-fdf785bb   1         1         1       24s
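
Before adding the ingress, the backend can optionally be checked from inside the cluster (a sketch; it assumes the curlimages/curl image can be pulled in this environment):

#run a one-off pod and curl the test service by its in-cluster name
kubectl run curl-test -n nginx-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -s http://nginx-test-service
#expected response: nginx-test.wdyxgames.com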

At this point the test backend and the ingress-controller are both in place. Next we create an ingress resource to expose the test backend's service to the outside.

#test-nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: example
spec:
  rules: # one ingress can define multiple rules
  - host: nginx-test.wdyxgames.com # host to match; if omitted it matches *; this is the domain used in the browser
    http:
      paths: # similar to nginx locations; one host can have multiple paths; here we match everything
      - backend:
          service:
            name: nginx-test-service  # the svc to proxy to; matches the test backend svc created above
            port:
              number: 80 # the port exposed by that svc; matches the test backend svc created above
        path: /
        pathType: Prefix
#apply the manifest
kubectl apply -f  test-nginx-ingress.yaml  -n nginx-test

Bind hosts entries to test. Here we point two hostnames at the edge node node02's IP: nginx-test.wdyxgames.com, which is defined in the ingress, and nginx-test1.wdyxgames.com, which is not. Then open them in a browser:

You can see that http://nginx-test.wdyxgames.com is served successfully, while http://nginx-test1.wdyxgames.com, which is not defined in the ingress, gets the page returned by the defaultbackend.
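
To test from a shell instead of editing the hosts file, curl can pin both hostnames to the edge node (a sketch; replace <node02-ip> with node02's address, and the exact default-backend response text may vary):

#host defined in the ingress: served by the test nginx backend
curl --resolve nginx-test.wdyxgames.com:80:<node02-ip> http://nginx-test.wdyxgames.com/

#host not defined in the ingress: answered by the default backend (typically "default backend - 404")
curl --resolve nginx-test1.wdyxgames.com:80:<node02-ip> http://nginx-test1.wdyxgames.com/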

Configuring ingress-nginx with a certificate for HTTPS
  1. First, import the certificate into a k8s secret (if no real certificate is at hand, a self-signed one can be generated; see the sketch after this list):

    kubectl create secret tls wdyxgames-tls --key _.wdyxgames.com.key --cert _.wdyxgames.com.crt -n nginx-test
    
  2. Then create an ingress resource file that enables HTTPS:

    #test-nginx-ingress-https.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: "nginx"
      name: example
    spec:
      rules: # one ingress can define multiple rules
      - host: nginx-test.wdyxgames.com # host to match; if omitted it matches *; this is the domain used in the browser
        http:
          paths: # similar to nginx locations; one host can have multiple paths; here we match everything
          - backend:
              service:
                name: nginx-test-service  # the svc to proxy to; matches the test backend svc created above
                port:
                  number: 80 # the port exposed by that svc; matches the test backend svc created above
            path: /
            pathType: Prefix
      tls:
        - hosts:
          - nginx-test.wdyxgames.com
          secretName: wdyxgames-tls
    

    Compared with the ingress above, only the tls section has been added:

    #apply the manifest
    kubectl apply -f  test-nginx-ingress-https.yaml  -n nginx-test
    
    #####
    Error from server (BadRequest): error when creating "test-nginx-ingress-https.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "nginx-test.wdyxgames.com" and path "/" is already defined in ingress nginx-test/example
    #the request is rejected because the earlier HTTP ingress already defines this host/path mapping, so it cannot be created again
    
    #delete the previous HTTP ingress, then apply the HTTPS one again
    kubectl delete -f  test-nginx-ingress.yaml  -n nginx-test
    kubectl apply -f  test-nginx-ingress-https.yaml  -n nginx-test
    
  3. Open it in a browser again; HTTPS access now works:
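
If no real certificate is available for testing, a self-signed wildcard certificate can be generated for the secret in step 1 and the HTTPS path verified with curl (a sketch; <node02-ip> must be replaced, and -k is needed because the certificate is self-signed):

#generate a self-signed wildcard certificate (for testing only)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout _.wdyxgames.com.key -out _.wdyxgames.com.crt \
  -subj "/CN=*.wdyxgames.com"

#verify HTTPS through the ingress without editing the hosts file
curl -k --resolve nginx-test.wdyxgames.com:443:<node02-ip> https://nginx-test.wdyxgames.com/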

