Deploying Traefik 2.3 on Kubernetes 1.20


I. Overview

Traefik is an open-source edge router that makes publishing services easy. It receives requests on behalf of your system and works out which components should handle them.
Beyond its many features, what sets Traefik apart is that it automatically discovers the right configuration for your services. It inspects your infrastructure, finds the relevant information, and determines which service should serve each request.
Traefik is compatible with all major cluster technologies, such as Kubernetes, Docker, Docker Swarm, AWS, Mesos, and Marathon, and it can handle several of them at the same time (it even works for legacy software running on bare metal).
With Traefik there is no separate configuration file to maintain and synchronize: everything happens automatically and in real time (no restarts, no dropped connections). You can spend your time developing and deploying new features instead of configuring and babysitting the routing layer.

II. Concepts

Edge Router

Traefik is an edge router: it is the front door to your platform, intercepting and routing every incoming request. It knows all the logic and rules that determine which services handle which requests. A traditional reverse proxy needs a configuration file listing every possible route to your services, whereas Traefik detects services in real time and updates the routing rules automatically, giving you automatic service discovery.

Auto Service Discovery

A traditional edge router (or reverse proxy) needs a configuration file containing every possible route to your services; Traefik gets this information from the services themselves.
When you deploy a service, you attach some metadata that tells Traefik the characteristics of the requests the service can handle.
This means that when a service is deployed, Traefik detects it immediately and updates the routing rules in real time. The reverse is also true: when you remove a service from your infrastructure, its route disappears accordingly.
You no longer need to create and synchronize configuration files cluttered with IP addresses or other rules.

Core concepts

  • Providers discover the services that live on your platform; they can be orchestrators, container engines, or key-value stores, for example Docker, Kubernetes, or a file.
  • Entrypoints listen for incoming traffic (ports, etc.); they are the network entry points and define the ports (HTTP or TCP) on which requests are received.
  • Routers analyse requests (host, path, headers, SSL, ...) and connect incoming requests to the services that can handle them.
  • Services forward requests to your applications (load balancing, ...) and configure how to reach the actual services that will ultimately handle the requests.
  • Middlewares modify requests or make decisions based on them (authentication, rate limiting, headers, ...); middlewares are attached to routers and are a way to tweak requests before they reach your service (or to adjust responses before they are sent back to clients); a minimal sketch tying these pieces together follows this list.
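
To make these pieces concrete, here is a minimal sketch (all names here are hypothetical, not part of this deployment) of how an entry point, a router rule, a middleware, and a service fit together when using the Kubernetes CRD provider that this article relies on:

## Static configuration (traefik.yaml): one entry point named "web" on port 80
entryPoints:
  web:
    address: ":80"

## Dynamic configuration (Kubernetes CRD): a router rule, a middleware, and a service
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: example-route              # hypothetical name
  namespace: default
spec:
  entryPoints:
    - web                          # the entry point this router listens on
  routes:
    - match: Host(`example.local`) # the router: matches incoming requests
      kind: Rule
      middlewares:
        - name: example-headers    # a middleware attached to the router (hypothetical)
      services:
        - name: example-svc        # the Kubernetes Service that ultimately handles the request
          port: 80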

III. Installation

Traefik 2.x is not compatible with the earlier 1.x releases, and 1.x is no longer maintained, so the more capable 2.x series is used here; the image referenced in this article is traefik:2.3.7.

Create the traefik-crd.yaml file

Since Traefik v2.1, routing configuration is done through CRDs (Custom Resource Definitions), so the CRD resources need to be created first.

## IngressRoute
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
---
## IngressRouteTCP
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
---
## Middleware
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
---
## TLSOption
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
---
## TraefikService
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
---
## TLSStore
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsstores.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSStore
    plural: tlsstores
    singular: tlsstore
---
## IngressRouteUDP
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressrouteudps.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteUDP
    plural: ingressrouteudps
    singular: ingressrouteudp
# Deploy the CRD resources
# kubectl create -f traefik-crd.yaml
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/tlsstores.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/ingressrouteudps.traefik.containo.us created

Create RBAC permissions

Kubernetes introduced role-based access control (RBAC) in version 1.6 and later, allowing fine-grained control over Kubernetes resources and APIs. Traefik needs certain permissions, so create the Traefik ServiceAccount in advance and grant it the required permissions.

# cat traefik-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
# Deploy the Traefik RBAC resources
# kubectl create -f traefik-rbac.yaml 
serviceaccount/traefik-ingress-controller created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created

The same RBAC manifest can also be written in a more compact form:

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: traefik-ingress-controller
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","secrets"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses","networking.k8s.io"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["middlewares"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes","traefikservices"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutetcps","ingressrouteudps"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["tlsoptions","tlsstores"]
    verbs: ["get","list","watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system

Create the Traefik configuration file

Traefik supports two different kinds of configuration:

  • Dynamic configuration: the fully dynamic routing configuration
  • Static configuration: the startup configuration

The elements of the static configuration (which rarely change) connect to providers and define the entrypoints Traefik will listen on.

There are three ways to define the static configuration: in a configuration file, via command-line arguments, or through environment variables.
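
For illustration (a sketch, not from the original article), the same "web" entry point on port 80 can be expressed in each of the three forms; the CLI flag and the TRAEFIK_-prefixed environment variable follow Traefik v2's naming convention:

## 1. Configuration file (traefik.yaml)
entryPoints:
  web:
    address: ":80"

## 2. Command-line argument
# traefik --entrypoints.web.address=:80

## 3. Environment variable
# TRAEFIK_ENTRYPOINTS_WEB_ADDRESS=":80" traefik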

The dynamic configuration contains everything that defines how the system handles requests. It can change at any time and is reloaded seamlessly, without any request interruption or connection loss.

Because Traefik has a large number of configuration options, defining them all on the CLI is cumbersome; it is preferable to put them in a configuration file, store it in a ConfigMap, and mount it into the Traefik container.

# cat traefik-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
  namespace: kube-system
data:
  traefik.yaml: |-
    serversTransport:
      insecureSkipVerify: true  ## Skip TLS certificate verification for proxied backend services
    api:
      insecure: true            ## Allow access to the API over plain HTTP
      dashboard: true           ## Enable the dashboard
      debug: true               ## Enable debug mode
    metrics:
      prometheus:               ## Expose Prometheus metrics with the default settings
        entryPoint: metrics     ## Serve the metrics on the "metrics" entry point (:8082)
    entryPoints:
      web:
        address: ":80"          ## Port 80, entry point named "web"
      websecure:
        address: ":443"         ## Port 443, entry point named "websecure"
      traefik:
        address: ":8090"        ## Port 8090, entry point named "traefik" (serves the dashboard/API)
      metrics:
        address: ":8082"        ## Port 8082, entry point used for metrics collection
      tcpep:
        address: ":8000"        ## Port 8000, TCP entry point
      udpep:
        address: ":9000/udp"    ## Port 9000, UDP entry point
    providers:
      kubernetescrd:            ## Enable the Kubernetes CRD provider for routing rules
        ingressclass: traefik-v2.3
      kubernetesingress:        ## Enable the Kubernetes Ingress provider for routing rules
        ingressclass: traefik-v2.3
    log:
      filePath: "/etc/traefik/logs/traefik.log"    ## Traefik log file path; if empty, logs go to stdout
      level: error              ## Log level
      format: json              ## Log format
    accessLog:
      filePath: "/etc/traefik/logs/access.log"     ## Access log file path; if empty, logs go to stdout
      format: json              ## Access log format
      bufferingSize: 0          ## Number of access log lines to buffer
      filters:
        #statusCodes: ["200"]   ## Keep only access logs whose status code is in the given list
        retryAttempts: true     ## Keep access logs when at least one retry happened
        minDuration: 20         ## Keep access logs for requests lasting longer than the given duration
      fields:                   ## Which access log fields to keep (keep / drop)
        defaultMode: keep       ## Keep all fields by default
        names:                  ## Per-field overrides
          ClientUsername: drop
        headers:                ## Which request headers to keep
          defaultMode: keep     ## Keep all headers by default
          names:                ## Per-header overrides
            User-Agent: redact
            Authorization: drop
            Content-Type: keep
# Deploy the Traefik ConfigMap resource
# kubectl create -f traefik-config.yaml 
configmap/traefik-config created

Deploy Traefik

Label the nodes in advance; when the workload is deployed, its Pods will automatically be scheduled onto the nodes carrying this label.

# kubectl label nodes develop-master-1 IngressProxy=traefik2.3
node/develop-master-1 labeled
# kubectl label nodes develop-worker-1 IngressProxy=traefik2.3
node/develop-worker-1 labeled
# kubectl label nodes develop-worker-2 IngressProxy=traefik2.3
node/develop-worker-2 labeled

# Verify that the node labels were applied
# kubectl get node --show-labels              
NAME               STATUS   ROLES                              AGE   VERSION   LABELS
develop-master-1   Ready    control-plane,etcd,master,worker   98d   v1.20.4   IngressProxy=traefik2.3,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=develop-master-1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/etcd=,node-role.kubernetes.io/master=,node-role.kubernetes.io/worker=
develop-worker-1   Ready    worker                             98d   v1.20.4   IngressProxy=traefik2.3,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=develop-worker-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=
develop-worker-2   Ready    worker                             98d   v1.20.4   IngressProxy=traefik2.3,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=develop-worker-2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,worker=worker2

# To remove the label from a node:
# kubectl label nodes develop-master-1 IngressProxy-
# kubectl label nodes develop-worker-1 IngressProxy-
# kubectl label nodes develop-worker-2 IngressProxy-
# cat traefik-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-v2
  namespace: kube-system
  labels:
    app: traefik-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefik-v2
  template:
    metadata:
      labels:
        app: traefik-v2
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 1
      containers:
        - name: traefik-v2
          image: traefik:v2.3
          args:
            - --configfile=/config/traefik.yaml
          ports:
            - name: web
              containerPort: 80
              hostPort: 80           # expose the port directly on the cluster node via hostPort
            - name: websecure
              containerPort: 443
              hostPort: 443          # expose the port directly on the cluster node via hostPort
            - name: admin
              containerPort: 8090
            - name: tcpep
              containerPort: 8000
            - name: udpep
              containerPort: 9000
          resources:
            limits:
              cpu: 500m
              memory: 1024Mi
            requests:
              cpu: 300m
              memory: 1024Mi
          securityContext:
            capabilities:              ## drop everything except the network-binding capability
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          volumeMounts:
          - mountPath: "/config"
            name: "config"
          - mountPath: /etc/traefik/logs
            name: logdir
          - mountPath: /etc/localtime
            name: timezone
            readOnly: true
      volumes:
        - name: config
          configMap:
            name: traefik-config 
        - name: logdir
          hostPath:
            path: /data/traefik/logs
            type: "DirectoryOrCreate"
        - name: timezone
          hostPath:
            path: /etc/localtime
            type: File
      tolerations:            
        - operator: "Exists"        ## 設置容忍所有污點,防止節點被設置污點
      hostNetwork: true             ## 開啟host網絡,提高網絡入口的網絡性能
      nodeSelector:                 ## 設置node篩選器,在特定label的節點上啟動
        IngressProxy: "traefik2.3"
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-v2
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: traefik-v2
  ports:
    - protocol: TCP
      port: 80
      name: web
      targetPort: 80
    - protocol: TCP
      port: 443
      name: websecure
      targetPort: 443
    - protocol: TCP
      port: 8090
      name: admin
      targetPort: 8090
    - protocol: TCP
      port: 8000
      name: tcpep
      targetPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: traefikudp-v2
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: traefik-v2
  ports:
    - protocol: UDP
      port: 9000
      name: udpep
      targetPort: 9000

A Deployment is used so that Traefik can easily be scaled across multiple servers, and hostPort is used to bind ports 80 and 443 on the nodes so that traffic can enter directly.

# Deploy Traefik
# kubectl create -f traefik-deploy.yaml 
deployment.apps/traefik-v2 created
service/traefik-v2 created
service/traefikudp-v2 created

At this point the Traefik v2.3 deployment is complete.
You can now open http://<node IP>:8090 and see the dashboard.
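
Before opening the dashboard you can check that the Pods were scheduled onto the labelled nodes and started cleanly; a quick verification with standard kubectl commands might look like this:

# kubectl get pods -n kube-system -l app=traefik-v2 -o wide
# kubectl logs -n kube-system -l app=traefik-v2 --tail=20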

IV. Routing Configuration

1. Configuring HTTP routing rules (Traefik dashboard as an example)

Traefik itself is deployed, but for external clients to reach services inside Kubernetes, routing rules are still needed. Since the Traefik dashboard was enabled above, the first routing rule configured here exposes the Traefik dashboard so it can be reached from outside.

Create the Traefik dashboard routing rule file traefik-dashboard-route.yaml

Because the static configuration specifies an ingressclass, the annotation must be set here as well; otherwise requests will return 404.

# cat traefik-dashboard-route.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik-v2.3     
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`www.traefiktest.com`) 
    kind: Rule
    services:
    - name: api@internal
      kind: TraefikService
# Deploy the Traefik dashboard routing rule object
# kubectl create -f traefik-dashboard-route.yaml
ingressroute.traefik.containo.us/traefik-dashboard created

Clients access the service by domain name, so the name must resolve: either add a record to a DNS server, or edit the hosts file to map the chosen hostname to the IP of a node running Traefik.

# cat hosts
192.168.2.163 www.traefiktest.com

Open any browser and go to http://www.traefiktest.com to reach the Traefik dashboard.
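
If you prefer to verify the route from the command line without editing the hosts file, a hedged alternative is to pass the Host header explicitly with curl against a node IP (the IP below is the one from the hosts example):

# curl -s -o /dev/null -w "%{http_code}\n" -H "Host: www.traefiktest.com" http://192.168.2.163/dashboard/
# curl -s -H "Host: www.traefiktest.com" http://192.168.2.163/api/rawdata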

Authentication is not configured here; if you want to protect the dashboard with a login, use a middleware.
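
As a sketch of that approach (not part of the original deployment; the names dashboard-auth and dashboard-auth-users are hypothetical), a basicAuth middleware can reference a Secret holding htpasswd-style entries and then be listed under spec.routes[].middlewares of the dashboard IngressRoute:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth                # hypothetical middleware name
  namespace: kube-system
spec:
  basicAuth:
    secret: dashboard-auth-users      # Secret whose "users" key contains htpasswd entries

# Generate an entry and create the Secret, for example:
# htpasswd -nb admin 'yourpassword' > users
# kubectl create secret generic dashboard-auth-users --from-file=users -n kube-system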

2. Configuring HTTP routing rules

Traefik is deployed, but exposing services inside Kubernetes to the outside still requires routing rules; whoami is used as the example here.

# First create the whoami Deployments
# cat whoami.yaml
## An HTTP application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
  labels:
    app: traefiklabs
    name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefiklabs
      task: whoami
  template:
    metadata:
      labels:
        app: traefiklabs
        task: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
---
## Service for the HTTP application
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: default
spec:
  ports:
    - name: http
      port: 80
  selector:
    app: traefiklabs
    task: whoami
---
## A TCP application
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamitcp
  namespace: default
  labels:
    app: traefiklabs
    name: whoamitcp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefiklabs
      task: whoamitcp
  template:
    metadata:
      labels:
        app: traefiklabs
        task: whoamitcp
    spec:
      containers:
        - name: whoamitcp
          image: traefik/whoamitcp
          ports:
            - containerPort: 8080
---
## Service for the TCP application
apiVersion: v1
kind: Service
metadata:
  name: whoamitcp
  namespace: default
spec:
  ports:
    - protocol: TCP
      port: 8080
  selector:
    app: traefiklabs
    task: whoamitcp
---
## A UDP application
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiudp
  namespace: default
  labels:
    app: traefiklabs
    name: whoamiudp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefiklabs
      task: whoamiudp
  template:
    metadata:
      labels:
        app: traefiklabs
        task: whoamiudp
    spec:
      containers:
        - name: whoamiudp
          image: traefik/whoamiudp:latest
          ports:
            - containerPort: 8080
---
## Service for the UDP application
apiVersion: v1
kind: Service
metadata:
  name: whoamiudp
  namespace: default
spec:
  ports:
    - port: 8080
  selector:
    app: traefiklabs
    task: whoamiudp
# kubectl create -f whoami.yaml
deployment.apps/whoami created
service/whoami created
deployment.apps/whoamitcp created
service/whoamitcp created
deployment.apps/whoamiudp created
service/whoamiudp created
# Create the whoami routing rule file whoami-ingreoute.yaml
# cat whoami-ingreoute.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: myingressroute
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik-v2.3   
spec:
  entryPoints:
    - web # must match the entry point name in the ConfigMap
  routes:
  - match: Host(`whoami.foxchan.com`) && PathPrefix(`/`)
    kind: Rule
    services:
    - name: whoami
      port: 80
# kubectl create -f whoami-ingreoute.yaml 
ingressroute.traefik.containo.us/myingressroute created

Add the following entry to the hosts file on the client machine:

192.168.2.163 whoami.foxchan.com

Checking the result: with the IngressRoute created from this YAML file, the same browser keeps hitting the same pod, whereas with an IngressRoute created through Kuboard (see below), the same browser hits a different pod on every refresh.
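
To see which pod answers each request from the command line (a sketch; whoami prints its pod name in the Hostname field, and the node IP is the one from the hosts entry above):

# for i in $(seq 1 5); do curl -s -H "Host: whoami.foxchan.com" http://192.168.2.163/ | grep Hostname; done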


Issues

1. The Traefik IngressRoute created from the YAML file above works and is also visible in the Kuboard UI, although it appears under Cluster Management → Custom Resources → traefik.containo.us.


2. With the IngressRoute deployed from the YAML file, requests stick to a single pod, while with an IngressRoute created through the Kuboard UI the serving pod changes between requests.

Recommendation

Deploy the application up to the Service level, then create the Traefik IngressRoute with Kuboard, and remember to set the annotation:

  annotations:
    kubernetes.io/ingress.class: traefik-v2.3

3. Configuring HTTPS routing rules

To access this application over HTTPS, the route must listen on the websecure entry point, i.e. port 443, and HTTPS of course requires a certificate; use openssl to create a self-signed one:

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=whoami.foxchan.com"
Generating a 2048 bit RSA private key
...................................+++
...........................................+++
writing new private key to 'tls.key'
-----

# ll
-rw-r--r-- 1 root root 1119 Jul  8 15:55 tls.crt
-rw-r--r-- 1 root root 1704 Jul  8 15:55 tls.key

Reference the certificate files through a Secret object:

# Note that the certificate file names must be tls.crt and tls.key
# kubectl create secret tls who-tls --cert=tls.crt --key=tls.key
secret/who-tls created
# Alternative way
kubectl create secret generic who-tls --from-file=tls.crt --from-file=tls.key -n default

Create an IngressRoute object for accessing the application over HTTPS:

# cat whoami-ingreoute-tls.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  annotations:
    kubernetes.io/ingress.class: traefik-v2.3   
spec:
  entryPoints:
    - websecure # must match the entry point name in the ConfigMap
  routes:
  - match: Host(`whoami.foxchan.com`) 
    kind: Rule
    services:
    - name: whoami
      port: 80
  tls:
    secretName: who-tls
# kubectl create -f whoami-ingreoute-tls.yaml
ingressroute.traefik.containo.us/ingressroutetls created
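
The HTTPS route can be checked without touching DNS by letting curl resolve the name itself; -k is required because the certificate is self-signed (the node IP is the one used earlier):

# curl -k --resolve whoami.foxchan.com:443:192.168.2.163 https://whoami.foxchan.com/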


Recommendation

Deploy the application up to the Service level, create the Secret with Kuboard, then create the Traefik IngressRoute with Kuboard, and remember to set the annotation:

  annotations:
    kubernetes.io/ingress.class: traefik-v2.3

4. Configuring TCP routing rules

# cat whoami-tcp.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ingressroutetcpwho
  annotations:
    kubernetes.io/ingress.class: traefik-v2.3 
spec:
  entryPoints:
    - tcpep # must match the entry point name in the ConfigMap
  routes:
  - match: HostSNI(`*`)
    services:
    - name: whoamitcp
      port: 8080
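
After applying this manifest, the TCP route can be exercised with a plain TCP client; the whoamitcp container should reply with its pod details when it receives the text command WHO (a sketch, assuming nc is installed and using a node IP from earlier):

# kubectl create -f whoami-tcp.yaml
# echo "WHO" | nc 192.168.2.163 8000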

5. Configuring UDP routing rules

# whoami-udp.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: ingressrouteudpwho
  annotations:
    kubernetes.io/ingress.class: traefik-v2.3   
spec:
  entryPoints:
    - udpep # must match the entry point name in the ConfigMap
  routes:
  - services:
    - name: whoamiudp
      port: 8080
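
The UDP route can be tested in a similar, hedged way by sending WHO over UDP to the udpep entry point (this assumes the whoamiudp image answers the same WHO command; -u selects UDP and -w1 sets a one-second timeout):

# kubectl create -f whoami-udp.yaml
# echo "WHO" | nc -u -w1 192.168.2.163 9000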

V. Middleware

Middleware is one of the most distinctive features of Traefik 2.0: you can pick whatever middleware fits your needs. Traefik ships with many built-in middlewares, some of which modify requests or headers, some handle redirects, some add authentication, and so on; middlewares can also be chained together to cover all kinds of situations.

IP whitelist example

Allow only whitelisted IPs to access the dashboard.

# Create the whitelist middleware
# cat middleware-ipwhitelist.yaml
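
The content of this file was not shown in the original article; based on the middleware name gs-ipwhitelist referenced below, it would be an ipWhiteList middleware along these lines (the CIDR is a placeholder to replace with your own allowed ranges):

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: gs-ipwhitelist
  namespace: kube-system
spec:
  ipWhiteList:
    sourceRange:
      - 192.168.2.0/24      # placeholder: client IPs/CIDRs allowed to reach the dashboard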

# Then attach this middleware to the dashboard route
# cat traefik-dashboard-route.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik-v2.3     
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`whoami.foxchan.com`) 
    kind: Rule
    services:
    - name: api@internal
      kind: TraefikService
    middlewares:       # attach the middleware by name here
      - name: gs-ipwhitelist

If we now access the dashboard again, clients whose IP is not on the whitelist get a 403.

VI. Advanced Routing Configuration

Before moving on to Traefik's advanced usage, one more concept is needed: TraefikService. Registering TraefikService objects through the CRD makes it possible to express more complex request handling.

TraefikService can currently be used for the following features:

  • servers load balancing
  • services weighted round robin load balancing
  • services mirroring

1. Load balancing

# Create the Kubernetes Services
# cat svc-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc1
  namespace: default
spec:
  ports:
    - name: http
      port: 80
  selector:
    app: v1
---
apiVersion: v1
kind: Service
metadata:
  name: svc2
  namespace: default
spec:
  ports:
    - name: http
      port: 80
  selector:
    app: v2 
# Create the IngressRoute
# cat svc-service-ingressroute.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutelb
  namespace: default

spec:
  entryPoints:
    - web
  routes:
  - match: Host(`whoami.foxchan.com`)
    kind: Rule
    services:
    - name: svc1
      namespace: default
    - name: svc2
      namespace: default

2. Weighted round robin

# Create the TraefikService
# cat wrr-service.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: wrr
  namespace: default
spec:
  weighted:
    services:
      - name: svc1    
        port: 80
        weight: 3          # define the weight
        kind: Service      # optional; Service is the default
      - name: svc2
        port: 80     
        weight: 1
# Create the IngressRoute
# Note that the referenced service is no longer a plain Kubernetes object but the TraefikService defined above
# cat wrr-service-ingressout.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutewrr
  namespace: default
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`who.foxchan.com`)
    kind: Rule
    services:
    - name: wrr
      namespace: default
      kind: TraefikService
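
The rough 3:1 split can be observed by sending a batch of requests and counting which pods answer; this assumes svc1 and svc2 are backed by whoami-style Deployments (not shown in this article) that report their Hostname, and uses the same node IP as before:

# for i in $(seq 1 20); do curl -s -H "Host: who.foxchan.com" http://192.168.2.163/ | grep Hostname; done | sort | uniq -c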

3. Mirroring

1. Mirror traffic to a Kubernetes Service

# Mirroring from a k8s Service
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: mirror-k8s
  namespace: default
spec:
  mirroring:
    name: svc1       # send 100% of the requests to the Kubernetes Service svc1
    port: 80
    mirrors:
      - name: svc2   # and mirror 20% of the requests to svc2
        port: 80
        percent: 20

2. Mirror traffic from a TraefikService

# Mirroring from a Traefik Service
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: mirror-ts
  namespace: default
spec:
  mirroring:
    name: mirror-k8s          # the main traffic target is the TraefikService defined above
    kind: TraefikService
    mirrors:
      - name: svc2
        port: 80
        percent: 20

3. Create the IngressRoute

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute-mirror
  namespace: default
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`who.foxchan.com`) 
    kind: Rule
    services:
    - name: mirror-k8s          
      namespace: default
      kind: TraefikService

YAML file download link

https://files.cnblogs.com/files/sanduzxcvbnm/yaml文件.zip

Upgrading the Traefik image version

The Traefik image used throughout this article is traefik:v2.3. If you simply switch it to traefik:v2.5, the container starts but logs errors such as:

E0708 17:33:27.478347    1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1alpha1.MiddlewareTCP: failed to list *v1alpha1.MiddlewareTCP: middlewaretcps.traefik.containo.us is forbidden: User "system:serviceaccount:kube-system:traefik-ingress-controller" cannot list resource "middlewaretcps" in API group "traefik.containo.us" at the cluster scope

Startup log after launching v2.3:

Startup log after launching v2.5:

It appears, then, that upgrading the image also requires updating the YAML manifests first: the error shows that v2.5 expects additional CRDs (such as middlewaretcps) and the matching RBAC permissions.
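
As a sketch of what that implies (derived from the error above, which complains about middlewaretcps; newer releases may also need further CRDs such as serverstransports, so check the release notes), the missing CRD would look like the ones created earlier, and middlewaretcps would also have to be added to the traefik.containo.us rule of the traefik-ingress-controller ClusterRole:

## MiddlewareTCP (required by Traefik v2.4+)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewaretcps.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: MiddlewareTCP
    plural: middlewaretcps
    singular: middlewaretcp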

References:
https://blog.51cto.com/foxhound/2545116?source=dra
https://www.cnblogs.com/heian99/p/14608414.html
https://blog.51cto.com/u_13760351/2764008

