Traefik
Traefik is a lightweight HTTP reverse proxy and load balancer written in Go. Because it configures itself and refreshes backend nodes automatically, it is supported by most container platforms, such as Kubernetes, Swarm, and Rancher. Traefik talks to the Kubernetes API in real time, so it reacts very quickly to changes in a Service's endpoints. Overall, Traefik runs very well in Kubernetes.
Traefik offers many other features as well:
- Fast
- No extra dependencies: a single executable compiled from Go
- Minimal official Docker image
- Supports many backends, such as Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and more
- REST API
- Hot-reloads its configuration without restarting the process
- Built-in automatic circuit breaking
- Round-robin load balancing
- Clean web UI
- Supports WebSocket, HTTP/2, and gRPC
- Automatic HTTPS certificate renewal
- High-availability cluster mode
Next, we use Traefik in place of Nginx + Ingress Controller for reverse proxying and service exposure.
So what is the difference between the two? In short: with nginx as the front-end load balancer in Kubernetes, the Ingress Controller continuously talks to the Kubernetes API, watches for changes to backend Services and Pods, dynamically rewrites the Nginx configuration, and then reloads it so the changes take effect; that is how service discovery is achieved. Traefik, by contrast, was designed to talk to the Kubernetes API natively: it senses changes to backend Services and Pods by itself and hot-reloads its own configuration. The end result is much the same, but Traefik is faster and simpler to operate, supports more features, and makes reverse proxying and load balancing more direct and efficient.
1.Role Based Access Control configuration (Kubernetes 1.6+ only)
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
This grants Traefik the permissions it needs; if anything is unclear, download the YAML and read it alongside the official docs.
2.Deploy Træfik using a Deployment or DaemonSet
- To deploy Træfik to your cluster start by submitting one of the YAML files to the cluster with kubectl:
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml  # this template has some issues, so I use the DS template first
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
The difference between a Deployment and a DaemonSet: a DaemonSet creates one Pod on every node, whereas a Deployment runs a replica count you control explicitly. With many nodes a DaemonSet is unnecessary; 100 nodes would mean 100 Pods, which is pointless. So take the DS template and adapt it yourself, as sketched below:
- Find the DS template and change kind to Deployment:
- kind: Deployment
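A minimal sketch of the full result, assuming the stock traefik-ds.yaml from the official examples as the starting point; the replica count is an assumption, pick one that fits your cluster:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 2                     # assumption: tune to your cluster
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik-ingress-lb
        image: traefik
        args:
        - --api                   # enable the web UI / REST API
        - --kubernetes            # watch the Kubernetes API for Ingress changes
        ports:
        - name: http
          containerPort: 80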
3.Check the Pods
- # kubectl --namespace=kube-system get pods -o wide
- traefik-ingress-controller-79877bbc66-p29jh 1/1 Running 0 32m 10.249.243.182 k8snode2-175v136
Check which node it landed on; with a Deployment, the Pod is scheduled onto an arbitrary node.
4.Ingress and UI
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
Now build a small web service of our own for testing:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
  labels:
    name: nginx-svc
spec:
  selector:
    run: ngx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ngx-pod
spec:
  replicas: 4
  template:
    metadata:
      labels:
        run: ngx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: www.ha.com
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
5.Test successful
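A quick sanity check without DNS, as a sketch; <node-ip> is a placeholder for any node running a Traefik Pod:
- curl -H "Host: www.ha.com" http://<node-ip>/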
6.HTTPS certificates
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80
  tls:
  - secretName: traefik-ui-tls-cert
How does the official guide import the certificate? Note: both the key and the crt are required.
- openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=traefik-ui.minikube"
- kubectl -n kube-system create secret tls traefik-ui-tls-cert --key=tls.key --cert=tls.crt
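A hedged way to verify, assuming traefik-ui.minikube resolves (e.g. via /etc/hosts) to a Traefik node and that Traefik listens on an HTTPS entrypoint; -k is needed because the certificate is self-signed:
- curl -k -I https://traefik-ui.minikube/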
7.Basic Authentication
- A. Use htpasswd to create a file containing the username and the MD5-encoded password:
- htpasswd -c ./auth myusername
- You will be prompted for a password which you will have to enter twice. htpasswd will create a file with the following:
- cat auth
- myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0
- B. Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd
- kubectl create secret generic mysecret --from-file auth --namespace=monitoring
- Note
- Secret must be in same namespace as the Ingress object.
- C. Attach the following annotations to the Ingress object:
- ingress.kubernetes.io/auth-type: "basic"
- ingress.kubernetes.io/auth-secret: "mysecret"
- They specify basic authentication and reference the Secret mysecret containing the credentials.
- Following is a full Ingress example based on Prometheus:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: dashboard.prometheus.example.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090
Template 1, exposing multiple domains. Check the UI page again: it updates right away, and you can see the newly configured dashboard.k8s.traefik and ela.k8s.traefik.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ela-k8s-traefik
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.k8s.traefik
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
  - host: ela.k8s.traefik
    http:
      paths:
      - path: /
        backend:
          serviceName: elasticsearch-logging
          servicePort: 9200
Template 2, routing by path. Note: since we forward based on path here, the rule type must be PathPrefixStrip, set via the annotation traefik.frontend.rule.type: PathPrefixStrip. The UI page again updates right away, showing the newly configured my.k8s.traefik/dashboard and my.k8s.traefik/kibana.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-k8s-traefik
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - host: my.k8s.traefik
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
      - path: /kibana
        backend:
          serviceName: kibana-logging
          servicePort: 5601
8.Automatic circuit breaking
When a service in the cluster starts producing a high rate of request errors, responds too slowly, or returns too many 500+ status codes, we want it evicted automatically, i.e. requests stop being forwarded to it, with no human intervention. Traefik makes this easy to configure: you define a policy expression and Traefik trips the circuit breaker for the backend when it matches.
- NetworkErrorRatio() > 0.5 : trip when the network error rate reaches 50%.
- LatencyAtQuantileMS(50.0) > 50 : trip when the median (50th percentile) latency exceeds 50 ms.
- ResponseCodeRatio(500, 600, 0, 600) > 0.5 : trip when status codes in [500, 600) account for more than 50% of those in [0, 600).
Example:

apiVersion: v1
kind: Service
metadata:
  name: wensleydale
  annotations:
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
    # A YAML mapping cannot repeat a key; to trip on latency instead
    # (median above 2 s), swap in this expression:
    # traefik.backend.circuitbreaker: "LatencyAtQuantileMS(50.0) > 2000"
9.Official documentation
For everything else, consult the official docs:
https://docs.traefik.io/user-guide/kubernetes/
10.Update
As the business grew, more nodes were added; with 20+ nodes, DaemonSet mode wastes resources. How do we pin Traefik to just a few machines? After reading around, the solution is: label the nodes and let the DaemonSet run only on the labeled ones. Reference: https://www.kubernetes.org.cn/daemonset
Example:
Label three nodes (run the command once per node, substituting each node's name):
- kubectl label node k8snode10-146v78-taiji traefik=svc
- # To remove a label:
- kubectl label node k8snode1-174v136-taiji traefik-
- Check the labels:
- [root@k8s-m1 Traefik]# kubectl get nodes --show-labels
- NAME STATUS ROLES AGE VERSION LABELS
- k8snode1-174v136-taiji Ready node 42d v1.10.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8snode1-174v136-taiji,node-role.kubernetes.io/node=,traefik=svc
- [root@k8s-m1 Traefik]# cat traefik-ds.yaml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      nodeSelector:
        traefik: "svc"    # these two lines are the key change
      ...................
- Verify:
- [root@k8s-m1 Traefik]# kubectl get ds -n kube-system
- NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
- traefik-ingress-controller 3 3 3 3 3 traefik=svc
Summary: as traffic grows, we can scale Traefik out later simply by labeling more nodes.
11.Rate limiting
From the official docs, valid values for extractorfunc are:
- client.ip
- request.host
- request.header.<header name>
We limit along two of these dimensions: request.host and client.ip.
The first priority is to make sure that the IP handed over by the HAProxy / LVS / nginx / CDN tier in front of Traefik is the real user IP, not the IP of that load-balancing tier.
HAProxy can pass it with either of these options:
- option forwardfor              # either one works
- option forwardfor header Client-IP
nginx can pass the client IP to its upstream via the X-Forwarded-For header:
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
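The Traefik side of the rate limit is not shown above; here is a minimal sketch using the per-Ingress annotation from Traefik v1.7, where the period/average/burst numbers and the backend Service are placeholder assumptions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    # extractorfunc picks the bucketing dimension: client.ip or request.host
    traefik.ingress.kubernetes.io/rate-limit: |
      extractorfunc: client.ip
      rateset:
        rateset1:
          period: 3s      # placeholder: allow `average` requests per 3 s
          average: 6
          burst: 9        # short bursts above `average` up to this many
spec:
  rules:
  - host: test.if.org
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80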
1. client.ip verification: we tested through HAProxy, which passed the client IP on to Traefik. The client.ip rate limit behaves as expected:
- "time":"2019-01-10T02:03:25Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:25Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:29Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:29Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:29Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:32Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:32Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
2. request.host verification: again tested through HAProxy; the request.host rate limit behaves as expected:
- "time":"2019-01-10T03:14:10Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:13Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:15Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:15Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:21Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:24Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
12.Canary (A/B) releases
Canary weight support for Traefik on Kubernetes was fixed upstream in v1.7.5:
- [k8s] Support canary weight for external name service (#4135 by yue9944882)
With weights applied, the Traefik API reports the two backends like this:
- "test.if.org/": {
- "servers": {
- "hpa-httpd-5856fd66bf-2qpm6": {
- "url": "http://10.249.221.61:80",
- "weight": 90000
- },
- "hpb-httpd-6bc6f55488-mllq2": {
- "url": "http://10.249.89.29:80",
- "weight": 10000
- }
- },
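A sketch of how such a 90/10 split is declared, assuming the two backends are Services named hpa-httpd and hpb-httpd (names inferred from the pod names above) behind the same host; Traefik v1.7's service-weights annotation distributes the traffic:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: httpd-canary
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/service-weights: |
      hpa-httpd: 90%
      hpb-httpd: 10%
spec:
  rules:
  - host: test.if.org
    http:
      paths:
      - path: /
        backend:
          serviceName: hpa-httpd
          servicePort: 80
      - path: /
        backend:
          serviceName: hpb-httpd
          servicePort: 80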
13.Session affinity (sticky sessions)
How stickiness works: on a client's first request, the reverse proxy picks a backend for it and sends that backend's identity back to the client in a Set-Cookie header. On every later request to the proxy, the client presents this cookie, which records the backend the proxy assigned last time, so it is routed to the same backend. In Nginx this mechanism is provided by a plugin named Sticky; Traefik integrates the same functionality and makes it easy to enable and configure in Kubernetes.
This solves the problem where a user authenticates against Pod A but the second request lands on Pod B and the session is lost; stickiness keeps each client on a consistent backend.
Configure it on the Service:

metadata:
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
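For context, a minimal sketch of a complete Service with affinity enabled, reusing the hypothetical nginx-svc from the earlier test app; the cookie-name annotation is optional in Traefik v1.7:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    # optional: choose the sticky cookie's name
    # traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  selector:
    run: ngx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80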
Verify:
The response sets a cookie, and subsequent web request headers carry it.
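A quick hedged check, again using the placeholder <node-ip> for a node running Traefik; the first response should answer with a Set-Cookie header, and replaying the request with that cookie should keep hitting the same Pod:
- curl -sI -H "Host: www.ha.com" http://<node-ip>/ | grep -i set-cookie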