Istio Traffic Management (Hands-on, Part 1)
This walkthrough uses the official Bookinfo application for testing. It covers request routing, fault injection, traffic shifting, TCP traffic shifting, request timeouts, circuit breaking, and traffic mirroring from the Traffic Management section of the official documentation. Ingress and Egress are not covered here and will be added later.
Deploying the Bookinfo Application
About the Bookinfo application
The official test application consists of the following four components:
- productpage: calls the details and reviews services to populate the web page.
- details: contains book information.
- reviews: contains book reviews and calls the ratings service.
- ratings: contains the ranking information associated with book reviews.

The reviews service has three versions:
- v1 does not call the ratings service.
- v2 calls the ratings service and displays each rating as 1 to 5 black stars.
- v3 calls the ratings service and displays each rating as 1 to 5 red stars.

Deployment
Deploy the Bookinfo application in the default namespace, using automatic sidecar injection:
- Enable automatic sidecar injection in the default namespace (any other namespace works too; the Bookinfo manifests do not specify a namespace). On OpenShift, also create a NetworkAttachmentDefinition for istio-cni:

$ cat <<EOF | oc -n <target-namespace> create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF

$ kubectl label namespace default istio-injection=enabled

- Switch to the default namespace and deploy the Bookinfo application:

$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

After a short while all Bookinfo pods start successfully. Check the pods and services:
$ oc get pod
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-78d78fbddf-5mfv9       2/2     Running   0          2m27s
productpage-v1-85b9bf9cd7-mfn47   2/2     Running   0          2m27s
ratings-v1-6c9dbf6b45-nm6cs       2/2     Running   0          2m27s
reviews-v1-564b97f875-ns9vz       2/2     Running   0          2m27s
reviews-v2-568c7c9d8f-6r6rq       2/2     Running   0          2m27s
reviews-v3-67b4988599-ddknm       2/2     Running   0          2m27s

$ oc get svc
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.84.97.183   <none>        9080/TCP   3m33s
kubernetes    ClusterIP   10.84.0.1      <none>        443/TCP    14d
productpage   ClusterIP   10.84.98.111   <none>        9080/TCP   3m33s
ratings       ClusterIP   10.84.237.68   <none>        9080/TCP   3m33s
reviews       ClusterIP   10.84.39.249   <none>        9080/TCP   3m33s

Use the following command to verify that Bookinfo is installed correctly:
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>

# The same result can also be fetched directly through the service's endpoint:

$ oc describe svc productpage | grep Endpoint
Endpoints:         10.83.1.85:9080

$ curl -s 10.83.1.85:9080/productpage | grep -o "<title>.*</title>"

On OpenShift you can also create a Route (OpenShift's router, playing the role of a Kubernetes ingress gateway) to reach the page; replace ${HOST_NAME} with the actual hostname:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: productpage
  namespace: default
  labels:
    app: productpage
    service: productpage
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: ${HOST_NAME}
  to:
    kind: Service
    name: productpage
    weight: 100
  port:
    targetPort: http
  wildcardPolicy: None

If you followed the official documentation to create an ingress but the application cannot be reached through the ingress (route) while it can still be reached through the Kubernetes service, check whether the version of the injected istio sidecar matches the istio control-plane version. After an upgrade, sidecars usually need to be re-injected.
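If you suspect such a mismatch, a quick way to compare the control-plane and sidecar versions is istioctl; a rough sketch (the exact output format varies between istio releases):

$ istioctl version        # prints client, control plane and data plane versions
$ istioctl proxy-status   # lists every sidecar with its sync status and proxy version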
- Apply the default destination rules.

Without mutual TLS:

$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml

With mutual TLS (not recommended when you are just starting to learn istio):

$ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml

List the configured destination rules:
$ kubectl get destinationrules -o yaml

The destination rules are shown below. Note that in a default installation, all services except reviews only have a v1 deployment:

- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    annotations:
      ...
    name: details
    namespace: default
  spec:
    host: details              # corresponds to the kubernetes service "details"
    subsets:
    - labels:                  # the actual details deployment only carries the label "version: v1"
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    annotations:
      ...
    name: productpage
    namespace: default
  spec:
    host: productpage
    subsets:
    - labels:
        version: v1
      name: v1
- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    annotations:
      ...
    name: ratings
    namespace: default
  spec:
    host: ratings
    subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
    - labels:
        version: v2-mysql
      name: v2-mysql
    - labels:
        version: v2-mysql-vm
      name: v2-mysql-vm
- apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    annotations:
      ...
    name: reviews              # the kubernetes service "reviews" actually has 3 versions
    namespace: default
  spec:
    host: reviews
    subsets:
    - labels:
        version: v1
      name: v1
    - labels:
        version: v2
      name: v2
    - labels:
        version: v3
      name: v3
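The subsets above only take effect if the pods actually carry the matching version labels; a quick sanity check with plain kubectl (the reviews service is used here as an example):

$ kubectl get pods -l app=reviews --show-labels
# expect labels such as app=reviews,version=v1|v2|v3, matching the subsets defined above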
Cleanup
Bookinfo can be removed with the following command:
$ samples/bookinfo/platform/kube/cleanup.sh
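To confirm the shutdown is complete, the following checks should show no remaining Bookinfo resources (this mirrors the confirmation step in the official docs):

$ kubectl get virtualservices    # no Bookinfo virtual services left
$ kubectl get destinationrules   # no Bookinfo destination rules left
$ kubectl get gateway            # no Bookinfo gateway left
$ kubectl get pods               # Bookinfo pods are gone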
Traffic Management
Request Routing
This section shows how to dynamically route requests across the multiple versions of the Bookinfo microservices. After deploying Bookinfo as above, there are three versions of the reviews service, showing no ratings, black-star ratings, and red-star ratings respectively. Because istio distributes requests across the three reviews services in round-robin fashion by default, refreshing the /productpage page cycles through the following results:
- v1 version:

- v2 version:

- v3 version:

This task routes all requests to a single version of the reviews service.
First, create the following virtual services:
$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
View the routing configuration:
$ kubectl get virtualservices -o yaml
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: details
    namespace: default
  spec:
    hosts:
    - details
    http:
    - route:
      - destination:
          host: details
          subset: v1
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: productpage
    namespace: default
  spec:
    hosts:
    - productpage
    http:
    - route:
      - destination:
          host: productpage
          subset: v1
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: ratings
    namespace: default
  spec:
    hosts:
    - ratings
    http:
    - route:
      - destination:
          host: ratings
          subset: v1
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: reviews
    namespace: default
  spec:
    hosts:
    - reviews
    http:
    - route:
      - destination:           # all traffic is routed to the v1 subset of the reviews service
          host: reviews        # the kubernetes service, resolved as reviews.default.svc.cluster.local
          subset: v1           # change v1 to v2 to route all requests to v2 instead
Now when you refresh the /productpage page, only the no-ratings version is displayed.
Cleanup:
$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Routing based on user identity
The following demonstrates routing based on an HTTP header field. First, log in to the /productpage page as the user jason (any password works).
Apply the user-based routing rule:
$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
The created VirtualService is as follows:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  - match:                 # requests whose HTTP headers contain end-user: jason are routed to v2
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                 # requests without the end-user: jason header are routed to v1
    - destination:
        host: reviews
        subset: v1
Refresh the /productpage page: only the v2 version (black-star ratings) is shown. Log out of jason and only the v1 version (no ratings) appears.
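The same rule can be exercised without a browser by calling the reviews service from inside the mesh. A rough sketch, assuming the ratings pod as a convenient in-mesh client and the /reviews/0 endpoint of the sample application:

# with the end-user header the response should come from reviews v2 (it contains star-rating data)
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- \
    curl -s -H "end-user: jason" http://reviews:9080/reviews/0
# without the header the response should come from reviews v1 (no rating data)
$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- \
    curl -s http://reviews:9080/reviews/0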
Cleanup:
$ kubectl delete -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
Fault Injection
This section uses fault injection to test the resiliency of the application.
First, pin the request path with the following configuration:
$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
After applying these rules, the request path becomes:
productpage → reviews:v2 → ratings (user jason only)
productpage → reviews:v1 (all other users)
Injecting an HTTP delay fault
To test the resiliency of the Bookinfo application, inject a 7s delay between the reviews:v2 and ratings microservices for user jason, simulating an internal Bookinfo bug.
Note that reviews:v2 has a hard-coded 10s timeout for its calls to the ratings service, so even with the injected 7s delay you would not expect any error in the end-to-end flow.
Inject the fault to delay traffic from the test user jason:
$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
View the deployed virtual service:
$ kubectl get virtualservice ratings -o yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: ratings
  namespace: default
spec:
  hosts:
  - ratings
  http:
  - fault:                 # inject a 7s delay into all traffic from jason, destined for the v1 ratings service
      delay:
        fixedDelay: 7s
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route:                 # traffic not coming from jason is unaffected
    - destination:
        host: ratings
        subset: v1
Open the /productpage page, log in as jason, and refresh the browser: the page takes about 6-7 seconds to load (productpage calls reviews with a hard-coded 3s timeout plus one retry, 6s in total, which is shorter than the injected 7s delay), and the Reviews section shows the following error message:

The virtual service for a given host is simply overwritten by the next configuration, so there is no need to clean up here.
Injecting an HTTP abort fault
Next, introduce an HTTP abort fault on the ratings microservice for the test user jason. In this scenario the page loads but shows the error message Ratings service is currently unavailable.
Inject the HTTP abort for user jason:
$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
Retrieve the deployed ratings virtual service:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: ratings
  namespace: default
spec:
  hosts:
  - ratings
  http:
  - fault:                 # respond directly with HTTP 500 to requests from user jason
      abort:
        httpStatus: 500
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
Open the /productpage page and log in as jason: the following error is displayed. Log out of jason and the error disappears.

Remove the injected abort fault:
$ kubectl delete -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
Cleanup
$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Traffic Shifting
This section shows how to migrate traffic from one version of a microservice to another, for example from an old version to a new one. Traffic is usually shifted gradually; istio lets you split traffic by percentage. Note that the weights of all versions must sum to 100, otherwise you get an error of the form total destination weight <sum> != 100, where <sum> is the sum of the configured weights.
Weight-based routing
- First route all traffic from every microservice to the v1 version. Open the /productpage page and verify that no rating information is displayed.

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

- Shift 50% of the traffic from reviews:v1 to reviews:v3:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml

- View the virtual service:

$ kubectl get virtualservice reviews -o yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  - route:                 # 50% of the traffic goes to v1 and 50% to v3
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

- Log in and refresh /productpage: roughly half of the requests show the v1 page and half show the v3 page (a sketch for completing the cut-over to v3 follows this list).
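As referenced in the last step, if v3 looks healthy, the migration can be completed by sending 100% of the traffic to v3 using the sample file shipped with istio:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml
# refreshing /productpage should now always show red-star ratings (reviews v3)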
Cleanup
$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml
TCP Traffic Shifting
This section shows how to shift TCP traffic from one version of a service to another, for example migrating TCP traffic from an old version to a new one.
Weight-based TCP routing
Create a dedicated namespace for the tcp-echo application:
$ kubectl create namespace istio-io-tcp-traffic-shifting
On OpenShift, grant the namespace's service accounts the SCCs required for sidecar injection (the sidecar runs as user 1337):
$ oc adm policy add-scc-to-group privileged system:serviceaccounts:istio-io-tcp-traffic-shifting
$ oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-io-tcp-traffic-shifting
Create a NetworkAttachmentDefinition that uses istio-cni:
$ cat <<EOF | oc -n istio-io-tcp-traffic-shifting create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: istio-cni
EOF
Enable automatic sidecar injection for the istio-io-tcp-traffic-shifting namespace:
$ kubectl label namespace istio-io-tcp-traffic-shifting istio-injection=enabled
Deploy the tcp-echo application:
$ kubectl apply -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting
Route all traffic for the tcp-echo service to the v1 version:
$ kubectl apply -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
The tcp-echo pods are listed below; there are two versions, v1 and v2:
$ oc get pod
NAME READY STATUS RESTARTS AGE
tcp-echo-v1-5cb688897c-hk277 2/2 Running 0 16m
tcp-echo-v2-64b7c58f68-hk9sr 2/2 Running 0 16m
The gateway deployed by the sample is shown below. It selects the ingress gateway installed by default with istio and listens on port 31400:
$ oc get gateways.networking.istio.io tcp-echo-gateway -oyaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    ...
  name: tcp-echo-gateway
  namespace: istio-io-tcp-traffic-shifting
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: tcp
      number: 31400
      protocol: TCP
The virtual service bound to this gateway is tcp-echo. Its hosts field is "*", which means any traffic arriving at port 31400 of the tcp-echo-gateway is handled by this virtual service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:         # the backend service the traffic is forwarded to
        host: tcp-echo
        port:
          number: 9000
        subset: v1
Since no dedicated ingress gateway was set up for this example, we can follow the gateway model and reach the service through the ingress gateway installed by default with istio. Port 31400 is indeed open inside the default ingress gateway pod:
$ oc exec -it istio-ingressgateway-64f6f9d5c6-qrnw2 /bin/sh -n istio-system
$ ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 0 0.0.0.0:15090 0.0.0.0:*
LISTEN 0 0 127.0.0.1:15000 0.0.0.0:*
LISTEN 0 0 0.0.0.0:31400 0.0.0.0:*
LISTEN 0 0 0.0.0.0:80 0.0.0.0:*
LISTEN 0 0 *:15020 *:*
Access it through the Kubernetes service of the ingress gateway pod:
$ oc get svc |grep ingress
istio-ingressgateway LoadBalancer 10.84.93.45 ...
$ for i in {1..10}; do (date; sleep 1) | nc 10.84.93.45 31400; done
one Wed May 13 11:17:44 UTC 2020
one Wed May 13 11:17:45 UTC 2020
one Wed May 13 11:17:46 UTC 2020
one Wed May 13 11:17:47 UTC 2020
All of the traffic is routed to the v1 version of the tcp-echo service (the one that prints "one").
Note that accessing tcp-echo directly through its Kubernetes service bypasses istio's routing rules; the traffic has to go through the gateway/virtual service for the rules to apply (a quick check is sketched below).
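A sketch of that check, assuming the ClusterIP and port 9000 from the tcp-echo sample service and that the ClusterIP is reachable from where you run nc:

# bypassing the gateway: connect to the tcp-echo Service directly, so the istio routing rule does not apply
$ TCP_ECHO_IP=$(kubectl get svc tcp-echo -n istio-io-tcp-traffic-shifting -o jsonpath='{.spec.clusterIP}')
$ for i in {1..10}; do (date; sleep 1) | nc $TCP_ECHO_IP 9000; done
# expect a mix of "one" and "two", since kube-proxy balances across both versions regardless of the rule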
Now shift 20% of the traffic from tcp-echo:v1 to tcp-echo:v2:
$ kubectl apply -f samples/tcp-echo/tcp-echo-20-v2.yaml -n istio-io-tcp-traffic-shifting
Check the deployed routing rule:
$ kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: tcp-echo
  namespace: istio-io-tcp-traffic-shifting
spec:
  gateways:
  - tcp-echo-gateway
  hosts:
  - '*'
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20
Run the test again; the results are as follows:
$ for i in {1..10}; do (date; sleep 1) | nc 10.84.93.45 31400; done
one Wed May 13 13:17:44 UTC 2020
two Wed May 13 13:17:45 UTC 2020
one Wed May 13 13:17:46 UTC 2020
one Wed May 13 13:17:47 UTC 2020
one Wed May 13 13:17:48 UTC 2020
one Wed May 13 13:17:49 UTC 2020
one Wed May 13 13:17:50 UTC 2020
one Wed May 13 13:17:51 UTC 2020
one Wed May 13 13:17:52 UTC 2020
two Wed May 13 13:17:53 UTC 2020
Cleanup
Remove the tcp-echo application with the following commands:
$ kubectl delete -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
$ kubectl delete -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting
$ kubectl delete namespace istio-io-tcp-traffic-shifting
Request Timeouts
This section shows how to configure request timeouts in Envoy using istio, again using the official Bookinfo example.
Apply the default routes:
$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
The timeout for HTTP requests is specified in the timeout field of the route rule. By default the HTTP timeout is disabled. Below, a 0.5s timeout is set for calls to the reviews service; to make its effect visible, a 2s delay is first added to the ratings service.
- Route requests to the v2 version of the reviews service (the version that calls ratings). At this point no timeout is set on reviews:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF

- Add a 2s delay to the ratings service:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF

- Open the /productpage page. The Bookinfo application still works normally, but each refresh now takes about 2 seconds.

- Set a 0.5s request timeout for calls to the reviews service:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF

- Refresh the page again: the response now comes back in about 1 second and the reviews section is shown as unavailable. It takes 1 second rather than 0.5 seconds because the productpage service has a hard-coded retry, so the call to reviews times out twice before the error is returned. Bookinfo also has timeouts of its own; see fault-injection for details. (A per-request override is sketched after this list.)
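Besides the route-level timeout field, Envoy also honors a per-request timeout supplied by the calling application in the x-envoy-upstream-rq-timeout-ms header (value in milliseconds). A rough sketch that raises the budget for a single call, using the ratings pod as a convenient in-mesh client:

$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- \
    curl -s -o /dev/null -w "%{http_code} in %{time_total}s\n" \
    -H "x-envoy-upstream-rq-timeout-ms: 3000" http://reviews:9080/reviews/0
# with a 3s per-request budget the call should succeed in about 2s instead of failing at 0.5s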
Cleanup
$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Circuit Breaking
This section shows how to configure circuit breaking for connections, requests, and outlier detection. Circuit breaking is an important pattern for building resilient microservice applications; it lets you write applications that limit the impact of failures, latency spikes, and other undesirable network conditions.
Deploy httpbin in the default namespace (which already has automatic sidecar injection enabled):
$ kubectl apply -f samples/httpbin/httpbin.yaml
Configuring the circuit breaker
- Create a destination rule that applies circuit-breaking settings when calling the httpbin service:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1             # maximum number of HTTP1/TCP connections to a destination host
      http:
        http1MaxPendingRequests: 1    # maximum number of pending HTTP requests to a destination
        maxRequestsPerConnection: 1   # maximum number of requests per connection to a backend
    outlierDetection:                 # settings that control eviction of unhealthy hosts from the load-balancing pool
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF

- Verify the destination rule:

$ kubectl get destinationrule httpbin -o yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
    ...
  name: httpbin
  namespace: default
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
      tcp:
        maxConnections: 1
    outlierDetection:
      baseEjectionTime: 3m
      consecutiveErrors: 1
      interval: 1s
      maxEjectionPercent: 100
Adding a client
Create a client that sends requests to the httpbin service. The client is a simple load-testing tool called fortio, which can control the number of connections, the concurrency, and the latency of outgoing HTTP calls. It will be used to trigger the circuit-breaker policies set in the DestinationRule.
- Deploy the fortio service:

$ kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml

- Log in to the client pod and use the fortio tool to call httpbin; the -curl flag indicates that a single call should be made:

$ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
$ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get

The result shows that the request succeeds:

$ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Thu, 14 May 2020 01:21:47 GMT
content-type: application/json
content-length: 586
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 11

{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "fortio.org/fortio-1.3.1",
    "X-B3-Parentspanid": "b5cd907bcfb5158f",
    "X-B3-Sampled": "0",
    "X-B3-Spanid": "407597df02737b32",
    "X-B3-Traceid": "45f3690565e5ca9bb5cd907bcfb5158f",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=dac158cf40c0f28f3322e6219c45d546ef8cc3b7df9d993ace84ab6e44aab708;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}
Tripping the circuit breaker
The DestinationRule above specifies maxConnections: 1 and http1MaxPendingRequests: 1, meaning that if the number of concurrent connections and requests exceeds one, subsequent requests and connections fail and the circuit breaker trips.
- Send two concurrent connections (-c 2) with 20 requests in total (-n 20):

$ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
05:50:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.3.1 running at 0 queries per second, 16->16 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 16] for exactly 20 calls (10 per thread + 0)
05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 51.51929ms : 20 calls. qps=388.2
Aggregated Function Time : count 20 avg 0.0041658472 +/- 0.003982 min 0.000313105 max 0.017104987 sum 0.083316943
# range, mid point, percentile, count
>= 0.000313105 <= 0.001 , 0.000656552 , 15.00, 3
> 0.002 <= 0.003 , 0.0025 , 70.00, 11
> 0.003 <= 0.004 , 0.0035 , 80.00, 2
> 0.005 <= 0.006 , 0.0055 , 85.00, 1
> 0.008 <= 0.009 , 0.0085 , 90.00, 1
> 0.012 <= 0.014 , 0.013 , 95.00, 1
> 0.016 <= 0.017105 , 0.0165525 , 100.00, 1
# target 50% 0.00263636
# target 75% 0.0035
# target 90% 0.009
# target 99% 0.016884
# target 99.9% 0.0170829
Sockets used: 6 (for perfect keepalive, would be 2)
Code 200 : 16 (80.0 %)
Code 503 : 4 (20.0 %)
Response Header Sizes : count 20 avg 184.05 +/- 92.03 min 0 max 231 sum 3681
Response Body/Total Sizes : count 20 avg 701.05 +/- 230 min 241 max 817 sum 14021
All done 20 calls (plus 0 warmup) 4.166 ms avg, 388.2 qps

The interesting part is below: most requests succeed, but a small fraction fail:

Sockets used: 6 (for perfect keepalive, would be 2)
Code 200 : 16 (80.0 %)
Code 503 : 4 (20.0 %)
- Raise the number of concurrent connections to 3:

$ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
06:00:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.3.1 running at 0 queries per second, 16->16 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 16] for exactly 30 calls (10 per thread + 0)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 18.885972ms : 30 calls. qps=1588.5
Aggregated Function Time : count 30 avg 0.0015352119 +/- 0.002045 min 0.000165718 max 0.006403746 sum 0.046056356
# range, mid point, percentile, count
>= 0.000165718 <= 0.001 , 0.000582859 , 70.00, 21
> 0.002 <= 0.003 , 0.0025 , 73.33, 1
> 0.003 <= 0.004 , 0.0035 , 83.33, 3
> 0.004 <= 0.005 , 0.0045 , 90.00, 2
> 0.005 <= 0.006 , 0.0055 , 93.33, 1
> 0.006 <= 0.00640375 , 0.00620187 , 100.00, 2
# target 50% 0.000749715
# target 75% 0.00316667
# target 90% 0.005
# target 99% 0.00634318
# target 99.9% 0.00639769
Sockets used: 23 (for perfect keepalive, would be 3)
Code 200 : 9 (30.0 %)
Code 503 : 21 (70.0 %)
Response Header Sizes : count 30 avg 69 +/- 105.4 min 0 max 230 sum 2070
Response Body/Total Sizes : count 30 avg 413.5 +/- 263.5 min 241 max 816 sum 12405
All done 30 calls (plus 0 warmup) 1.535 ms avg, 1588.5 qps

Circuit breaking now kicks in: only 30% of the requests succeed:

Sockets used: 23 (for perfect keepalive, would be 3)
Code 200 : 9 (30.0 %)
Code 503 : 21 (70.0 %)
- Query the istio-proxy for more information:

$ kubectl exec $FORTIO_POD -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 93
cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 139

The upstream_rq_pending_overflow counter is the number of calls flagged for circuit breaking so far. (A way to read the configured thresholds back from the sidecar is sketched after this list.)
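The circuit-breaker limits pushed to the client sidecar can also be read back with istioctl; a rough sketch (the JSON field names depend on the istio/Envoy version in use):

$ istioctl proxy-config cluster $FORTIO_POD --fqdn httpbin.default.svc.cluster.local -o json \
    | grep -A 10 circuitBreakers
# the thresholds section should echo maxConnections, maxPendingRequests and maxRequestsPerConnection
# from the DestinationRule configured above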
Cleanup
$ kubectl delete destinationrule httpbin
$ kubectl delete deploy httpbin fortio-deploy
$ kubectl delete svc httpbin fortio
Mirroring
This section demonstrates istio's traffic mirroring capability. Mirroring sends a copy of live traffic to a mirrored service.
In this task, all traffic is first routed to the v1 version of a test service; a copy of that traffic is then mirrored to v2.
- First deploy two versions of the httpbin service.

httpbin-v1:

$ cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1          # v1 version label
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin-v2:

$ cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v2          # v2 version label
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin Kubernetes service:

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
EOF

- Start a sleep service that provides curl:

cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF
Creating a default routing policy
By default Kubernetes load-balances across all versions of the httpbin service. In this step, all traffic is routed to v1.
- Create a default route that sends all traffic to the v1 version of the service:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1         # route 100% of the traffic to v1
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

- Send some traffic to the service:

$ export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
$ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool
{
    "headers": {
        "Accept": "*/*",
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "curl/7.35.0",
        "X-B3-Parentspanid": "a35a08a1875f5d18",
        "X-B3-Sampled": "0",
        "X-B3-Spanid": "7d1e0a1db0db5634",
        "X-B3-Traceid": "3b5e9010f4a50351a35a08a1875f5d18",
        "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/default;Hash=6dd991f0846ac27dc7fb878ebe8f7b6a8ebd571bdea9efa81d711484505036d7;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
    }
}

- Check the logs of the v1 and v2 httpbin pods: v1 has access-log entries while v2 has none:

$ export V1_POD=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
$ kubectl logs -f $V1_POD -c httpbin
...
127.0.0.1 - - [14/May/2020:06:17:57 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
127.0.0.1 - - [14/May/2020:06:18:16 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"

$ export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
$ kubectl logs -f $V2_POD -c httpbin
<none>
Mirroring traffic to v2
- Change the routing rule to mirror traffic to v2 (a partial-mirroring variant is sketched after this list):

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1         # 100% of the traffic is routed to v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2           # 100% of the traffic is mirrored to v2
    mirror_percent: 100
EOF

When mirroring is configured, requests sent to the mirror service have -shadow appended to their Host/Authority header, e.g. cluster-1 becomes cluster-1-shadow. Note that mirrored requests are fire-and-forget: the responses to mirrored requests are discarded.

The mirror_percent field can be used to mirror only a fraction of the traffic instead of all of it. If the field is absent, all traffic is mirrored for compatibility with older versions.

- Send traffic:

$ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool

Check the logs of the v1 and v2 services: the requests to v1 are now mirrored to v2:

$ kubectl logs -f $V1_POD -c httpbin
...
127.0.0.1 - - [14/May/2020:06:17:57 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
127.0.0.1 - - [14/May/2020:06:18:16 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
127.0.0.1 - - [14/May/2020:06:32:09 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
127.0.0.1 - - [14/May/2020:06:32:37 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"

$ kubectl logs -f $V2_POD -c httpbin
...
127.0.0.1 - - [14/May/2020:06:32:37 +0000] "GET /headers HTTP/1.1" 200 558 "-" "curl/7.35.0"
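As mentioned above, mirror_percent can mirror only part of the live traffic. A minimal sketch, assuming the same httpbin virtual service, that mirrors roughly half of the requests to v2 (the value 50 is only an example):

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
    mirror_percent: 50     # mirror about half of the live requests to v2
EOF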
Cleanup
$ kubectl delete virtualservice httpbin
$ kubectl delete destinationrule httpbin
$ kubectl delete deploy httpbin-v1 httpbin-v2 sleep
$ kubectl delete svc httpbin
