1. Installing Istio on Kubernetes
https://github.com/istio/istio/releases/tag/1.12.0
https://github.com/istio/istio/releases/download/1.12.0/istio-1.12.0-linux-amd64.tar.gz
root@master001:~/istio/istio-1.12.0/bin# cp -a istioctl /usr/bin/
root@master001:~/istio-1.12.0# istioctl install --set profile=demo
This will install the Istio 1.12.0 demo profile with ["Istio core" "Istiod" "Ingress gateways" "Egress gateways"] components into the cluster. Proceed? (y/N) y
root@slave001:~# docker images |grep istio
istio/proxyv2:1.12.0
istio/pilot:1.12.0
root@slave001:~# kubectl get po -A
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
istio-system   istio-egressgateway-7f4864f59c-nz69w    1/1     Running   0          9m48s
istio-system   istio-ingressgateway-55d9fb9f-trmkq     1/1     Running   0          9m29s
istio-system   istiod-555d47cb65-dlfs4                 1/1     Running   0          24m
root@master001:~/istio/istio-1.12.0/bin# kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                     AGE
istio-egressgateway    ClusterIP      10.100.185.189   <none>        80/TCP,443/TCP                                                              28m
istio-ingressgateway   LoadBalancer   10.100.94.111    <pending>     15021:60211/TCP,80:57328/TCP,443:61049/TCP,31400:2464/TCP,15443:59853/TCP   28m
istiod                 ClusterIP      10.100.136.217   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                       29m
2. Deploying the Bookinfo Sample Bookstore on Istio
2.1 Overview of the Bookinfo application
Bookinfo is an online bookstore application composed of four separate microservices. It mimics one category page of an online bookstore, displaying information about a book: a description, details (ISBN, number of pages, etc.), and a few reviews.
Bookinfo is split into four separate microservices:
1) productpage calls the details and reviews microservices to render the page;
2) details holds the book information;
3) reviews holds the book reviews; it in turn calls the ratings microservice;
4) ratings holds the ranking information derived from book reviews.
The reviews microservice has 3 versions:
1) v1 does not call the ratings service;
2) v2 calls the ratings service and shows ratings as 1 to 5 black stars;
3) v3 calls the ratings service and shows ratings as 1 to 5 red stars.

The Bookinfo microservices are written in different languages. They have no dependency on Istio, but together they form a representative service mesh example: multiple services, multiple languages, and a reviews service with multiple versions.
To run this application on Istio, no changes to the application itself are needed. You simply configure and run the services in an Istio-enabled environment, which concretely means injecting an Envoy sidecar into each service. The resulting deployment is shown in the diagram below:

Every microservice is paired with an Envoy sidecar, and all inbound and outbound traffic of the service is intercepted by that sidecar. This provides the hooks needed for external control, after which the Istio control plane can provide traffic routing, telemetry collection, policy enforcement, and other features for the application.
2.2 Deploying Bookinfo
1) Istio injects the sidecar automatically; label the default namespace with istio-injection=enabled
root@master001:~# kubectl label namespace default istio-injection=enabled
root@master001:~/istio-canary# kubectl describe ns default | grep istio-injection
Labels:       istio-injection=enabled
# For reference, to turn injection off again:
root@master001:~/istio-canary# kubectl label namespace default istio-injection=disabled --overwrite
namespace/default labeled
root@master001:~/istio-canary# kubectl get namespace -L istio-injection
2) Deploy the Bookinfo application with kubectl
root@master001:~/istio/istio-1.12.0# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Images used:
docker.io/istio/examples-bookinfo-details-v1:1.16.2
docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
root@slave001:~/bookinfo# kubectl get po
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-zdmwn       2/2     Running   0          50m
productpage-v1-6b746f74dc-g6tgw   2/2     Running   0          50m
ratings-v1-b6994bb9-gtv7t         2/2     Running   0          50m
reviews-v1-545db77b95-k4tfn       2/2     Running   0          12m
reviews-v2-7bf8c9648f-p7mc6       2/2     Running   0          8m20s
reviews-v3-84779c7bbc-fb5bq       2/2     Running   0          119s
root@master001:~/istio/istio-1.12.0# kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
canary        ClusterIP   10.100.134.132   <none>        80/TCP     24d
details       ClusterIP   10.100.52.34     <none>        9080/TCP   53s
kubernetes    ClusterIP   10.100.0.1       <none>        443/TCP    59d
productpage   ClusterIP   10.100.241.70    <none>        9080/TCP   53s
ratings       ClusterIP   10.100.69.124    <none>        9080/TCP   53s
reviews       ClusterIP   10.100.177.75    <none>        9080/TCP   53s
root@master001:~/istio/istio-1.12.0# cat samples/bookinfo/platform/kube/bookinfo.yaml
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

##################################################################################################
# This file defines the services, service accounts, and deployments for the Bookinfo sample.
#
# To apply all 4 Bookinfo services, their corresponding service accounts, and deployments:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
#
# Alternatively, you can deploy any resource separately:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l service=reviews # reviews Service
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l account=reviews # reviews ServiceAccount
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l app=reviews,version=v3 # reviews-v3 Deployment
##################################################################################################

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 1000
      volumes:
      - name: tmp
        emptyDir: {}
---
root@master001:~/istio/istio-1.12.0# kubectl get serviceAccount --namespace=default
NAME                   SECRETS   AGE
bookinfo-details       1         4m28s
bookinfo-productpage   1         4m27s
bookinfo-ratings       1         4m28s
bookinfo-reviews       1         4m28s
default                1         59d
A service account is an identity used by workloads rather than people.
It exists so that processes and services running inside a Pod can access the Kubernetes cluster: through its ServiceAccount, a Pod obtains a username and a token, which it uses to authenticate calls to the Kubernetes API server.
3) Confirm that the Bookinfo application is running by sending it a request with curl from one of the Pods, for example from the ratings Pod:
root@slave001:~# kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
4) Determine the ingress IP and port
Now that the Bookinfo services are up and running, you need to make the application reachable from outside the Kubernetes cluster, for example from a browser. An Istio Gateway accomplishes this.
# 1. Define a gateway for the application
root@master001:~/istio/istio-1.12.0# cat samples/bookinfo/networking/bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
root@master001:~/istio/istio-1.12.0# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
root@master001:~/istio/istio-1.12.0# kubectl get gateway
NAME               AGE
bookinfo-gateway   12s
root@master001:~/istio/istio-1.12.0# kubectl get virtualservice
NAME       GATEWAYS               HOSTS   AGE
bookinfo   ["bookinfo-gateway"]   ["*"]   20s
# 2. Determine the ingress IP and port
root@master001:~/istio/istio-1.12.0# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                     AGE
istio-ingressgateway   LoadBalancer   10.100.94.111   <pending>     15021:60211/TCP,80:57328/TCP,443:61049/TCP,31400:2464/TCP,15443:59853/TCP   22h
# 3. Get the Istio Gateway address
root@master001:~/istio/istio-1.12.0# kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
57328
root@master001:~/istio/istio-1.12.0# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
root@master001:~/istio/istio-1.12.0# echo $INGRESS_PORT
57328
root@master001:~/istio/istio-1.12.0# export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
root@master001:~/istio/istio-1.12.0# echo $SECURE_INGRESS_PORT
61049
Set the gateway URL. Since EXTERNAL-IP is <pending> (no external load balancer is available here), use a node IP as the ingress host:
root@master001:~/istio/istio-1.12.0# INGRESS_HOST=192.168.192.151
root@master001:~/istio/istio-1.12.0# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
root@master001:~/istio/istio-1.12.0# echo $GATEWAY_URL
192.168.192.151:57328
# 4. Use curl to confirm that the Bookinfo application is reachable from outside the cluster
root@master001:~/istio/istio-1.12.0# curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
Or open http://192.168.192.151:57328/productpage in a browser.
5) Extension: add an external IP (externalIPs)
Edit the istio-ingressgateway Service (e.g. kubectl edit svc istio-ingressgateway -n istio-system) and add an externalIPs entry:
spec:
  clusterIP: 10.100.94.111
  clusterIPs:
  - 10.100.94.111
  externalIPs:
  - 192.168.192.151
6) Uninstall the Bookinfo application
# 1. Delete the routing rules and terminate the application Pods
root@master001:~/istio/istio-1.12.0# bash samples/bookinfo/platform/kube/cleanup.sh
namespace ? [default] y
NAMESPACE y not found.
using NAMESPACE=default
(The script prompts for a namespace name; pressing Enter accepts default. Typing "y" here is not a valid namespace, so the script falls back to default, as shown.)
# 2. Confirm the application has been shut down
kubectl get virtualservices    # -- there should be no virtual services
kubectl get destinationrules   # -- there should be no destination rules
kubectl get gateway            # -- there should be no gateway
kubectl get pods               # -- the Bookinfo pods should be deleted
3. Canary Releases with Istio
Canary deployment: traffic shifts gradually from the old version to the new one.
root@master001:~/istio-canary# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv1
  labels:
    app: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v1
      apply: canary
  template:
    metadata:
      labels:
        app: v1
        apply: canary
    spec:
      containers:
      - name: nginx
        image: xianchao/canary:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv2
  labels:
    app: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v2
      apply: canary
  template:
    metadata:
      labels:
        app: v2
        apply: canary
    spec:
      containers:
      - name: nginx
        image: xianchao/canary:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
root@master001:~/istio-canary# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    apply: canary
spec:
  selector:
    apply: canary
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
root@master001:~/istio-canary# cat gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: canary-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
root@master001:~/istio-canary# cat virtual.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary
spec:
  hosts:
  - "*"
  gateways:
  - canary-gateway
  http:
  - route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2
root@master001:~/istio-canary# kubectl get gateway
NAME             AGE
canary-gateway   7m55s
root@master001:~/istio-canary# kubectl get virtualservices
NAME     GATEWAYS             HOSTS   AGE
canary   ["canary-gateway"]   ["*"]   5m53s
Verify the traffic split:
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
for i in `seq 1 100`; do curl 192.168.192.151:57328; done > 1.txt
4. Istio Core Resources
4.1 Gateway
In a plain Kubernetes environment, an Ingress controller manages traffic entering the cluster. In an Istio service mesh, the Istio ingress gateway plays that role, using a new configuration model (Gateway and VirtualService) for traffic management. The overall flow:
1. A user sends a request to a port.
2. The load balancer listens on that port and forwards the request to a node in the cluster, where the Istio ingress gateway Service is listening.
3. The ingress gateway Service hands the request to the ingress gateway Pod, which processes it according to the Gateway and VirtualService rules: the Gateway configures ports, protocols, and certificates; the VirtualService configures routing (which application Service handles the request).
4. The ingress gateway Pod forwards the request to the application Service.
5. The request finally reaches a Pod of the application Deployment behind that Service.
root@master001:~/istio/istio-1.12.0# cat /root/istio-canary/gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: canary-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"   # "*" is a wildcard: the gateway accepts requests for any host name
A gateway is a load balancer running at the edge of the mesh that receives incoming or outgoing HTTP/TCP connections. Its main job is to accept external requests and forward them to services inside the mesh. Ingress traffic at the mesh edge enters the cluster through the corresponding Istio ingress gateway controller.
The YAML above configures an ingress gateway listening on port 80; HTTP traffic arriving on that port is handed to the matching VirtualService inside the cluster.
4.2 VirtualService
The VirtualService is a core piece of Istio traffic management configuration, arguably the most important and most complex one. A VirtualService represents a virtual service: traffic that matches its conditions is forwarded to a backend, which can be a whole service or a subset of a service defined in a DestinationRule.
root@master001:~/istio-canary# cat virtual.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary
spec:
  hosts:
  - "*"
  gateways:
  - canary-gateway
  http:
  - route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2
# This virtual service receives all HTTP traffic arriving on port 80 of the gateway above.
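The 90/10 weighted split above can be sketched as weighted random selection. This is a toy model for illustration, not Istio/Envoy source code; function and variable names are made up:

```python
import random

def pick_subset(weights, rng=random.random):
    """weights: list of (subset_name, weight) pairs summing to 100.
    Pick a subset with probability proportional to its weight,
    mimicking the VirtualService's weighted route above."""
    point = rng() * 100
    cumulative = 0
    for subset, weight in weights:
        cumulative += weight
        if point < cumulative:
            return subset
    return weights[-1][0]  # guard against floating-point edge cases

# Simulate 10,000 requests against the 90/10 canary split.
counts = {"v1": 0, "v2": 0}
rng = random.Random(42)  # fixed seed for reproducibility
for _ in range(10_000):
    counts[pick_subset([("v1", 90), ("v2", 10)], rng.random)] += 1
print(counts)  # roughly 9000 v1 / 1000 v2
```

Running this shows the long-run ratio converging on the configured weights, which is exactly what the `for i in seq 1 100; do curl ...` verification earlier demonstrates against the real gateway.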
4.2.1 hosts
A VirtualService consists mainly of the following parts.
The virtual host name: in a Kubernetes cluster this can be a Service name. The hosts field lists the virtual service's hosts, i.e. the address(es) a client uses when sending requests to the service; requests to these addresses reach the virtual service and, through it, the backend services. Inside the cluster (inside the mesh) this is usually the same as the Kubernetes Service name; for access from outside the cluster (outside the mesh) it is the address requested through the gateway, matching the gateway's hosts field.
hosts:
- reviews
The host can be an IP address, a DNS name, or a short name (such as a Kubernetes Service short name); a short name is resolved, implicitly or explicitly, to a fully qualified domain name (FQDN) depending on the platform Istio runs on. A prefix wildcard ("*") can be used to create one set of routing rules for all matching services. The hosts of a virtual service do not have to be part of the Istio service registry; they are simply virtual destinations, letting you model traffic for hosts that the mesh could not otherwise route to.
When a short host name is expanded to an FQDN, the namespace that gets appended is the namespace of the VirtualService, not that of the Service. The hosts entry in the example above therefore resolves to reviews.default.svc.cluster.local.
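The expansion rule just described can be sketched as a small helper (hypothetical code for illustration; the function name is made up):

```python
def expand_host(host: str, vs_namespace: str) -> str:
    """Expand a short VirtualService host to an FQDN using the
    namespace of the VirtualService itself (not the Service's)."""
    if host == "*" or "." in host:
        return host  # wildcard, IP, or already-qualified name: leave as-is
    return f"{host}.{vs_namespace}.svc.cluster.local"

# A VirtualService in "default" with hosts: [reviews]:
print(expand_host("reviews", "default"))
# reviews.default.svc.cluster.local
```

Note that the same short name in a VirtualService created in another namespace would expand differently, which is a common source of routing surprises.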
Routing rules in a VirtualService
The job of a route rule: traffic that satisfies the http.match conditions is routed to http.route.destination, where policies such as redirect (HTTPRedirect), rewrite (HTTPRewrite), retry (HTTPRetry), fault injection (HTTPFaultInjection), and cross-origin (CorsPolicy) handling can be applied. An HTTPRoute can not only match and route traffic; it can also perform write operations that modify the request itself.
root@master001:~/istio/istio-1.12.0/samples/bookinfo/networking# cat virtual-service-reviews-jason-v2-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3
The http field holds the virtual service's routing rules, describing match conditions and routing actions for HTTP/1.1, HTTP/2, gRPC, and similar traffic sent to the hosts listed above.
The first rule in the example has a condition, starting with the match field: it takes all requests from the user "jason" and sends them to the v2 subset named in destination.
Route rule precedence:
Traffic that does not satisfy the first rule flows to a default destination given by the second rule. That rule has no match condition, so it routes the remaining traffic directly to the v3 subset.
Multiple route rules:
For detailed configuration see: https://istio.io/latest/zh/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - bookinfo.com
  http:
  - match:
    - uri:
        prefix: /reviews
    route:
    - destination:
        host: reviews
  - match:
    - uri:
        prefix: /ratings
    route:
    - destination:
        host: ratings
Route rules are the tool for steering a particular subset of traffic to a particular destination. Match conditions can be set on traffic ports, header fields, URIs, and more. For example, the virtual service above lets users reach two independent services, ratings and reviews, as http://bookinfo.com/ratings and http://bookinfo.com/reviews; the rules route each request to the right destination based on its URI.
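The URI-based dispatch above behaves like an ordered, first-match-wins rule list. A toy model (illustrative only; Envoy's real matcher supports many more condition types):

```python
def route(uri, rules):
    """rules: ordered list of (match_type, pattern, destination).
    The first matching rule wins, mirroring the ordered `http` routes
    in a VirtualService. Returns None if nothing matches."""
    for kind, pattern, dest in rules:
        if kind == "exact" and uri == pattern:
            return dest
        if kind == "prefix" and uri.startswith(pattern):
            return dest
    return None

# The two prefix routes from the bookinfo.com example above:
rules = [
    ("prefix", "/reviews", "reviews"),
    ("prefix", "/ratings", "ratings"),
]
print(route("/reviews/1", rules))   # reviews
print(route("/ratings/1", rules))   # ratings
print(route("/details/1", rules))   # None (no rule matches)
```

Because evaluation is ordered, more specific rules (e.g. `exact: /productpage`) should appear before broad prefixes, just as in the bookinfo-gateway VirtualService earlier.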
4.2.2 gateways
The gateways field names the gateway(s) the routing rules apply to, i.e. where the traffic comes from.
4.2.3 routes
The destination field of a route specifies the actual target for matching traffic. Unlike the virtual service's hosts, this host must be a real destination that exists in Istio's service registry (Kubernetes Services, Consul services, etc.) or is declared via a ServiceEntry; otherwise Envoy does not know where to send the traffic. It can be a mesh service fronted by a proxy or a non-mesh service added with a service entry. When Kubernetes is the platform, host is the name of a Kubernetes Service:
- destination:
    host: canary.default.svc.cluster.local
    subset: v1
  weight: 90
4.3 DestinationRule
The destination rule is a key part of Istio traffic routing. A virtual service can be seen as deciding how traffic is dispatched to a given destination; the destination rule then configures what happens to the traffic sent to that destination. Destination rules take effect after the virtual service's routing rules (i.e. after match -> route -> destination, once traffic has already been dispatched to the real service) and apply to the real destination.
A destination rule can define named service subsets, for example grouping a service's instances by version; the virtual service's route rules can then reference those subsets to steer traffic to specific groups of instances.
root@master001:~/istio-canary# cat DestinationRule.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2
In the virtual service, hosts sets the default bound address and http.route sets where incoming HTTP traffic goes; as shown above, traffic is directed to the v1 and v2 subsets of this destination rule.
The v1 subset corresponds to Pods carrying the following labels:
root@master001:~/istio-canary# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv1
  labels:
    app: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v1
      apply: canary
5. Demonstrating Istio Core Features
5.1 Circuit breaking
The circuit breaker is an important pattern for building resilient microservice applications. It lets an application tolerate adverse network conditions such as failures and latency spikes.
Official docs: https://istio.io/latest/zh/docs/tasks/traffic-management/circuit-breaking/
1) Create the backend service in the Kubernetes cluster
root@master001:~/istio/istio-1.12.0# cat samples/httpbin/httpbin.yaml
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

##################################################################################################
# httpbin service
##################################################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
root@master001:~/istio/istio-1.12.0# kubectl apply -f samples/httpbin/httpbin.yaml
root@master001:~/istio/istio-1.12.0# kubectl get pods | grep httpbin
httpbin-74fb669cc6-qflbt 2/2 Running 0 9m47s
2) Configure the circuit breaker
# Create a destination rule that applies circuit-breaker settings to calls to the httpbin service
root@master001:~/istio/istio-1.12.0# kubectl apply -f httpbin_destination.yaml
destinationrule.networking.istio.io/httpbin created
root@master001:~/istio/istio-1.12.0# cat httpbin_destination.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveGatewayErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
Parameter notes:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:                  # connection-pool (TCP | HTTP) settings, e.g. connection count, concurrent requests
      tcp:
        maxConnections: 1            # max TCP connections in the pool; beyond this, 503 is returned (of two concurrent requests, one gets 503)
      http:
        http1MaxPendingRequests: 1   # max pending (queued) requests to the target host, i.e. the destination configured in the VirtualService route
        maxRequestsPerConnection: 1  # each pooled connection serves at most 1 request before being closed and recreated as needed
    outlierDetection:                # outlier detection: circuit breaking in the traditional sense, monitoring error counts over a time window
      consecutiveGatewayErrors: 1    # 1 consecutive gateway error, i.e. consecutive HTTP 502-504 responses
      interval: 1s                   # scan interval: consecutiveGatewayErrors (1) errors within interval (1s) trips the breaker
      baseEjectionTime: 3m           # base ejection time; actual ejection time = baseEjectionTime * number of ejections
      maxEjectionPercent: 100        # at most 100% of the hosts may be ejected
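The outlierDetection behaviour (consecutive gateway errors trigger ejection for baseEjectionTime multiplied by the ejection count) can be modeled as a toy sketch. This uses abstract time ticks and invented names; it is not Envoy's actual implementation:

```python
class OutlierDetector:
    """Toy model of Istio/Envoy outlier detection: after
    `consecutive_errors` consecutive gateway errors (HTTP 502-504),
    eject the host for base_ejection * (number of ejections so far)."""

    def __init__(self, consecutive_errors=1, base_ejection=180):
        self.consecutive_errors = consecutive_errors
        self.base_ejection = base_ejection  # abstract time units
        self.errors = 0          # current consecutive-error count
        self.ejections = 0       # how many times this host was ejected
        self.ejected_until = 0   # host is out of rotation until this time

    def record(self, status, now):
        """Record a response status observed at time `now`."""
        if 502 <= status <= 504:
            self.errors += 1
        else:
            self.errors = 0  # any success resets the consecutive count
        if self.errors >= self.consecutive_errors:
            self.ejections += 1
            # ejection time grows linearly with repeat offenses
            self.ejected_until = now + self.base_ejection * self.ejections
            self.errors = 0

    def is_ejected(self, now):
        return now < self.ejected_until

d = OutlierDetector()
d.record(503, now=0)
print(d.is_ejected(now=1))    # True: one 5xx was enough to eject for 180 ticks
print(d.is_ejected(now=200))  # False: the ejection window has passed
```

This mirrors why, in the fortio run below, a burst of 503s quickly takes the upstream out of rotation, and why repeatedly misbehaving hosts stay ejected longer each time.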
3) Add a client that calls the httpbin service
Create a client to send traffic to the httpbin service. The client is Fortio, a simple load-testing tool that can control the number of connections, the concurrency, and the delay of HTTP calls. We use it to "trip" the circuit-breaker policies set in the DestinationRule.
# Deploy the fortio client
root@master001:~/istio/istio-1.12.0# kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml
root@master001:~/istio/istio-1.12.0# kubectl get pods|grep for
fortio-deploy-687945c6dc-jf4tk 2/2 Running 0
root@master001:~/istio/istio-1.12.0# kubectl exec fortio-deploy-687945c6dc-jf4tk -c fortio -- /usr/bin/fortio curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Thu, 16 Dec 2021 14:36:09 GMT
content-type: application/json
content-length: 594
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 19

{
  "args": {},
  "headers": {
    "Host": "httpbin:8000",
    "User-Agent": "fortio.org/fortio-1.17.1",
    "X-B3-Parentspanid": "deca8f54c7772ca1",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "c515a0eeb0cc6f9e",
    "X-B3-Traceid": "99bb8f10f877af2ddeca8f54c7772ca1",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=f2ee10d932d71c46db56ea674d2139149fab3952b8e83bbc75f38c3e89b8ad1f;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin:8000/get"
}
4) Trip the circuit breaker
The DestinationRule specified maxConnections: 1 and http1MaxPendingRequests: 1. These rules mean that with more than one concurrent connection and request, istio-proxy should start rejecting further requests and connections, as shown below.
# Call the service with two concurrent connections (-c 2) and 20 requests (-n 20)
root@master001:~/istio/istio-1.12.0# cat samples/httpbin/sample-client/fortio-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: fortio
  labels:
    app: fortio
    service: fortio
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: fortio
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio
  template:
    metadata:
      annotations:
        # This annotation causes Envoy to serve cluster.outbound statistics via 15000/stats
        # in addition to the stats normally served by Istio. The Circuit Breaking example task
        # gives an example of inspecting Envoy stats via proxy config.
        proxy.istio.io/config: |-
          proxyStatsMatcher:
            inclusionPrefixes:
            - "cluster.outbound"
            - "cluster_manager"
            - "listener_manager"
            - "server"
            - "cluster.xds-grpc"
      labels:
        app: fortio
    spec:
      containers:
      - name: fortio
        image: fortio/fortio:latest_release
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http-fortio
        - containerPort: 8079
          name: grpc-ping
root@master001:~/istio/istio-1.12.0# kubectl exec -it fortio-deploy-687945c6dc-jf4tk -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
14:37:17 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
14:37:17 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
14:37:17 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
14:37:17 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
14:37:17 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
14:37:17 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
Ended after 57.850746ms : 20 calls. qps=345.72
Aggregated Function Time : count 20 avg 0.0056645873 +/- 0.00577 min 0.000439136 max 0.024627815 sum 0.113291747
# range, mid point, percentile, count
>= 0.000439136 <= 0.001 , 0.000719568 , 15.00, 3
> 0.001 <= 0.002 , 0.0015 , 20.00, 1
> 0.003 <= 0.004 , 0.0035 , 55.00, 7
> 0.004 <= 0.005 , 0.0045 , 65.00, 2
> 0.005 <= 0.006 , 0.0055 , 75.00, 2
> 0.006 <= 0.007 , 0.0065 , 80.00, 1
> 0.007 <= 0.008 , 0.0075 , 85.00, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.016 <= 0.018 , 0.017 , 95.00, 1
> 0.02 <= 0.0246278 , 0.0223139 , 100.00, 1
# target 50% 0.00385714
# target 75% 0.006
# target 90% 0.014
# target 99% 0.0237023
# target 99.9% 0.0245353
Sockets used: 6 (for perfect keepalive, would be 2)
Jitter: false
Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)    # these calls were rejected by the circuit breaker
Response Header Sizes : count 20 avg 172.6 +/- 99.65 min 0 max 231 sum 3452
Response Body/Total Sizes : count 20 avg 678.35 +/- 252.5 min 241 max 825 sum 13567
All done 20 calls (plus 0 warmup) 5.665 ms avg, 345.7 qps
5.2 Timeouts
In production, a caller waiting too long for a downstream response can pile up requests and block itself, cascading into an avalanche. Timeout handling avoids failures caused by unbounded waiting and so improves service availability; Istio implements timeouts elegantly with virtual services.
The following example simulates a client calling nginx, with nginx forwarding requests to tomcat. The nginx virtual service sets a 2-second timeout: past that, the wait is abandoned and a timeout error is returned. The tomcat virtual service injects a fixed 10-second response delay (into 50% of requests, per the fault configuration below). The client accesses the tomcat service through the nginx reverse proxy; since tomcat takes 10 seconds to respond but nginx-svc calls only wait 2, the client gets a timeout error.
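The interaction between the caller's timeout and the injected upstream delay can be modeled in a few lines (an illustrative sketch, not Istio code; names are invented):

```python
def call_with_timeout(upstream_latency, timeout):
    """Model a proxied call: if the upstream takes longer than the
    caller's timeout, the caller gives up at `timeout` seconds with
    HTTP 504; otherwise the call succeeds after `upstream_latency`.
    Returns (status_code, elapsed_seconds)."""
    if upstream_latency > timeout:
        return 504, timeout
    return 200, upstream_latency

# tomcat delays 10s; nginx-vs timeout is 2s -> 504 after ~2s
print(call_with_timeout(upstream_latency=10, timeout=2))   # (504, 2)
# after raising the timeout to 20s -> success after ~10s
print(call_with_timeout(upstream_latency=10, timeout=20))  # (200, 10)
```

This matches the measurements in step 5 below: `time wget` returns 504 in about 2.01s with the 2-second timeout, and the full page in about 10.01s once the timeout is raised to 20s.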
1) Create the Deployments
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# cat nginx-tomcat-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-tomcat
  labels:
    server: nginx
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      server: nginx
      app: web
  template:
    metadata:
      name: nginx
      labels:
        server: nginx
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    server: tomcat
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      server: tomcat
      app: web
  template:
    metadata:
      name: tomcat
      labels:
        server: tomcat
        app: web
    spec:
      containers:
      - name: tomcat
        image: docker.io/kubeguide/tomcat-app:v1
        imagePullPolicy: IfNotPresent
2) Create the Services
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# cat nginx-tomcat-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    server: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  selector:
    server: tomcat
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
3) Create the VirtualServices
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# cat virtual-tomcat-nginx.yaml
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - nginx-svc
  http:
  - route:
    - destination:
        host: nginx-svc
    timeout: 2s  # requests to the nginx-svc Service time out after 2s (raised to 20s later in this demo)
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tomcat-vs
spec:
  hosts:
  - tomcat-svc
  http:
  - fault:
      delay:
        percentage:
          value: 50
        fixedDelay: 10s  # inject a fixed 10s delay into 50% of requests to tomcat-svc
    route:
    - destination:
        host: tomcat-svc
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# kubectl get virtualservices
NAME        GATEWAYS             HOSTS            AGE
canary      ["canary-gateway"]   ["*"]            22d
nginx-vs                         ["nginx-svc"]    36m
tomcat-vs                        ["tomcat-svc"]   36m
4) Configure nginx to reverse-proxy tomcat
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# kubectl exec -it nginx-tomcat-7dd6f74846-qc25b -- /bin/sh
# apt-get update
# apt-get install vim -y
# vim /etc/nginx/conf.d/default.conf
# nginx -t
# nginx -s reload
# cat /etc/nginx/conf.d/default.conf
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
        proxy_pass http://tomcat-svc:8080;
        proxy_http_version 1.1;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
5) Verify the timeout from a client pod
/ # time wget -q -O - http://nginx-svc
wget: server returned error: HTTP/1.1 504 Gateway Timeout
Command exited with non-zero status 1
real    0m 2.01s
user    0m 0.00s
sys     0m 0.00s
/ # while true; do wget -q -O - http://nginx-svc; done
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
With the nginx-vs timeout raised to 20s (tomcat still injecting a 10s delay into 50% of requests):
/ # time wget -q -O - http://nginx-svc
<!DOCTYPE html>
...
</html>
real    0m 10.01s
user    0m 0.00s
sys     0m 0.00s
5.3 Fault injection and retries
Istio's retry mechanism defines the maximum number of times the Envoy proxy attempts to reconnect to a service after a failed call. By default, the Envoy proxy does not retry after a failure; retries must be enabled explicitly.
The following example simulates a client calling nginx, with nginx forwarding requests to tomcat. Fault injection makes tomcat stop serving entirely, and the nginx virtual service is configured to retry a failed call up to 3 times.
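The retry policy used below (attempts: 3, retrying on 5xx failures) can be modeled as a small loop. This is an illustrative sketch with invented names; real Envoy retry semantics involve retry conditions, per-try timeouts, and backoff:

```python
def call_with_retries(responses, attempts=3):
    """responses: a sequence of status codes the upstream would return
    on successive attempts. Retry on 5xx up to `attempts` extra tries,
    like the `retries.attempts: 3` stanza in the VirtualService below.
    Returns (final_status, retries_used)."""
    it = iter(responses)
    status = next(it)  # the initial call
    tries = 0
    while status >= 500 and tries < attempts:
        status = next(it)  # one retry
        tries += 1
    return status, tries

# Two transient 503s, then success: the retries mask the failure.
print(call_with_retries([503, 503, 200]))       # (200, 2)
# With 100% abort fault injection every attempt fails; after 3 retries
# the 503 is surfaced to the caller, as seen in the wget output below.
print(call_with_retries([503, 503, 503, 503]))  # (503, 3)
```

This is why the busybox client still sees `503 Service Unavailable`: with a 100% abort fault, all 1 + 3 attempts fail, and the final error propagates back.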
1) Create the Pods
# Delete the previous pods and recreate them
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# kubectl apply -f .
2) Create the VirtualServices
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# cat virtual-attempt.yaml
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - nginx-svc
  http:
  - route:
    - destination:
        host: nginx-svc
    retries:
      attempts: 3        # after an initial failure, retry the call to the nginx-svc Service up to 3 times
      perTryTimeout: 2s  # each attempt has a 2-second timeout
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tomcat-vs
spec:
  hosts:
  - tomcat-svc
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 503  # 100% of calls to the tomcat-svc Service are aborted with HTTP 503
    route:
    - destination:
        host: tomcat-svc
3) Verify the timeout and retries
root@master001:~/istio/istio-1.12.0/yaml/nginx-tomcat# kubectl exec -it nginx-tomcat-7dd6f74846-hjswq -- /bin/sh
# apt-get update && apt-get install vim -y
# vim /etc/nginx/conf.d/default.conf
# nginx -t
# nginx -s reload
    location / {
        #root   /usr/share/nginx/html;
        #index  index.html index.htm;
        proxy_pass http://tomcat-svc:8080;
        proxy_http_version 1.1;
    }
root@master001:~# kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -q -O - http://nginx-svc
wget: server returned error: HTTP/1.1 503 Service Unavailable
Check the logs: the retried attempts can be seen in the sidecar's access log (e.g. kubectl logs <nginx-tomcat pod> -c istio-proxy).


