Installing, Configuring, and Using Istio

1. Installing Istio on Kubernetes

Download: https://github.com/istio/istio/

# 1. Download and extract the release package
[root@k8s-master1 istio]# ls
istio-1.10.1-linux-amd64.tar.gz
[root@k8s-master1 istio]# tar xf istio-1.10.1-linux-amd64.tar.gz 
[root@k8s-master1 istio]# cd istio-1.10.1/
[root@k8s-master1 istio-1.10.1]# ll
total 24
drwxr-x---  2 root root    22 Jun  5 04:44 bin	# the istioctl client binary; istioctl is used to manually inject the Envoy sidecar proxy
-rw-r--r--  1 root root 11348 Jun  5 04:44 LICENSE
drwxr-xr-x  5 root root    52 Jun  5 04:44 manifests
-rw-r-----  1 root root   854 Jun  5 04:44 manifest.yaml
-rw-r--r--  1 root root  5866 Jun  5 04:44 README.md
drwxr-xr-x 20 root root   332 Jun  5 04:44 samples	# sample applications
drwxr-xr-x  3 root root    57 Jun  5 04:44 tools
[root@k8s-master1 istio-1.10.1]# cp bin/istioctl /usr/bin/
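
# (Reference) Besides copying the binary, istioctl can manually inject the
# Envoy sidecar into a manifest when automatic injection is not enabled, e.g.:
#   istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | kubectl apply -f -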

# 2. Install
[root@k8s-master1 istio]# istioctl install --set profile=demo -y
✔ Istio core installed                                                                                                 
✔ Istiod installed                                                                                                     
✔ Egress gateways installed                                                                                            
✔ Ingress gateways installed                                                                                           
✔ Installation complete                                                                                                Thank you for installing Istio 1.10.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/KjkrDnMPByq7akrYA
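
# (Optional) Inspect the configuration profiles. The demo profile used above
# enables the core components plus the ingress and egress gateways; istioctl
# ships with profile subcommands to list and dump them:
~]# istioctl profile list
~]# istioctl profile dump demo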

# 3. Verify that Istio deployed successfully
[root@k8s-master1 istio-1.10.1]# kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-659cc7697b-42jdf    1/1     Running   0          18s
istio-ingressgateway-569f64cdf8-9xn7f   1/1     Running   0          18s
istiod-85c958cd6-j6p5n                  1/1     Running   0          22s

# 4. Uninstall Istio (do not run this for now)
~]# istioctl manifest generate --set profile=demo | kubectl delete -f -

2. Deploying the Bookinfo Online Bookstore with Istio

2.1 Overview of the Bookinfo application

Bookinfo, the online bookstore sample: the application is composed of four separate microservices and mimics a category page of an online bookstore, displaying information about a book. The page shows the book's description, its details (ISBN, number of pages, and so on), and a few reviews.

The Bookinfo application consists of four separate microservices:
1) productpage calls the details and reviews microservices to render the page;
2) details contains the book information;
3) reviews contains the book reviews and also calls the ratings microservice;
4) ratings contains ranking information derived from the book reviews.

The reviews microservice has three versions:
1) v1 does not call the ratings service;
2) v2 calls the ratings service and displays each rating as 1 to 5 black stars;
3) v3 calls the ratings service and displays each rating as 1 to 5 red stars.

(Figure: the Bookinfo application architecture — productpage calling details and reviews, and reviews calling ratings.)

The Bookinfo microservices are written in different languages. They have no dependency on Istio, but together they make a representative service-mesh example: multiple services, multiple languages, and a reviews service with multiple versions.

To run this application with Istio, no changes to the application itself are needed. You simply configure and run the services in an Istio environment, which concretely means injecting an Envoy sidecar alongside each service. The resulting deployment looks like this:

(Figure: the Bookinfo deployment with an Envoy sidecar alongside each service.)

Every microservice is packaged with an Envoy sidecar that intercepts all inbound and outbound traffic of its service. This provides the hooks needed for external control, allowing the Istio control plane to provide routing, telemetry collection, policy enforcement, and other features for the application.

2.2 Deploying Bookinfo

1) Istio injects sidecars automatically; enable this by labeling the default namespace with istio-injection=enabled

[root@k8s-master1 istio-1.10.1]# kubectl label namespace default istio-injection=enabled
[root@k8s-master1 istio-1.10.1]# kubectl describe ns default |grep istio-injection
Labels:       istio-injection=enabled
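
# The injection label can also be listed across all namespaces:
[root@k8s-master1 istio-1.10.1]# kubectl get namespace -L istio-injection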

2) Deploy the Bookinfo application with kubectl

[root@k8s-master1 istio-1.10.1]# pwd
/root/istio/istio-1.10.1
[root@k8s-master1 istio-1.10.1]# cat samples/bookinfo/platform/kube/bookinfo.yaml 
# Copyright Istio Authors
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

##################################################################################################
# This file defines the services, service accounts, and deployments for the Bookinfo sample.
#
# To apply all 4 Bookinfo services, their corresponding service accounts, and deployments:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
#
# Alternatively, you can deploy any resource separately:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l service=reviews # reviews Service
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l account=reviews # reviews ServiceAccount
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l app=reviews,version=v3 # reviews-v3 Deployment
##################################################################################################

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 1000
      volumes:
      - name: tmp
        emptyDir: {}
---

[root@k8s-master1 istio-1.10.1]# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Confirm that all services and pods are correctly defined and running:
[root@k8s-master1 istio-1.10.1]# kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.108.71.3      <none>        9080/TCP   67s
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    5d18h
productpage   ClusterIP   10.100.76.24     <none>        9080/TCP   67s
ratings       ClusterIP   10.96.56.183     <none>        9080/TCP   67s
reviews       ClusterIP   10.102.223.234   <none>        9080/TCP   67s
[root@k8s-master1 istio-1.10.1]# kubectl get pods 
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-gqhd6       2/2     Running   0          75s
productpage-v1-6b746f74dc-r9m5v   2/2     Running   0          75s
ratings-v1-b6994bb9-qkm4c         2/2     Running   0          74s
reviews-v1-545db77b95-9jlcz       2/2     Running   0          75s
reviews-v2-7bf8c9648f-dnbvt       2/2     Running   0          75s
reviews-v3-84779c7bbc-p98kv       2/2     Running   0          75s

3) Confirm the Bookinfo application is running by sending a request from inside one of the pods with curl, for example from the ratings pod:

[root@k8s-master1 istio-1.10.1]# kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title> # this output confirms the application is serving traffic

4) Determine the ingress IP and port

Now that the Bookinfo services are up and running, you need to make the application reachable from outside the Kubernetes cluster, for example from a browser. An Istio Gateway is used for this purpose.

# 1. Define an ingress gateway for the application
[root@k8s-master1 istio-1.10.1]# cat samples/bookinfo/networking/bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
          
[root@k8s-master1 istio-1.10.1]# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
[root@k8s-master1 istio-1.10.1]# kubectl get gateway
NAME               AGE
bookinfo-gateway   16s
[root@k8s-master1 istio-1.10.1]# kubectl get virtualservice
NAME       GATEWAYS               HOSTS   AGE
bookinfo   ["bookinfo-gateway"]   ["*"]   25s

# 2. Determine the ingress IP and port
[root@k8s-master1 istio-1.10.1]# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.108.58.69   <pending>     15021:32688/TCP,80:31860/TCP,443:30213/TCP,31400:31526/TCP,15443:32654/TCP   36m

# If the EXTERNAL-IP value is set, your environment has an external load balancer that can serve the ingress gateway. If EXTERNAL-IP is <none> (or perpetually <pending>), your environment provides no external load balancer for the ingress gateway. In that case, you can access the gateway through the service's NodePort.

# 3. Determine the Istio Gateway address
[root@k8s-master1 istio-1.10.1]# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
[root@k8s-master1 istio-1.10.1]# echo $INGRESS_PORT
31860
[root@k8s-master1 istio-1.10.1]# export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
[root@k8s-master1 istio-1.10.1]# echo $SECURE_INGRESS_PORT
30213
# Set GATEWAY_URL
[root@k8s-master1 istio-1.10.1]# INGRESS_HOST=192.168.40.180
[root@k8s-master1 istio-1.10.1]# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
[root@k8s-master1 istio-1.10.1]# echo $GATEWAY_URL
192.168.40.180:31860

# 4. Confirm with curl that the Bookinfo application is reachable from outside the cluster
[root@k8s-master1 istio-1.10.1]# curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>

# 5. Open http://$GATEWAY_URL/productpage (that is, 192.168.40.180:31860/productpage) in a browser to view the application's web page. If you refresh the page a few times, productpage randomly shows the effect of the different reviews versions (red stars, black stars, or no stars).

(Figure: the productpage web page showing the different reviews versions.)

Istio ingress gateway documentation: https://istio.io/docs/examples/bookinfo/#determine-the-ingress-ip-and-port

5) Extension: adding an external IP (externalIPs)

[root@k8s-master1 istio-1.10.1]# kubectl edit svc istio-ingressgateway -n istio-system
spec:
  clusterIP: 10.108.58.69
  clusterIPs:
  - 10.108.58.69
  externalIPs:
  - 192.168.40.180
  
[root@k8s-master1 istio-1.10.1]# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.108.58.69   192.168.40.180   15021:32688/TCP,80:31860/TCP,443:30213/TCP,31400:31526/TCP,15443:32654/TCP   49m

(Figure: accessing the application through the external IP.)

# On a Windows machine, append the following entry to C:\Windows\System32\drivers\etc\hosts:
192.168.40.180 productpage.kubeprom.com

Then browse to http://productpage.kubeprom.com/productpage.

6) Uninstall the Bookinfo application

# 1. Delete the routing rules and terminate the application pods
sh samples/bookinfo/platform/kube/cleanup.sh

# 2. Confirm the application has been shut down
kubectl get virtualservices     #-- there should be no virtual services
kubectl get destinationrules   #-- there should be no destination rules
kubectl get gateway           #-- there should be no gateway
kubectl get pods               #-- the Bookinfo pods should be deleted

3. Canary Releases with Istio

3.1 What is a canary release?

A grey release, also called a canary deployment, gradually replaces an old version with a new one by controlling the ratio of traffic each receives. For example, for a service A with two versions, version1 and version2, both are deployed simultaneously, but version1 initially receives 90% of the traffic and version2 receives 10%. If the new version behaves well, the split is shifted step by step — 80/20, 70/30, ..., 10/90, and finally 0/100 — at which point version1 is taken offline. (A VirtualService sketch of this weight shift follows the feature list below.)

Characteristics of a canary release:

1) Old and new versions coexist.
2) The traffic split can be adjusted dynamically based on real-time feedback.
3) In theory the service never goes completely down.
4) Well suited to smooth upgrades and dynamic updates of a service.
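
In Istio this weight shift is expressed declaratively. A minimal sketch, assuming a VirtualService that routes a hypothetical service my-svc to two DestinationRule subsets v1 and v2 (the names are illustrative, not taken from the setup below):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-vs                  # hypothetical name
spec:
  hosts:
  - my-svc                     # hypothetical service
  http:
  - route:
    - destination:
        host: my-svc
        subset: v1             # old version
      weight: 90               # lower step by step: 80, 70, ... 0
    - destination:
        host: my-svc
        subset: v2             # new version
      weight: 10               # raise step by step: 20, 30, ... 100

Re-applying the manifest with updated weights shifts traffic without redeploying either version.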

3.2 Performing a canary release with Istio

1) Create the Deployments

[root@k8s-master1 istio-canary]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv1
  labels:
    app: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v1
      apply: canary
  template:
    metadata:
      labels:
        app: v1
        apply: canary
    spec:
      containers:
      - name: nginx
        image: xianchao/canary:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv2
  labels:
    app: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v2
      apply: canary
  template:
    metadata:
      labels:
        app: v2
        apply: canary
    spec:
      containers:
      - name: nginx
        image: xianchao/canary:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        
[root@k8s-master1 istio-canary]# kubectl apply -f deployment.yaml 
deployment.apps/appv1 created
deployment.apps/appv2 created
[root@k8s-master1 istio-canary]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
appv1-6f7b58fd99-2w8xq   2/2     Running   0          9s
appv2-f78cb577-gpj2j     2/2     Running   0          9s

2) Create the Service

[root@k8s-master1 istio-canary]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    apply: canary
spec:
  selector:
    apply: canary
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      
[root@k8s-master1 istio-canary]# kubectl apply -f service.yaml 
service/canary created
[root@k8s-master1 istio-canary]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
canary       ClusterIP   10.99.119.233   <none>        80/TCP    4s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   5d19h
[root@k8s-master1 istio-canary]# kubectl describe svc canary 
Name:              canary
Namespace:         default
Labels:            apply=canary
Annotations:       <none>
Selector:          apply=canary
Type:              ClusterIP
IP Families:       <none>
IP:                10.99.119.233
IPs:               10.99.119.233
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.36.85:80,10.244.36.86:80
Session Affinity:  None
Events:            <none>

3) Create the Gateway

[root@k8s-master1 istio-canary]# cat gateway.yaml 
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: canary-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    
[root@k8s-master1 istio-canary]# kubectl apply -f gateway.yaml 
gateway.networking.istio.io/canary-gateway created
[root@k8s-master1 istio-canary]# kubectl get gateway
NAME             AGE
canary-gateway   8s

4) Create the VirtualService

[root@k8s-master1 istio-canary]# cat virtual.yaml 
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary
spec:
  hosts:
  - "*"
  gateways:
  - canary-gateway
  http:
  - route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2
      
[root@k8s-master1 istio-canary]# kubectl apply -f virtual.yaml 
virtualservice.networking.istio.io/canary created
destinationrule.networking.istio.io/canary created
[root@k8s-master1 istio-canary]# kubectl get virtualservices
NAME     GATEWAYS             HOSTS   AGE
canary   ["canary-gateway"]   ["*"]   8s

5) Verify the canary release

# Get the ingress NodePort
[root@k8s-master1 istio-canary]# kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
31860

# Verify the traffic split: roughly 90 of the responses come from v1 and 10 from canary-v2, matching the intended traffic distribution.
[root@k8s-master1 istio-canary]# for i in `seq 1 100`; do curl 192.168.40.180:31860;done > 1.txt
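
To count how the responses were split, the captured output can be searched (a sketch assuming the v1 and v2 pages can be told apart by the strings "v1" and "canary-v2"; adjust the patterns to the actual page content):

[root@k8s-master1 istio-canary]# grep -c "v1" 1.txt
[root@k8s-master1 istio-canary]# grep -c "canary-v2" 1.txt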

4. Istio Core Resources

Documentation: https://istio.io/latest/docs/concepts/traffic-management/

4.1 Gateway

In a plain Kubernetes environment, an Ingress controller manages traffic entering the cluster. In an Istio service mesh, the Istio ingress gateway plays that role, using a new configuration model (Gateway and VirtualService) to manage traffic. The figure below gives an overview.

(Figure: traffic flow through the Istio ingress gateway, described step by step below.)

1. A user sends a request to a given port.
2. A load balancer listening on that port forwards the request to one of the cluster nodes, where the Istio Ingress Gateway Service is listening.
3. The Istio Ingress Gateway Service hands the request to an Istio Ingress Gateway Pod, which processes it according to the Gateway and VirtualService rules: the Gateway configures ports, protocols, and certificates, while the VirtualService configures the routing (finding the App Service that should handle the request).
4. The Istio Ingress Gateway Pod forwards the request to the App Service.
5. The request is finally handled by the App Deployment behind the App Service.

# cat gateway.yaml
apiVersion: networking.istio.io/v1beta1 
kind: Gateway
metadata:
  name: canary-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"	# *表示通配符,通過任何域名都可以訪問

A gateway is a load balancer running at the edge of the mesh that receives incoming or outgoing HTTP/TCP connections. Its main job is to accept external requests and forward them to services inside the mesh. Ingress traffic at the mesh edge enters the cluster through the corresponding Istio IngressGateway controller.

The YAML above configures an ingress gateway listening on port 80; HTTP traffic arriving on port 80 is directed to the matching VirtualService inside the cluster.

4.2 VirtualService

The VirtualService is a core piece of Istio traffic-management configuration, arguably the most important and the most complex. A VirtualService represents a virtual service: traffic matching its conditions is forwarded to the corresponding backend, which can be a service or a subset of a service defined in a DestinationRule.

# cat virtual.yaml
apiVersion:  networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary
spec:
  hosts:
  - "*"
  gateways:
  - canary-gateway
  http:
  - route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 10
      
# This virtual service receives all HTTP traffic arriving on port 80 through the gateway above

A VirtualService is mainly composed of the following parts:

4.2.1 hosts

The virtual host name; inside a Kubernetes cluster this can be a Service name. The hosts field lists the virtual hosts of the virtual service: one or more addresses that clients use when sending requests to the service. Requests to these addresses reach the virtual service and, through it, the backend services. When used inside the cluster (inside the mesh), it usually matches the name of the Kubernetes Service; when the service needs to be reached from outside the cluster (outside the mesh), this field is the address requested through the gateway, i.e. the same as the gateway's hosts field.

hosts:
- reviews

A virtual service's host name can be an IP address, a DNS name, or a short name (such as a Kubernetes Service short name) that is implicitly or explicitly resolved to a fully qualified domain name (FQDN), depending on the platform Istio runs on. A prefix wildcard ("*") can be used to create a single set of routing rules for all matching services. The hosts of a virtual service do not have to be part of the Istio service registry; they are simply virtual destinations, allowing traffic to be modeled for virtual hosts that the mesh cannot otherwise route to.

When a virtual service's short host name is expanded to a full domain name, the namespace used to complete it is the namespace of the VirtualService, not the namespace of the Service. Assuming the VirtualService above is in the default namespace, its hosts entry resolves to reviews.default.svc.cluster.local.
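
For example (a hypothetical illustration of this rule; the namespace istio-demo is made up): if the VirtualService lived in a different namespace than the reviews Service, the fully qualified name should be used so the host does not resolve into the wrong namespace:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  namespace: istio-demo                 # hypothetical: not the Service's namespace
spec:
  hosts:
  - reviews.default.svc.cluster.local   # FQDN; the short name "reviews" would be
                                        # expanded to reviews.istio-demo.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.default.svc.cluster.local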

Configuring routing rules in a VirtualService

A routing rule's job is to route all traffic that satisfies the http.match conditions to http.route.destination, and to apply policies such as redirects (HTTPRedirect), rewrites (HTTPRewrite), retries (HTTPRetry), fault injection (HTTPFaultInjection), and CORS (CorsPolicy). An HTTPRoute can not only match routes but also perform write operations that modify the request itself.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3

The http field contains the virtual service's routing rules, describing match conditions and routing behavior; it sends HTTP/1.1, HTTP/2, gRPC, and similar traffic to the destinations specified for the hosts field.

The first routing rule in the example has a condition, starting with the match field. This route receives all requests from the user "jason" and sends them to the v2 subset specified in destination.

Routing rule priority

In the example above, traffic that does not satisfy the first routing rule flows to a default destination specified by the second rule. The second rule therefore has no match condition and simply directs traffic to the v3 subset.

Multiple routing rules

For detailed configuration, see: https://istio.io/latest/zh/docs/reference/config/networking/virtual-service/#HTTPMatchRequest

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
    - bookinfo.com
  http:
  - match:
    - uri:
        prefix: /reviews
    route:
    - destination:
        host: reviews
  - match:
    - uri:
        prefix: /ratings
    route:
    - destination:
        host: ratings

Routing rules are the tool for routing a specific subset of traffic to a specific destination. Match conditions can be set on traffic ports, header fields, URIs, and so on. For example, the virtual service above lets users send requests to two separate services, ratings and reviews, as if they were accessing http://bookinfo.com/ratings and http://bookinfo.com/reviews; the rules route each request to the appropriate destination based on its URI.

4.2.2 Gateway

The gateways field specifies the gateway(s) the traffic comes from.

4.2.3 Routes

The destination field of a route specifies the actual address for traffic that matches the conditions. Unlike the virtual service's hosts, this host must be a real destination that exists in Istio's service registry (Kubernetes Services, Consul services, etc.) or a host declared through a ServiceEntry; otherwise Envoy does not know where to send the traffic. It can be a mesh service fronted by a proxy or a non-mesh service added with a service entry. When running on Kubernetes, host is the name of a Kubernetes Service:

  - destination:
      host: canary.default.svc.cluster.local
      subset: v1
    weight: 90

4.3 DestinationRule

The destination rule is an important part of Istio traffic routing. A virtual service can be seen as describing how traffic is dispatched to a given destination; the destination rule then configures what happens to the traffic sent to that destination. Destination rules take effect after the virtual service's routing rules (that is, after the virtual service's match -> route -> destination, once traffic has already been dispatched to the real service) and apply to the real destination.

Destination rules can define named service subsets, for example grouping service instances by version; the routing rules in a virtual service can then use these subsets to direct controlled amounts of traffic to different instances of the service.

# cat DestinationRule.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2

In the virtual service, the hosts field configures the default address the routes bind to, and the http.route field sets the destinations for incoming HTTP traffic. As shown above, traffic is directed to the v1 and v2 subsets of the destination rule.

The v1 subset corresponds to pods carrying the following labels:

  selector:
    matchLabels:
      app: v1

Traffic control flow: Gateway -> VirtualService -> TCP/HTTP Router -> DestinationWeight -> Subset:Port

5. Demonstrating Istio Core Features

5.1 Circuit breaking

Circuit breaking is an important pattern for building resilient microservice applications. A circuit breaker lets an application tolerate adverse network conditions such as failures and latency.

Documentation: https://istio.io/latest/zh/docs/tasks/traffic-management/circuit-breaking/

1) Create the backend service in the Kubernetes cluster

[root@k8s-master1 istio-1.10.1]# cat samples/httpbin/httpbin.yaml
# Copyright Istio Authors
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

##################################################################################################
# httpbin service
##################################################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
        
[root@k8s-master1 istio-1.10.1]# kubectl apply -f samples/httpbin/httpbin.yaml
[root@k8s-master1 istio-1.10.1]# kubectl get pods | grep httpbin
httpbin-74fb669cc6-f2zm7   2/2     Running   0          36s

2) Configure the circuit breaker

# Create a destination rule that applies circuit-breaker settings when calling the httpbin service
[root@k8s-master1 istio-1.10.1]# cat destination.yaml 
apiVersion:  networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveGatewayErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
     
[root@k8s-master1 istio-1.10.1]# kubectl apply -f destination.yaml
destinationrule.networking.istio.io/httpbin created

Parameter notes:

apiVersion:  networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:	# connection pool (TCP | HTTP) settings, e.g. connection counts and concurrent requests
      tcp:
        maxConnections: 1  # maximum number of connections in the TCP pool; beyond this, requests get a 503. With two concurrent requests, one returns 503.
      http:
        http1MaxPendingRequests: 1  # maximum number of pending (queued) requests to the target host, i.e. the destination configured in the VirtualService route
        maxRequestsPerConnection: 1 # each pooled connection serves at most 1 request before being closed, with connections recreated as needed
    outlierDetection:	# outlier detection: the classic circuit-breaker settings, monitoring service errors within a time window
      consecutiveGatewayErrors: 1	# number of consecutive gateway errors (HTTP 502-504) before ejection
      interval: 1s	# error scan interval: if consecutiveGatewayErrors (1) occur within interval (1s), the breaker trips
      baseEjectionTime: 3m	# base ejection time of 3 minutes; actual ejection time is baseEjectionTime * number of ejections
      maxEjectionPercent: 100	# maximum percentage of hosts that can be ejected (100%)

3) Add a client to call the httpbin service

Create a client to send traffic to the httpbin service. The client is a simple load-testing tool: Fortio lets you control the number of connections, the concurrency, and the latency of outgoing HTTP calls. You will use this client to "trip" the circuit-breaker policies set in the DestinationRule.

# Deploy the fortio client with the following command
[root@k8s-master1 istio-1.10.1]# cat samples/httpbin/sample-client/fortio-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: fortio
  labels:
    app: fortio
    service: fortio
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: fortio
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio
  template:
    metadata:
      annotations:
        # This annotation causes Envoy to serve cluster.outbound statistics via 15000/stats
        # in addition to the stats normally served by Istio.  The Circuit Breaking example task
        # gives an example of inspecting Envoy stats.
        sidecar.istio.io/statsInclusionPrefixes: cluster.outbound,cluster_manager,listener_manager,http_mixer_filter,tcp_mixer_filter,server,cluster.xds-grpc
      labels:
        app: fortio
    spec:
      containers:
      - name: fortio
        image: fortio/fortio:latest_release
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http-fortio
        - containerPort: 8079
          name: grpc-ping
          
[root@k8s-master1 istio-1.10.1]# kubectl apply -f  samples/httpbin/sample-client/fortio-deploy.yaml
[root@k8s-master1 istio-1.10.1]# kubectl get pods|grep for
fortio-deploy-576dbdfbc4-59rrp   2/2     Running   0          76s

[root@k8s-master1 istio-1.10.1]# kubectl exec  fortio-deploy-576dbdfbc4-59rrp   -c fortio -- /usr/bin/fortio curl  http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 14 Jul 2021 09:14:42 GMT
content-type: application/json
content-length: 594
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 63

{
  "args": {}, 
  "headers": {
    "Host": "httpbin:8000", 
    "User-Agent": "fortio.org/fortio-1.11.3", 
    "X-B3-Parentspanid": "e43a2d2c7ab3c784", 
    "X-B3-Sampled": "1", 
    "X-B3-Spanid": "a319763c32233e40", 
    "X-B3-Traceid": "bc2ea89e4ad88616e43a2d2c7ab3c784", 
    "X-Envoy-Attempt-Count": "1", 
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=4ae6300ac74341739507fb693662ee381aac6ccb4b8e37fcffec81fd3a9f3dde;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  }, 
  "origin": "127.0.0.6", 
  "url": "http://httpbin:8000/get"
}

4) Trip the circuit breaker

The DestinationRule specifies maxConnections: 1 and http1MaxPendingRequests: 1. These settings mean that if more than one connection or concurrent request is in flight, istio-proxy should start failing further requests and connections, and you will see output like the following.

# Call the service with two concurrent connections (-c 2) and 20 requests in total (-n 20)
[root@k8s-master1 istio-1.10.1]# kubectl exec -it fortio-deploy-576dbdfbc4-59rrp  -c fortio -- /usr/bin/fortio load  -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
09:16:35 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.11.3 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 133.919851ms : 20 calls. qps=149.34
Aggregated Function Time : count 20 avg 0.013183304 +/- 0.01239 min 0.000456046 max 0.04724804 sum 0.263666083
# range, mid point, percentile, count
>= 0.000456046 <= 0.001 , 0.000728023 , 10.00, 2
> 0.005 <= 0.006 , 0.0055 , 20.00, 2
> 0.006 <= 0.007 , 0.0065 , 45.00, 5
> 0.007 <= 0.008 , 0.0075 , 55.00, 2
> 0.009 <= 0.01 , 0.0095 , 65.00, 2
> 0.01 <= 0.011 , 0.0105 , 70.00, 1
> 0.012 <= 0.014 , 0.013 , 75.00, 1
> 0.02 <= 0.025 , 0.0225 , 80.00, 1
> 0.025 <= 0.03 , 0.0275 , 90.00, 2
> 0.04 <= 0.045 , 0.0425 , 95.00, 1
> 0.045 <= 0.047248 , 0.046124 , 100.00, 1
# target 50% 0.0075
# target 75% 0.014
# target 90% 0.03
# target 99% 0.0467984
# target 99.9% 0.0472031
Sockets used: 6 (for perfect keepalive, would be 2)
Jitter: false
Code 200 : 16 (80.0 %)
Code 503 : 4 (20.0 %)	# rejected by the circuit breaker
Response Header Sizes : count 20 avg 184.15 +/- 92.08 min 0 max 231 sum 3683
Response Body/Total Sizes : count 20 avg 707.55 +/- 233.3 min 241 max 825 sum 14151
All done 20 calls (plus 0 warmup) 13.183 ms avg, 149.3 qps
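
The official circuit-breaking task also shows how to inspect the sidecar statistics; the upstream_rq_pending_overflow counter reports the calls that were flagged for circuit breaking (the fortio Deployment above enables these stats through its sidecar annotation):

[root@k8s-master1 istio-1.10.1]# kubectl exec fortio-deploy-576dbdfbc4-59rrp -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending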

5.2 Timeouts

In production, it is common for a caller to block too long waiting on a slow downstream response, piling up requests inside itself and eventually causing a cascading failure. Timeout handling avoids faults caused by indefinite waiting and improves service availability. Istio implements timeouts elegantly through virtual services.

The example below simulates a client calling nginx, with nginx forwarding the request to tomcat. The nginx service has its timeout set to 2 seconds: if a response takes longer, it stops waiting and returns a timeout error. The tomcat service is configured with a 10-second response delay, so every request waits 10 seconds before returning. The client reaches tomcat through the nginx reverse proxy; because tomcat takes 10 seconds to respond while nginx only waits 2, the client receives a timeout error.

1) Create the Deployments

[root@k8s-master1 timeout]# cat nginx-tomcat-deployment.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-tomcat
  labels:
    server: nginx
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      server: nginx
      app: web
  template:
    metadata:
      name: nginx
      labels: 
        server: nginx
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    server: tomcat
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      server: tomcat
      app: web
  template:
    metadata:
      name: tomcat
      labels: 
        server: tomcat
        app: web
    spec:
      containers:
      - name: tomcat
        image: docker.io/kubeguide/tomcat-app:v1 
        imagePullPolicy: IfNotPresent
        
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-deployment.yaml
[root@k8s-master1 timeout]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
appv1-6f7b58fd99-2w8xq           2/2     Running   2          4h19m
appv2-f78cb577-gpj2j             2/2     Running   2          4h19m
fortio-deploy-576dbdfbc4-59rrp   2/2     Running   2          3h8m
httpbin-74fb669cc6-f2zm7         2/2     Running   2          3h16m
nginx-tomcat-7dd6f74846-jrf5l    2/2     Running   0          7s
tomcat-86ddb8f5c9-s2g9s          2/2     Running   0          7s

2) Create the Services

[root@k8s-master1 timeout]# cat nginx-tomcat-svc.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    server: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  selector:
    server: tomcat
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
    
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-svc.yaml
[root@k8s-master1 timeout]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
canary       ClusterIP   10.99.119.233    <none>        80/TCP     4h19m
fortio       ClusterIP   10.111.48.142    <none>        8080/TCP   3h8m
httpbin      ClusterIP   10.97.205.101    <none>        8000/TCP   3h16m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    5d23h
nginx-svc    ClusterIP   10.102.250.203   <none>        80/TCP     43s
tomcat-svc   ClusterIP   10.99.36.197     <none>        8080/TCP   43s

3) Create the VirtualServices

[root@k8s-master1 timeout]# cat virtual-tomcat-nginx.yaml 
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - nginx-svc
  http:
  - route:
    - destination: 
        host: nginx-svc
    timeout: 2s	# calls to the nginx-svc Kubernetes Service time out after 2s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tomcat-vs
spec:
  hosts:
  - tomcat-svc
  http:
  - fault:
      delay:	# fault injection: every call to the tomcat-svc Kubernetes Service is delayed by 10s before being forwarded
        percentage:
          value: 100
        fixedDelay: 10s
    route:
    - destination:
        host: tomcat-svc
        
[root@k8s-master1 timeout]# kubectl apply -f virtual-tomcat-nginx.yaml
virtualservice.networking.istio.io/nginx-vs created
virtualservice.networking.istio.io/tomcat-vs created
[root@k8s-master1 timeout]# kubectl get virtualservices
NAME        GATEWAYS             HOSTS            AGE
canary      ["canary-gateway"]   ["*"]            4h6m
nginx-vs                         ["nginx-svc"]    7s
tomcat-vs                        ["tomcat-svc"]   7s

4) Update the nginx configuration

[root@k8s-master1 timeout]# kubectl exec -it nginx-tomcat-7dd6f74846-jrf5l -- /bin/sh
# apt-get update
# apt-get install vim -y
# vim /etc/nginx/conf.d/default.conf
# nginx -t
# nginx -s reload
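
A minimal default.conf for this scenario (an assumption: nginx reverse-proxying every request to the tomcat-svc Service) would look roughly like this:

# /etc/nginx/conf.d/default.conf (sketch)
server {
    listen       80;
    server_name  localhost;

    location / {
        # Forward all requests to the tomcat Service; the Envoy sidecar
        # intercepts this outbound call, where the 10s delay is injected.
        proxy_pass http://tomcat-svc:8080;
    }
}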


5) Verify the timeout from a client

[root@k8s-master1 timeout]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # time wget -q -O - http://nginx-svc
wget: server returned error: HTTP/1.1 504 Gateway Timeout
Command exited with non-zero status 1
real	0m 2.01s
user	0m 0.00s
sys	0m 0.00s

# Every request returns a timeout error after 2 seconds: nginx's timeout expires while tomcat has not yet responded
/ # while true; do wget -q -O - http://nginx-svc; done
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout

# Verify the fault injection: the following command only returns a result after 10s
/ # time wget -q -O - http://tomcat-svc
wget: server returned error: HTTP/1.1 503 Service Unavailable
Command exited with non-zero status 1
real	0m 10.03s
user	0m 0.00s
sys	0m 0.00s

5.3 Fault injection and retries

Istio's retry mechanism defines the maximum number of times the Envoy proxy attempts to connect to a service after a failed call. By default, the Envoy proxy does not attempt to reconnect to the service after a failure unless the Istio retry mechanism is enabled.

The example below simulates a client calling nginx, with nginx forwarding the request to tomcat. Fault injection makes tomcat stop serving, and nginx is configured to retry up to 3 times when the call to tomcat fails.

1) Create the pods

# Delete the previous pods and recreate them
[root@k8s-master1 timeout]# kubectl delete -f .
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-deployment.yaml
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-svc.yaml
[root@k8s-master1 timeout]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
busybox                          1/2     Error     0          13m
nginx-tomcat-7dd6f74846-86x87    2/2     Running   0          43s
tomcat-86ddb8f5c9-m4wbq          2/2     Running   0          42s

2) Create the VirtualServices

[root@k8s-master1 timeout]# cat virtual-attempt.yaml 
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - nginx-svc
  http:
  - route:
    - destination: 
        host: nginx-svc
    retries:
      attempts: 3	# calls to the nginx-svc Kubernetes Service are retried up to 3 times after an initial failure, each attempt with a 2s timeout
      perTryTimeout: 2s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tomcat-vs
spec:
  hosts:
  - tomcat-svc
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 503	# 100% of calls to the tomcat-svc Service return HTTP status 503
    route:
    - destination:
        host: tomcat-svc
        
[root@k8s-master1 timeout]# kubectl apply -f virtual-attempt.yaml

3) Verify the timeout and retries

[root@k8s-master1 timeout]# kubectl exec -it nginx-tomcat-7dd6f74846-86x87 -- /bin/sh
# apt-get update && apt-get install vim -y
# vim /etc/nginx/conf.d/default.conf
# nginx -t
# nginx -s reload
# exit

(Figure: the edited nginx default.conf — presumably the same proxy_pass setup sketched in section 5.2.)

# Verify that the retries take effect
[root@k8s-master1 timeout]# kubectl delete pods busybox
[root@k8s-master1 timeout]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
/ # wget -q -O - http://nginx-svc
wget: server returned error: HTTP/1.1 503 Service Unavailable

# In a new terminal, watch the logs
[root@k8s-master1 ~]# kubectl logs -f nginx-tomcat-7dd6f74846-86x87  -c istio-proxy

(Figure: the istio-proxy access log — it should show the call to tomcat-svc attempted 4 times, the initial request plus 3 retries, each returning 503.)

