Kubernetes Autoscalers: HPA, VPA, and CA



1. Kubernetes Autoscalers

  • HPA: Horizontal Pod Autoscaler
  • VPA: Vertical Pod Autoscaler
  • CA: Cluster Autoscaler

1.1 Kubernetes Horizontal Pod Autoscaler (HPA)

Official HPA documentation: https://kubernetes.io/zh/docs/tasks/run-application/horizontal-pod-autoscale/

1.1.1 HPA Overview

  • HPA, short for Horizontal Pod Autoscaler, automatically scales the number of Pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization. Besides CPU utilization, it can also scale on custom metrics provided by the application. Pod autoscaling does not apply to objects that cannot be scaled, such as DaemonSets.
  • Horizontal Pod autoscaling is implemented as a Kubernetes API resource plus a controller. The resource determines the controller's behavior. The controller periodically adjusts the number of replicas of a replication controller or Deployment so that the observed average CPU utilization of the Pods matches the target value set by the user.

  • The HPA periodically checks metrics such as memory and CPU and automatically adjusts the number of replicas in a Deployment, for example as traffic changes.


  • In production, four classes of metrics are widely used (see the formula and the Object-metric sketch after this list):
    • 1. Resource metrics - CPU and memory utilization
    • 2. Pod metrics - e.g. network utilization and traffic
    • 3. Object metrics - metrics of a specific object such as an Ingress, e.g. scaling Pods by requests per second
    • 4. Custom metrics - custom monitoring, e.g. scaling out automatically when the service response time crosses a defined threshold
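  • For reference, the controller computes the desired replica count with the formula documented for the HPA; a quick worked example:
desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
# e.g. 2 replicas at 100% average CPU with a 50% target: ceil(2 * 100 / 50) = 4 replicas
  • For the Object metric class, the following is only a hedged sketch of what such an HPA could look like. It assumes a metrics adapter (for example prometheus-adapter) already exposes a requests-per-second metric for an Ingress; the metric name and Ingress name below are hypothetical:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-rps
  namespace: hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metric:
        name: requests-per-second      # hypothetical metric exposed by the adapter
      describedObject:
        apiVersion: networking.k8s.io/v1beta1
        kind: Ingress
        name: main-route               # hypothetical Ingress name
      target:
        type: Value
        value: 10k                     # scale out when the Ingress serves more than 10k requests/s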

1.1.2 HPA Example

  • 1. First, deploy an nginx Deployment with 2 replicas and a CPU request of 200m. To make testing easier, expose it with a NodePort Service and use the namespace hpa:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: hpa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: 200m
            memory: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: hpa
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
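  • Apply the manifests (a sketch, assuming the Deployment and Service above were saved to one file; the file name nginx-hpa.yaml is just a placeholder, and the hpa namespace must exist first):
# kubectl create namespace hpa
# kubectl apply -f nginx-hpa.yaml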
  • 2. Check the deployment:
# kubectl  get po -n hpa
  NAME                     READY   STATUS    RESTARTS   AGE
  nginx-5c87768612-48b4v   1/1     Running   0          8m38s
  nginx-5c87768612-kfpkq   1/1     Running   0          8m38s
  • 3. Create the HPA
    • Create an HPA that controls the Deployment from the previous step and keeps the number of Pod replicas between 1 and 10.
    • The HPA increases or decreases the number of Pod replicas (via the Deployment) to keep the average CPU utilization across all Pods around the 50% target.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
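  • For the plain CPU-utilization case, roughly the same HPA can also be created imperatively with kubectl autoscale (a convenience sketch; the declarative manifest above is the recommended form):
# kubectl autoscale deployment nginx -n hpa --cpu-percent=50 --min=1 --max=10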

  • 4. Check the result:
# kubectl  get hpa -n hpa
  NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
  nginx   Deployment/nginx   0%/50%    1         10        2          50s
  • 5. Run a load test and watch the Pod count and the HPA change:
# Run the load test (-c sets the concurrency, -n the total number of requests; 30792 is the Service's NodePort)
# ab -c 1000 -n 100000000 http://127.0.0.1:30792/
  This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
  Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
  Licensed to The Apache Software Foundation, http://www.apache.org/
  Benchmarking 127.0.0.1 (be patient)
# Watch the changes
#  kubectl  get hpa -n hpa
  NAME    REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
  nginx   Deployment/nginx   303%/50%   1         10        7          12m

# kubectl  get po -n hpa
  NAME                         READY   STATUS    RESTARTS   AGE
  pod/nginx-5c87768612-6b4sl   1/1     Running   0          85s
  pod/nginx-5c87768612-99mjb   1/1     Running   0          69s
  pod/nginx-5c87768612-cls7r   1/1     Running   0          85s
  pod/nginx-5c87768612-hhdr7   1/1     Running   0          69s
  pod/nginx-5c87768612-jj744   1/1     Running   0          85s
  pod/nginx-5c87768612-kfpkq   1/1     Running   0          27m
  pod/nginx-5c87768612-xb94x   1/1     Running   0          69s
  • 6. You can see that the HPA TARGETS reached 303%, so a scale-up was needed, and the Pod count was automatically scaled out to 7. Now wait for the load test to finish;
# kubectl get hpa -n hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   20%/50%   1         10        7          16m

--- N minutes later ---

# kubectl get hpa -n hpa
  NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
  nginx   Deployment/nginx   0%/50%    1         10        7          18m

--- another N minutes later ---

# kubectl  get po -n hpa
  NAME                     READY   STATUS    RESTARTS   AGE
  nginx-5c87768612-jj744   1/1     Running   0          11m
  • 7. HPA example summary
    • CPU utilization dropped back to 0, so the HPA automatically scaled the replica count down to 1.
    • Why does the replica count drop to 1 instead of the replicas: 2 we specified in the Deployment?
      • Because the HPA was created with a replica range of minReplicas: 1 and maxReplicas: 10, the HPA scales all the way down to 1. How quickly it scales in can also be tuned, as sketched below.
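  • If the default scale-in behaviour is too aggressive for a workload, the autoscaling/v2beta2 API also exposes a behavior section. The fragment below is only a sketch (it assumes the cluster runs Kubernetes 1.18 or newer, and the values are illustrative); it would sit under spec of the HPA created above:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before acting on lower utilization
      policies:
      - type: Pods
        value: 1                        # remove at most 1 Pod per period
        periodSeconds: 60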

1.2 Kubernetes Vertical Pod Autoscaler (VPA)

VPA project repository: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler

1.2.1 VPA Overview

  • VPA, short for Vertical Pod Autoscaler, automatically sets CPU and memory requests based on observed container resource usage, which allows Pods to be scheduled onto nodes where appropriate resources are available, giving each Pod the resources it actually needs.
  • It can both shrink containers that over-request resources and, based on usage over time, raise the requests of containers that are under-provisioned.

  • Sometimes a workload cannot be scaled out by adding more Pods, for example a database. In that case the VPA can make the Pod itself bigger, e.g. by adjusting the Pod's CPU and memory.

1.2.2 VPA Example

Reference: https://www.jianshu.com/p/94ea8bee433e

1.2.2.1 Deploy metrics-server

  • 1. Download the deployment manifest:
# wget  https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
  • 2. Modify components.yaml
    • Change the image address: the gcr.io image is replaced with a copy in my own repository.
    • Adjust the metrics-server startup args, otherwise you will hit errors like unable to fully scrape metrics from source kubelet_summary…
      - name: metrics-server
        image: scofield/metrics-server:v0.3.7
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - /metrics-server
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP
  • 3. Deploy and verify:
# kubectl  apply -f components.yaml

# kubectl  get po -n kube-system
  NAME                                       READY   STATUS    RESTARTS   AGE
  metrics-server-7947cb98b6-xw6b8            1/1     Running   0          10m
# kubectl  top nodes

1.2.2.2 Deploy vertical-pod-autoscaler

  • 1. Clone the autoscaler repository:
# git clone https://github.com/kubernetes/autoscaler.git
  • 2. Deploy the autoscaler:
#  cd autoscaler/vertical-pod-autoscaler
#  ./hack/vpa-up.sh
  Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
  customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.autoscaling.k8s.io created
  customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.autoscaling.k8s.io created
  clusterrole.rbac.authorization.k8s.io/system:metrics-reader created
  clusterrole.rbac.authorization.k8s.io/system:vpa-actor created
  clusterrole.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
  clusterrole.rbac.authorization.k8s.io/system:evictioner created
  clusterrolebinding.rbac.authorization.k8s.io/system:metrics-reader created
  clusterrolebinding.rbac.authorization.k8s.io/system:vpa-actor created
  clusterrolebinding.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
  clusterrole.rbac.authorization.k8s.io/system:vpa-target-reader created
  clusterrolebinding.rbac.authorization.k8s.io/system:vpa-target-reader-binding created
  clusterrolebinding.rbac.authorization.k8s.io/system:vpa-evictionter-binding created
  serviceaccount/vpa-admission-controller created
  clusterrole.rbac.authorization.k8s.io/system:vpa-admission-controller created
  clusterrolebinding.rbac.authorization.k8s.io/system:vpa-admission-controller created
  clusterrole.rbac.authorization.k8s.io/system:vpa-status-reader created
  clusterrolebinding.rbac.authorization.k8s.io/system:vpa-status-reader-binding created
  serviceaccount/vpa-updater created
  deployment.apps/vpa-updater created
  serviceaccount/vpa-recommender created
  deployment.apps/vpa-recommender created
  Generating certs for the VPA Admission Controller in /tmp/vpa-certs.
  Generating RSA private key, 2048 bit long modulus (2 primes)
  ............................................................................+++++
  .+++++
  e is 65537 (0x010001)
  Generating RSA private key, 2048 bit long modulus (2 primes)
  ............+++++
  ...........................................................................+++++
  e is 65537 (0x010001)
  Signature ok
  subject=CN = vpa-webhook.kube-system.svc
  Getting CA Private Key
  Uploading certs to the cluster.
  secret/vpa-tls-certs created
  Deleting /tmp/vpa-certs.
  deployment.apps/vpa-admission-controller created
  service/vpa-webhook created
  • 3. Verify the deployment:
# metrics-server and the VPA components are all up and running

# kubectl  get po -n kube-system
  NAME                                        READY   STATUS    RESTARTS   AGE
  metrics-server-7947cb98b6-xw6b8             1/1     Running   0          46m
  vpa-admission-controller-7d87559549-g77h9   1/1     Running   0          10m
  vpa-recommender-84bf7fb9db-65669            1/1     Running   0          10m
  vpa-updater-79cc46c7bb-5p889                1/1     Running   0          10m

1.2.2.3 updateMode: "Off" (this mode only produces resource recommendations and does not update the Pods)

  • 1. Deploy an nginx service into the namespace vpa:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: vpa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: 100m
            memory: 250Mi
  • 2. Create a NodePort Service so the Pods can be load-tested more easily:
# cat  nginx-vpa-ingress.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: vpa
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

# kubectl  get svc -n vpa
  NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
  nginx   NodePort   10.97.250.131   <none>        80:32621/TCP   55s
  • 3. Create the VPA
    • Start with updateMode: "Off"; in this mode the VPA only produces resource recommendations and does not update the Pods:
# cat   nginx-vpa-demo.yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
  namespace: vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: nginx
  updatePolicy:
    updateMode: "Off"
  resourcePolicy:
    containerPolicies:
    - containerName: "nginx"
      minAllowed:
        cpu: "250m"
        memory: "100Mi"
      maxAllowed:
        cpu: "2000m"
        memory: "2048Mi"
  • 4. Check the result:
# kubectl  get vpa -n vpa
  NAME        AGE
  nginx-vpa   2m34s
  • 5. Use describe to inspect the VPA details, focusing mainly on Container Recommendations:
# kubectl  describe  vpa nginx-vpa -n vpa
Name:         nginx-vpa
Namespace:    vpa
  ...(output truncated)...
  Update Policy:
    Update Mode:  Off
Status:
  Conditions:
    Last Transition Time:  2020-09-28T04:04:25Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  nginx
      Lower Bound:
        Cpu:     250m
        Memory:  262144k
      Target:
        Cpu:     250m
        Memory:  262144k
      Uncapped Target:
        Cpu:     25m
        Memory:  262144k
      Upper Bound:
        Cpu:     803m
        Memory:  840190575
Events:          <none>
  • Lower Bound: the lower bound of the recommendation
  • Target: the recommended value
  • Upper Bound: the upper bound of the recommendation
  • Uncapped Target: the recommendation the VPA would give if no min/max bounds were configured
  • The output above shows an uncapped recommended CPU request of 25m (capped to 250m by minAllowed) and a recommended memory request of 262144k bytes.
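  • As a convenience, the target recommendation can also be pulled directly with a jsonpath query instead of reading the full describe output (assuming the recommender has already populated the status):
# kubectl get vpa nginx-vpa -n vpa -o jsonpath='{.status.recommendation.containerRecommendations[0].target}'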
  • 6. Load-test nginx:
# ab -c 100 -n 10000000 http://192.168.127.124:32621/
  This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
  Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
  Licensed to The Apache Software Foundation, http://www.apache.org/

  Benchmarking 192.168.127.124 (be patient)
  Completed 1000000 requests
  Completed 2000000 requests
  Completed 3000000 requests
  • 7. A little later, check the VPA Recommendation again:
# kubectl  describe  vpa nginx-vpa   -n vpa |tail -n 20 
  Conditions:
    Last Transition Time:  2021-06-28T04:04:25Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  nginx
      Lower Bound:
        Cpu:     250m
        Memory:  262144k
      Target:
        Cpu:     476m
        Memory:  262144k
      Uncapped Target:
        Cpu:     476m
        Memory:  262144k
      Upper Bound:
        Cpu:     2
        Memory:  387578728
Events:          <none>
  • From the output you can see that the VPA now recommends Cpu: 476m for the Pods. Because updateMode: "Off" is set here, the Pods themselves are not updated;

1.2.2.4 updateMode: "Auto" (in this mode, when a running Pod's resources do not meet the VPA recommendation, the Pod is evicted and redeployed with sufficient resources)

  • 1. Set updateMode: "Auto" and see what the VPA does.
    • Also change the requests to cpu: 100m, memory: 50Mi:
# kubectl  apply -f nginx-vpa.yaml
  deployment.apps/nginx created

# cat nginx-vpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: vpa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: 100m
            memory: 50Mi

# kubectl  get po  -n vpa
  NAME                     READY   STATUS    RESTARTS   AGE
  nginx-7ff65f974c-f4vgl   1/1     Running   0          114s
  nginx-7ff65f974c-v9ccx   1/1     Running   0          114s
  • 2. Deploy the VPA again. The manifest nginx-vpa-demo.yaml only changes updateMode: "Auto" and name: nginx-vpa-2:
# cat  nginx-vpa-demo.yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa-2
  namespace: vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: nginx
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: "nginx"
      minAllowed:
        cpu: "250m"
        memory: "100Mi"
      maxAllowed:
        cpu: "2000m"
        memory: "2048Mi"

# kubectl apply -f nginx-vpa-demo.yaml
  verticalpodautoscaler.autoscaling.k8s.io/nginx-vpa-2 created

# kubectl  get vpa -n vpa
  NAME        AGE
  nginx-vpa-2   9s
  • 3. Run the load test again:
# ab -c 1000 -n 100000000 http://192.168.127.124:32621/
  • 4. A little later, use describe to inspect the VPA details again, focusing on Container Recommendations:
# kubectl  describe  vpa nginx-vpa-2    -n vpa |tail -n 30
      Min Allowed:
        Cpu:     250m
        Memory:  100Mi
  Target Ref:
    API Version:  apps/v1
    Kind:         Deployment
    Name:         nginx
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2021-06-28T04:48:25Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  nginx
      Lower Bound:
        Cpu:     250m
        Memory:  262144k
      Target:
        Cpu:     476m
        Memory:  262144k
      Uncapped Target:
        Cpu:     476m
        Memory:  262144k
      Upper Bound:
        Cpu:     2
        Memory:  262144k
Events:          <none>
  • Target has become Cpu: 476m, Memory: 262144k.

  • 5. Check the events:
# kubectl  get event -n vpa
  LAST SEEN   TYPE      REASON              OBJECT                        MESSAGE
  33m         Normal    Pulling             pod/nginx-7ff65f974c-f4vgl    Pulling image "nginx"
  33m         Normal    Pulled              pod/nginx-7ff65f974c-f4vgl    Successfully pulled image "nginx" in 15.880996269s
  33m         Normal    Created             pod/nginx-7ff65f974c-f4vgl    Created container nginx
  33m         Normal    Started             pod/nginx-7ff65f974c-f4vgl    Started container nginx
  26m         Normal    EvictedByVPA        pod/nginx-7ff65f974c-f4vgl    Pod was evicted by VPA Updater to apply resource recommendation.
  26m         Normal    Killing             pod/nginx-7ff65f974c-f4vgl    Stopping container nginx
  35m         Normal    Scheduled           pod/nginx-7ff65f974c-hnzr5    Successfully assigned vpa/nginx-7ff65f974c-hnzr5 to k8s-node005
  35m         Normal    Pulling             pod/nginx-7ff65f974c-hnzr5    Pulling image "nginx"
  34m         Normal    Pulled              pod/nginx-7ff65f974c-hnzr5    Successfully pulled image "nginx" in 40.750855715s
  34m         Normal    Scheduled           pod/nginx-7ff65f974c-v9ccx    Successfully assigned vpa/nginx-7ff65f974c-v9ccx to k8s-node004
  33m         Normal    Pulling             pod/nginx-7ff65f974c-v9ccx    Pulling image "nginx"
  33m         Normal    Pulled              pod/nginx-7ff65f974c-v9ccx    Successfully pulled image "nginx" in 15.495315629s
  33m         Normal    Created             pod/nginx-7ff65f974c-v9ccx    Created container nginx
  33m         Normal    Started             pod/nginx-7ff65f974c-v9ccx    Started container nginx
  • The events show EvictedByVPA: the VPA Updater automatically stopped the running nginx Pods, and new nginx Pods were started with the VPA-recommended resources. Describing one of the new nginx Pods confirms this:
# kubectl  describe po nginx-7ff65f974c-2m9zl -n vpa
Name:         nginx-7ff65f974c-2m9zl
Namespace:    vpa
Priority:     0
Node:         k8s-node004/192.168.100.184
Start Time:   June, 28 Sep 2021 00:46:19 -0400
Labels:       app=nginx
              pod-template-hash=7ff65f974c
Annotations:  cni.projectcalico.org/podIP: 100.67.191.53/32
              vpaObservedContainers: nginx
              vpaUpdates: Pod resources updated by nginx-vpa: container 0: cpu request, memory request
Status:       Running
IP:           100.67.191.53
IPs:
  IP:           100.67.191.53
Controlled By:  ReplicaSet/nginx-7ff65f974c
Containers:
  nginx:
    Container ID:   docker://c96bcd07f35409d47232a0bf862a76a56352bd84ef10a95de8b2e3f6681df43d
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:c628b67d21744fce822d22fdcc0389f6bd763daac23a6b77147d0712ea7102d0
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      June, 28 Sep 2021 00:46:38 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        476m
      memory:     262144k
  • Note the key part: Requests is now cpu: 476m, memory: 262144k.
  • Compare this with the requests in the original Deployment manifest:
          requests:
            cpu: 100m
            memory: 50Mi
  • As the service load changes, the VPA recommendations keep changing. When a running Pod's resources no longer meet the VPA recommendation, the Pod is evicted and a new Pod with sufficient resources is deployed in its place.

1.2.2.5 VPA Limitations & Advantages

  • Limitations
    • It should not be used together with the HPA (Horizontal Pod Autoscaler) on CPU or memory metrics;
  • Advantages
    • Pods request only what they actually need, so cluster nodes are used efficiently;
    • Pods are scheduled onto nodes that have the appropriate resources available;
    • There is no need to run benchmark jobs to determine suitable CPU and memory request values;
    • The VPA can adjust CPU and memory requests at any time without manual intervention, reducing maintenance effort.

1.3 Kubernetes Cluster Autoscaler (CA)

CA project repository: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler

Node bootstrapping: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/

1.3.1 CA Overview

  • The Cluster Autoscaler (CA) scales cluster nodes based on pending Pods. It periodically checks whether there are any pending Pods and increases the size of the cluster if more resources are needed and the scaled-up cluster would still be within the user-provided constraints. The CA talks to the cloud provider to request more nodes or release idle ones; it works with GCP, AWS, and Azure. Version 1.0 (GA) was released together with Kubernetes 1.8.

  • When cluster resources are insufficient, the CA automatically provisions new compute resources and adds them to the cluster.
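  • The CA itself runs as a Deployment inside the cluster, with the cloud provider and the node-group scaling bounds passed as command-line flags. The fragment below is only a hedged sketch (the image tag and node-group name are placeholders; use the cloud-provider-specific example manifests in the project repository for a real deployment):
      containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0   # placeholder tag, match your Kubernetes minor version
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws               # or gce, azure, ...
        - --nodes=2:10:my-node-group         # min:max:node-group-name (placeholder group name)
        - --scan-interval=10s                # how often to look for unschedulable Pods
        - --skip-nodes-with-system-pods=false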

1.4 Lead Time for Pod Autoscaling

Reference: https://mp.weixin.qq.com/s/GKS3DJHm4p0Tjtj8nJRGmA

  • Four factors determine the lead time:
    • 1. HPA response time
    • 2. CA response time
    • 3. Node bootstrap time
    • 4. Pod creation time

  • By default, the kubelet scrapes Pod CPU and memory usage every 10 seconds;
  • Every minute, the Metrics Server exposes the aggregated metrics to other components through the Kubernetes API;

  • The CA checks for unschedulable Pods every 10 seconds.
    • With fewer than 100 nodes and at most 30 Pods per node, this takes no more than 30s, with an average latency of about 5s;
    • With 100 to 1000 nodes, no more than 60s, with an average latency of about 15s;

  • Node provisioning time depends on the cloud provider, typically 3 to 5 minutes;

  • Pod creation by the container runtime: a few milliseconds to start the container plus a few seconds to pull the image. Without image caching this ranges from a few seconds to about 1 minute, depending on the size and number of layers;

  • For a small cluster, the worst case is about 6 minutes 30 seconds. For clusters with more than 100 nodes, it can be as long as 7 minutes:
HPA delay:          1m30s +
CA delay:           0m30s +
Cloud provider:     4m    +
Container runtime:  0m30s +
=========================
Total               6m30s
  • In a burst scenario such as a traffic spike, are you willing to wait those 7 minutes? How can the time be compressed? The knobs are listed below, and a sketch of where they live follows. (Even with these settings tuned down, you are still bound by the cloud provider's provisioning time.)
    • The HPA refresh interval, 15 seconds by default, controlled by the --horizontal-pod-autoscaler-sync-period flag of kube-controller-manager;
    • The Metrics Server scrape interval, 60 seconds by default, controlled by metric-resolution;
    • The CA scan interval, 10 seconds by default, controlled by scan-interval;
    • Caching images on the nodes, for example with tools such as kube-fledged.
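  • As a rough sketch of where each knob lives (the flag names are real, but the exact manifests depend on how the cluster was installed, and the values are illustrative):
# kube-controller-manager (e.g. the static Pod manifest /etc/kubernetes/manifests/kube-controller-manager.yaml on a kubeadm cluster)
- --horizontal-pod-autoscaler-sync-period=10s
# metrics-server Deployment args
- --metric-resolution=30s
# cluster-autoscaler Deployment args
- --scan-interval=10s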

