kubernetes(31): Prometheus-adapter + custom-metrics-api for a custom HPA


Implementing a custom HPA in Kubernetes with prometheus-adapter + custom-metrics-api

Reference: https://blog.51cto.com/juestnow/2413581

1  HPA overview

Horizontal Pod Autoscaling (HPA) is the Kubernetes feature for automatically scaling Pods horizontally. Why horizontal rather than vertical? Because automatic scaling comes in two flavors:

Horizontal scaling (scale out): increasing or decreasing the number of instances.

Vertical scaling (scale up): increasing or decreasing the resources of a single instance, for example adding CPU or memory.

 

For more details, see kubernetes (24): horizontal scaling with metrics-server - resource HPA.

1.1 Resource-based HPA

See kubernetes (24): horizontal scaling with metrics-server - resource HPA.

Metrics Server is a cluster-wide aggregator of resource usage data and the successor to Heapster. It collects node and Pod CPU and memory usage through the kubelet Summary API, and Pods are scaled automatically based on that CPU and memory usage. This approach works purely at the resource level: scaling decisions are computed from resource consumption.
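For comparison, a minimal resource-based HPA manifest looks like the following. This is only a sketch: it assumes a Deployment named my-app (a placeholder) exists and that metrics-server is installed; the 50% CPU threshold is illustrative.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                         # hypothetical HPA name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50     # scale out when average CPU usage exceeds 50%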

1.2  Custom-metric HPA (e.g. QPS)

You can also use Prometheus together with a custom metrics API server: register the custom API service with the aggregation layer, then configure an HPA against custom metrics exposed by a demo application, for example an HPA driven by the number of HTTP requests.

Both Metrics Server and custom-metrics-api can be deployed in several ways; a recommended reference is:

https://github.com/stefanprodan/k8s-prom-hpa

Since we deployed the Prometheus Operator earlier, we will use the custom-metrics-api manifests it ships with.

2  Deployment prerequisites

All of the Prometheus Operator pods run in the monitoring namespace.

Repository: git clone https://github.com/coreos/kube-prometheus

Prometheus itself has already been deployed.

If not, see Prometheus monitoring k8s (10): PrometheusOperator - a more elegant way to deploy Prometheus.

3   Deploy custom-metrics-api

cd kube-prometheus/experimental/custom-metrics-api/
kubectl apply -f custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
kubectl apply -f custom-metrics-apiservice.yaml
kubectl apply -f custom-metrics-cluster-role.yaml
kubectl apply -f custom-metrics-configmap.yaml
kubectl apply -f hpa-custom-metrics-cluster-role-binding.yaml
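Among these manifests, custom-metrics-apiservice.yaml is the one that registers the custom metrics group with the aggregation layer and routes it to the prometheus-adapter service. Roughly, it looks like the sketch below; the exact fields may differ between kube-prometheus versions, so treat it as illustrative:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  # requests to /apis/custom.metrics.k8s.io/v1beta1 are proxied to prometheus-adapter
  service:
    name: prometheus-adapter
    namespace: monitoring
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100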

 

 

4   Adjust and redeploy prometheus-adapter

For installing Prometheus itself, see:

Prometheus monitoring k8s (2): deploying Prometheus manually

Prometheus monitoring k8s (10): PrometheusOperator - a more elegant way to deploy Prometheus

 

prometheus-adapter was already deployed along with Prometheus; we just need to adjust it.

 

[root@k8s-master experimental]# kubectl get pods -n monitoring -o wide | grep prometheus-adapter
prometheus-adapter-668748ddbd-9h8g4   1/1     Running   0          19h   10.254.1.247   k8s-node-1   <none>           <none>
[root@k8s-master experimental]#

 

4.1   Collect the prometheus-adapter YAML files

[root@k8s-master kube-prometheus]# cd -
/root/prometheus/kube-prometheus/experimental
[root@k8s-master experimental]# cd ../manifests/
[root@k8s-master manifests]# mkdir prometheus-adapter
[root@k8s-master manifests]# mv prometheus-adapter*.yaml prometheus-adapter
[root@k8s-master manifests]# cd prometheus-adapter
[root@k8s-master prometheus-adapter]# ls
prometheus-adapter-apiService.yaml                          prometheus-adapter-configMap.yaml
prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml  prometheus-adapter-deployment.yaml
prometheus-adapter-clusterRoleBindingDelegator.yaml         prometheus-adapter-roleBindingAuthReader.yaml
prometheus-adapter-clusterRoleBinding.yaml                  prometheus-adapter-serviceAccount.yaml
prometheus-adapter-clusterRoleServerResources.yaml          prometheus-adapter-service.yaml
prometheus-adapter-clusterRole.yaml
[root@k8s-master prometheus-adapter]#
[root@k8s-master prometheus-adapter]# kubectl delete -f  .
clusterrole.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrolebinding.rbac.authorization.k8s.io "resource-metrics:system:auth-delegator" deleted
clusterrole.rbac.authorization.k8s.io "resource-metrics-server-resources" deleted
configmap "adapter-config" deleted
deployment.apps "prometheus-adapter" deleted
rolebinding.rbac.authorization.k8s.io "resource-metrics-auth-reader" deleted
service "prometheus-adapter" deleted
serviceaccount "prometheus-adapter" deleted
[root@k8s-master prometheus-adapter]# mv prometheus-adapter-configMap.yaml /tmp/
[root@k8s-master prometheus-adapter]#
Note: custom-metrics-api already ships its own ConfigMap, which must not be overwritten, so delete the default adapter ConfigMap or move it out of the way.
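For reference, the ConfigMap shipped with custom-metrics-api holds the rules that turn Prometheus series into custom metrics. A single rule typically follows the pattern below; this is a sketch of the rule format rather than the literal contents of custom-metrics-configmap.yaml, and label names such as pod vs pod_name depend on your Prometheus and kubelet versions:

rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  # strip the _total suffix when exposing the metric through the API
  name:
    matches: "^(.*)_total$"
    as: "${1}"
  # expose the per-pod request rate over the last 2 minutes
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'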

 

 

4.2   Generate the TLS certificate required by prometheus-adapter

### Create the secret

kubectl create secret generic volume-serving-cert --from-file=apiserver.crt --from-file=apiserver.key  -n monitoring
kubectl get secret -n monitoring | grep volume-serving-cert
kubectl get secret volume-serving-cert -n monitoring -o yaml

 

 

I installed Kubernetes with kubeadm, so the API server certificate already exists:

[root@k8s-master ~]# cd /etc/kubernetes/
manifests/ pki/
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# ls
apiserver.crt              apiserver.key                 ca.crt  front-proxy-ca.crt      front-proxy-client.key  serving.crt  wx.crt
apiserver-etcd-client.crt  apiserver-kubelet-client.crt  ca.key  front-proxy-ca.key      sa.key                  serving.csr  wx.csr
apiserver-etcd-client.key  apiserver-kubelet-client.key  etcd    front-proxy-client.crt  sa.pub                  serving.key  wx.key
[root@k8s-master pki]#
[root@k8s-master pki]# kubectl create secret generic volume-serving-cert --from-file=apiserver.crt --from-file=apiserver.key  -n monitoring
secret/volume-serving-cert created
[root@k8s-master pki]# kubectl get secret -n monitoring | grep volume-serving-cert
volume-serving-cert               Opaque                                2      6s
[root@k8s-master pki]# kubectl get secret volume-serving-cert -n monitoring volume-serving-cert -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
apiserver.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURXakNDQWtLZ0F3SUJBZ0lJYnQ4MS9hSW8xRGN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RBNE1qa3dNVEl3TXpGYUZ3MHlNREE0TWpnd01USXdNekZhTUJreApGekFWQmdOVkJBTVREbXQxWW1VdFlYQnBjMlZ5ZG1WeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBCk1JSUJDZ0tDQVFFQXhzVGlsV0ZrSi82RnpRV21RNjA0NU9TYjRXcEowWGcwSDc3bW51dCtmOUVzRlRSQkMwQWcKTnBka09sZVN4aUt6Mi9GYXh2dndVZGtXaHBlY1hlT2xFbHM0VXlRanlpS
…..

 

 

If you do not have a certificate yet, generate one (for reference):

# Install the CFSSL tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

export PATH=/usr/local/bin:$PATH

cd /etc/kubernetes/pki
cat << EOF | tee apiserver.json
{
  "CN": "apiserver",
  "hosts": [""], 
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "wx",
      "OU": "wx"
    }
  ]
}
EOF

### Generate the certificate
cfssl gencert -ca=/apps/work/k8s/cfssl/pki/k8s/k8s-ca.pem -ca-key=/apps/work/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
    -config=/apps/work/k8s/cfssl/ca-config.json \
    -profile=kubernetes /apps/work/k8s/cfssl/k8s/apiserver.json | cfssljson -bare ./apiserver
### Rename the certificate files
mv apiserver-key.pem apiserver.key
mv apiserver.pem apiserver.crt
### Create the secret
kubectl create secret generic volume-serving-cert --from-file=apiserver.crt --from-file=apiserver.key  -n monitoring
kubectl get secret -n monitoring | grep volume-serving-cert
kubectl get secret volume-serving-cert -n monitoring -o yaml
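The secret name volume-serving-cert is not arbitrary: the prometheus-adapter Deployment mounts it as the adapter's serving certificate. Conceptually the Deployment consumes it roughly as sketched below; the mount path and flag values here are assumptions, so check prometheus-adapter-deployment.yaml in your own checkout for the authoritative version.

# excerpt-style sketch of how the Deployment could consume the secret
      containers:
      - name: prometheus-adapter
        args:
        - --tls-cert-file=/var/run/serving-cert/apiserver.crt       # assumed mount path
        - --tls-private-key-file=/var/run/serving-cert/apiserver.key
        volumeMounts:
        - name: volume-serving-cert
          mountPath: /var/run/serving-cert
          readOnly: true
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: volume-serving-cert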

 

4.3   Apply the prometheus-adapter manifests

 

[root@k8s-master pki]# cd /root/prometheus/kube-prometheus/manifests/prometheus-adapter/
[root@k8s-master prometheus-adapter]# ls
prometheus-adapter-apiService.yaml                          prometheus-adapter-clusterRole.yaml
prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml  prometheus-adapter-deployment.yaml
prometheus-adapter-clusterRoleBindingDelegator.yaml         prometheus-adapter-roleBindingAuthReader.yaml
prometheus-adapter-clusterRoleBinding.yaml                  prometheus-adapter-serviceAccount.yaml
prometheus-adapter-clusterRoleServerResources.yaml          prometheus-adapter-service.yaml
[root@k8s-master prometheus-adapter]# kubectl apply -f .
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
[root@k8s-master prometheus-adapter]#

 

4.4 Verify that prometheus-adapter is working

[root@k8s-master prometheus-adapter]# kubectl get pods -n monitoring -o wide | grep prometheus-adapter
prometheus-adapter-668748ddbd-d9hxz   1/1     Running   0          6m55s   10.254.1.3     k8s-node-1   <none>           <none>
[root@k8s-master prometheus-adapter]# kubectl get service -n monitoring  | grep prometheus-adapter
prometheus-adapter      ClusterIP   10.98.19.67      <none>        443/TCP                      7m
[root@k8s-master prometheus-adapter]# kubectl get --raw "/apis/custom.metrics.k8s.io" | jq .
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "custom.metrics.k8s.io",
  "versions": [
    {
      "groupVersion": "custom.metrics.k8s.io/v1beta1",
      "version": "v1beta1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "custom.metrics.k8s.io/v1beta1",
    "version": "v1beta1"
  }
}
[root@k8s-master prometheus-adapter]#

 

 

List the custom metrics provided by Prometheus:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes/kubelet_pleg_relist_duration_seconds_count",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/node_memory_Active_file_bytes",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "namespaces/node_memory_PageTables_bytes",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
…..

 

 

Get the filesystem usage of every pod in the monitoring namespace:

 

[root@k8s-master prometheus-adapter]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/fs_usage_bytes" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/fs_usage_bytes"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "alertmanager-main-0",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "alertmanager-main-1",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "alertmanager-main-2",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "grafana-57bfdd47f8-d9fns",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "kube-state-metrics-ff5cb7949-lh945",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "node-exporter-9mhxs",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "node-exporter-csqzm",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "node-exporter-xc8tb",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-adapter-668748ddbd-d9hxz",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-k8s-0",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-k8s-1",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-operator-55b978b89-cgczz",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "redis-6dc489fd96-77np5",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    }
  ]
}

 

If this data comes back, prometheus-adapter is deployed and working correctly.
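If the raw queries fail instead, two quick checks usually narrow the problem down (plain kubectl commands, nothing specific to this setup):

# check that the aggregated API is registered and reports Available
kubectl get apiservice v1beta1.custom.metrics.k8s.io -o yaml
# inspect the adapter logs for Prometheus query or scrape errors
kubectl logs -n monitoring deploy/prometheus-adapter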

 

 

5  Test scaling on the custom metric with the official sample HPA app

 

[root@k8s-master prometheus]# cd kube-prometheus/experimental/custom-metrics-api/
[root@k8s-master custom-metrics-api]# kubectl apply -f  sample-app.yaml
servicemonitor.monitoring.coreos.com/sample-app created
service/sample-app created
deployment.apps/sample-app created
horizontalpodautoscaler.autoscaling/sample-app created
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-k5c5b                 1/1     Running   0          17s
[root@k8s-master custom-metrics-api]# kubectl get service | grep sample-app
sample-app                ClusterIP   10.106.102.68    <none>        8080/TCP         25s
[root@k8s-master custom-metrics-api]#
[root@k8s-master custom-metrics-api]# kubectl get hpa | grep sample-app
sample-app   Deployment/sample-app   264m/500m                1         10        1          52s
[root@k8s-master custom-metrics-api]#

[root@k8s-master custom-metrics-api]# curl 10.106.102.68:8080/metrics
# HELP http_requests_total The amount of requests served by the server in total
# TYPE http_requests_total counter
http_requests_total 34

 

This gives us the metric http_requests_total.
Note that the _total suffix of counter metrics is stripped when they are exposed through this API, so http_requests_total appears as http_requests.
 
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
[root@k8s-master custom-metrics-api]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "sample-app-74684b97f-k5c5b",
        "apiVersion": "/v1"
      },
      "metricName": "http_requests",
      "timestamp": "2019-10-09T06:12:38Z",
      "value": "418m"
    }
  ]
}
# Test autoscaling
# Install hey (a small HTTP load generator)
go get -u github.com/rakyll/hey
hey -n 10000 -q 5 -c 5 http://10.106.102.68:8080
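In this invocation hey sends 10000 requests in total (-n) with 5 concurrent workers (-c), each worker rate-limited to about 5 requests per second (-q), i.e. roughly 25 requests per second against the service. That is well above the sample app's per-pod target of 500m (0.5 requests), so the HPA should scale out.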

 

 

A few minutes later, the HPA starts scaling the deployment up.

 

[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 0/1     ContainerCreating   0          2s
sample-app-74684b97f-k5c5b                 1/1     Running             0          4m5s
sample-app-74684b97f-n5tvb                 0/1     ContainerCreating   0          2s
sample-app-74684b97f-sbkvn                 0/1     ContainerCreating   0          2s
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 1/1     Running             0          27s
sample-app-74684b97f-6jg8x                 1/1     Running             0          12s
sample-app-74684b97f-gq622                 1/1     Running             0          12s
sample-app-74684b97f-k5c5b                 1/1     Running             0          4m30s
sample-app-74684b97f-n5tvb                 1/1     Running             0          27s
sample-app-74684b97f-sbkvn                 1/1     Running             0          27s
sample-app-74684b97f-x6thr                 0/1     ContainerCreating   0          12s
sample-app-74684b97f-zk9dz                 1/1     Running             0          12s
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 1/1     Running             0          67s
sample-app-74684b97f-6jg8x                 1/1     Running             0          52s
sample-app-74684b97f-969vx                 1/1     Running             0          36s
sample-app-74684b97f-gq622                 1/1     Running             0          52s
sample-app-74684b97f-k5c5b                 1/1     Running             0          5m10s
sample-app-74684b97f-n5tvb                 1/1     Running             0          67s
sample-app-74684b97f-q8h2m                 1/1     Running             0          36s
sample-app-74684b97f-sbkvn                 1/1     Running             0          67s
sample-app-74684b97f-x6thr                 0/1     ContainerCreating   0          52s
sample-app-74684b97f-zk9dz                 1/1     Running             0          52s
[root@k8s-master custom-metrics-api]#
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 1/1     Running   0          112s
sample-app-74684b97f-6jg8x                 1/1     Running   0          97s
sample-app-74684b97f-969vx                 1/1     Running   0          81s
sample-app-74684b97f-gq622                 1/1     Running   0          97s
sample-app-74684b97f-k5c5b                 1/1     Running   0          5m55s
sample-app-74684b97f-n5tvb                 1/1     Running   0          112s
sample-app-74684b97f-q8h2m                 1/1     Running   0          81s
sample-app-74684b97f-sbkvn                 1/1     Running   0          112s
sample-app-74684b97f-x6thr                 1/1     Running   0          97s
sample-app-74684b97f-zk9dz                 1/1     Running   0          97s
[root@k8s-master custom-metrics-api]#

[root@k8s-master ~]# kubectl get hpa  | grep sample-app
NAME         REFERENCE               TARGETS                  MINPODS   MAXPODS   REPLICAS   AGE
sample-app   Deployment/sample-app   4656m/500m               1         10        4          4m46s
[root@k8s-master ~]# kubectl get hpa | grep sample-app
sample-app   Deployment/sample-app   3315m/500m               1         10        8         4m57s
[root@k8s-master ~]# kubectl get hpa | grep sample-app
sample-app   Deployment/sample-app   3315m/500m               1         10        10         5m


[root@k8s-master ~]# kubectl describe hpa
podinfo     sample-app
[root@k8s-master ~]# kubectl describe hpa sample-app
Name:                       sample-app
Namespace:                  default
Labels:                     <none>
Annotations:                kubectl.kubernetes.io/last-applied-configuration:
                              {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"sample-app","namespace":"default...
CreationTimestamp:          Wed, 09 Oct 2019 14:10:28 +0800
Reference:                  Deployment/sample-app
Metrics:                    ( current / target )
  "http_requests" on pods:  2899m / 500m
Min replicas:               1
Max replicas:               10
Deployment pods:            10 current / 10 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from pods metric http_requests
  ScalingLimited  True    TooManyReplicas      the desired replica count is more than the maximum replica count
Events:
  Type    Reason             Age    From                       Message
  ----    ------             ----   ----                       -------
  Normal  SuccessfulRescale  4m52s  horizontal-pod-autoscaler  New size: 4; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  4m37s  horizontal-pod-autoscaler  New size: 8; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  4m21s  horizontal-pod-autoscaler  New size: 10; reason: pods metric http_requests above target
[root@k8s-master ~]#
# Quite a while later
[root@k8s-master ~]# kubectl describe hpa sample-app | tail -10
  Type    Reason             Age    From                       Message
  ----    ------             ----   ----                       -------
  Normal  SuccessfulRescale  35m    horizontal-pod-autoscaler  New size: 4; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  35m    horizontal-pod-autoscaler  New size: 8; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  35m    horizontal-pod-autoscaler  New size: 10; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  23m    horizontal-pod-autoscaler  New size: 8; reason: All metrics below target
  Normal  SuccessfulRescale  18m    horizontal-pod-autoscaler  New size: 7; reason: All metrics below target
  Normal  SuccessfulRescale  13m    horizontal-pod-autoscaler  New size: 6; reason: All metrics below target
  Normal  SuccessfulRescale  8m18s  horizontal-pod-autoscaler  New size: 5; reason: All metrics below target
  Normal  SuccessfulRescale  3m15s  horizontal-pod-autoscaler  New size: 4; reason: All metrics below target
[root@k8s-master ~]# kubectl get pod | grep sample
sample-app-74684b97f-6gftk                 1/1     Running   0          36m
sample-app-74684b97f-k5c5b                 1/1     Running   0          40m
sample-app-74684b97f-n5tvb                 1/1     Running   0          36m
sample-app-74684b97f-sbkvn                 1/1     Running   0          36m
[root@k8s-master ~]#

 

 

The autoscaler does not react to load spikes immediately. By default the metrics sync happens every 30 seconds, and scaling up or down only takes place if there was no rescaling within the last five minutes. This way the HPA avoids making rapid, conflicting decisions and gives the Cluster Autoscaler time to react.
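These intervals are kube-controller-manager settings rather than properties of the adapter; if you need different behaviour you can tune them there. The flags below are standard kube-controller-manager options and the values are only examples:

# in the kube-controller-manager manifest (values are examples)
--horizontal-pod-autoscaler-sync-period=30s
--horizontal-pod-autoscaler-downscale-stabilization=5m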

 

 

Notes on thresholds

m stands for milli-units; for example, 901m means 901 milli-requests.

1000m = 1

For example, a TARGETS value of 44315m/100 means the measured per-pod average is 44315 milli-requests (about 44.3) against a threshold of 100.

[root@k8s-master custom-metrics-api]# kubectl get hpa
NAME         REFERENCE               TARGETS                  MINPODS   MAXPODS   REPLICAS   AGE
sample-app   Deployment/sample-app   44315m/100                 1         10        4          43m
[root@k8s-master custom-metrics-api]#

 

If instead the target is set to 10000m, that equals 10, and kubectl displays it directly as 10:

    pods:
      metricName: http_requests
      targetAverageValue: 10000m
[root@k8s-master custom-metrics-api]# kubectl get hpa| grep sample
sample-app   Deployment/sample-app   212m/10                  1         10        10         52m
[root@k8s-master custom-metrics-api]#
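Putting this together, the HPA spec for the sample app (autoscaling/v2beta1, matching sample-app.yaml) targets the custom pods metric. A minimal sketch using the 10000m target from the example above:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      # 10000m = 10 requests per pod (averaged over the adapter's rate window)
      targetAverageValue: 10000m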

 

