k8s - Container Resource Requests, Resource Limits, and Heapster - Part 22


I. Container Resource Requests and Limits

Resource requests and resource limits refer to resources such as CPU and memory.

Two keywords govern resource requests and limits:

  • request: the requested amount, i.e. the minimum guarantee. At scheduling time, the chosen node must be able to satisfy the resource size declared in the request;
  • limits: a hard cap. No matter how the container behaves at runtime, it can never use more than the limits value;

CPU: one k8s CPU corresponds to one logical CPU on the host. A logical CPU can be subdivided into 1000 millicores, so 1 CPU = 1000m; 500m = 0.5 CPU, i.e. half a core.

Memory units: E, P, T, G, M, K (and their power-of-two counterparts Ei, Pi, Ti, Gi, Mi, Ki).
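As a minimal illustration of these units (a hypothetical snippet; the pod name and image are placeholders, not from this tutorial), millicore CPU values and binary memory suffixes combine in a resources stanza like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: units-demo          # placeholder name
spec:
  containers:
  - name: app
    image: nginx            # placeholder image
    resources:
      requests:
        cpu: "250m"         # 250 millicores = a quarter of one CPU core
        memory: "128Mi"     # 128 * 2^20 bytes; "128M" would instead mean 128 * 10^6 bytes
      limits:
        cpu: "1"            # 1 full CPU = 1000m
        memory: "256Mi"
```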

[root@master ~]# kubectl explain pods.spec.containers.resources
[root@master ~]# kubectl explain pods.spec.containers.resources.requests
[root@master ~]# kubectl explain pods.spec.containers.resources.limits

Usage reference: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

[root@master metrics]# pwd
/root/manifests/metrics
[root@master metrics]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-c 1", "--metrics-brief"]       #-c 1 starts one worker process to stress the CPU; by default each stress-ng worker uses 256M of memory
    resources:
      requests:
        cpu: "200m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"

#Create the pod
[root@master metrics]# kubectl apply -f pod-demo.yaml 
pod/pod-demo created

[root@master metrics]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   1/1     Running   0          6s

[root@master metrics]# kubectl exec pod-demo -- top
Mem: 1378192K used, 487116K free, 12540K shrd, 2108K buff, 818184K cached
CPU:  26% usr   1% sys   0% nic  71% idle   0% io   0% irq   0% sirq
Load average: 0.78 0.96 0.50 2/479 11
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    6     1 root     R     6884   0%   1  26% {stress-ng-cpu} /usr/bin/stress-ng
    7     0 root     R     1504   0%   0   0% top
    1     0 root     S     6244   0%   1   0% /usr/bin/stress-ng -c 1 --metrics-

Once we assign resource requests/limits to a container, k8s automatically assigns the pod a QoS (Quality of Service) class; you can view this field with kubectl describe pods pod_name.

[root@master metrics]# kubectl describe pods pod-demo |grep QoS
QoS Class:       Burstable

QoS classes come in three kinds (assigned automatically based on the resource settings):

  • Guaranteed: every container in the pod sets identical requests and limits for both CPU and memory, i.e. cpu.requests=cpu.limits and memory.requests=memory.limits. These pods get the highest priority and are kept running preferentially, even when the node's resources run short;
  • Burstable: at least one container in the pod sets a CPU or memory requests value, possibly without defining limits. These pods have medium priority;
  • BestEffort: no container sets requests or limits at all. These pods have the lowest priority; under resource pressure, BestEffort containers are terminated first to free up resources so that containers in the other two classes can keep running;
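For instance, a pod like the following (a hypothetical sketch; the name and image are placeholders) would be classified as Guaranteed, because requests and limits match exactly for both resources:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo   # placeholder name
spec:
  containers:
  - name: app
    image: nginx              # placeholder image
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"           # equal to requests for both resources -> Guaranteed
        memory: "256Mi"
```

kubectl describe pod qos-guaranteed-demo | grep QoS should then report Guaranteed.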

 

II. Heapster

1. Introduction

Heapster collects pod resource usage from every node and presents it to the user in a graphical interface.

[image: Heapster monitoring architecture]

The cAdvisor built into each kubelet collects resource usage on its node and sends the data to Heapster, which persists it in the InfluxDB database. The excellent Grafana then provides the graphical display.

The metrics we typically monitor include the k8s cluster's system metrics, container metrics, and application metrics.

By default InfluxDB uses an emptyDir storage volume, so the data disappears as soon as the container stops. In production this must be replaced with a persistent volume such as glusterfs.
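For example (a hedged sketch; the PVC and the claim name influxdb-data are assumptions, not part of the stock manifest), you could create a PersistentVolumeClaim and point the InfluxDB deployment's volume at it instead of emptyDir:

```yaml
# Hypothetical PVC; a storage backend (e.g. glusterfs) must already be provisioned.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data
  namespace: kube-system
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Then, in influxdb.yaml below, replace the emptyDir volume with:
#      volumes:
#      - name: influxdb-storage
#        persistentVolumeClaim:
#          claimName: influxdb-data
```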

 

2. Deploying InfluxDB

Heapster GitHub (the InfluxDB manifest is part of this repo): https://github.com/kubernetes-retired/heapster

Pull the image on the node hosts first:

#node01
[root@node01 ~]# docker pull fishchen/heapster-influxdb-amd64:v1.5.2

#node02
[root@node02 ~]# docker pull fishchen/heapster-influxdb-amd64:v1.5.2

On the master node, fetch the yaml file, then modify and apply it:

[root@master metrics]# wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

[root@master metrics]# vim influxdb.yaml
apiVersion: apps/v1        #leaving the original value here also works; if you change it to apps/v1, you must add the selector lines below
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:            #add this line
    matchLabels:        #add this line
      task: monitoring        #add this line
      k8s-app: influxdb    #add this line
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: fishchen/heapster-influxdb-amd64:v1.5.2    #change the image to this address
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb

#Create the resources
[root@master metrics]# kubectl apply -f influxdb.yaml 
deployment.apps/monitoring-influxdb created
service/monitoring-influxdb created

#Check
[root@master metrics]# kubectl get pods -n kube-system |grep influxdb
monitoring-influxdb-5899b7fff9-2r58w    1/1     Running   0          6m59s

[root@master metrics]# kubectl get svc -n kube-system |grep influxdb
monitoring-influxdb    ClusterIP   10.101.242.217   <none>        8086/TCP        7m6s

3. Deploying RBAC

Next we deploy Heapster. Heapster depends on RBAC permissions, so we deploy the RBAC binding first:

[root@master metrics]# wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

[root@master metrics]# kubectl apply -f heapster-rbac.yaml 
clusterrolebinding.rbac.authorization.k8s.io/heapster created

4. Deploying Heapster

#Pull the image on node01
[root@node01 ~]# docker pull rancher/heapster-amd64:v1.5.4

#Pull the image on node02
[root@node02 ~]# docker pull rancher/heapster-amd64:v1.5.4

#Fetch the yaml file on the master
[root@master metrics]# wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/heapster.yaml

[root@master metrics]# vim heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: heapster
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: rancher/heapster-amd64:v1.5.4    #change the image to this address
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  type: NodePort        #I added this line (not in the stock manifest)
  selector:
    k8s-app: heapster

#Create
[root@master metrics]# kubectl apply -f heapster.yaml 
serviceaccount/heapster created
deployment.apps/heapster created

#Check
[root@master metrics]# kubectl get pods -n kube-system |grep heapster-
heapster-7c8f7dc8cb-kph29               1/1     Running   0          3m55s
[root@master metrics]# 
[root@master metrics]# kubectl get svc -n kube-system |grep heapster
heapster               NodePort    10.111.93.84     <none>        80:31410/TCP    4m16s    #because the service type is NodePort, the service port is mapped to node port 31410

#Check the pod logs
[root@master metrics]# kubectl  logs heapster-7c8f7dc8cb-kph29 -n kube-system

5. Deploying Grafana

#Pull the image on node01
[root@node01 ~]# docker pull angelnu/heapster-grafana:v5.0.4

#Pull the image on node02
[root@node02 ~]# docker pull angelnu/heapster-grafana:v5.0.4

#Fetch the yaml file on the master
[root@master metrics]# wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/grafana.yaml

#Edit the yaml file
[root@master metrics]# vim grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana

  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: angelnu/heapster-grafana:v5.0.4    #change the image to this address
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  type: NodePort        #we define NodePort so that Grafana can be reached from outside the cluster
  selector:
    k8s-app: grafana


#Create
[root@master metrics]# kubectl apply -f grafana.yaml 
deployment.apps/monitoring-grafana created
service/monitoring-grafana created

#Check
[root@master metrics]# kubectl get pods -n kube-system |grep grafana
monitoring-grafana-84786758cc-7txwr     1/1     Running   0          3m47s

[root@master metrics]# kubectl get svc -n kube-system |grep grafana
monitoring-grafana     NodePort    10.102.42.86     <none>        80:31404/TCP    3m55s    #the service port is mapped to node port 31404

The Grafana service is now exposed on port 31404 of the nodes.

Now, from outside the cluster, open a browser and visit: http://<node-ip>:31404

As shown below:

[image: Grafana web UI]

As for how to actually use it, further study of InfluxDB, Grafana, etc. may be needed.

Finally, note that Heapster is close to being retired…

Newer monitoring systems include, for example, Prometheus.

