Kubernetes (8) -- Helm and other add-on components: dashboard / Prometheus / HPA


1. Helm

1.1 What is Helm

Before Helm, deploying an application to Kubernetes meant creating the Deployment, Service, and other objects one by one, which is fairly tedious. Moreover, as projects are split into microservices, deploying and managing complex containerized applications becomes harder still. Helm packages applications together and supports versioned, controlled releases, which greatly simplifies deploying and managing Kubernetes applications.

In essence, Helm makes the management of Kubernetes applications (Deployments, Services, and so on) configurable and dynamically generated: it renders the Kubernetes resource manifests (deployment.yaml, service.yaml) from templates and then has them applied to the cluster automatically.

Helm is the official package manager for Kubernetes, comparable to YUM: it packages up the deployment workflow. Helm has two important concepts: chart and release.

  • chart: a collection of everything needed to create an application, including configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. A chart is a self-contained logical unit of application deployment; think of a chart as a software package in apt or yum (a typical chart layout is sketched below)
  • release: a running instance of a chart, representing a running application. When a chart is installed into a Kubernetes cluster, a release is created. The same chart can be installed into the same cluster many times, and each installation is a separate release
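
For reference, a typical chart directory looks like the following (the hello-world chart built in section 1.4 follows this layout; charts/ is optional):

hello-world/
├── Chart.yaml          # chart name and version (required)
├── values.yaml         # default configuration values
├── charts/             # optional dependent sub-charts
└── templates/          # templates rendered into Kubernetes manifests
    ├── deployment.yaml
    └── service.yaml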

Helm consists of two components, the Helm client and the Tiller server, as shown in the figure below.

image

The Helm client is responsible for creating and managing charts and releases and for communicating with Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.

1.2 Deploying Helm

1) Download the Helm package

[root@k8s-master01 helm]# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
[root@k8s-master01 helm]# ls
helm-v2.13.1-linux-amd64.tar.gz
[root@k8s-master01 helm]# tar xf helm-v2.13.1-linux-amd64.tar.gz 
[root@k8s-master01 helm]# ls
helm-v2.13.1-linux-amd64.tar.gz  linux-amd64
[root@k8s-master01 helm]# cp linux-amd64/helm /usr/local/bin/
[root@k8s-master01 helm]# chmod +x /usr/local/bin/helm 

2) The Kubernetes API server has RBAC enabled, so we need to create a service account named tiller for Tiller and bind it to a suitable role. Create rbac-config.yaml:

[root@k8s-master01 helm]# cat rbac-config.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:  
  name: tiller  
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:  
  name: tiller
roleRef:  
  apiGroup: rbac.authorization.k8s.io  
  kind: ClusterRole  
  name: cluster-admin
subjects:  
  - kind: ServiceAccount    
    name: tiller    
    namespace: kube-system
[root@k8s-master01 helm]# kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

# Initialize Helm
[root@k8s-master01 helm]# helm init --service-account tiller --skip-refresh
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

[root@k8s-master01 helm]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-6vgp6               1/1     Running   4          4d
coredns-5c98db65d4-8zbqt               1/1     Running   4          4d
etcd-k8s-master01                      1/1     Running   4          4d
kube-apiserver-k8s-master01            1/1     Running   4          4d
kube-controller-manager-k8s-master01   1/1     Running   4          4d
kube-flannel-ds-amd64-dqgj6            1/1     Running   0          17h
kube-flannel-ds-amd64-mjzxt            1/1     Running   0          17h
kube-flannel-ds-amd64-z76v7            1/1     Running   3          4d
kube-proxy-4g57j                       1/1     Running   3          3d23h
kube-proxy-qd4xm                       1/1     Running   4          4d
kube-proxy-x66cd                       1/1     Running   3          3d23h
kube-scheduler-k8s-master01            1/1     Running   4          4d
tiller-deploy-58565b5464-lrsmr         0/1     Running   0          5s

# Check the Helm version
[root@k8s-master01 helm]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

1.3 Helm repositories

Repository hub: https://hub.helm.sh/

image

1.4 Custom Helm charts

# Create a working directory
[root@k8s-master01 helm]# mkdir hello-world
[root@k8s-master01 helm]# cd hello-world/

# Create the self-describing file Chart.yaml; it must define name and version
[root@k8s-master01 hello-world]# cat Chart.yaml 
name: hello-world
version: 1.0.0

# Create the template files used to generate the Kubernetes resource manifests
[root@k8s-master01 hello-world]# mkdir templates
[root@k8s-master01 hello-world]# cd templates/
[root@k8s-master01 templates]# cat deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:  
  name: hello-world
spec:  
  replicas: 1  
  template:    
    metadata:      
      labels:        
        app: hello-world    
    spec:      
      containers:        
        - name: hello-world          
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:            
            - containerPort: 80              
              protocol: TCP
[root@k8s-master01 templates]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:  
  name: hello-world
spec:  
  type: NodePort  
  ports:
  - port: 80    
    targetPort: 80    
    protocol: TCP  
  selector: 
    app: hello-world

# Create values.yaml
[root@k8s-master01 templates]# cd ../
[root@k8s-master01 hello-world]# cat values.yaml 
image:  
  repository: hub.dianchou.com/library/myapp
  tag: 'v1'

# Use helm install RELATIVE_PATH_TO_CHART to create a release
[root@k8s-master01 hello-world]# helm install .
NAME:   icy-zebu
LAST DEPLOYED: Thu Feb  6 16:15:35 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                          READY  STATUS             RESTARTS  AGE
hello-world-7676d54884-xzv2v  0/1    ContainerCreating  0         1s

==> v1/Service
NAME         TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
hello-world  NodePort  10.108.3.115  <none>       80:31896/TCP  1s

==> v1beta1/Deployment
NAME         READY  UP-TO-DATE  AVAILABLE  AGE
hello-world  0/1    1           0          1s

[root@k8s-master01 hello-world]# helm ls
NAME    	REVISION	UPDATED                 	STATUS  	CHART            	APP VERSION	NAMESPACE
icy-zebu	1       	Thu Feb  6 16:15:35 2020	DEPLOYED	hello-world-1.0.0	           	default  
[root@k8s-master01 hello-world]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-7676d54884-xzv2v   1/1     Running   0          33s

Upgrading a release:

# Values in values.yaml can be overridden when deploying a release with --values YAML_FILE_PATH or --set key1=value1,key2=value2
# Install with a specific image tag
helm install --set image.tag='latest' .

# Upgrade a release (pass the release name and the chart path)
helm upgrade -f values.yaml test .

-------------------------------------------------------------------

[root@k8s-master01 hello-world]# kubectl get svc
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-world   NodePort    10.108.3.115   <none>        80:31896/TCP   3m19s
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP        4d2h
[root@k8s-master01 hello-world]# curl 10.0.0.11:31896
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

# Upgrade the release and verify
[root@k8s-master01 hello-world]# helm ls
NAME    	REVISION	UPDATED                 	STATUS  	CHART            	APP VERSION	NAMESPACE
icy-zebu	1       	Thu Feb  6 16:15:35 2020	DEPLOYED	hello-world-1.0.0	           	default  
[root@k8s-master01 hello-world]# helm upgrade icy-zebu --set image.tag='v2' . 
[root@k8s-master01 hello-world]# curl 10.0.0.11:31896
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

1.5 Helm command summary

# List deployed releases
$ helm ls

# Check the status of a specific release
$ helm status RELEASE_NAME

# Remove all Kubernetes resources associated with this release
$ helm delete cautious-shrimp

# helm rollback RELEASE_NAME REVISION_NUMBER
$ helm rollback cautious-shrimp 1

# helm delete --purge RELEASE_NAME removes all Kubernetes resources of the release and also deletes the release record itself
$ helm delete --purge cautious-shrimp
$ helm ls --deleted

# When generating K8s manifests dynamically from templates, it is very useful to preview the rendered result first.
# Use --dry-run --debug to print the generated manifests without actually deploying
helm install . --dry-run --debug --set image.tag=latest
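
With the Helm v2 client installed above, the chart can also be rendered entirely on the client side, without contacting Tiller; a minimal sketch:

helm template . --set image.tag=latest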

2. Installing the dashboard

# Update the Helm repositories
[root@k8s-master01 dashboard]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

# Fetch the dashboard chart
[root@k8s-master01 dashboard]# helm fetch stable/kubernetes-dashboard
[root@k8s-master01 dashboard]# ls
kubernetes-dashboard-1.10.1.tgz
[root@k8s-master01 dashboard]# tar xf kubernetes-dashboard-1.10.1.tgz 
[root@k8s-master01 dashboard]# ls
kubernetes-dashboard  kubernetes-dashboard-1.10.1.tgz
[root@k8s-master01 dashboard]# cd kubernetes-dashboard/
[root@k8s-master01 kubernetes-dashboard]# ls
Chart.yaml  README.md  templates  values.yaml

# Create kubernetes-dashboard.yaml (a values override file)
[root@k8s-master01 kubernetes-dashboard]# cat kubernetes-dashboard.yaml 
image:  
  repository: k8s.gcr.io/kubernetes-dashboard-amd64  
  tag: v1.10.1
ingress:  
  enabled: true  
  hosts:    
    - k8s.frognew.com  
  annotations:    
    nginx.ingress.kubernetes.io/ssl-redirect: "true"    
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  
  tls:    
    - secretName: frognew-com-tls-secret      
      hosts:      
      - k8s.frognew.com
rbac:  
  clusterAdminRole: true

# Install with Helm
[root@k8s-master01 kubernetes-dashboard]# helm install -n kubernetes-dashboard --namespace kube-system  -f kubernetes-dashboard.yaml .

# Expose the dashboard externally by changing the Service type to NodePort
[root@k8s-master01 kubernetes-dashboard]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   4d2h
kubernetes-dashboard   ClusterIP   10.110.55.113   <none>        443/TCP                  5m10s
tiller-deploy          ClusterIP   10.97.87.192    <none>        44134/TCP                114m
[root@k8s-master01 kubernetes-dashboard]# kubectl edit svc kubernetes-dashboard -n kube-system
service/kubernetes-dashboard edited    # set type: NodePort
[root@k8s-master01 kubernetes-dashboard]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   4d2h
kubernetes-dashboard   NodePort    10.110.55.113   <none>        443:31147/TCP            6m54s
tiller-deploy          ClusterIP   10.97.87.192    <none>        44134/TCP                116m
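
Instead of editing the Service interactively, the type can also be switched non-interactively with an equivalent patch command; a sketch:

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'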

# Access in a browser
https://10.0.0.11:31147

image

Retrieve the token and paste it in to log in:

[root@k8s-master01 kubernetes-dashboard]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-kcn7d                 kubernetes.io/service-account-token   3      11m
[root@k8s-master01 kubernetes-dashboard]# kubectl describe secret kubernetes-dashboard-token-kcn7d -n kube-system
Name:         kubernetes-dashboard-token-kcn7d
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 261d33a1-ab30-47fc-a759-4f0402ccf33d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1rY243ZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjI2MWQzM2ExLWFiMzAtNDdmYy1hNzU5LTRmMDQwMmNjZjMzZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.IJTrscaOCfoUMQ4k_fDoVaGoaSVwhR7kmtmAT1ej36ABx7UQwYiYN4pur36l_nSwAIsAlF38hivYX_bp-wf13VWTl9_eospOFSd2lTnvPZjjgQQCpf-voZfpS4P4hTntDaPVQ_cql_2xs1uX4VaL8LpLsVZlrL_Y1MRjvEsCq-Od_ChD4jKA0xMfNUNr8HTN3cTmijYSNGJ_5FkkezRb00NGs0S1ANPyoFb8KbaqmyP9ZF_rVxl6tuolEUGkYNQ6AUJstmcoxnF5Dt5LyE6dsyT6XNDH9GvmCrDV6NiXbxrZtlVFWwgpORTvyN12d-UeNdSc8JEq2yrFB7KiJ8Xwkw
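
The same token can be pulled out in one step rather than copied from the kubectl describe output; a sketch (the secret name is looked up first, then the token is base64-decoded):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d; echo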

image

3. Installing Prometheus

Prometheus github 地址:https://github.com/coreos/kube-prometheus

3.1 Components

1) Metrics Server: an aggregator of resource usage data for the Kubernetes cluster; it collects metrics for consumers inside the cluster such as kubectl top, the HPA controller, and the scheduler.

2) Prometheus Operator: manages the deployment and configuration of the Prometheus monitoring and alerting stack (Prometheus instances, Alertmanager, monitoring rules) inside the cluster.

3) NodeExporter: exposes key metrics about the state of each node.

4) KubeStateMetrics: collects data about the Kubernetes resource objects in the cluster, on which alerting rules can be built.

5) Prometheus: pulls metrics over HTTP from the apiserver, scheduler, controller-manager, kubelet, and other components.

6) Grafana: a platform for visualizing metrics and building monitoring dashboards.

3.2 Installation steps

1) Download the manifests

[root@k8s-master01 prometheus]# git clone https://github.com/coreos/kube-prometheus.git
[root@k8s-master01 prometheus]# cd kube-prometheus/manifests/

2) Modify grafana-service.yaml so that Grafana is exposed via NodePort

[root@k8s-master01 manifests]# vim grafana-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort      # added
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 30100   # added
  selector:
    app: grafana

3) Modify prometheus-service.yaml to use NodePort

[root@k8s-master01 manifests]# vim prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort  # added
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30200  # added
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP

4) Modify alertmanager-service.yaml to use NodePort

[root@k8s-master01 manifests]# vim alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort  # added
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30300  # added
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP

5) Apply the manifests and run the pods

[root@k8s-master01 manifests]# pwd
/root/k8s/prometheus/kube-prometheus/manifests
[root@k8s-master01 manifests]# kubectl apply -f .  # run this more than once: the first pass registers the CRDs, later passes create the resources that depend on them
[root@k8s-master01 manifests]# kubectl apply -f .

[root@k8s-master01 manifests]# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          51s
alertmanager-main-1                    2/2     Running   0          43s
alertmanager-main-2                    2/2     Running   0          35s
grafana-7dc5f8f9f6-79zc8               1/1     Running   0          64s
kube-state-metrics-5cbd67455c-7s7qr    4/4     Running   0          31s
node-exporter-gbwqq                    2/2     Running   0          64s
node-exporter-n4rvn                    2/2     Running   0          64s
node-exporter-r894g                    2/2     Running   0          64s
prometheus-adapter-668748ddbd-t8wk6    1/1     Running   0          64s
prometheus-k8s-0                       3/3     Running   1          46s
prometheus-k8s-1                       3/3     Running   1          46s
prometheus-operator-7447bf4dcb-tf4jc   1/1     Running   0          65s

[root@k8s-master01 manifests]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   196m         9%     1174Mi          62%       
k8s-node01     138m         13%    885Mi           47%       
k8s-node02     104m         10%    989Mi           52%       
[root@k8s-master01 manifests]# kubectl top pod
NAME                           CPU(cores)   MEMORY(bytes)   
hello-world-7d7cfcccd5-6mmv4   0m           1Mi

3.3 Accessing Prometheus

Prometheus is exposed on NodePort 30200; open http://MasterIP:30200

image

At http://MasterIP:30200/targets you can see that Prometheus has successfully connected to the Kubernetes apiserver

image

View the service-discovery page

image

Prometheus's own metrics:

http://10.0.0.11:30200/metrics

image

The Prometheus web UI also supports basic queries, for example the CPU usage of every pod in the K8S cluster; the query expression is as follows:

sum by (pod_name)( rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m] ) )
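
Along the same lines, per-pod memory usage can be queried with the working-set metric (a sketch, assuming the same cAdvisor label names as the CPU query above):

sum by (pod_name)( container_memory_working_set_bytes{image!="", pod_name!=""} )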

image

image

3.4 Accessing Grafana

http://10.0.0.11:30100/login   admin / admin   (you are asked to change the default password on first login)

image

Add a data source: the Prometheus data source is already configured by default

image

image

Import a dashboard template:

image

View node status:

image

4. HPA (Horizontal Pod Autoscaling)

Horizontal Pod Autoscaling automatically scales the number of Pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization

# Create the pod and expose it
[root@k8s-master01 ~]# kubectl run php-apache --image=gcr.io/google_containers/hpa-example:latest --requests=cpu=200m --expose --port=80
[root@k8s-master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-59546868d4-s5xsx   1/1     Running   0          4s
[root@k8s-master01 ~]# kubectl top pod php-apache-59546868d4-s5xsx
NAME                          CPU(cores)   MEMORY(bytes)   
php-apache-59546868d4-s5xsx   0m           6Mi

# Create the HPA controller
[root@k8s-master01 ~]# kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
[root@k8s-master01 ~]# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        0          13s
[root@k8s-master01 ~]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          5m7s
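
The imperative kubectl autoscale command above corresponds to a declarative HorizontalPodAutoscaler manifest; a minimal sketch using the autoscaling/v1 API:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1                 # API group of the Deployment (may differ depending on how it was created)
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50    # same threshold as --cpu-percent=50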

# Generate load and watch how many replicas are running
[root@k8s-master01 ~]# while true; do wget -q -O- http://10.244.1.24/; done
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!

[root@k8s-master01 ~]# kubectl get hpa -w
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   200%/50%   1         10        1          32m
php-apache   Deployment/php-apache   200%/50%   1         10        4          32m
php-apache   Deployment/php-apache   442%/50%   1         10        4          32m
php-apache   Deployment/php-apache   442%/50%   1         10        8          33m
...

[root@k8s-master01 ~]# kubectl get pod -o wide   # on a low-spec machine the scaling effect may not be very pronounced

5. Pod resource limits

Kubernetes actually enforces resource limits through cgroups. A cgroup is a set of related kernel attributes that controls how the kernel runs a group of processes; there are corresponding cgroups for memory, CPU, and various devices.

By default, Pods run without any CPU or memory quota, which means any Pod in the system can consume as much CPU and memory as the node it runs on can provide. Resource limits are usually applied to the Pods of specific applications through the requests and limits fields of resources.

image

requests is the amount of resources to be allocated to the container (used for scheduling), and limits is the maximum amount it may use; roughly speaking, an initial value and an upper bound.
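
A container-level sketch of requests and limits (the Pod name and the numbers are illustrative, not taken from the original screenshot):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo                              # illustrative name
spec:
  containers:
  - name: app
    image: hub.dianchou.com/library/myapp:v1       # image reused from section 1.4 for illustration
    resources:
      requests:                                    # amount reserved for scheduling
        cpu: 250m
        memory: 256Mi
      limits:                                      # hard ceiling enforced through cgroups
        cpu: 500m
        memory: 512Mi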

5.1 Resource limits per namespace

1) Compute resource quota

image
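
The configuration from the screenshot is not reproduced here; as a reference, a minimal compute ResourceQuota sketch (the namespace and the numbers are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: spark-cluster         # illustrative namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi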

2) Object count quota

image
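
Likewise, a sketch of an object-count ResourceQuota (names and numbers illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: spark-cluster         # illustrative namespace
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"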

3) CPU and memory LimitRange

image
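
And a sketch of a LimitRange that supplies default CPU/memory requests and limits for containers that do not declare any (values illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-limit-range
  namespace: spark-cluster         # illustrative namespace
spec:
  limits:
  - type: Container
    default:                       # default limits applied when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:                # default requests applied when a container sets none
      cpu: 250m
      memory: 256Mi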

