⒈ What is Helm
Before Helm, deploying an application to Kubernetes meant creating the Deployment, Service, and other objects one by one, which is tedious. As more projects are split into microservices, deploying and managing complex applications in containers becomes even harder. Helm packages applications and supports version management and control of releases, which greatly simplifies deploying and managing applications on Kubernetes.
In essence, Helm makes the Kubernetes objects that make up an application (Deployment, Service, and so on) configurable and dynamically generated: it renders the Kubernetes resource manifests (deployment.yaml, service.yaml) from templates and then deploys those resources automatically.
Helm is the official package manager for Kubernetes, similar to YUM, and it encapsulates the deployment workflow. Helm has two important concepts: chart and release.
A chart is the collection of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. A chart is a self-contained logical unit of application deployment; think of it as a software package in apt or yum.
A release is a running instance of a chart, that is, a running application. When a chart is installed into a Kubernetes cluster, a release is created. The same chart can be installed into the same cluster many times, and each installation produces a new release.
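For orientation, a chart is just a directory with a small, fixed layout; the hello-world chart built in section 4 below ends up looking roughly like this:

hello-world/
  Chart.yaml          # chart name and version
  values.yaml         # default, overridable configuration values
  templates/          # Go templates rendered into Kubernetes manifests
    deployment.yaml
    service.yaml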
Helm consists of two components: the Helm client and the Tiller server, as shown in the figure below.
The Helm client is responsible for creating and managing charts and releases and for talking to Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.
⒉ Deploying Helm
More and more companies and teams are adopting Helm, the package manager for Kubernetes, and we will also use it to install common Kubernetes components. Helm (v2) consists of the helm command-line client and the server-side tiller, and installing it is straightforward: download the helm binary to /usr/local/bin on the master node node1. Version 2.13.1 is used here:
ntpdate ntp1.aliyun.com
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
wget https://get.helm.sh/helm-v3.2.0-linux-amd64.tar.gz
tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
cd linux-amd64/
cp -a helm /usr/local/bin/
chmod a+x /usr/local/bin/helm
Create rbac-config.yaml to give Tiller a ServiceAccount bound to the cluster-admin role:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
helm init --service-account tiller --skip-refresh
⒊ By default Tiller is deployed in the kube-system namespace of the Kubernetes cluster
$ kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8737a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8737a5d39fbb4", GitTreeState:"clean"}
⒋ Custom Helm templates
# Create a directory for the chart
$ mkdir ./hello-world
$ cd ./hello-world
# Create the self-describing file Chart.yaml; it must define name and version
$ cat <<'EOF' > ./Chart.yaml
name: hello-world
version: 1.0.0
EOF
# Create the template files used to render the Kubernetes resource manifests; the template directory must be named templates
$ mkdir ./templates
$ cat <<'EOF' > ./templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hub.coreqi.cn/library/myapp:v1
          ports:
            - containerPort: 80
              protocol: TCP
EOF
$ cat <<'EOF' > ./templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: hello-world
EOF
# Use helm install RELATIVE_PATH_TO_CHART to create a release; helm install . installs the chart from the current directory
helm install .
# List deployed releases
$ helm ls
# Query the status of a particular release
$ helm status RELEASE_NAME
# Remove all Kubernetes resources associated with a release
$ helm delete cautious-shrimp
# helm rollback RELEASE_NAME REVISION_NUMBER
$ helm rollback cautious-shrimp 1
# helm delete --purge RELEASE_NAME removes all Kubernetes resources associated with the release and all records of the release itself
$ helm delete --purge cautious-shrimp
$ helm ls --deleted
# Configuration lives in the file values.yaml
$ cat <<'EOF' > ./values.yaml
image:
  repository: gcr.io/google-samples/node-hello
  tag: '1.0'
EOF
# Values defined in this file are available in the templates through the .Values object
$ cat <<'EOF' > ./templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 8080
              protocol: TCP
EOF
# Values in values.yaml can be overridden at deploy time with --values YAML_FILE_PATH or --set key1=value1,key2=value2
$ helm install --set image.tag="latest" .
# Upgrade a release to a new version
$ helm upgrade -f values.yaml test .
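Overrides can also live in their own YAML file passed with --values / -f; a small sketch against the chart above (the file name my-values.yaml and the 2.0 tag are made up for illustration):

$ cat <<'EOF' > ./my-values.yaml
image:
  repository: gcr.io/google-samples/node-hello
  tag: '2.0'
EOF
$ helm upgrade -f ./my-values.yaml test .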
⒌ Debug
# When a template renders Kubernetes manifests dynamically, it is very useful to preview the result in advance.
# Use --dry-run --debug to print the rendered manifests without actually deploying anything
helm install . --dry-run --debug --set image.tag=latest
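helm template renders the chart entirely on the client, without contacting Tiller, which is another handy way to inspect the output (shown here with the same override):

helm template . --set image.tag=latest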
⒍ Deploying the Kubernetes dashboard with Helm
helm fetch stable/kubernetes-dashboard
tar -zxvf kubernetes-dashboard-1.8.0.tgz
kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
        - k8s.frognew.com
rbac:
  clusterAdminRole: true
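The ingress section above references the TLS secret frognew-com-tls-secret, which must already exist in the kube-system namespace; a minimal sketch of creating it, assuming you have a certificate and key on disk (the file names are placeholders):

kubectl create secret tls frognew-com-tls-secret --cert=tls.crt --key=tls.key -n kube-system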
helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s   kubernetes.io/service-account-token   3   3m7s

kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name:         kubernetes-dashboard-token-pkm2s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGci...   (long service-account JWT omitted)
kubectl edit svc kubernetes-dashboard -n kube-system
type: NodePort
Change the Service type from ClusterIP to NodePort.
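If you prefer not to edit the Service interactively, the same change can be made with a one-line patch (equivalent to the kubectl edit above):

kubectl patch svc kubernetes-dashboard -n kube-system -p '{"spec":{"type":"NodePort"}}'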
⒎ Deploying metrics-server with Helm
On Heapster's GitHub page <https://github.com/kubernetes/heapster> you can see that Heapster has been DEPRECATED.
The Heapster deprecation timeline shows that Heapster is removed from the various Kubernetes setup scripts starting with Kubernetes 1.12, and Kubernetes recommends metrics-server instead. So here we also use Helm to deploy metrics-server.
metrics-server.yaml:
args:
  - --logtostderr
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
helm install stable/metrics-server -n metrics-server --namespace kube-system -f metrics-server.yaml
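Before trying kubectl top, it can help to confirm that the metrics API has been registered and that the metrics-server pod is running (v1beta1.metrics.k8s.io is the APIService name registered for metrics-server; adjust if your chart version differs):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get pods -n kube-system | grep metrics-server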
Once metrics-server is running, the following commands return basic metrics for the cluster nodes and pods:
kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%

kubectl top pod --all-namespaces
NAMESPACE       NAME                                             CPU(cores)   MEMORY(bytes)
ingress-nginx   nginx-ingress-controller-6f5687c58d-jdxzk        3m           142Mi
ingress-nginx   nginx-ingress-controller-6f5687c58d-lxj5q        5m           146Mi
ingress-nginx   nginx-ingress-default-backend-6dc6c46dcc-lf882   1m           4Mi
kube-system     coredns-86c58d9df4-k5jkh                         2m           15Mi
kube-system     coredns-86c58d9df4-rw6tt                         3m           23Mi
kube-system     etcd-node1                                       20m          86Mi
kube-system     kube-apiserver-node1                             33m          468Mi
kube-system     kube-controller-manager-node1                    29m          89Mi
kube-system     kube-flannel-ds-amd64-8nr5j                      2m           13Mi
kube-system     kube-flannel-ds-amd64-bmncz                      2m           21Mi
kube-system     kube-proxy-d5gxv                                 2m           18Mi
kube-system     kube-proxy-zm29n                                 2m           16Mi
kube-system     kube-scheduler-node1                             8m           28Mi
kube-system     kubernetes-dashboard-788c98d699-qd2cx            2m           16Mi
kube-system     metrics-server-68785fbcb4-k4g9v                  3m           12Mi
kube-system     tiller-deploy-c4fd4cd68-dwkhv                    1m           24Mi
⒏ Deploying Prometheus
1. Related repository addresses
git clone https://github.com/coreos/kube-prometheus.git
cd /root/kube-prometheus/manifests
2. Edit grafana-service.yaml to expose Grafana through a NodePort:
vim grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  type: NodePort        # added
  ports:
    - name: http
      port: 3000
      targetPort: http
      nodePort: 30100   # added
  selector:
    app: grafana
3. Edit prometheus-service.yaml and change the Service type to NodePort:
vim prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
    - name: web
      port: 9090
      targetPort: web
      nodePort: 30200
  selector:
    app: prometheus
    prometheus: k8s
4. Edit alertmanager-service.yaml and change the Service type to NodePort:
vim alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
    - name: web
      port: 9093
      targetPort: web
      nodePort: 30300
  selector:
    alertmanager: main
    app: alertmanager
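With the three Services switched to NodePort, the stack is deployed by applying the manifests; a sketch, assuming the directory cloned above (depending on the kube-prometheus revision, the CRDs may take a moment to register, so the apply sometimes has to be run a second time):

cd /root/kube-prometheus/manifests
kubectl apply -f .
kubectl get pods -n monitoring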
5. Access Grafana
kubectl get service -n monitoring | grep grafana
grafana   NodePort   10.107.56.143   <none>   3000:30100/TCP   20h
As shown above, Grafana is exposed on port 30100. Open http://MasterIP:30100 in a browser; the default username and password are admin/admin.
6. Horizontal Pod Autoscaling (HPA)
Horizontal Pod Autoscaling automatically scales the number of Pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization.
kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80
Create the HPA controller (see the official HPA documentation for the details of the scaling algorithm):
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
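For reference, the same autoscaler can be written declaratively; a sketch of the autoscaling/v1 equivalent of the command above:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50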
Increase the load and watch the number of Pods grow:
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
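While the load generator loops, the HPA's measured CPU utilisation and the resulting replica count can be watched from another terminal:

kubectl get hpa php-apache -w
kubectl get deployment php-apache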
7. Resource limits - Pod
spec:
  containers:
    - image: xxxx
      imagePullPolicy: Always
      name: auth
      ports:
        - containerPort: 8080
          protocol: TCP
      resources:
        limits:
          cpu: "4"
          memory: 2Gi
        requests:
          cpu: 250m
          memory: 250Mi
8. Resource limits - Namespace
1. Configure compute resource quotas

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: spark-cluster
spec:
  hard:
    pods: "20"
    requests.cpu: "20"
    requests.memory: 100Gi
    limits.cpu: "40"
    limits.memory: 200Gi
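A sketch of applying and inspecting the quota (compute-resources.yaml is a placeholder file name; the spark-cluster namespace must exist first):

kubectl create namespace spark-cluster
kubectl apply -f compute-resources.yaml
kubectl describe quota compute-resources -n spark-cluster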
2. Configure object count quotas
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: spark-cluster
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
3. Configure a CPU and memory LimitRange
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
    - default:
        memory: 50Gi
        cpu: 5
      defaultRequest:
        memory: 1Gi
        cpu: 1
      type: Container
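A LimitRange applies to the namespace it is created in and injects these default requests/limits into containers that do not declare their own; a quick way to apply and verify it (the file name and target namespace are placeholders):

kubectl apply -f mem-limit-range.yaml -n spark-cluster
kubectl describe limitrange mem-limit-range -n spark-cluster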
⒐ Deploying the EFK stack
1. Add the Google incubator repository
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
2. Deploy Elasticsearch
kubectl create namespace efk
helm fetch incubator/elasticsearch
tar -zxvf elasticsearch-1.10.2.tgz
helm install --name els1 --namespace=efk -f values.yaml incubator/elasticsearch
kubectl run cirror-$RANDOM --rm -it --image=cirros -- /bin/sh
curl Elasticsearch:Port/_cat/nodes
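The Elasticsearch:Port placeholder in the last command is the CLUSTER-IP and port of the client Service the chart creates (typically els1-elasticsearch-client on port 9200 for release name els1); a sketch with a made-up IP:

kubectl get svc -n efk                 # note the CLUSTER-IP of the client Service
# then, inside the cirros pod:
curl 10.97.171.209:9200/_cat/nodes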
3. Deploy Fluentd
helm fetch stable/fluentd-elasticsearch
tar -zxvf fluentd-elasticsearch-2.0.7.tgz
vim values.yaml               # change the Elasticsearch access address in it
kubectl get svc -n efk        # the address is the CLUSTER-IP of the Elasticsearch service
helm install --name flu1 --namespace=efk -f values.yaml stable/fluentd-elasticsearch
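For reference, in the stable/fluentd-elasticsearch chart the address edited above usually sits under the elasticsearch key of values.yaml; an assumed sketch (replace the IP with the CLUSTER-IP reported by kubectl get svc -n efk):

elasticsearch:
  host: '10.97.171.209'   # placeholder: Elasticsearch client Service CLUSTER-IP
  port: 9200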
4. Deploy Kibana
helm fetch stable/kibana --version 0.14.8
tar -zxvf kibana-0.14.8.tgz
# change the Elasticsearch access address in values.yaml, then install
helm install --name kib1 --namespace=efk -f values.yaml stable/kibana --version 0.14.8
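For reference, in this version of the stable/kibana chart the Elasticsearch address edited above normally lives under files."kibana.yml" in values.yaml; an assumed sketch (replace the URL with your Elasticsearch client Service address):

files:
  kibana.yml:
    elasticsearch.url: http://10.97.171.209:9200   # placeholder address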
5. Configure external access
kubectl edit svc kib1-kibana -n efk
# change the Service type from ClusterIP to NodePort
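After saving, the NodePort assigned to the Service can be read back and used to reach Kibana from outside the cluster:

kubectl get svc kib1-kibana -n efk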