I. Overview
- metrics-server collects resource metrics for consumers inside the k8s cluster, such as kubectl, the HPA, and the scheduler
- prometheus, deployed via prometheus-operator, stores the monitoring data
- kube-state-metrics collects data about the resource objects in the k8s cluster
- node_exporter collects metrics from each node in the cluster
- prometheus scrapes the apiserver, scheduler, controller-manager, and kubelet components
- alertmanager handles monitoring alerts
- grafana provides data visualization
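The metrics that metrics-server exposes are what the HPA controller consumes. As a quick illustration (not from the original post), the documented HPA scaling rule is desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue); a minimal sketch, with illustrative names:

```python
import math

# Sketch of the HPA scaling rule that consumes metrics-server data.
# The function name and arguments are illustrative; the formula is the
# one documented for the Kubernetes HorizontalPodAutoscaler.
def desired_replicas(current_replicas: int, current_value: float, target_value: float) -> int:
    return math.ceil(current_replicas * current_value / target_value)

# 4 pods averaging 200m CPU against a 100m target -> scale to 8
print(desired_replicas(4, 200, 100))  # -> 8
```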
1. Deploying metrics-server
git clone https://github.com/cuishuaigit/k8s-monitor.git
cd k8s-monitor
I always deploy this kind of service on the master node, which requires modifying metrics-server-deployment.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      tolerations:
      - effect: NoSchedule
        key: node.kubernetes.io/unschedulable
        operator: Exists
      - key: NoSchedule
        operator: Exists
        effect: NoSchedule
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        metrics: "yes"
Add the label to the master node:
kubectl label nodes ku metrics=yes
Deploy:
kubectl create -f metrics-server/deploy/1.8+/
Verify (for example, kubectl top nodes should now return CPU/memory usage for each node):
it's cool
Note: metrics-server uses node hostnames by default, but coredns has no entries for the physical hosts' hostnames. One fix is to add a flag at deploy time:
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
The second fix is to build an upstream DNS service with dnsmasq; see https://www.cnblogs.com/cuishuai/p/9856843.html.
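A related alternative (my sketch, not from the original post) is to let CoreDNS resolve the node hostnames itself via its hosts plugin; the IP and hostname below are placeholders:

```
.:53 {
    hosts {
        # placeholder node entry: <node-IP> <node-hostname>
        192.168.1.10 ku13-1
        fallthrough
    }
    forward . /etc/resolv.conf
}
```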
2. Deploying Prometheus
Download the required files:
The metrics-server deployment above already pulled all of the files locally, so use them directly:
cd k8s-monitor
1. Set up an NFS server to provide dynamic persistent storage
1. Install NFS:
sudo apt-get install -y nfs-kernel-server
sudo apt-get install -y nfs-common
sudo vi /etc/exports
/data/opv *(rw,sync,no_root_squash,no_subtree_check)
Note: replace * with your own IP range; on a purely internal network, * (any host) is also acceptable.
sudo /etc/init.d/rpcbind restart
sudo /etc/init.d/nfs-kernel-server restart
sudo systemctl enable rpcbind nfs-kernel-server
Mount from the client:
sudo apt-get install -y nfs-common
mount -t nfs ku13-1:/data/opv /data/opv -o proto=tcp -o nolock
For convenience, put the mount command above directly into .bashrc.
2. Create the namespace:
kubectl create -f nfs/monitoring-namepsace.yaml
3. Create RBAC for NFS:
kubectl create -f nfs/rbac.yaml
4. Create the deployment; replace the NFS address with your own:
kubectl create -f nfs/nfs-deployment.yaml
5. Create the StorageClass:
kubectl create -f nfs/storageClass.yaml
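For reference, a PVC that consumes the NFS-backed StorageClass would look roughly like the following; the storageClassName and claim name are assumptions, check nfs/storageClass.yaml for the actual name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # illustrative name
  namespace: monitoring
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs    # assumed; use the name defined in nfs/storageClass.yaml
  resources:
    requests:
      storage: 10Gi
```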
2. Install Prometheus
cd k8s-monitor/Promutheus/prometheus
1. Create the RBAC rules:
kubectl create -f rbac.yaml
2. Create node-exporter:
kubectl create -f prometheus-node-exporter-daemonset.yaml
kubectl create -f prometheus-node-exporter-service.yaml
3. Create kube-state-metrics:
kubectl create -f kube-state-metrics-deployment.yaml
kubectl create -f kube-state-metrics-service.yaml
4. Create node-directory-size-metrics:
kubectl create -f node-directory-size-metrics-daemonset.yaml
5. Create prometheus:
kubectl create -f prometheus-pvc.yaml
kubectl create -f prometheus-core-configmap.yaml
kubectl create -f prometheus-core-deployment.yaml
kubectl create -f prometheus-core-service.yaml
kubectl create -f prometheus-rules-configmap.yaml
6. Change the etcd address in the core configmap to match your cluster.
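Step 6 amounts to pointing a Prometheus scrape job at your own etcd. In Prometheus configuration terms, the relevant fragment looks roughly like this (the job name and target address are placeholders):

```yaml
scrape_configs:
- job_name: etcd
  static_configs:
  - targets:
    - "192.168.1.10:2379"  # placeholder: your etcd host:port
```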
3. Install Grafana
cd k8s-monitor/Promutheus/grafana
1. Install the grafana service:
kubectl create -f grafana-svc.yaml
2. Create the configmap:
kubectl create -f grafana-configmap.yaml
3. Create the pvc:
kubectl create -f grafana-pvc.yaml
4. Create the grafana deployment:
kubectl create -f grafana-deployment.yaml
5. Create the dashboard configmap:
kubectl create configmap "grafana-import-dashboards" --from-file=dashboards/ --namespace=monitoring
6. Create the job that imports the dashboards and other data:
kubectl create -f grafana-job.yaml
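The files under dashboards/ that the configmap and job import are plain Grafana dashboard JSON documents. A minimal sketch of that shape (the title, panel, and PromQL query are illustrative, not taken from the repo):

```python
import json

# Minimal illustrative Grafana dashboard document, the kind of file
# the grafana-import-dashboards configmap carries.
dashboard = {
    "dashboard": {
        "title": "Node CPU (example)",
        "panels": [{
            "title": "CPU usage by instance",
            "type": "graph",
            # illustrative PromQL against node_exporter's CPU metric
            "targets": [{"expr": 'sum(rate(node_cpu{mode!="idle"}[5m])) by (instance)'}],
        }],
    },
    "overwrite": True,
}

# serialize it the way it would be stored in the configmap
print(json.dumps(dashboard, indent=2)[:80])
```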
Check the deployment:
Both prometheus and grafana are exposed as NodePort services, so they can be accessed directly.
Grafana's default username/password: admin/admin
Q&A:
1. The cluster was deployed with kubeadm, and controller-manager and scheduler listen on 127.0.0.1, so prometheus cannot collect their metrics?
You can change their bind address before initializing the cluster:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    address: 0.0.0.0
scheduler:
  extraArgs:
    address: 0.0.0.0
If the cluster is already running:
sed -e "s/- --address=127.0.0.1/- --address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -e "s/- --address=127.0.0.1/- --address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-scheduler.yaml
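The same substitution can be tried safely on a throwaway copy first; this demo (the path and file content are illustrative, not a real manifest) shows the edit does what we want:

```shell
# create a throwaway manifest fragment with the offending flag
cat > /tmp/kube-scheduler-demo.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
EOF

# apply the same substitution as above
sed -e "s/- --address=127.0.0.1/- --address=0.0.0.0/" -i /tmp/kube-scheduler-demo.yaml

# show the rewritten flag
grep -- "--address=0.0.0.0" /tmp/kube-scheduler-demo.yaml
```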
2. metrics-server does not work and reports that it cannot resolve the node hostnames?
Modify the deployment file and add:
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
3. metrics-server reports an x509 error because the certificate is untrusted?
command:
- /metrics-server
- --kubelet-insecure-tls
4. The complete configuration:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP