Highly Available Kubernetes Cluster - 15. Deploying Unified Log Management for the Kubernetes Cluster


Reference:

  1. GitHub: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

Kubernetes recommends the Fluentd + Elasticsearch + Kibana stack for collecting, querying, and visualizing system and container logs.

I. Environment

1. Base environment

Component               Version   Remark
kubernetes              v1.9.2
fluentd-elasticsearch   v2.0.4
elasticsearch           v5.6.4
kibana                  5.6.4

2. How it works

  1. Logs that containers write to stdout/stderr are stored under /var/lib/docker/containers, in files named *-json.log (see the sample below);
  2. A fluentd agent (comparable to logstash) runs on every node and collects the logs under that node's /var/log and /var/lib/docker/containers directories;
  3. fluentd forwards the collected log data to the elasticsearch cluster;
  4. kibana provides visualization and interaction.
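
As an illustration of step 1, the docker json-file log driver writes one JSON object per log line; the container ID and log content below are hypothetical:

# each *-json.log file holds one JSON object per line the container wrote
[root@kubenode1 ~]# tail -n 1 /var/lib/docker/containers/<container-id>/<container-id>-json.log
{"log":"hello from the container\n","stream":"stdout","time":"2018-03-01T08:00:00.000000000Z"}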

II. Deploying Unified Log Management for the Kubernetes Cluster

1. Prepare the images

When deploying services with kubernetes, to avoid image pull timeouts during deployment, it is recommended to pull the relevant images onto all relevant nodes in advance (kubenode1 is used as the example below), or to set up a local image registry.

  1. The base environment already has an image accelerator configured; see: http://www.cnblogs.com/netonline/p/7420188.html
  2. The images that would normally be pulled from gcr.io have been rebuilt on a personal Docker Hub account via Docker Hub's "Create Auto-Build GitHub" feature (Docker Hub builds the image from a Dockerfile hosted on GitHub), and can be pulled directly for local use.
[root@kubenode1 ~]# docker pull netonline/fluentd-elasticsearch:v2.0.4
[root@kubenode1 ~]# docker pull netonline/elasticsearch:v5.6.4
[root@kubenode1 ~]# docker pull netonline/kibana:5.6.4

# elasticsearch requires vm.max_map_count to be at least 262144; the lightweight alpine linux image is used for an init container that guarantees this
[root@kubenode1 ~]# docker pull alpine:3.6

2. Download the yaml templates
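
A minimal sketch for fetching the templates, assuming they come from the addon directory referenced at the top of this article (adjust the branch to match your cluster version):

# fetch the six addon manifests into the working directory
[root@kubenode1 ~]# mkdir -p /usr/local/src/yaml/efk && cd /usr/local/src/yaml/efk
[root@kubenode1 efk]# for f in es-statefulset.yaml es-service.yaml fluentd-es-configmap.yaml \
    fluentd-es-ds.yaml kibana-deployment.yaml kibana-service.yaml; do \
    curl -sSLO https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.9/cluster/addons/fluentd-elasticsearch/$f; done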

3. es-statefulset.yaml

es-statefulset.yaml consists of 4 sections: ServiceAccount, ClusterRole, ClusterRoleBinding, and StatefulSet.

The ServiceAccount, ClusterRole, and ClusterRoleBinding sections define a new ClusterRole and bind it to the ServiceAccount through the ClusterRoleBinding. These 3 sections need no modification by default.

1) StatefulSet

The ServiceAccount reference in the Pod spec needs no modification either; it makes the Pods run under that ServiceAccount and thereby obtain the permissions defined in the RBAC sections above.

StatefulSet is a special variant of Deployment/RC aimed at stateful services, with the following characteristics (a short example follows the list):

  1. Each Pod in a StatefulSet has a stable, unique network identity that can be used to discover the other members of the cluster; e.g. if the StatefulSet is named elasticsearch, the first Pod is elasticsearch-0, the second elasticsearch-1, and so on;
  2. The start and stop order of the Pod replicas is controlled: when the n-th Pod is operated on, the previous n-1 Pods are already running and ready;
  3. Pods in a StatefulSet use stable, persistent storage volumes, implemented via PV/PVC; deleting a Pod does not by default delete the volumes associated with the StatefulSet, which keeps the data safe.
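
The stable identities can be observed directly; a sketch with hypothetical output (per-Pod DNS records of the form <pod>.<service>.<namespace>.svc.cluster.local additionally require the governing Service to be headless):

[root@kubenode1 ~]# kubectl get pods -n kube-system -l k8s-app=elasticsearch-logging
NAME                      READY     STATUS    RESTARTS   AGE
elasticsearch-logging-0   1/1       Running   0          5m
elasticsearch-logging-1   1/1       Running   0          4m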
# Change: line 76 of the file, replace the image name;
# set up as a StatefulSet resource with 2 replicas; note that this stock manifest mounts an emptyDir volume, so swap in a PV/PVC for truly persistent storage;
# an init container performs initialization before the application starts
[root@kubenode1 ~]# cd /usr/local/src/yaml/efk/
[root@kubenode1 efk]# sed -i 's|k8s.gcr.io/elasticsearch:v5.6.4|netonline/elasticsearch:v5.6.4|g' es-statefulset.yaml
[root@kubenode1 efk]# cat es-statefulset.yaml
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v5.6.4
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v5.6.4
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v5.6.4
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: netonline/elasticsearch:v5.6.4
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: elasticsearch-logging
        emptyDir: {}
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
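
Once the Pods are up, the effect of the init container can be verified on any node hosting an elasticsearch replica; a quick check (run on the hosting node, output illustrative):

[root@kubenode1 ~]# sysctl vm.max_map_count
vm.max_map_count = 262144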

4. es-service.yaml

es-service.yaml needs no modification by default.

5. fluentd-es-configmap.yaml

fluentd-es-configmap.yaml defines a ConfigMap resource that is mounted as a volume into the fluentd Pod, providing its configuration files; it needs no modification by default.
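
For orientation, the core of that configuration is a tail source that follows the container log files; an abridged sketch from memory rather than the literal ConfigMap contents (directive syntax differs between fluentd versions, so check the file itself):

<source>
  type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag raw.kubernetes.*
  format json
  read_from_head true
</source>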

6. fluentd-es-ds.yaml

fluentd-es-ds.yaml consists of 4 sections: ServiceAccount, ClusterRole, ClusterRoleBinding, and DaemonSet.

The ServiceAccount, ClusterRole, and ClusterRoleBinding sections define a new ClusterRole named fluentd-es and bind it to the ServiceAccount fluentd-es through the ClusterRoleBinding fluentd-es. These 3 sections need no modification by default.

1) DaemonSet

fluentd needs to run on every Node; there are 3 ways to achieve this:

  1. Deploy the fluentd service directly on each Node;
  2. Load a fluentd Pod on each Node via the kubelet --config parameter (a static Pod);
  3. Use a DaemonSet resource to schedule a fluentd Pod onto every Node (the officially recommended way).
# Change: line 16 of the file, replace the image name;
# nodeSelector: set to "true", so the DaemonSet only schedules Pods onto nodes that carry the label "beta.kubernetes.io/fluentd-ds-ready"; the label must be set on the corresponding nodes
[root@kubenode1 efk]# sed -i 's|k8s.gcr.io/fluentd-elasticsearch:v2.0.4|netonline/fluentd-elasticsearch:v2.0.4|g' fluentd-es-ds.yaml
[root@kubenode1 efk]# cat fluentd-es-ds.yaml
……
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.0.4
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    version: v2.0.4
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.0.4
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.0.4
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: netonline/fluentd-elasticsearch:v2.0.4
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: libsystemddir
          mountPath: /host/lib
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      # It is needed to copy systemd library to decompress journals
      - name: libsystemddir
        hostPath:
          path: /usr/lib64
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.1.4

2) Set the labels

# every node expected to run a fluentd Pod must carry the label
[root@kubenode1 ~]# kubectl get nodes
[root@kubenode1 ~]# kubectl label nodes 172.30.200.21 beta.kubernetes.io/fluentd-ds-ready=true
[root@kubenode1 ~]# kubectl label nodes 172.30.200.22 beta.kubernetes.io/fluentd-ds-ready=true
[root@kubenode1 ~]# kubectl label nodes 172.30.200.23 beta.kubernetes.io/fluentd-ds-ready=true
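
Before the DaemonSet is created, the labels can be confirmed against the selector used above:

# list only the nodes the DaemonSet will schedule onto
[root@kubenode1 ~]# kubectl get nodes -l beta.kubernetes.io/fluentd-ds-ready=true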

7. kibana-deployment.yaml

# Change: line 22 of the file, replace the image name;
[root@kubenode1 efk]# sed -i 's|docker.elastic.co/kibana/kibana:5.6.4|netonline/kibana:5.6.4|g' kibana-deployment.yaml
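
For context, the deployment wires kibana to the elasticsearch service through environment variables, roughly as below; this is an excerpt from memory rather than the literal file, so verify against kibana-deployment.yaml:

        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch-logging:9200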

8. kibana-service.yaml

The Service part needs no modification by default.

III. Verification

1. Start the services

[root@kubenode1 ~]# cd /usr/local/src/yaml/efk/
[root@kubenode1 efk]# kubectl create -f .

2. Check the services

# check the statefulset, daemonset, and deployment
[root@kubenode1 ~]# kubectl get statefulset -n kube-system
[root@kubenode1 ~]# kubectl get daemonset -n kube-system
[root@kubenode1 ~]# kubectl get deployment -n kube-system | grep kibana

# check the running state of the elasticsearch and kibana Pods;
# stateful Pods are named according to a fixed pattern
[root@kubenode1 ~]# kubectl get pods -n kube-system | grep -E 'elasticsearch|kibana'

# check the running state of the fluentd Pods; the "-o wide" flag also shows the node each Pod runs on;
# every node expected to run fluentd should have a fluentd Pod
[root@kubenode1 ~]# kubectl get pods -n kube-system -o wide | grep fluentd

# check the running state of the services
[root@kubenode1 ~]# kubectl get svc -n kube-system | grep -E 'elasticsearch|kibana'

# on first start, the kibana Pod performs initialization to optimize and cache the status pages, which usually takes 10-20 minutes;
# watch the progress via the logs; the "-f" flag behaves like "tail -f";
[root@kubenode1 ~]# kubectl logs kibana-logging-5d4b6ddfc7-szx6d -n kube-system -f

3. Access elasticsearch

# access elasticsearch through the kube-apiserver; the kubectl proxy approach also works (the same as for the dashboard)
[root@kubenode1 ~]# kubectl cluster-info

Access elasticsearch in a browser; a JSON document is returned: https://172.30.200.10:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
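
A healthy instance answers with the usual elasticsearch banner, shaped roughly like the following (values illustrative):

{
  "name" : "elasticsearch-logging-0",
  "cluster_name" : "kubernetes-logging",
  "version" : {
    "number" : "5.6.4"
  },
  "tagline" : "You Know, for Search"
}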

4. Access kibana

Access kibana the same way, through the kube-apiserver; the kubectl proxy approach also works (the same as for the dashboard).

Access kibana in a browser: https://172.30.200.10:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
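
If kubectl proxy is preferred, a minimal sketch (the listen address is hypothetical; pick one reachable from your browser):

# run a local API proxy, then open the kibana service through it
[root@kubenode1 ~]# kubectl proxy --address=172.30.200.21 --accept-hosts='^*$' &
# browse to: http://172.30.200.21:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy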

As with a standard ELK stack, kibana requires initial configuration on first visit; accepting the defaults and clicking "Create" is sufficient.

