Kubernetes in Practice: Deploying an ELK Stack Log Collection Platform


Contents

  • 1 ELK concepts

  • 2 The log management platform

  • 3 Which logs K8S needs to collect

  • 4 ELK Stack log collection options in K8S

  • 5 Deploying ELK on a single node

Environment

A working k8s cluster; a kubeadm or binary deployment will do.

ip address       role         notes
192.168.73.136   nfs
192.168.73.138   k8s-master
192.168.73.139   k8s-node01
192.168.73.140   k8s-node02

1 ELK Concepts

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana; the combination is also marketed as the Elastic Stack. Elasticsearch is a distributed, near-real-time search and analytics engine built on Lucene and accessed through a RESTful API. Large-scale full-text search scenarios like those at Baidu or Google can use Elasticsearch as the underlying engine, which says something about the search capability it provides; it is commonly abbreviated to ES. Logstash is the central data-flow engine of the stack: it collects data in different formats from different sources (files, data stores, message queues), filters it, and ships it to destinations such as files, MQ, Redis, Elasticsearch, or Kafka. Kibana presents Elasticsearch data through a friendly web UI and provides real-time analysis.

This brief introduction covers what each project in the acronym does. Ask most developers about ELK and they will consistently describe it as a log-analysis stack, but ELK is not limited to log analysis: it supports any data collection and analysis scenario. Log analysis is simply the most representative use case, not the only one. This tutorial focuses on using ELK to build a production-grade log analysis platform.
Official site: https://www.elastic.co/cn/products/


2 The Log Management Platform

In the era of monolithic applications, all components ran on one server, and the need for a log management platform was not pressing: you logged in to that one server and inspected the system logs with shell commands, quickly locating the problem. As the internet penetrated every area of life and user numbers exploded, a monolith could no longer handle the concurrency, especially in a country with China's population. Splitting the monolith and scaling horizontally became urgent, and the microservices concept was born in this period.

In a microservices world, a single application is split into many services, each deployed as a load-balanced cluster. If a business error occurs and developers or operators still log in to the servers one by one to read logs, the efficiency of resolving production issues is predictably poor, so building a log management platform becomes essential: Logstash collects the log files from every server, filters them against defined grok patterns, and ships them to Kafka or Redis; a second Logstash reads from Kafka or Redis and indexes the events into Elasticsearch; finally, Kibana presents them to developers and operators for analysis. This greatly improves the efficiency of handling production issues. Beyond troubleshooting, the collected logs can also feed big-data analysis, producing more valuable insight for decision makers.
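The pipeline just described (Logstash shipper → Kafka/Redis → Logstash indexer → Elasticsearch → Kibana) can be sketched as a minimal Logstash shipper config; the file path, grok pattern, broker address, and topic below are illustrative placeholders, not values from this deployment:

```conf
# Shipper: tail application log files, parse each line with grok,
# and buffer the events into a Kafka topic for the indexer to consume.
input {
  file {
    path => "/var/log/app/*.log"     # placeholder path
    start_position => "beginning"
  }
}

filter {
  grok {
    # assumes lines like: 2019-09-01T12:00:00 INFO something happened
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  kafka {
    bootstrap_servers => "kafka:9092"  # placeholder broker
    topic_id => "app-logs"
  }
}
```

The indexer side is symmetric: a kafka input reading the same topic, and an elasticsearch output.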

3 Which Logs K8S Needs to Collect

Only the main logs are covered here:

  • K8S system component logs
  • Logs of applications deployed in the K8S cluster
    - standard output
    - log files

4 ELK Stack Log Collection Options in K8S

  • Option 1: run a log collection agent on each Node
    Use a DaemonSet to deploy a logging-agent on every node,
    then have the agent collect the logs under that node's /var/log and /var/lib/docker/containers/ directories,
    or mount each Pod's container log directory onto a common host directory and collect from there.

  • Option 2: attach a dedicated log collection container to the Pod
    Add a log collection container to every application Pod and share the log directory through an emptyDir volume so the collector can read it.

  • Option 3: the application pushes logs directly
    This requires code changes so the application ships its logs straight to remote storage instead of writing to the console or local files; it is rarely used and falls outside the scope of Kubernetes.

Approach                                   Pros                                                         Cons
Option 1: log agent on each Node           One collector per node; low resource cost; no app changes    Apps must write to stdout/stderr; multi-line logs unsupported
Option 2: dedicated collector container    Loose coupling                                               One agent per Pod raises resource use and maintenance cost
Option 3: app pushes logs directly         No extra collection tooling                                  Invasive; adds application complexity
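For Option 1, the node-level agent reads Docker's json-file logs under /var/lib/docker/containers/: each line is a JSON object with log, stream, and time fields. A minimal Python sketch of parsing one such line (illustrative, not the agent's actual code):

```python
import json

def parse_docker_log_line(line: str) -> dict:
    """Parse one line of Docker's json-file log format into a flat event."""
    record = json.loads(line)
    return {
        "message": record["log"].rstrip("\n"),  # raw log text, newline-terminated
        "stream": record["stream"],             # "stdout" or "stderr"
        "time": record["time"],                 # RFC3339 timestamp
    }

line = '{"log":"hello logs\\n","stream":"stdout","time":"2019-09-01T12:00:00.000000000Z"}'
event = parse_docker_log_line(line)
print(event["message"])  # -> hello logs
```

An agent like Filebeat does this parsing for every container log file on the node, then enriches and ships the events.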

5 Deploying ELK on a Single Node

A single-node ELK deployment is straightforward; the yaml manifests below create an Elasticsearch instance, a Kibana UI for visualization, a Service for Elasticsearch, and an Ingress that exposes the UI on a domain name.

First, write the Elasticsearch yaml. This deploys a single-node instance inside the k8s cluster; once daily log volume exceeds roughly 20 GB, it is usually better to run a distributed Elasticsearch cluster outside k8s. Here a StatefulSet is used with dynamic storage for persistence, so the storage class must be created before applying the yaml.

[root@k8s-master fek]# vim elasticsearch.yaml 

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.3.1
        name: elasticsearch
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 0.5 
            memory: 500Mi
        env:
          - name: "discovery.type"
            value: "single-node"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"  # keep Xms=Xmx and leave headroom below the 2Gi container limit
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi

---

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch

Create Elasticsearch from the yaml just written, then check that it started. Below, an elasticsearch-0 pod replica has been created and is running normally; if it fails to start, inspect it with kubectl describe to troubleshoot.

[root@k8s-master fek]# kubectl get pod -n kube-system       
NAME                        READY   STATUS             RESTARTS   AGE
coredns-5bd5f9dbd9-95flw    1/1     Running            0          17h
elasticsearch-0             1/1     Running            1          16m
php-demo-85849d58df-4bvld   2/2     Running            2          18h
php-demo-85849d58df-7tbb2   2/2     Running            0          17h

Next, deploy a Kibana instance to visualize the collected logs: write a Deployment yaml, expose it externally with an Ingress, and point it directly at the elasticsearch Service.

[root@k8s-master fek]# vim kibana.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-system
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.3.1
        resources:
          limits:
            cpu: 1
            memory: 500Mi
          requests:
            cpu: 0.5 
            memory: 200Mi
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-system
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-system
spec:
  rules:
  - host: kibana.ctnrs.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601

Create Kibana from the yaml just written; a kibana pod (kibana-b7d98644-48gtm below) is generated and runs normally.

[root@k8s-master fek]# kubectl apply -f kibana.yaml 
deployment.apps/kibana created
service/kibana created
ingress.extensions/kibana created
[root@k8s-master fek]# kubectl get pod -n kube-system       
NAME                        READY   STATUS             RESTARTS   AGE
coredns-5bd5f9dbd9-95flw    1/1     Running            0          17h
elasticsearch-0             1/1     Running            1          16m
kibana-b7d98644-48gtm       1/1     Running            1          17h
php-demo-85849d58df-4bvld   2/2     Running            2          18h
php-demo-85849d58df-7tbb2   2/2     Running            0          17h

Finally, write a yaml that deploys an ingress-nginx controller on every node to provide external access.

[root@k8s-master demo2]# vim mandatory.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

Create the ingress controller. Because a DaemonSet is used, a controller runs on every node, so you can bind any node's IP in your local hosts file and reach the service by domain name.

[root@k8s-master demo2]# kubectl apply -f mandatory.yaml
[root@k8s-master demo2]# kubectl get pod -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-98769   1/1     Running   6          13h
nginx-ingress-controller-n6wpq   1/1     Running   0          13h
nginx-ingress-controller-tbfxq   1/1     Running   29         13h
nginx-ingress-controller-trxnj   1/1     Running   6          13h

Bind the domain in the local hosts file and verify access by domain name.
On Windows the hosts file is C:\Windows\System32\drivers\etc\hosts; on macOS edit it with sudo vi /private/etc/hosts. Append the line below, mapping the domain to any node's IP, and save.

192.168.73.139 kibana.ctnrs.com

Finally, open kibana.ctnrs.com in a browser to reach the Kibana web UI; no login has been configured. The interface is in English by default; you can look up the config option to change the locale, but the English version is recommended.


5.1 Option 1: a filebeat collector on each Node for k8s component logs

With es and kibana deployed, how do we collect pod logs? Following Option 1, first deploy a filebeat collector (version 7.3.1) on every node. Filebeat supports Kubernetes: it can query the API server to tag pod logs with metadata, so the yaml includes the RBAC objects that access requires. The output section of its config, already set in the yaml, ships the collected data to es.
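The add_kubernetes_metadata processor used in the manifest below effectively joins each log event with the metadata of the pod that produced it, which is why the DaemonSet needs RBAC access to pods and namespaces. A simplified Python illustration of the idea (not Filebeat's actual implementation):

```python
def enrich_event(event: dict, pod_index: dict) -> dict:
    """Attach pod metadata to a log event, keyed by container id.

    pod_index maps container id -> pod metadata, as built from the
    Kubernetes API (the reason the collector needs get/watch/list on
    pods and namespaces).
    """
    meta = pod_index.get(event.get("container_id"))
    if meta:
        event["kubernetes"] = meta  # e.g. pod name, namespace, labels
    return event

pods = {"abc123": {"pod": {"name": "php-demo-85849d58df-4bvld"},
                   "namespace": "kube-system"}}
e = enrich_event({"container_id": "abc123", "message": "GET /status.html"}, pods)
print(e["kubernetes"]["namespace"])  # -> kube-system
```

Events without a matching container id pass through unchanged, which is also how the real processor behaves for non-container logs.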

[root@k8s-master fek]# vim filebeat-kubernetes.yaml 
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.3.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

Create the filebeat collector from the yaml and check that it starts; if it fails to start, troubleshoot with kubectl describe.

[root@k8s-master fek]# kubectl apply -f filebeat-kubernetes.yaml 
configmap/filebeat-config created
configmap/filebeat-inputs created
daemonset.extensions/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
[root@k8s-master fek]# ls
elasticsearch.yaml  filebeat-kubernetes.yaml  kibana.yaml
[root@k8s-master fek]# kubectl get pod -n kube-system
NAME                                    READY   STATUS              RESTARTS   AGE
alertmanager-5d75d5688f-fmlq6           2/2     Running             11         58d
coredns-5bd5f9dbd9-rv7ft                1/1     Running             2          47d
filebeat-2dlk8                          0/1     ContainerCreating   0          27s
filebeat-cqvmk                          0/1     ContainerCreating   0          27s
filebeat-s2xmm                          0/1     ContainerCreating   0          28s
filebeat-w28qc                          0/1     ContainerCreating   0          27s
grafana-0                               1/1     Running             5          64d
kube-state-metrics-7c76bdbf68-48b7w     2/2     Running             4          47d
kubernetes-dashboard-7d77666777-d5ng4   1/1     Running             9          65d
prometheus-0                            2/2     Running             2          47d

The k8s component logs also need to be collected. This environment was deployed with kubeadm, so the component logs all land in /var/log/messages; we therefore deploy another set of collector pod replicas with a custom index, k8s-module-%{+yyyy.MM.dd}. The yaml:

[root@k8s-master elk]# vim k8s-logs.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system 
  
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/messages  
        fields:
          app: k8s 
          type: module 
        fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "k8s-module"
    setup.template.pattern: "k8s-module-*"

    output.elasticsearch:
      hosts: ['elasticsearch.kube-system:9200']
      index: "k8s-module-%{+yyyy.MM.dd}"

---

apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s 
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: elastic/filebeat:7.3.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs 
          mountPath: /var/log/messages
      volumes:
      - name: k8s-logs
        hostPath: 
          path: /var/log/messages
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config
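The index option k8s-module-%{+yyyy.MM.dd} is expanded per event from the event's timestamp (a Joda-style date pattern), so each day's logs land in their own index. In Python terms the resulting name looks like:

```python
from datetime import date

def daily_index(prefix: str, d: date) -> str:
    """Mimic Beats' %{+yyyy.MM.dd} expansion for a daily index name."""
    return f"{prefix}-{d.strftime('%Y.%m.%d')}"

print(daily_index("k8s-module", date(2019, 9, 1)))  # -> k8s-module-2019.09.01
```

Daily indices keep each index small and make retention simple: old days can be deleted index by index.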

Apply the yaml and verify that it worked: two pods named k8s-logs-xx are created, one on each of the two nodes.

[root@k8s-master elk]# kubectl apply -f k8s-logs.yaml
[root@k8s-master elk]# kubectl get pod -n kube-system
NAME                        READY   STATUS    RESTARTS   AGE
coredns-5bd5f9dbd9-8zdn5    1/1     Running   0          10h
elasticsearch-0             1/1     Running   1          13h
filebeat-2q5tz              1/1     Running   0          13h
filebeat-k6m27              1/1     Running   2          13h
k8s-logs-52xgk              1/1     Running   0          5h45m
k8s-logs-jpkqp              1/1     Running   0          5h45m
kibana-b7d98644-tllmm       1/1     Running   0          10h

5.1.1 Configuring log visualization in the Kibana web UI

Open the Kibana web UI, click Management in the left menu, then Index Patterns under Kibana, then the create button in the upper left, and create index patterns matching filebeat-7.3.1-* and k8s-module-* for the two filebeat collectors.


Then pick the time filter field and finish creating the pattern.


Once the index patterns exist, open Discover at the top of the left menu; select the newly created pattern on the left, then add the fields you want to display and filter on them. The final view shows all the information in the collected logs.


On one of the nodes run echo hello logs >> /var/log/messages, then select the k8s-module-* index pattern in the web UI; if the hello logs line you just wrote appears among the collected entries, collection is working.
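The same check can also be made against Elasticsearch directly by searching the daily index; a sketch of the minimal query body (the host and index come from the ConfigMap above, the helper function is illustrative):

```python
import json

def match_query(field: str, text: str) -> dict:
    """Build a minimal Elasticsearch match query body."""
    return {"query": {"match": {field: text}}}

# Would be POSTed to http://elasticsearch.kube-system:9200/k8s-module-*/_search
print(json.dumps(match_query("message", "hello logs")))
```

A hit in the response confirms the test line was indexed, independently of the Kibana UI.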


5.2 Option 2: attach a dedicated log collection container to the Pod

We can also follow Option 2 and inject a log collection container into the pod itself. Taking a php-demo application as the example, the log directory is shared with the collector container through an emptyDir volume. Write nginx-deployment.yaml, adding a filebeat container directly to the pod with a custom index nginx-access-%{+yyyy.MM.dd}.

[root@k8s-master fek]# vim nginx-deployment.yaml 
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: php-demo
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      project: www
      app: php-demo
  template:
    metadata:
      labels:
        project: www
        app: php-demo
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: nginx 
        image: lizhenliang/nginx-php 
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 256Mi
          limits:
            cpu: 1
            memory: 1Gi
        livenessProbe:
          httpGet:
            path: /status.html
            port: 80
          initialDelaySeconds: 20
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /status.html
            port: 80
          initialDelaySeconds: 20
          timeoutSeconds: 20
        volumeMounts:
        - name: nginx-logs 
          mountPath: /usr/local/nginx/logs

      - name: filebeat
        image: elastic/filebeat:7.3.1 
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: nginx-logs 
          mountPath: /usr/local/nginx/logs

      volumes:
      - name: nginx-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-nginx-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-nginx-config
  namespace: kube-system
  
data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: log
        paths:
          - /usr/local/nginx/logs/access.log
        # tags: ["access"]
        fields:
          app: www
          type: nginx-access
        fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "nginx-access"
    setup.template.pattern: "nginx-access-*"

    output.elasticsearch:
      hosts: ['elasticsearch.kube-system:9200']
      index: "nginx-access-%{+yyyy.MM.dd}"

Apply the nginx-deployment.yaml just written; it creates two php-demo pod replicas in the kube-system namespace.

[root@k8s-master elk]# kubectl apply -f nginx-deployment.yaml
[root@k8s-master fek]# kubectl get pod -n kube-system
NAME                        READY   STATUS    RESTARTS   AGE
coredns-5bd5f9dbd9-8zdn5    1/1     Running   0          20h
elasticsearch-0             1/1     Running   1          23h
filebeat-46nvd              1/1     Running   0          23m
filebeat-sst8m              1/1     Running   0          23m
k8s-logs-52xgk              1/1     Running   0          15h
k8s-logs-jpkqp              1/1     Running   0          15h
kibana-b7d98644-tllmm       1/1     Running   0          20h
php-demo-85849d58df-d98gv   2/2     Running   0          26m
php-demo-85849d58df-sl5ss   2/2     Running   0          26m

Then open the Kibana web UI and, following the same steps as before, add an index pattern nginx-access-*.


Finally, open Discover at the top of the left menu, select the nginx-access-* index pattern on the left, then add the fields to display and filter on them; all the information in the collected logs is visible. To generate traffic, hit the nginx just created at http://<node ip>:30001 and check that access log entries arrive.



