Kubernetes Cluster Log Collection with ELK and Graylog


With an ELK + Filebeat architecture, we also need to decide how Filebeat will collect logs from the Kubernetes cluster.

Method 1: node-level logging agent
Run an independent node-level logging agent on every node (i.e., every host), typically implemented as a DaemonSet. Applications only need to write their logs to standard output; Docker's log driver collects each container's stdout and writes it to the host filesystem, so the node-level agent can gather all logs in one place and ship them. Log rotation can be handled by logrotate on the node or by Docker's log-opt options.

 

Docker's default log driver (LogDriver) is json-file, which stores logs as JSON files. Everything a container writes to the console is saved under /var/lib/docker/containers/ in files named *-json.log. For details on Docker log drivers, see the official documentation. Besides container logs, it is generally advisable to also collect Kubernetes' own logs and the host's system logs, all of which live under /var/log.

 

In short, this approach runs one log agent container per node, collects the logs under that node's /var/log and /var/lib/docker/containers/ directories, ships them to the Elasticsearch cluster, and finally visualizes them in Kibana.

Method 2: sidecar container as the log agent
Create a sidecar container (also called a log container) in the same Pod as the application container. The sidecar runs a dedicated agent for collecting the application's logs; common choices are Logstash, Fluentd, and Filebeat. The sidecar reads the application container's logs through a shared volume and uploads them.

 

Method 3: application uploads logs directly
The application container uploads its logs to the backend directly over the network. This is the simplest approach.
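As a sketch of Method 3, the application can index each log event into Elasticsearch itself over HTTP. The index name `app-demo` here is an assumption for illustration; the ES address matches the in-cluster Service created later. The sketch only builds the request so it runs without a live cluster:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// newLogRequest builds an HTTP request that indexes one log document
// into Elasticsearch via the _doc endpoint.
func newLogRequest(esURL, index string, doc map[string]interface{}) (*http.Request, error) {
	body, err := json.Marshal(doc)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, esURL+"/"+index+"/_doc", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	doc := map[string]interface{}{
		"@timestamp": time.Now().UTC().Format(time.RFC3339),
		"message":    "user login ok",
		"app":        "demo",
	}
	req, err := newLogRequest("http://elasticsearch.kube-system:9200", "app-demo", doc)
	if err != nil {
		panic(err)
	}
	// In a real application you would now send it:
	//   resp, err := http.DefaultClient.Do(req)
	fmt.Println(req.Method, req.URL.String())
}
```

The downside of this approach is that every application must embed shipping logic and handle backend outages itself, which is one reason Method 1 is usually preferred.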

 

Comparison

 

Method 1 is the most widely used in practice and is also the officially recommended approach. We therefore adopt the ELK + Filebeat architecture based on Method 1, as follows:

1. Create es.yaml [kubectl apply -f ./es.yaml]

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
        name: elasticsearch
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 0.5 
            memory: 500Mi
        env:
          - name: "discovery.type"
            value: "single-node"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"   # keep the JVM heap within the 1Gi container memory limit
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      storageClassName: "nfs"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi

---

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch

2. Create kibana.yaml [kubectl apply -f ./kibana.yaml]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-system
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.3.1
        resources:
          limits:
            cpu: 1
            memory: 500Mi
          requests:
            cpu: 0.5
            memory: 200Mi
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30056
  selector:
    k8s-app: kibana

3. Now, given an nginx application, we use Method 2 to collect its logs. nginx-dm.yaml [kubectl apply -f ./nginx-dm.yaml]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-demo
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      project: www
      app: php-demo
  template:
    metadata:
      labels:
        project: www
        app: php-demo
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: nginx
        image: lizhenliang/nginx-php
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 256Mi
          limits:
            cpu: 1
            memory: 1Gi
        livenessProbe:
          httpGet:
            path: /status.html
            port: 80
          initialDelaySeconds: 20
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /status.html
            port: 80
          initialDelaySeconds: 20
          timeoutSeconds: 20
        volumeMounts:
        - name: nginx-logs
          mountPath: /usr/local/nginx/logs

      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: nginx-logs
          mountPath: /usr/local/nginx/logs

      volumes:
      - name: nginx-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-nginx-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-nginx-config
  namespace: kube-system

data:
  filebeat.yml: |-
    filebeat.inputs:
      - type: log
        paths:
          - /usr/local/nginx/logs/access.log
        # tags: ["access"]
        fields:
          app: www
          type: nginx-access
        fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "nginx-access"
    setup.template.pattern: "nginx-access-*"

    output.elasticsearch:
      hosts: ['elasticsearch.kube-system:9200']
      index: "nginx-access-%{+yyyy.MM.dd}"

Once the pod is up, we can exec into it and view the logs:

Create the index pattern nginx-access* in Kibana, then browse it:

4. Now use Method 1 to pull logs. filebeat-k8s.yaml [kubectl apply -f ./filebeat-k8s.yaml]

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
          limits:
            cpu: 50m
            memory: 500Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

Create the filebeat-7.9.0-* index pattern in Kibana.

5. Modify nginx-dm.yaml before creating the pod, so that the logs go to NFS instead [kubectl apply -f ./nginx-dm.yaml]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-demo
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      project: www
      app: php-demo
  template:
    metadata:
      labels:
        project: www
        app: php-demo
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: nginx
        image: lizhenliang/nginx-php
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 256Mi
          limits:
            cpu: 1
            memory: 1Gi
        livenessProbe:
          httpGet:
            path: /status.html
            port: 80
          initialDelaySeconds: 20
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /status.html
            port: 80
          initialDelaySeconds: 20
          timeoutSeconds: 20
        volumeMounts:
        - name: nginx-logs
          mountPath: /usr/local/nginx/logs
      volumes:                      # volumes available to the pod
      - name: nginx-logs            # must match the volumeMount name above
        #persistentVolumeClaim:     # alternative: use a PVC
        #  claimName: nfs-pvc
        nfs:                        # mount an NFS export directly
          server: 192.168.100.11    # IP or hostname of the NFS server
          path: "/nfs/jenkins"      # exported directory on the NFS server

Here the pod's logs are written to NFS [acceptable as long as logging is not heavy]. Then create k8s-log.yaml to ship the logs from NFS to ES [kubectl apply -f ./k8s-log.yaml]

apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system

data:
  filebeat.yml: |
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/messages/*.log   # the NFS volume is mounted as a directory, so glob its files
        fields:
          app: k8s
          type: module
        fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "k8s-module"
    setup.template.pattern: "k8s-module-*"

    output.elasticsearch:
      hosts: ['elasticsearch.kube-system:9200']
      index: "k8s-module-%{+yyyy.MM.dd}"

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
          limits:
            cpu: 50m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /var/log/messages
      volumes:
      - name: k8s-logs
        nfs:
          server: 192.168.100.11
          path: "/nfs/jenkins"
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config

Create the k8s-module-* index pattern in Kibana.

6. What if our own application's logs need to be collected via Method 1? The application must write its logs to the container's standard output/error streams rather than to a file inside the container. In other words, the log lines must be visible via kubectl logs gavintest-5f4d66bc58-67d4x -n go; only then can Method 1 collect them.

Modify the program as follows [mine is a beego project]:

Deploy with Jenkins, send some requests, then check the pod logs.

Check the logs in Kibana.

 

Note: pulling images directly can be very slow, so pull them ahead of time. I originally used docker.elastic.co/kibana/kibana:7.9.0 but the Kibana port refused connections, so I switched to a different image.

docker pull docker.elastic.co/beats/filebeat:7.9.0
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.9.0
#docker pull docker.elastic.co/kibana/kibana:7.9.0
docker pull lizhenliang/nginx-php
 
docker save -o filebeat:7.9.0.tar docker.elastic.co/beats/filebeat:7.9.0
docker save -o elasticsearch:7.9.0.tar docker.elastic.co/elasticsearch/elasticsearch:7.9.0
#docker save -o kibana:7.9.0.tar docker.elastic.co/kibana/kibana:7.9.0
docker save -o kibana:7.3.1.tar kibana:7.3.1
docker save -o nginx-php.tar lizhenliang/nginx-php
 
docker load -i filebeat:7.9.0.tar
docker load -i elasticsearch:7.9.0.tar
#docker load -i kibana:7.9.0.tar
docker load -i kibana:7.3.1.tar
docker load -i nginx-php.tar

GrayLog

Since Elasticsearch is already running here, we only need to install MongoDB and Graylog.

1. Create mongodb.yaml [kubectl apply -f mongodb.yaml]

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: kube-system
  labels:
    k8s-app: mongo
spec:
  serviceName: mongo
  selector:
    matchLabels:
      k8s-app: mongo
  template:
    metadata:
      labels:
        k8s-app: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - containerPort: 27017
          protocol: TCP
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-data
    spec:
      storageClassName: "nfs"
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 1Gi

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: kube-system
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    k8s-app: mongo    

If you hit the error changing ownership of '/data/db': Operation not permitted, the cause in my case was:

NFS exports default to root squashing (root_squash), but to map an NFS directory into a Docker container, the export must be configured with no_root_squash; otherwise the root user has no permission to use the NFS volume mapped into the container. With no_root_squash, a client connecting as root keeps root privileges on the shared directory. This option is highly insecure and generally not recommended, but it is required here.

Edit /etc/exports (vim /etc/exports) and add no_root_squash to the relevant export, e.g.:
/nfs/jenkins *(rw,sync,no_subtree_check,no_root_squash)
Then restart the NFS service: systemctl restart nfs-kernel-server

2. Create graylog.yaml [kubectl apply -f graylog.yaml]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: graylog
  namespace: kube-system
  labels:
    k8s-app: graylog
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: graylog
  template:
    metadata:
      labels:
        k8s-app: graylog
    spec:
      containers:
      - name: graylog
        image: graylog/graylog:4.0.3
        env:
        - name: GRAYLOG_PASSWORD_SECRET
          value: somepasswordpepper
        - name: GRAYLOG_ROOT_PASSWORD_SHA2
          value: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
        - name: GRAYLOG_HTTP_EXTERNAL_URI
          value: http://192.168.100.11:30003/
        - name: GRAYLOG_ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: GRAYLOG_MONGODB_URI
          value: mongodb://mongo:27017/graylog
        ports:
        - containerPort: 9000
        - containerPort: 12201
        - containerPort: 5044
 
 
---
apiVersion: v1
kind: Service
metadata:
  name: graylog
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: graylog
  ports:
  - name: "9000"
    port: 9000
    targetPort: 9000
    nodePort: 30003
  - name: "12201"
    port: 12201
    targetPort: 12201
    nodePort: 30004
  - name: "5044"
    port: 5044
    targetPort: 5044
    nodePort: 30005

Browse to http://192.168.100.11:30003/; the username and password are both admin. [My ES is 7.9; I first tried Graylog 3.3, which turned out to be incompatible, so I switched to the newer 4.0.3. It is worth watching Graylog's startup logs as well.]

Filebeat

First create a Beats (filebeat) input in Graylog [note that its port is 5044], then modify the log collection file k8s-log.yaml.

k8s-log.yaml [kubectl apply -f k8s-log.yaml]

apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system

data:
  filebeat.yml: |
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/messages/*.log
        fields:
          app: k8s
          type: module
        fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "k8s-module"
    setup.template.pattern: "k8s-module-*"

    #output.elasticsearch:
      #hosts: ['elasticsearch.kube-system:9200']
      #index: "k8s-module-%{+yyyy.MM.dd}"
    output:
       logstash:
          hosts:
           - 192.168.100.11:30005

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
          limits:
            cpu: 50m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /var/log/messages
      volumes:
      - name: k8s-logs
        nfs:
          server: 192.168.100.11
          path: "/nfs/jenkins"
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config

Check the result. [The earlier nginx-dm.yaml must be running and writing logs to NFS.]

Kafka

Here the logs flow filebeat -> kafka -> graylog. Create a Kafka input in Graylog. [To avoid interference, delete the earlier resources first: kubectl delete -f k8s-log.yaml and kubectl delete -f nginx-dm.yaml]

 

Modify filebeat-k8s.yaml so the logs are output to Kafka instead of ES [kubectl apply -f filebeat-k8s.yaml]

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true
    #output.elasticsearch:
      #hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    output.kafka:
      enabled: true
      hosts: ["192.168.100.30:9092"]
      topic: "k8s"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
          limits:
            cpu: 50m
            memory: 500Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat

 

 

References:

 https://hub.kubeapps.com/charts/stable/graylog#!

https://blog.51cto.com/liujingyu/2537488?source=dra

https://blog.csdn.net/qq_33430322/article/details/89237140

https://blog.51cto.com/14143894/2438188?source=dra

