Deploying EFK on Kubernetes 1.17.2 with Ceph


Introduction

Application and system logs help us understand what is happening inside the cluster, and they are also very useful for debugging problems and monitoring cluster health. Most applications produce logs of some kind; traditional applications usually write them to local log files. For containerized applications it is even simpler: just write the log messages to stdout and stderr. By default the container runtime writes these streams to a JSON file on the host, and the same logs can be viewed with docker logs or kubectl logs.
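
For example, a container's logs can be read either from the runtime on the node or through the API server; a quick sketch (the container, pod and namespace names are placeholders):
# docker logs <container-id>
# kubectl logs -f --tail=100 <pod-name> -n <namespace>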

The most popular log collection solution for Kubernetes is the Elasticsearch, Fluentd and Kibana (EFK) stack, which is also the approach currently recommended upstream.
Elasticsearch is a real-time, distributed and scalable search engine that supports full-text and structured search. It is typically used to index and search large volumes of log data, but it can also be used to search many other types of documents. Elasticsearch is usually deployed together with Kibana.

Kibana is a powerful data visualization dashboard for Elasticsearch; it lets you browse Elasticsearch log data through a web interface.
Fluentd is a popular open source data collector. We will install Fluentd on the Kubernetes cluster nodes to read the container log files, filter and transform the log data, and then deliver it to the Elasticsearch cluster, where it is indexed and stored.

Topology diagram

PS: My physical machines have limited resources, and I still need to deploy myweb, Prometheus, Jenkins and other workloads in this cluster, so here I only deploy EFK; under normal circumstances this setup is sufficient.
The plan: configure and start a scalable Elasticsearch cluster, then create a Kibana application in the Kubernetes cluster, and finally run Fluentd as a DaemonSet so that one Pod runs on every Kubernetes worker node.

Check the cluster status

Ceph cluster

# ceph -s
  cluster:
    id:     ed4d59da-c861-4da0-bbe2-8dfdea5be796
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
 
  services:
    mon: 3 daemons, quorum bs-k8s-harbor,bs-k8s-gitlab,bs-k8s-ceph
    mgr: bs-k8s-ceph(active), standbys: bs-k8s-harbor, bs-k8s-gitlab
    osd: 6 osds: 6 up, 6 in
 
  data:
    pools:   1 pools, 128 pgs
    objects: 92  objects, 285 MiB
    usage:   6.7 GiB used, 107 GiB / 114 GiB avail
    pgs:     128 active+clean

Cause: the warning appears because no application has been enabled on the pool.
Fix:
# ceph osd lspools
6 webapp
# ceph osd pool application enable webapp rbd
enabled application 'rbd' on pool 'webapp'
# ceph -s
......
    health: HEALTH_OK

Kubernetes cluster

# kubectl get pods --all-namespaces 
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6cf5b744d7-rxt86     1/1     Running   0          47h
kube-system   calico-node-25dlc                            1/1     Running   2          2d4h
kube-system   calico-node-49q4n                            1/1     Running   2          2d4h
kube-system   calico-node-4gmcp                            1/1     Running   1          2d4h
kube-system   calico-node-gt4bt                            1/1     Running   1          2d4h
kube-system   calico-node-svcdj                            1/1     Running   1          2d4h
kube-system   calico-node-tkrqt                            1/1     Running   1          2d4h
kube-system   coredns-76b74f549-dkjxd                      1/1     Running   0          47h
kube-system   dashboard-metrics-scraper-64c8c7d847-dqbx2   1/1     Running   0          46h
kube-system   kubernetes-dashboard-85c79db674-bnvlk        1/1     Running   0          46h
kube-system   metrics-server-6694c7dd66-hsbzb              1/1     Running   0          47h
kube-system   traefik-ingress-controller-m8jf9             1/1     Running   0          47h
kube-system   traefik-ingress-controller-r7cgl             1/1     Running   0          47h
myweb         rbd-provisioner-9cf46c856-b9pm9              1/1     Running   1          7h2m
myweb         wordpress-6677ff7bd-sc45d                    1/1     Running   0          6h13m
myweb         wordpress-mysql-6d7bd496b4-62dps             1/1     Running   0          5h51m
# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%  
20.0.0.201   563m         14%    1321Mi          103%    
20.0.0.202   359m         19%    1288Mi          100%    
20.0.0.203   338m         18%    1272Mi          99%     
20.0.0.204   546m         14%    954Mi           13%     
20.0.0.205   516m         13%    539Mi           23%     
20.0.0.206   375m         9%     1123Mi          87%  

Create the namespace

All of the EFK components will go into the assembly namespace (assembly: component).

# vim namespace.yaml 

[root@bs-k8s-master01 efk]# pwd
/data/k8s/efk
[root@bs-k8s-master01 efk]# cat namespace.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   namespace.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: assembly
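
The listing above only shows the manifest; a minimal sketch of applying and verifying it:
# kubectl apply -f namespace.yaml
# kubectl get ns assembly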

Create a dynamic RBD StorageClass

Create the assembly pool

On bs-k8s-ceph:
# ceph osd pool create assembly 128
pool 'assembly' created
# ceph auth get-or-create client.assembly mon 'allow r' osd 'allow class-read, allow rwx pool=assembly' -o ceph.client.assemply.keyring
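
The capabilities granted to the new client can be double-checked on the Ceph node; a quick sketch:
# ceph auth get client.assembly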

Create the StorageClass

On bs-k8s-master01:
# ceph auth get-key client.assembly | base64
QVFBWjIzRmVDa0RnSGhBQWQ0TXJWK2YxVThGTUkrMjlva1JZYlE9PQ==
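
The key used for ceph-admin-secret below is produced the same way, just from the admin user; a sketch, assuming the default client.admin user exists:
# ceph auth get-key client.admin | base64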
# cat ceph-efk-secret.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   ceph-jenkins-secret.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: assembly 
data:
  key: QVFBaUptcGU0R3RDREJBQWhhM1E3NnowWG5YYUl1VVI2MmRQVFE9PQ==
type: kubernetes.io/rbd
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-assembly-secret
  namespace: assembly 
data:
  key: QVFBWjIzRmVDa0RnSGhBQWQ0TXJWK2YxVThGTUkrMjlva1JZYlE9PQ==
type: kubernetes.io/rbd
# kubectl apply -f ceph-efk-secret.yaml
secret/ceph-admin-secret created
secret/ceph-assembly-secret created
# cat ceph-efk-storageclass.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   ceph-jenkins-storageclass.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-efk
  namespace: assembly
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 20.0.0.205:6789,20.0.0.206:6789,20.0.0.207:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: assembly
  pool: assembly
  fsType: xfs
  userId: assembly
  userSecretName: ceph-assembly-secret
  imageFormat: "2"
  imageFeatures: "layering"
# kubectl apply -f ceph-efk-storageclass.yaml
storageclass.storage.k8s.io/ceph-efk created
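
Optionally confirm that the StorageClass is registered and points at the expected provisioner:
# kubectl get storageclass ceph-efk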

Integrating Ceph RBD with Kubernetes requires a third-party provisioner plugin:
# cat external-storage-rbd-provisioner.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   external-storage-rbd-provisioner.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: assembly
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: assembly
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: assembly
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: assembly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: assembly

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: assembly
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "harbor.linux.com/rbd/rbd-provisioner:latest"
        imagePullPolicy: IfNotPresent
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      imagePullSecrets: 
        - name: k8s-harbor-login
      serviceAccount: rbd-provisioner
      nodeSelector:             ## node selector: run only on nodes carrying this label
        rbd: "true"
# kubectl apply -f external-storage-rbd-provisioner.yaml
serviceaccount/rbd-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner configured
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.apps/rbd-provisioner created
# kubectl get pods -n assembly
NAME                              READY   STATUS    RESTARTS   AGE
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          71s
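
The Deployment schedules onto nodes labelled rbd=true. If the target node has not been labelled yet, something like the following is needed (20.0.0.204 is only an example here, matching the node used later):
# kubectl label nodes 20.0.0.204 rbd=true
# kubectl get nodes -l rbd=true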

Create Elasticsearch

Create elasticsearch-svc.yaml
This defines a Service named elasticsearch with the label selector app=elasticsearch. When the Elasticsearch StatefulSet is associated with this Service, the Service returns DNS A records for the Elasticsearch Pods carrying the label app=elasticsearch. Setting clusterIP: None makes it a headless service. Finally, ports 9200 and 9300 are defined, used for the REST API and for inter-node communication respectively.
# cat elasticsearch-svc.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   elasticsearch-svc.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: assembly
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
# kubectl apply -f elasticsearch-svc.yaml
service/elasticsearch created
The Pods now have a headless service and a stable domain name, elasticsearch.assembly.svc.cluster.local. Next, the actual Elasticsearch Pods are created through a StatefulSet.
A Kubernetes StatefulSet gives Pods a stable identity and persistent storage. Elasticsearch needs stable storage so that its data survives Pod rescheduling or restarts, which is why a StatefulSet is used to manage the Pods.

Create a dynamic PV
# cat elasticsearch-pvc.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-18
#FileName:                   elasticsearch-pvc.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-elasticsearch
  namespace: assembly
  labels:
    app: elasticsearch
spec:
  storageClassName: ceph-efk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
# kubectl apply -f elasticsearch-pvc.yaml
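
Once applied, the claim should be bound by the rbd provisioner within a few seconds; a quick check:
# kubectl get pvc -n assembly
# kubectl get pv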
# cat elasticsearch-statefulset.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   elasticsearch-storageclass.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: assembly
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      imagePullSecrets: 
        - name: k8s-harbor-login
      containers:
      - name: elasticsearch
        image: harbor.linux.com/efk/elasticsearch-oss:6.4.3
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
         # - name: discovery.zen.ping.unicast.hosts
         #   value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
         # - name: discovery.zen.minimum_master_nodes
         #   value: "2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: ceph-elasticsearch 
      nodeSelector:             ## node selector: run only on nodes carrying this label
        elasticsearch: "true"  
Label the node:
# kubectl label nodes 20.0.0.204 elasticsearch=true
node/20.0.0.204 labeled
# kubectl apply -f elasticsearch-statefulset.yaml
# kubectl get pods -n assembly
NAME                              READY   STATUS    RESTARTS   AGE
es-cluster-0                      1/1     Running   0          2m15s
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          37m
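
Behind the bound PV there should now be an RBD image in the assembly pool; this can be verified from the Ceph side, roughly:
# rbd ls -p assembly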

Once the Pod is deployed, we can check whether the Elasticsearch cluster is running properly by querying its REST API. Use the following command to forward local port 9200 to the corresponding port on the Elasticsearch node (es-cluster-0):
# kubectl port-forward es-cluster-0 9200:9200 --namespace=assembly
Forwarding from 127.0.0.1:9200 -> 9200
#  curl http://localhost:9200/_cluster/state?pretty
{
  "cluster_name" : "k8s-logs",
  "compressed_size_in_bytes" : 234,
  "cluster_uuid" : "PopKT5FLROqyBYlRvvr7kw",
  "version" : 2,
  "state_uuid" : "ubOKSevGRVe4iR5JXODjDA",
  "master_node" : "vub5ot69Thu8igd4qeiZBg",
  "blocks" : { },
  "nodes" : {
    "vub5ot69Thu8igd4qeiZBg" : {
      "name" : "es-cluster-0",
      "ephemeral_id" : "9JjNmdOyRomyYsHAO1IQ5Q",
      "transport_address" : "172.20.46.85:9300",
      "attributes" : { }
    }
  },

Create Kibana

With the Elasticsearch cluster up, the next step is to deploy the Kibana service. Create a file named kibana.yaml.

# cat kibana.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   kibana.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: assembly
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: assembly
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      imagePullSecrets: 
        - name: k8s-harbor-login
      containers:
      - name: kibana
        image: harbor.linux.com/efk/kibana-oss:6.4.3
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
      nodeSelector:             ## node selector: run only on nodes carrying this label
        kibana: "true"  
Label the node:
# kubectl label nodes 20.0.0.204 kibana=true
node/20.0.0.204 labeled
# kubectl apply -f kibana.yaml
service/kibana created
deployment.apps/kibana created
# kubectl get pods -n assembly
NAME                              READY   STATUS    RESTARTS   AGE
es-cluster-0                      1/1     Running   0          8m4s
kibana-598987f498-k8ff9           1/1     Running   0          70s
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          43m
The file defines two resources, a Service and a Deployment. For easier testing, the Service is of type NodePort. The Kibana Pod configuration is straightforward; the only thing to note is the ELASTICSEARCH_URL environment variable, which points Kibana at the Elasticsearch cluster endpoint and port. Kubernetes DNS is used directly: the endpoint corresponds to the service name elasticsearch, and since it is a headless service, the domain resolves to the list of Elasticsearch Pod IP addresses.
# kubectl get svc --namespace=assembly
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None            <none>        9200/TCP,9300/TCP   50m
kibana          NodePort    10.68.123.234   <none>        5601:22693/TCP      2m22s
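
Because elasticsearch is a headless service (CLUSTER-IP None above), its DNS name resolves directly to the Pod IPs rather than to a cluster IP. A throwaway busybox Pod can confirm this; a sketch, the image tag is only a suggestion:
# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -n assembly -- nslookup elasticsearch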

Proxy Kibana

Here Kibana is exposed through the Traefik proxy.

# cat kibana-ingreeroute.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   kibana-ingreeroute.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kibana
  namespace: assembly
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`kibana.linux.com`)
    kind: Rule
    services:
    - name: kibana
      port: 5601
# kubectl apply -f kibana-ingreeroute.yaml
ingressroute.traefik.containo.us/kibana created

The Traefik proxy works; add a hosts entry on the local machine so the host name resolves.
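
The hosts entry simply maps the Ingress host name to a node running the Traefik ingress controller; something like the following line in /etc/hosts (the IP is a placeholder for one of your Traefik nodes):
<traefik-node-ip>   kibana.linux.com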

Web access succeeds!

Create Fluentd

Fluentd is an efficient log aggregator written in Ruby that scales well. For most organizations, Fluentd is efficient enough and consumes relatively few resources. Another tool, Fluent Bit, is even lighter-weight and uses fewer resources, but its plugin ecosystem is not as rich as Fluentd's. Overall, Fluentd is more mature and more widely used, so it is the log collection tool used here.

How Fluentd works

Fluentd scrapes log data from a given set of sources, processes it (transforming it into a structured data format), and forwards it to other services such as Elasticsearch or object storage. Fluentd supports more than 300 log storage and analysis services, so it is very flexible in this respect. The main steps are:

  • Fluentd first fetches data from multiple log sources
  • it structures and tags that data
  • it then sends the data to one or more target services according to the matching tags

Fluentd topology diagram

Configuration

A configuration file tells Fluentd how to collect and process data.

Log source configuration

For example, to collect all container logs on the Kubernetes nodes, the following log source configuration is needed:

<source>
@id fluentd-containers.log
@type tail
path /var/log/containers/*.log
pos_file /var/log/fluentd-containers.log.pos
time_format %Y-%m-%dT%H:%M:%S.%NZ
tag raw.kubernetes.*
format json
read_from_head true
</source>

Some of the parameters in the configuration above:

  • id: a unique identifier for this log source, which can be used for further filtering and routing of the structured log data.
  • type: a built-in Fluentd directive; tail means Fluentd keeps reading new data from the last read position, while http collects data via GET requests.
  • path: a tail-specific parameter telling Fluentd to collect all logs under /var/log/containers, the directory Docker uses on Kubernetes nodes to store the stdout logs of running containers.
  • pos_file: a checkpoint file; if the Fluentd process restarts, it resumes log collection from the position recorded in this file.
  • tag: a custom string used to match the log source against targets or filters; Fluentd routes log data by matching source and destination tags.

Routing configuration

That covers the log source configuration; next, how the log data is sent to Elasticsearch:

<match **>
@id elasticsearch
@type elasticsearch
@log_level info
include_tag_key true
type_name fluentd
host "#{ENV['OUTPUT_HOST']}"
port "#{ENV['OUTPUT_PORT']}"
logstash_format true
<buffer>
@type file
path /var/log/fluentd-buffers/kubernetes.system.buffer
flush_mode interval
retry_type exponential_backoff
flush_thread_count 2
flush_interval 5s
retry_forever
retry_max_interval 30
chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
overflow_action block
</buffer>
  • match: identifies a target; it is followed by a pattern matched against the log source tags. Here we want to capture all logs and send them to Elasticsearch, so it is set to **.
  • id: a unique identifier for the target.
  • type: the output plugin identifier. We output to Elasticsearch, so it is set to elasticsearch, one of Fluentd's bundled plugins.
  • log_level: the log level to capture; info means logs at that level or above (INFO, WARNING, ERROR) are routed to Elasticsearch.
  • host/port: the Elasticsearch address; authentication can also be configured here, but our Elasticsearch does not require it, so host and port are enough.
  • logstash_format: Elasticsearch builds inverted indexes over the log data for searching; with logstash_format set to true, Fluentd forwards the structured log data in Logstash format.
  • Buffer: allows Fluentd to buffer data when the target is unavailable, for example during a network failure or when Elasticsearch is down. The buffer configuration also helps reduce disk I/O.

To collect the Kubernetes cluster's logs, Fluentd is deployed with a DaemonSet controller so that it collects logs directly from the Kubernetes nodes, ensuring that one Fluentd container is always running on every node in the cluster.

First, the Fluentd configuration file is provided through a ConfigMap object; create a fluentd-configmap.yaml file.

# cat fluentd-configmap.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   fluentd-configmap.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  namespace: assembly
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      localtime
      tag raw.kubernetes.*
      format json
      read_from_head true
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
  system.input.conf: |-
    # Logs from systemd-journal for interesting services.
    <source>
      @id journald-docker
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag docker
    </source>
    <source>
      @id journald-kubelet
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag kubelet
    </source>
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @type forward
    </source>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host elasticsearch
      port 9200
      logstash_format true
      request_timeout    30s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
# kubectl apply -f fluentd-configmap.yaml
configmap/fluentd-config created

The configuration above sets up collection of the Docker container log directory as well as the docker and kubelet service logs; the collected data is processed and then sent to the elasticsearch:9200 service.

Then create a fluentd-daemonset.yaml file:

# cat fluentd-daemonset.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   fluentd-daemonset.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: assembly
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: assembly
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: assembly
  labels:
    k8s-app: fluentd-es
    version: v2.0.4
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.0.4
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.0.4
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: fluentd-es
      imagePullSecrets: 
        - name: k8s-harbor-login
      containers:
      - name: fluentd-es
        image: harbor.linux.com/efk/fluentd-elasticsearch:v2.0.4
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /data/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-config
      nodeSelector:             ## node selector: run only on nodes carrying this label
        fluentd: "true" 
Label the nodes:
# kubectl apply -f fluentd-daemonset.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es created
# kubectl label nodes 20.0.0.204 fluentd=true
node/20.0.0.204 labeled
# kubectl label nodes 20.0.0.205 fluentd=true
node/20.0.0.205 labeled
# kubectl label nodes 20.0.0.206 fluentd=true
node/20.0.0.206 labeled
# kubectl get pods -n assembly -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
es-cluster-0                      1/1     Running   0          30m     172.20.46.85    20.0.0.204   <none>           <none>
fluentd-es-5fgt7                  1/1     Running   0          5m36s   172.20.46.87    20.0.0.204   <none>           <none>
fluentd-es-l22nj                  1/1     Running   0          5m22s   172.20.145.9    20.0.0.205   <none>           <none>
fluentd-es-pnqk8                  1/1     Running   0          5m18s   172.20.208.29   20.0.0.206   <none>           <none>
kibana-598987f498-k8ff9           1/1     Running   0          23m     172.20.46.86    20.0.0.204   <none>           <none>
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          65m     172.20.46.84    20.0.0.204   <none>           <none>

In the Fluentd configuration above, the collected logs use the Logstash format, so in Kibana's index pattern text box simply enter logstash-* to match all the log data in Elasticsearch, then click Next to go to the following page:

On that page, configure which field to use for filtering the log data by time: select the @timestamp field from the drop-down list and click Create index pattern. Once created, click Discover in the left navigation menu and you will see histograms and the most recently collected log data.
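
If no logstash-* pattern shows up, it is worth checking from Elasticsearch itself that Fluentd is writing indices. With the earlier port-forward to es-cluster-0 still running, a quick sketch:
# curl http://localhost:9200/_cat/indices?v
Indices named logstash-<date> should appear once Fluentd has shipped its first batch of logs.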

This completes the EFK deployment.

Enable the pool application

Finally, enable the rbd application on the assembly pool as well; otherwise ceph -s reports the same HEALTH_WARN seen earlier.

# ceph osd pool application enable assembly rbd
enabled application 'rbd' on pool 'assembly'

