EFK (Elasticsearch + Fluentd + Kibana) Log Analysis Stack


EFK is not a single piece of software but a solution assembled from three open-source projects: Elasticsearch, Fluentd, and Kibana. Elasticsearch handles log storage and analysis, Fluentd handles log collection, and Kibana provides the UI. Working together, they cover a wide range of use cases and form one of today's mainstream log-analysis solutions.

This article has Fluentd tail log files in real time, ship the entries to Elasticsearch, and visualize them in Kibana.

Deploying EFK with Docker

Pull the Docker images

$ docker pull elasticsearch:7.10.1
$ docker pull kibana:7.10.1
$ docker pull fluent/fluentd:v1.12-debian-armhf-1

Create data directories for persistence

$ mkdir -p elasticsearch/data kibana/data fluentd/conf
$ chmod 777 elasticsearch/data kibana/data fluentd/conf

Directory layout

├── docker-compose-ek.yml
├── docker-compose-f.yml
├── elasticsearch
│   └── data
├── fluentd
│   ├── conf
│   │   └── fluent.conf
│   └── Dockerfile
└── kibana
    ├── data
    │   └── uuid
    └── kibana.yml

Increase vm.max_map_count on the host

$ vim /etc/sysctl.conf
	vm.max_map_count = 2621440
$ sysctl -p
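The new value only takes effect after `sysctl -p` (or a reboot). To apply it for the current boot without editing the file, and to verify what the kernel is actually using, a minimal sketch assuming a Linux host:

```shell
# Apply for the running kernel only (not persistent across reboots):
#   sysctl -w vm.max_map_count=2621440
# Verify the value currently in effect:
cat /proc/sys/vm/max_map_count
```

Elasticsearch refuses to start in production mode if this value is too low, so checking the live value is worth the extra command.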

Configure the log files for Fluentd to collect

$ cat fluentd/conf/fluent.conf

<source>
  @type forward
  port 24224                          # default Fluentd listen port
  bind 0.0.0.0
</source>

<source>
  @type tail
  path /usr/local/var/log/access.log  # absolute path of the log file to collect
  tag celery.log                      # tag that identifies this log
  refresh_interval 5
  pos_file /usr/local/var/log/fluent/access-log.pos
  <parse>
    @type none
  </parse>
</source>

<match celery.log>
  @type copy
  <store>
    @type elasticsearch
    host 192.168.1.11         # Elasticsearch address
    port 22131                # Elasticsearch port
    user elastic              # optional; remove if ES has no auth configured
    password elastic          # optional; remove if ES has no auth configured
    logstash_format true
    logstash_prefix celery
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 10s
    suppress_type_name true
  </store>
  <store>
    @type stdout
  </store>
</match>

// To collect more log files, duplicate the tail <source> and its <match> block above (the first <source>, the forward input, does not need to be copied);
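As a sketch, collecting a second log file (the app.log path, tag, and index prefix below are hypothetical) means adding another tail `<source>` with its own pos_file, plus a `<match>` for its tag:

```
<source>
  @type tail
  path /usr/local/var/log/app.log     # hypothetical second log file
  tag app.log
  refresh_interval 5
  pos_file /usr/local/var/log/fluent/app-log.pos
  <parse>
    @type none
  </parse>
</source>

<match app.log>
  @type elasticsearch
  host 192.168.1.11
  port 22131
  user elastic
  password elastic
  logstash_format true
  logstash_prefix app                 # writes to app-YYYYMMDD indices
  logstash_dateformat %Y%m%d
  suppress_type_name true
</match>
```

Each tail source needs a distinct pos_file so Fluentd tracks its read position per file.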

Kibana configuration file

$ vim ./kibana/kibana.yml

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
# server.basePath: "/efk"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
# elasticsearch.username: "kibana_system"
# elasticsearch.password: "kibana_system"

The official Fluentd Docker image ships without the fluent-plugin-elasticsearch gem, so rebuild the image with it installed:

$ cat fluentd/Dockerfile

FROM fluent/fluentd:v1.12-debian-armhf-1
USER root
RUN ["gem", "install", "fluent-plugin-elasticsearch"]
#USER fluent

# Build
$ docker build -f fluentd/Dockerfile -t fluentd:v1.12-debian-armhf-1.1 .

Docker Compose

$ cat docker-compose-f.yml

version: '2'
services:
  fluentd:
    image: fluentd:v1.12-debian-armhf-1.1
    container_name: fluentd
    mem_limit: 8G
    volumes:
      - ./fluentd/conf:/fluentd/etc # Fluentd config files
      - /usr/local/var/log:/usr/local/var/log # directory containing the logs to collect
    ports:
      - "22130:24224"
      - "22130:24224/udp"
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge

$ cat docker-compose-ek.yml

version: '2'
services:
  elasticsearch:
    image: elasticsearch:7.10.1
    container_name: elasticsearch
    mem_limit: 8G
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
    volumes:
      - /etc/localtime:/etc/localtime
      - ./elasticsearch/data:/usr/share/elasticsearch/data # Elasticsearch data directory
    ports:
      - "22131:9200"
    networks:
      - monitoring

  kibana:
    image: kibana:7.10.1
    container_name: kibana
    mem_limit: 4G
    volumes:
      - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml # Kibana config file
      - ./kibana/data:/usr/share/kibana/data # Kibana data directory
      - /etc/localtime:/etc/localtime
    links:
      - "elasticsearch"
    depends_on:
      - "elasticsearch"
    ports:
      - "22132:5601"
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge

Start the services

$ docker-compose -f docker-compose-f.yml up -d		# Fluentd
$ docker-compose -f docker-compose-ek.yml up -d		# Elasticsearch + Kibana
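Once both stacks are up, a quick reachability check against the published Elasticsearch port is useful; a sketch, with host and port taken from the compose file above:

```shell
# Compose publishes container port 9200 as host port 22131,
# so health checks from outside the Docker network use the host port.
ES_URL="http://192.168.1.11:22131/_cluster/health?pretty"
echo "$ES_URL"
# Against the live cluster (add -u elastic:<password> once security is enabled):
#   curl "$ES_URL"
```

A `"status"` of `green` or `yellow` (normal for a single-node cluster) means Elasticsearch is ready.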

Once the services are up, it is recommended to enable Elasticsearch security and set passwords:

$ docker exec -it elasticsearch bash
./bin/elasticsearch-setup-passwords interactive

--- Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:                     # set the password for user elastic
Reenter password for [elastic]:                   # repeat it
Enter password for [apm_system]:                  # set the password for user apm_system
Reenter password for [apm_system]:                # repeat it
Enter password for [kibana_system]:               # set the password for user kibana_system
Reenter password for [kibana_system]:             # repeat it
Enter password for [logstash_system]:             # set the password for user logstash_system
Reenter password for [logstash_system]:           # repeat it
Enter password for [beats_system]:                # set the password for user beats_system
Reenter password for [beats_system]:              # repeat it
Enter password for [remote_monitoring_user]:      # set the password for user remote_monitoring_user
Reenter password for [remote_monitoring_user]:    # repeat it
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

exit

Configure Fluentd's Elasticsearch credentials

$ vim ./fluentd/conf/fluent.conf

    user elastic              # the elastic user configured in Elasticsearch
    password elastic          # the password set for elastic above

Configure Kibana's Elasticsearch credentials

$ vim ./kibana/kibana.yml

elasticsearch.username: "kibana_system"		# the kibana_system user configured in Elasticsearch
elasticsearch.password: "kibana_system"		# the password set for kibana_system above

Restart the services

$ docker-compose -f docker-compose-f.yml down
$ docker-compose -f docker-compose-f.yml up -d
$ docker-compose -f docker-compose-ek.yml down
$ docker-compose -f docker-compose-ek.yml up -d
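After the restart, Fluentd flushes into daily indices named `<logstash_prefix>-<logstash_dateformat>` per fluent.conf, i.e. `celery-YYYYMMDD`. A sketch of confirming they exist:

```shell
# Today's expected index name, per logstash_prefix "celery" and dateformat %Y%m%d
idx="celery-$(date +%Y%m%d)"
echo "expecting index: $idx"
# Against the live cluster:
#   curl -u elastic:elastic "http://192.168.1.11:22131/_cat/indices/celery-*?v"
```

If no index appears, check Fluentd's stdout (the copy store above echoes every record) with `docker logs fluentd`.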

Configure an Nginx reverse proxy

$ vim nginx.conf

    location /efk/ {
          proxy_pass http://192.168.1.11:22132/;   # Kibana's published port from the compose file
    }
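Kibana 7 defaults to `server.rewriteBasePath: false`, so it expects the proxy to strip the `/efk` prefix — which the trailing slash on `proxy_pass` does. A slightly fuller location block (a sketch assuming the published Kibana port 22132; the extra headers are common practice, not part of the original setup):

```
location /efk/ {
    proxy_pass http://192.168.1.11:22132/;       # trailing slash strips the /efk prefix
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```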

// Configure Kibana
$ vim ./kibana/kibana.yml

	server.basePath: "/efk"		# /efk is the URL path prefix

Restart the services

$ docker-compose -f docker-compose-ek.yml down
$ docker-compose -f docker-compose-ek.yml up -d

Open the Nginx domain in a browser to reach the Kibana UI.

  • The username and password are the Elasticsearch credentials created earlier: elastic / elastic

After logging in, create an index pattern.


Typing a keyword lets the system auto-match existing indices.


Under index management, entering the keyword shows the index just created.

Home → Kibana Visualize and Analyze → Discover; enter a keyword to filter the logs.


  • Miscellaneous

Change the elastic password (which is also the Kibana login password)

// curl -H "Content-Type:application/json" -XPOST -u <elastic user>:<elastic password> 'http://192.168.1.11:22131/_xpack/security/user/elastic/_password' -d '{ "password" : "<new password>" }'
Example: curl -H "Content-Type:application/json" -XPOST -u elastic:elastic 'http://192.168.1.11:22131/_xpack/security/user/elastic/_password' -d '{ "password" : "123456" }'
// Note: the _xpack/security prefix is deprecated in 7.x; /_security/user/elastic/_password is the preferred path.

