ELK Stack Installation and Configuration


ELK Stack Overview

  1. ELK is the combination of three open-source projects: Elasticsearch, Logstash, and Kibana. For real-time data search and analytics the three are usually deployed together, and all of them eventually came under the Elastic.co umbrella, hence the acronym.
  2. Elasticsearch is an open-source distributed search engine. Its features include: distribution, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing.
  3. Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
  4. Kibana is an open-source, free tool that provides a friendly web UI for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

Workflow

  Deploy Logstash on every server whose logs need collecting; it runs as a Logstash agent (logstash shipper) that monitors, filters, and collects the logs and pushes the filtered events to Redis. A Logstash indexer then pulls the events together and hands them to the full-text search service Elasticsearch, where custom searches can be run and the results displayed through Kibana.

ELK Installation and Configuration

1. System and software versions
OS: CentOS 6.5 x86_64
elasticsearch: elasticsearch-5.6.3.zip
logstash: logstash-2.3.4.tar.gz
kibana: kibana-5.6.3-linux-x86_64.tar.gz
redis: redis-3.2.tar.gz
JDK: jdk-8u73-linux-x64.tar.gz

2. Server layout: install the ELK components on two servers
A-client (the server whose Tomcat logs need analyzing): install Logstash (logstash agent)
B-master (ELK server): install Elasticsearch, Logstash (logstash indexer), Kibana, and Redis

Package install directory: /data/elk

3. Create the user

# groupadd app && useradd -g app -d /data/elk elk

4. Install and configure the JDK
Both Logstash and Elasticsearch require a JDK.

# su - elk
$ tar zxf jdk-8u73-linux-x64.tar.gz
$ vim .bash_profile   (add or modify the following)

JAVA_HOME=/data/elk/jdk1.8.0_73
PATH=${JAVA_HOME}/bin:$PATH:$HOME/bin

export PATH JAVA_HOME

$ . .bash_profile
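A quick, self-contained way to confirm the profile changes put the JDK first on the PATH (a sketch assuming the jdk1.8.0_73 path above; on the real host you would also run `java -version`):

```shell
# simulate the .bash_profile additions above
JAVA_HOME=/data/elk/jdk1.8.0_73
PATH="${JAVA_HOME}/bin:${PATH}"
# the JDK bin directory should now lead the PATH
echo "$PATH" | grep -q "^${JAVA_HOME}/bin:" && echo "PATH ok"
```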

5. One-step Ansible install: deploy and configure the Logstash client on server A

# The directory layout is as follows
# tree ansible-logstash-playbook
ansible-logstash-playbook
├── hosts
├── jdk.retry
├── jdk.yml
└── roles
    ├── jdk
    │   ├── defaults
    │   ├── files
    │   │   └── jdk-8u131-linux-x64.tar.gz
    │   ├── handlers
    │   ├── meta
    │   ├── tasks
    │   │   └── main.yml
    │   ├── templates
    │   └── vars
    └── logstash
        ├── defaults
        ├── files
        │   └── logstash-2.3.4.tar.gz
        ├── handlers
        ├── meta
        ├── tasks
        │   └── main.yml
        ├── templates
        │   └── logstash_agent.conf.j2
        └── vars
            └── main.yml

Install the JDK with Ansible

# cat ansible-logstash-playbook/roles/jdk/tasks/main.yml 
- name: create group
  group: name=app system=yes
- name: create logstash-user
  user: name=elk group=app home=/data/elk system=yes
- name: create the soft directory if it doesn't exist
  file:
    path: /data/elk/soft
    state: directory
    mode: 0755
    owner: elk
    group: app
- name: create the scripts directory if it doesn't exist
  file:
    path: /data/elk/scripts
    state: directory
    mode: 0755
    owner: elk
    group: app
- name: jdk file to dest host
  copy:
    src: /data/soft/jdk-8u131-linux-x64.tar.gz
    dest: /data/elk/soft/
    owner: elk
    group: app
- name: tar jdk-8u131-linux-x64.tar.gz
  shell: chdir=/data/elk/soft tar zxf jdk-8u131-linux-x64.tar.gz && chown -R elk.app /data/elk/
- name: java_profile config
  # note: `source` inside the shell module only affects that task's subshell, so it is omitted
  shell: /bin/echo {{ item }} >> /data/elk/.bash_profile
  with_items:
    - "export JAVA_HOME=/data/elk/soft/jdk1.8.0_131"
    - PATH=\${JAVA_HOME}/bin:\$PATH:\$HOME/bin
    - export PATH

Install the Logstash client with Ansible

# cat ansible-logstash-playbook/roles/logstash/tasks/main.yml 
- name: logstash file to dest host
  copy:
    src: logstash-2.3.4.tar.gz
    dest: /data/elk/soft/
    owner: elk
    group: app
- name: tar logstash-2.3.4.tar.gz
  shell: chdir=/data/elk/soft/ tar zxf logstash-2.3.4.tar.gz
- name: mv logstash-2.3.4
  shell: mv /data/elk/soft/logstash-2.3.4 /data/elk/logstash && chown -R elk.app /data/elk
- name: touch conf
  file: path=/data/elk/logstash/conf owner=elk group=app state=directory
- name: logstash conf file to dest host
  template:
    src: logstash_agent.conf.j2
    dest: /data/elk/logstash/conf/logstash_agent.conf
    owner: elk
    group: app
- name: start logstash client
  shell: su - elk -c "nohup /data/elk/logstash/bin/logstash agent -f /data/elk/logstash/conf/logstash_agent.conf &"
# cat ansible-logstash-playbook/roles/logstash/templates/logstash_agent.conf.j2 
input {
        file {
                type => "tomcat log"
                add_field => { "host" => "{{IP}}" }   # {{ IP }} is defined in vars/main.yml
                path => ["/data/tomcat/apache-tomcat-8088/logs/catalina.out"]   # path to the Tomcat log
        }
}
output {
        redis {
                host => "10.19.182.215"   # Redis server IP
                port => "6379"            # Redis server port
                data_type => "list"       # Redis acts as the queue; the key type is list
                key => "tomcat:redis"     # key name, customizable
        }
}
# cat ansible-logstash-playbook/roles/logstash/vars/main.yml 
IP: "{{ ansible_eth0['ipv4']['address'] }}"
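Before rolling the template out it is worth validating the rendered agent config; Logstash 2.x ships a `--configtest` flag for this. A sketch, assuming the install path created by the role above:

```shell
$ /data/elk/logstash/bin/logstash agent -f /data/elk/logstash/conf/logstash_agent.conf --configtest
```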

Run the deployment with one Ansible command

# cat jdk.yml 
- hosts: logstash
  user: root
  roles:
    - jdk
    - logstash
# ansible-playbook jdk.yml -i hosts 
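The play targets a `logstash` host group, so the `hosts` inventory file passed via `-i` needs that group defined. A minimal sketch (the IP and SSH user are placeholders, not taken from the original setup):

```ini
[logstash]
10.19.0.10 ansible_ssh_user=root
```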


6. Server side: install and configure Elasticsearch and Redis

$ unzip elasticsearch-5.6.3.zip
$ mv elasticsearch-5.6.3 elasticsearch
$ mkdir elasticsearch/{logs,data}
$ vim elasticsearch/config/elasticsearch.yml   # modify the following
cluster.name: server
node.name: node-1
path.data: /data/elk/elasticsearch/data
path.logs: /data/elk/elasticsearch/logs
network.host: 10.19.86.42
http.port: 9200
http.cors.enabled: true           # enable cross-origin access
http.cors.allow-origin: "*"       # required by elasticsearch-head
discovery.zen.ping.unicast.hosts: ["10.19.33.42", "10.19.22.215", "10.19.11.184"]
discovery.zen.minimum_master_nodes: 2   # (master-eligible nodes / 2) + 1

# Adjust the heap size; the default is 2g. Recommended: half the physical RAM, and no more than 32g.
$ vim elasticsearch/config/jvm.options 
-Xms20g
-Xmx20g

Kernel and limits tuning (run as root):
# echo 511 > /proc/sys/net/core/somaxconn
# echo 262144 > /proc/sys/vm/max_map_count
# cat /etc/security/limits.d/90-nproc.conf 
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     102400
root       soft    nproc     unlimited

$ ./elasticsearch/bin/elasticsearch -d
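Once the nodes are up, check cluster health. On a live node you would query the REST API directly; the sketch below parses a canned sample response (the JSON body is illustrative, not captured from this cluster) to show the field to watch:

```shell
# on the real server: curl -s 'http://10.19.86.42:9200/_cluster/health?pretty'
# here we extract the status field from a sample response
cat <<'EOF' | grep -o '"status" : "[a-z]*"'
{
  "cluster_name" : "server",
  "status" : "green",
  "number_of_nodes" : 3
}
EOF
# → "status" : "green"
```

A `green` status means all primary and replica shards are allocated; `yellow` means replicas are missing.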

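Step 6's heading also covers Redis, which queues events between the agent and the indexer below, but the original walkthrough does not show the install. A hedged build-from-source sketch using the redis-3.2 tarball listed in step 1 (assumes gcc and make are present; run as root):

```shell
# assumes redis-3.2.tar.gz from step 1 is in the current directory
tar zxf redis-3.2.tar.gz
cd redis-3.2 && make
# in redis.conf, bind the listener to the server address, e.g.: bind 10.19.86.42
src/redis-server redis.conf &
```
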
7. Server side: configure the Logstash indexer

[elk@10-19-86-42 ~]$ cat logstash/conf/logstash_tomcat.conf 
input {
    redis {
        host => "10.19.86.42"
        port => "6379"
        data_type => "list"
        key => "tomcat:redis"
        type => "redis-input"
    }
}

filter {
    grok {
        match => { "message" => "^%{TIMESTAMP_ISO8601:[@metadata][timestamp]}\s+%{LOGLEVEL:level}\s+%{GREEDYDATA:class}\s+-\s+%{GREEDYDATA:msg}" }
        #match => { "message" => "(^.+Exception:.+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)" }   # alternative pattern for Java stack traces
    }
    mutate {
        # "fieldname" is a placeholder; split only applies if such a field exists
        split => { "fieldname" => "," }
    }
    date {
        # the grok above stores the timestamp in [@metadata][timestamp]
        match => [ "[@metadata][timestamp]", "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
}

output {
    elasticsearch {
        hosts => ["10.19.86.42:9200","10.19.77.184:9200","10.19.182.215:9200"]
        workers => 2                  # number of output worker threads
        flush_size => 50000
        idle_flush_time => 1
        index => "catalina-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
}
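The grok pattern above expects catalina.out lines that start with an ISO8601 timestamp followed by a log level. A quick sanity check of log lines against that shape, using a plain-regex approximation of `%{TIMESTAMP_ISO8601}` and `%{LOGLEVEL}` (the sample line is invented):

```shell
line='2018-01-15 10:23:45,123  INFO  com.example.App - Server startup'
# rough equivalent of ^%{TIMESTAMP_ISO8601}\s+%{LOGLEVEL}
echo "$line" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}[ T][0-9]{2}:[0-9]{2}:[0-9]{2}(,[0-9]+)?[[:space:]]+(TRACE|DEBUG|INFO|WARN|ERROR|FATAL)' \
  && echo "line matches" || echo "line does not match"
```

Lines that do not match the pattern get a `_grokparsefailure` tag and keep the event's receive time instead of the log timestamp.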

Start Logstash via supervisor

$ cat /etc/supervisor/conf.d/logstash_tomcat.conf 
# start logstash
[program:logstash-tomcat]
environment=JAVA_HOME="/data/elk/jdk1.8.0_131"
directory=/data/elk/logstash
command=/data/elk/logstash/bin/logstash -w 24 -b 5000 -f /data/elk/logstash/conf/logstash_tomcat.conf
autostart = true
startsecs = 5
user = elk
group = app
stdout_logfile = /data/elk/logs/logstash_tomcat.log
stderr_logfile = /data/elk/logs/logstash_tomcat_err.log

Note: start multiple logstash indexer instances to consume the Redis queue in parallel.

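One way to run several identical indexer processes under supervisor is `numprocs` (a sketch; the program section mirrors the one above, and `process_name` must include `process_num` when `numprocs` > 1):

```ini
[program:logstash-tomcat]
; run two identical indexer processes consuming the same Redis list
numprocs = 2
process_name = %(program_name)s_%(process_num)02d
command=/data/elk/logstash/bin/logstash -w 24 -b 5000 -f /data/elk/logstash/conf/logstash_tomcat.conf
```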

8. Install and configure Kibana

$ tar zxf kibana-5.6.3-linux-x86_64.tar.gz 
$ vim kibana-5.6.3-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "10.19.11.42"
elasticsearch.url: "http://10.19.11.42:9200"

Start Kibana via supervisor

$ cat /etc/supervisor/conf.d/kibana.conf 
# start kibana
[program:kibana]
environment=JAVA_HOME="/data/elk/jdk1.8.0_131"

directory=/data/elk/kibana-5.6.3-linux-x86_64
command=/data/elk/kibana-5.6.3-linux-x86_64/bin/kibana
autostart = true
startsecs = 5
user = elk
group = app
stdout_logfile = /data/elk/logs/kibana.log
stderr_logfile = /data/elk/logs/kibana_err.log


Additional notes:

1. Install elasticsearch-head

# yum install -y git npm
# git clone git://github.com/mobz/elasticsearch-head.git
# cd elasticsearch-head && npm install    # grunt and grunt-cli come from npm, not yum
# vim Gruntfile.js
        connect: {
            server: {
                options: {
                    hostname: '10.19.86.42',
                    port: 9100,
                    base: '.',
                    keepalive: true
                }
            }
        }

Start elasticsearch-head

$ cd /data/elk/elasticsearch-head/node_modules/grunt/bin && nohup ./grunt server &

 

