Getting Started with the ELK Stack (Part 1): Deploying and Using ELK


I. ELK Stack Overview

1. About ELK

Chinese guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details

The ELK Stack consists of ElasticSearch, Logstash, and Kibana.

ElasticSearch is a search engine used to search, analyze, and store logs. It is distributed, meaning it can scale horizontally, discover nodes automatically, and shard indices automatically. Documentation: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html

Logstash collects logs, parses them into JSON, and hands them to ElasticSearch.

Kibana is a data visualization component that presents the processed results through a web interface.

Beats serves here as a lightweight log shipper; the Beats family actually has five members.

Early ELK architectures used Logstash to both collect and parse logs, but Logstash is comparatively heavy on memory, CPU, and I/O. Compared with Logstash, the CPU and memory footprint of Beats is almost negligible.

x-pack is a paid extension pack for the Elastic Stack that bundles security, alerting, monitoring, reporting, and graph features.

2. ELK architecture diagram:

II. Deploying Elasticsearch

1. Install the JDK

Method 1: install the JDK with yum
[root@linux-node1 ~]# yum install -y java
[root@linux-node1 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

Method 2: install the JDK from the official tarball
Download

[root@linux-node1 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz

Configure the Java environment
[root@linux-node1 ~]# tar zxf jdk-8u151-linux-x64.tar.gz -C /usr/local/
[root@linux-node1 ~]# ln -s /usr/local/jdk1.8.0_151 /usr/local/jdk

[root@linux-node1 ~]# vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@linux-node1 ~]# source /etc/profile
[root@linux-node1 ~]# java -version

★★★★ Note: the JDK must also be installed on the linux-node2 node.

2. Install Elasticsearch

elasticsearch must also be installed on the linux-node2 node.
Installing elasticsearch with yum can be very slow, so downloading the RPM first is recommended: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm

(1) Install elasticsearch from the RPM:

Install elasticsearch
[root@linux-node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
[root@linux-node1 ~]# yum install -y elasticsearch-6.0.0.rpm 

Configure elasticsearch. linux-node2 is given the same cluster configuration, and the nodes find each other through the cluster name. If automatic discovery does not work, switch to listing the nodes explicitly as unicast hosts.
[root@linux-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-cluster    #cluster name
node.name: elk-node1         #node name; must be unique within the cluster
path.data: /data/elkdata     #data path
path.logs: /data/logs        #log path
bootstrap.memory_lock: true  #lock the ES memory so it is never swapped out
network.host: 192.168.56.11  #network listen address
http.port: 9200              #port for user access; 9300 is used for inter-node communication
discovery.zen.ping.unicast.hosts: ["192.168.56.11","192.168.56.12"]  #unicast discovery (configuring one host is enough)

★★★ Note: with memory locking enabled, the node needs more than 2 GB of RAM, otherwise elasticsearch will fail to start. To enable memory locking on 6.x, the following change is required:
[root@linux-node1 ~]# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# mkdir /data/{elkdata,logs}   #create the data and log directories
[root@linux-node1 ~]# chown elasticsearch.elasticsearch /data -R
[root@linux-node1 ~]# systemctl start elasticsearch.service
[root@linux-node1 ~]# netstat -tulnp |grep java
tcp6       0      0 192.168.56.11:9200      :::*                    LISTEN      26866/java          
tcp6       0      0 192.168.56.11:9300      :::*                    LISTEN      26866/java          

Copy the configuration file to linux-node2
[root@linux-node1 ~]# scp /etc/elasticsearch/elasticsearch.yml 192.168.56.12:/etc/elasticsearch/
[root@linux-node2 ~]# vim /etc/elasticsearch/elasticsearch.yml
Change:
node.name: elk-node2
network.host: 192.168.56.12
[root@linux-node2 ~]# mkdir /data/{elkdata,logs}
[root@linux-node2 ~]# chown elasticsearch.elasticsearch /data -R
[root@linux-node2 ~]# systemctl start elasticsearch.service
[root@linux-node2 ~]# netstat -tulnp |grep java
tcp6       0      0 192.168.56.12:9200      :::*                    LISTEN      16346/java          
tcp6       0      0 192.168.56.12:9300      :::*                    LISTEN      16346/java          

(2) Install elasticsearch with yum

1. Download and install the GPG key
[root@linux-node1 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

2. Add the yum repository
[root@linux-node1 ~]# vim /etc/yum.repos.d/es.repo 
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

3. Install elasticsearch
[root@hadoop-node1 ~]# yum install -y elasticsearch

3. Elasticsearch cluster configuration and monitoring

The state of the elasticsearch cluster can be inspected with the following commands:

[root@linux-node1 ~]# curl http://192.168.56.11:9200/_cluster/health?pretty=true
{
  "cluster_name" : "elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

[root@linux-node2 ~]# curl http://192.168.56.12:9200/_cluster/health?pretty=true
{
  "cluster_name" : "elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

[root@linux-node1 ~]# curl  -i -XGET 'http://192.168.56.11:9200/_count?'   #check what ES currently holds
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 71

{"count":0,"_shards":{"total":0,"successful":0,"skipped":0,"failed":0}}
Explanation:
The HTTP status is 200 (success); the count of 0 shows that no documents have been indexed yet.

curl http://192.168.56.11:9200/_cluster/health?pretty    # cluster health check
curl http://192.168.56.11:9200/_cluster/state?pretty     # detailed cluster state
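In practice this JSON is usually parsed by a script rather than read by eye. A minimal Python sketch, using only fields from the health response shown above (fetching the URL itself is omitted; the summary format is illustrative):

```python
import json

# Sample _cluster/health response, taken from the output shown above.
health_json = """{
  "cluster_name" : "elk-cluster",
  "status" : "green",
  "number_of_nodes" : 2,
  "active_shards_percent_as_number" : 100.0
}"""

def summarize_health(raw):
    """Reduce a _cluster/health response to a one-line summary."""
    h = json.loads(raw)
    return "{}: {} ({} nodes, {:.0f}% shards active)".format(
        h["cluster_name"], h["status"],
        h["number_of_nodes"], h["active_shards_percent_as_number"])

print(summarize_health(health_json))
# -> elk-cluster: green (2 nodes, 100% shards active)
```

A monitoring script could alert whenever the status field is anything other than "green".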

Note: checking the cluster with curl all the time is impractical, so we use an elasticsearch plugin: head.
Plugins extend elasticsearch with extra functionality. The official ones are mostly paid, but community developers also provide plugins. Head offers status views and management of the elasticsearch cluster.

4. Elasticsearch plugin: Head

Purpose: a plugin mainly used for cluster management.
GitHub download: https://github.com/mobz/elasticsearch-head

Install the Head plugin
[root@linux-node1 ~]# wget https://nodejs.org/dist/v8.10.0/node-v8.10.0-linux-x64.tar.xz
[root@linux-node1 ~]# tar xf node-v8.10.0-linux-x64.tar.xz
[root@linux-node1 ~]# mv node-v8.10.0-linux-x64 /usr/local/node
[root@linux-node1 ~]# vim /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$PATH:$NODE_HOME/bin
[root@linux-node1 ~]# source /etc/profile
[root@linux-node1 ~]# which node
/usr/local/node/bin/node
[root@linux-node1 ~]# node -v
v8.10.0
[root@linux-node1 ~]# which npm
/usr/local/node/bin/npm
[root@linux-node1 ~]# npm -v
5.6.0
[root@linux-node1 ~]# npm install -g cnpm --registry=https://registry.npm.taobao.org
[root@linux-node1 ~]# npm install -g grunt-cli --registry=https://registry.npm.taobao.org
[root@linux-node1 ~]# grunt -version
grunt-cli v1.2.0
[root@linux-node1 ~]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip
[root@linux-node1 ~]# unzip master.zip
[root@linux-node1 ~]# cd elasticsearch-head-master/
[root@linux-node1 elasticsearch-head-master]# vim Gruntfile.js
90                 connect: {
91                         server: {
92                                 options: {
93                                         hostname: '192.168.56.11',
94                                         port: 9100,
95                                         base: '.',
96                                         keepalive: true
97                                 }
98                         }
99                 }
[root@linux-node1 elasticsearch-head-master]# vim _site/app.js
4354 this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.56.11:9200";
[root@linux-node1 elasticsearch-head-master]# cnpm install
[root@linux-node1 elasticsearch-head-master]# grunt --version
grunt-cli v1.2.0
grunt v1.0.1
[root@linux-node1 elasticsearch-head-master]# vim /etc/elasticsearch/elasticsearch.yml
90 # ---------------------------------- Head ------------------------------------- add the following two lines:
91 #
92 http.cors.enabled: true
93 http.cors.allow-origin: "*"
[root@linux-node1 elasticsearch-head-master]# systemctl restart elasticsearch
[root@linux-node1 elasticsearch-head-master]# systemctl status elasticsearch
[root@linux-node1 elasticsearch-head-master]# grunt server &
(node:2833) ExperimentalWarning: The http2 module is an experimental API.
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://192.168.56.11:9100

Note: in elasticsearch 2.x and earlier, the head plugin could be installed with /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head; from elasticsearch 5.x on it must be installed via npm.

Open http://192.168.56.11:9100 in a browser to see the status of each node, as shown below:

III. Installing Logstash

1. About Logstash

Logstash is an open-source data collection engine that scales horizontally. It has the most plugins of any component in the ELK stack; it can receive data from different sources and send it on to one or more destinations, which need not be the same kind.

Basic flow of log collection with logstash: input –> codec –> filter –> codec –> output
1. input: where the logs are collected from.
2. filter: filtering applied before the events are sent on.
3. output: send to Elasticsearch or to a message queue such as Redis.
4. codec: print to the console, which is convenient for testing while experimenting.
5. With small data volumes, logs can be collected into monthly indices.
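The stages above can be sketched as a tiny pipeline. This is a toy Python model, not Logstash internals; the filter and output functions are hypothetical placeholders:

```python
def run_pipeline(raw_lines, filters, output):
    """Toy model of the input -> filter -> output flow listed above."""
    events = [{"message": line} for line in raw_lines]  # input stage: wrap each line as an event
    for f in filters:                                   # filter stage: transform every event
        events = [f(e) for e in events]
    return [output(e) for e in events]                  # output stage: ship each event

# Example filter that tags every event, and an output that just returns it.
add_tag = lambda e: dict(e, tags=["test"])
result = run_pipeline(["hello"], [add_tag], output=lambda e: e)
print(result)
# -> [{'message': 'hello', 'tags': ['test']}]
```

In real Logstash, the codec sits at the input/output boundary (decoding raw bytes into events and encoding events back out), which this sketch folds into the input and output steps.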

2. Install Logstash

Prerequisites: firewall and SELinux disabled, and a Java environment installed.
logstash download: https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
[root@linux-node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
[root@linux-node1 ~]# yum install -y logstash-6.0.0.rpm 
[root@linux-node1 ~]# rpm -ql logstash
[root@linux-node1 ~]# chown -R logstash.logstash /usr/share/logstash/data/queue 
#change ownership to the logstash user and group, otherwise errors appear in the log at startup
#install logstash on the node2 node as well
[root@linux-node2 ~]# yum install -y logstash-6.0.0.rpm 
[root@linux-node1 ~]# ll /etc/logstash/conf.d/     #logstash's main configuration directory
total 0

3. Verify that Logstash works

3.1 Basic Logstash syntax

input {
        specify the input here
}

output {
        specify the output here
}

3.2 Test standard input and output

Use the rubydebug codec to print events to the foreground for demonstration and testing.

#standard input to standard output
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug} }'     
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
hello  #typed input

{
      "@version" => "1",              #@version: event format version; each event is a Ruby object
          "host" => "linux-node1",       #host: marks where the event originated
    "@timestamp" => 2017-12-08T14:56:25.395Z,      #@timestamp: marks when the event occurred
       "message" => "hello"       #the message payload
}
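To make the event structure concrete, here is a hypothetical Python sketch that builds the same four fields Logstash attaches to every event (the field names match the rubydebug output above; everything else is illustrative):

```python
import datetime
import json
import socket

def make_event(message):
    """Build a dict with the same four fields Logstash attaches to every event."""
    return {
        "@version": "1",                                  # event format version
        "host": socket.gethostname(),                     # where the event originated
        "@timestamp": datetime.datetime.utcnow().isoformat() + "Z",  # when it occurred (UTC)
        "message": message,                               # the raw payload
    }

print(json.dumps(make_event("hello"), sort_keys=True))
```

This is also essentially the JSON shape written by the file output in the next test.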

3.3 Test output to a file

[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log"} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
hello world
welcome to beijing!

[root@linux-node1 ~]# tailf /tmp/test-2018.03.14.log 
{"@version":"1","host":"linux-node1","@timestamp":"2018-03-14T07:57:27.096Z","message":"hello world"}
{"@version":"1","host":"linux-node1","@timestamp":"2018-03-14T07:58:29.074Z","message":"welcome to beijing!"}
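The %{+YYYY.MM.dd} in the path is a date pattern that Logstash expands per event, which is why the file above is named test-2018.03.14.log. A rough Python equivalent of that expansion (assumption: only this single pattern is handled, not the full syntax Logstash supports):

```python
import datetime

def rolled_path(template, when):
    """Expand a Logstash-style %{+YYYY.MM.dd} date pattern into a concrete path.

    Only this one pattern is handled; real Logstash supports the full
    Joda-time syntax in sprintf references.
    """
    return template.replace("%{+YYYY.MM.dd}", when.strftime("%Y.%m.%d"))

print(rolled_path("/tmp/test-%{+YYYY.MM.dd}.log", datetime.date(2018, 3, 14)))
# -> /tmp/test-2018.03.14.log
```

Because the date comes from each event's timestamp, output files (and, later, index names) roll over daily without any external log rotation.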

Enable gzip-compressed output
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip => true } }'

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
what's your name?

[root@linux-node1 ~]# ll /tmp/test-2018.03.14.log.tar.gz 
-rw-r--r-- 1 root root 117 3月  14 16:00 /tmp/test-2018.03.14.log.tar.gz

3.4 Test output to elasticsearch

[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.56.11:9200"] index => "logstash-test-%{+YYYY.MM.dd}" } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
what's your name ?
my name is kim.

Verify that the elasticsearch servers received the data
[root@linux-node1 ~]# ll /data/elkdata/nodes/0/indices/
total 0
drwxr-xr-x 8 elasticsearch elasticsearch 65 3月  14 16:05 cV8nUO0WSkmR990aBH0RiA
drwxr-xr-x 8 elasticsearch elasticsearch 65 3月  14 15:18 Rca-tNpDSt20jWxEheyIrQ

The head plugin now shows an index named logstash-test-2018.03.14, and the data just entered can be seen under Data Browse.

★★★★★ 
To delete a test index, use the head interface ("Actions" –> "Delete"), then check the directory above again. 
Tip: always delete data through the interface, never by removing the directories above directly; every node in the cluster holds a copy of this data, and deleting it on one node can leave elasticsearch unable to start.

IV. Installing Kibana

Kibana is an open-source analytics and visualization platform designed for Elasticsearch. You can use Kibana to search, view, and interact with data stored in Elasticsearch indices, and easily perform advanced data analysis with the results presented as charts.

kibana download: https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@linux-node1 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@linux-node1 ~]# yum install -y kibana-6.0.0-x86_64.rpm 
[root@linux-node1 ~]# vim /etc/kibana/kibana.yml 
[root@linux-node1 ~]# grep "^[a-Z]" /etc/kibana/kibana.yml 
server.port: 5601        #listen port
server.host: "192.168.56.11"      #listen IP address; an internal IP is recommended
elasticsearch.url: "http://192.168.56.11:9200"       #URL kibana uses to reach elasticsearch; 192.168.56.12 would also work, since the two nodes form one cluster
[root@linux-node1 ~]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@linux-node1 ~]# systemctl start kibana
Kibana listens on port 5601:
[root@linux-node1 ~]# ss -tnl
State       Recv-Q Send-Q                                                 Local Address:Port                                                                Peer Address:Port              
LISTEN      0      128                                                                *:9100                                                                           *:*                  
LISTEN      0      128                                                                *:22                                                                             *:*                  
LISTEN      0      100                                                        127.0.0.1:25                                                                             *:*                  
LISTEN      0      128                                                    192.168.56.11:5601                                                                           *:*                  
LISTEN      0      128                                             ::ffff:192.168.56.11:9200                                                                          :::*                  
LISTEN      0      128                                             ::ffff:192.168.56.11:9300                                                                          :::*                  
LISTEN      0      128                                                               :::22                                                                            :::*                  
LISTEN      0      100                                                              ::1:25                                                                            :::*                  
LISTEN      0      80                                                                :::3306                                                                          :::*       

Open 192.168.56.11:5601 in a browser, as shown below:

You can check http://192.168.56.11:5601/status to verify that Kibana is healthy; if it is not, the page above will not load.

V. Collecting the messages log with a Logstash configuration file

1. Displaying the previously collected logs in Kibana

Add an index pattern in Kibana to display the log data collected in the previous section, as shown below:

Click "Discover" to view the collected events, as shown below:

2. Collect the messages log with a logstash configuration file

Prerequisite: the logstash user needs read permission on the log files being collected and write permission on the files being written.

Edit the logstash configuration file:
[root@linux-node1 ~]# vim /etc/logstash/conf.d/system.conf
input {
  file {
    path => "/var/log/messages"     #log path
    type => "systemlog"      #custom type; when collecting and storing several logs, the output can be selected on this field
    start_position => "beginning"    #where logstash starts reading the file. The default is the end ("end"), i.e. the logstash process behaves like tail -F. To import existing data, set it to "beginning" so logstash reads the file from the start, similar to less +F.
    stat_interval => "2"  #how often logstash checks the watched file for updates; the default is 1 second.
  }
}

output {
  elasticsearch {
    hosts => ["192.168.56.11:9200"]      #target hosts
    index => "logstash-systemlog-%{+YYYY.MM.dd}"    #index name
  }

}
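The start_position and stat_interval options together describe a simple polling loop. A stripped-down Python model of that behaviour (illustrative only; the real file input also remembers its offset across restarts in a sincedb, which this sketch omits):

```python
import time

def follow(path, start_position="beginning", stat_interval=2.0, polls=1):
    """Stripped-down model of the file input: poll a file for new lines.

    start_position="beginning" reads the existing content first (like less +F);
    "end" skips straight to newly appended data (like tail -F). stat_interval
    is how long to wait between checks on the file.
    """
    collected = []
    with open(path) as f:
        if start_position == "end":
            f.seek(0, 2)                         # jump to the end of the file
        for _ in range(polls):                   # real Logstash loops forever
            collected += [line.rstrip("\n") for line in f.readlines()]
            time.sleep(stat_interval)
    return collected

# Usage sketch mirroring the config above:
# follow("/var/log/messages", start_position="beginning", stat_interval=2.0)
```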
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t     #check the configuration file for syntax errors
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
[root@linux-node1 ~]# ll /var/log/messages 
-rw-------. 1 root root 791209 12月 27 11:43 /var/log/messages
#The log file has 600 permissions while logstash runs as the logstash user, so logstash cannot read it. Change the file permissions, otherwise a permission-denied error is raised; check /var/log/logstash/logstash-plain.log for errors.
[root@linux-node1 ~]# chmod 644 /var/log/messages 
[root@linux-node1 ~]# systemctl restart logstash

Check in the management interface whether the matching index (logstash-systemlog-2017.12.27) exists, as shown below:

Add it to Kibana by creating an index pattern:

View the logs

3. Collecting multiple logs with one configuration file

Modify the logstash configuration file; here, collection of the mariadb database log is added:
[root@linux-node1 ~]# vim /etc/logstash/conf.d/system.conf 
input {
  file {
        path => "/var/log/messages"
        type => "systemlog"
        start_position => "beginning"
        stat_interval => "2"
  }
  file {
        path => "/var/log/mariadb/mariadb.log"
        type => "mariadblog"
        start_position => "beginning"
        stat_interval => "2"
  }
}

output {
  if [type] == "systemlog" {       #select the output by type with if; one output block can write to several destinations, here elasticsearch and file
  elasticsearch {
        hosts => ["192.168.56.11:9200"]
        index => "logstash-systemlog-%{+YYYY.MM.dd}"
  }
  file {
        path => "/tmp/logstash-systemlog-%{+YYYY.MM.dd}"

  }}
  if [type] == "mariadblog" {
  elasticsearch {
        hosts => ["192.168.56.11:9200"]
        index => "logstash-mariadblog-%{+YYYY.MM.dd}"
  }
  file {
        path => "/tmp/logstash-mariadblog-%{+YYYY.MM.dd}"
  }}

}
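The if [type] == ... branches implement a simple routing table. The same logic as a hypothetical Python sketch (index and file names copied from the config; the %{+YYYY.MM.dd} date pattern is left unexpanded):

```python
def route(event):
    """Mirror the conditional output above: map an event's "type" field to the
    elasticsearch index and spool file used in the config."""
    routes = {
        "systemlog":  "logstash-systemlog-%{+YYYY.MM.dd}",
        "mariadblog": "logstash-mariadblog-%{+YYYY.MM.dd}",
    }
    index = routes.get(event.get("type"))
    if index is None:
        return None                  # no if-branch matches: the event goes nowhere
    return {"index": index, "file": "/tmp/" + index}

print(route({"type": "mariadblog", "message": "..."}))
# -> {'index': 'logstash-mariadblog-%{+YYYY.MM.dd}', 'file': '/tmp/logstash-mariadblog-%{+YYYY.MM.dd}'}
```

Note that events whose type matches neither branch are silently dropped by this config; adding a final else branch with a catch-all output would make that explicit.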

Check the configuration file syntax:
[root@linux-node1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK

Restart logstash:
[root@linux-node1 ~]# systemctl restart logstash

Fix the permissions on the mariadb log:
[root@linux-node1 ~]# ll /var/log/mariadb/ -d
drwxr-x--- 2 mysql mysql 24 12月  4 17:43 /var/log/mariadb/
[root@linux-node1 ~]# chmod 755 /var/log/mariadb/
[root@linux-node1 ~]# ll /var/log/mariadb/mariadb.log 
-rw-r----- 1 mysql mysql 114993 12月 27 14:23 /var/log/mariadb/mariadb.log
[root@linux-node1 ~]# chmod 644 /var/log/mariadb/mariadb.log 

Check the indices through the head plugin:

 

Check whether log data was collected under /tmp:

[root@linux-node1 ~]# ll /tmp/logstash-*
-rw-r--r-- 1 logstash logstash 288449 12月 27 14:27 /tmp/logstash-mariadblog-2017.12.27
-rw-r--r-- 1 logstash logstash  53385 12月 27 14:28 /tmp/logstash-systemlog-2017.12.27

Create the index patterns in Kibana:

 

