I. ELK Architecture
II. System Environment
[Host Information]
IP            Hostname  OS Version
10.10.10.102  console   CentOS 7.5
10.10.10.103  log1      CentOS 7.5
10.10.10.104  log2      CentOS 7.5
[Software Package Versions]
elasticsearch-6.4.0.tar.gz
logstash-6.4.0.tar.gz
kibana-6.4.0-linux-x86_64.tar.gz
node-v8.11.4-linux-x64.tar.gz
elasticsearch-head-master.zip
1. Configure hostname-to-IP mapping
Append the following lines to /etc/hosts on each of the three machines:
10.10.10.102 console
10.10.10.103 log1
10.10.10.104 log2
2. Stop the firewall on all three machines and disable it at boot
# stop the firewall
systemctl stop firewalld
# disable the firewall at boot
systemctl disable firewalld
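To confirm the firewall is really down:
systemctl status firewalld    # should report "inactive (dead)"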
3. Increase the file descriptor limit on all three machines
vim /etc/security/limits.conf
# add the following line:
es - nofile 65536
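Once the es user exists (it is created in step 5 below), you can verify the new limit in a fresh login session, for example:
su - es -c 'ulimit -n'
# expected output: 65536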
4. Increase the vm.max_map_count setting on all three machines
vim /etc/sysctl.conf
vm.max_map_count = 262144
# apply the change
sysctl -p
5. Create the es user and the data directory on all three machines
useradd es
mkdir /esdata
chown -R es:es /esdata
6. Install JDK 1.8 on all three machines
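If the machines have Internet access, one simple option is the OpenJDK 1.8 build from the CentOS base repositories (any JDK 1.8 installation works):
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version    # should report a 1.8.0 version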
III. Installing and Configuring Elasticsearch
1. On 10.10.10.102, 10.10.10.103, and 10.10.10.104, create the Elasticsearch installation directory and change its owner and group
mkdir -p /usr/local/elasticsearch-6.4.0
chown -R es:es /usr/local/elasticsearch-6.4.0
2. Log in to 10.10.10.102, switch to the es user, and extract elasticsearch-6.4.0.tar.gz into /usr/local/elasticsearch-6.4.0
tar -xf /home/es/elasticsearch-6.4.0.tar.gz
cd elasticsearch-6.4.0
cp -r * /usr/local/elasticsearch-6.4.0
3. Edit the configuration files
The configuration file on console (only the effective settings are shown; the rest of the file is the stock commented template):
[es@console config]$ cat /usr/local/elasticsearch-6.4.0/config/elasticsearch.yml
# ---------------------------------- Cluster -----------------------------------
cluster.name: console                  # name of the cluster
# ------------------------------------ Node ------------------------------------
node.name: console                     # name of this node
node.master: true                      # whether this node is master-eligible: true here, false on the other two machines
# ----------------------------------- Paths ------------------------------------
path.data: /esdata                     # data directory
# ---------------------------------- Network -----------------------------------
network.host: 10.10.10.102             # console's own IP; each of the other two machines uses its own IP
network.bind_host: 10.10.10.102        # same as above
network.publish_host: 10.10.10.102     # same as above
http.port: 9200                        # HTTP port
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["10.10.10.102:9300"]   # unicast discovery list; the other two machines use the same value
discovery.zen.minimum_master_nodes: 1                     # minimum master-eligible nodes (1, since only console is master-eligible)
The log1 configuration file (effective settings; the rest is the stock template):
[es@log1 config]$ cat elasticsearch.yml
cluster.name: console
node.name: log1
node.master: false
path.data: /esdata
network.host: 10.10.10.103
network.bind_host: 10.10.10.103
network.publish_host: 10.10.10.103
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.10.10.102:9300"]
discovery.zen.minimum_master_nodes: 1
The log2 configuration file (effective settings; the rest is the stock template):
[es@log2 config]$ cat elasticsearch.yml
cluster.name: console
node.name: log2
node.master: false
path.data: /esdata
network.host: 10.10.10.104
network.bind_host: 10.10.10.104
network.publish_host: 10.10.10.104
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.10.10.102:9300"]
discovery.zen.minimum_master_nodes: 1
4. Start Elasticsearch in the background
/usr/local/elasticsearch-6.4.0/bin/elasticsearch -d
After it starts, you can check that the node is responding:
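For example, a quick sanity check against the HTTP port (run from any of the machines):
curl http://10.10.10.102:9200
# returns a JSON document with the node name, cluster_name ("console"), and version ("6.4.0")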
5. Install the elasticsearch-head plugin
Elasticsearch's native interface returns raw JSON, which is not very friendly to read. We can solve this by installing a plugin.
elasticsearch-head download: https://github.com/troub1emaker0911/elasticsearch-head
elasticsearch-head requires Node.js, so we install Node.js first.
[Install Node.js]
First switch to the root user and upload the Node.js package to the console machine.
# extract Node.js into /usr/local/node-v8.11.4
mkdir -p /usr/local/node-v8.11.4
tar -xf node-v8.11.4-linux-x64.tar.xz -C /usr/local/node-v8.11.4 --strip-components=1
# create symbolic links
ln -s /usr/local/node-v8.11.4/bin/node /usr/local/bin/
ln -s /usr/local/node-v8.11.4/bin/npm /usr/local/bin/
# verify the installation
node -v
npm -v
[Install the elasticsearch-head plugin]
Switch to the es user and upload the package to the console machine.
# extract the archive
unzip elasticsearch-head-master.zip
# move the directory under /usr/local
mv elasticsearch-head-master /usr/local
cd /usr/local/elasticsearch-head-master
npm install
# start elasticsearch-head in the background
npm run start > /dev/null 2>&1 &
Once the steps above are complete, open http://10.10.10.102:9100 in a browser to bring up the head UI.
At this point, however, the cluster health value shows as not connected (my screenshot was taken after the configuration was finished). We need to append the following to elasticsearch.yml on the console machine:
vim /usr/local/elasticsearch-6.4.0/config/elasticsearch.yml
# append:
http.cors.enabled: true
http.cors.allow-origin: "*"
# then restart Elasticsearch for the change to take effect
Then change the address next to the "Connect" button from http://localhost:9200/ to console's address, http://10.10.10.102:9200, and click "Connect". The cluster health value now turns green.
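The same health value can also be read from the command line with the standard _cluster/health API:
curl 'http://10.10.10.102:9200/_cluster/health?pretty'
# "status" : "green" once all three nodes have joined and every shard is allocated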
6. Create an index
Switch to the "Indices" tab, click "New Index", and enter book as the index name.
Then click "Overview" and you can see the index you just created.
Note the green blocks in the figure above: blocks with a bold border are primary shards, and those with a thin border are replicas.
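The same index can also be created with the REST API instead of head; a minimal sketch (a new 6.x index defaults to 5 primary shards and 1 replica):
curl -XPUT 'http://10.10.10.102:9200/book?pretty'
# Elasticsearch acknowledges the new index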
7. Install a plugin: the ik Chinese analyzer
elasticsearch-analysis-ik is a Chinese word-segmentation plugin that supports custom dictionaries. Project page: https://github.com/medcl/elasticsearch-analysis-ik
(1) Install Maven
The project is managed with Maven and its source lives on GitHub, so install Maven on the server first; the project jar can then be built directly on the server, which makes deployment more convenient.
yum install -y maven
(2) Install the ik analyzer
The version installed here is 6.3.0. (The ik plugin version should normally match your Elasticsearch version; check out the matching tag before building.)
git clone https://github.com/medcl/elasticsearch-analysis-ik.git
[es@console ~]$ cd elasticsearch-analysis-ik/
[es@console elasticsearch-analysis-ik]$ mvn package
(3) Copy and extract
[es@console elasticsearch-analysis-ik]$ mkdir -p /usr/local/elasticsearch-6.4.0/plugins/ik
[es@console elasticsearch-analysis-ik]$ cp target/releases/elasticsearch-analysis-ik-6.3.0.zip /usr/local/elasticsearch-6.4.0/plugins/ik
[es@console ~]$ cd /usr/local/elasticsearch-6.4.0/plugins/ik/
[es@console ik]$ unzip -oq elasticsearch-analysis-ik-6.3.0.zip
(4) Restart Elasticsearch
[es@console ik]$ cd /usr/local/elasticsearch-6.4.0/
[es@console elasticsearch-6.4.0]$ jps
20221 Jps
14910 Elasticsearch
[es@console elasticsearch-6.4.0]$ kill -9 14910
[es@console elasticsearch-6.4.0]$ bin/elasticsearch -d
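Once it is back up, a quick way to confirm the plugin loaded is to run a test analysis (ik_max_word is one of the two analyzers the plugin provides):
curl -XPOST 'http://10.10.10.102:9200/_analyze?pretty' -H 'Content-Type: application/json' -d '
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国"
}'
# returns the token list produced by the ik analyzer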
Note: the URL below lists the cluster's nodes. The result is JSON and not very readable as-is, but appending ?pretty formats it.
http://10.10.10.102:9200/_nodes
IV. Installing and Configuring Logstash
Logstash is a powerful data-processing tool. It can transport data, process and format it, and produce formatted output, and it has a rich plugin ecosystem; it is commonly used for log processing.
Logstash works in three stages: input -> filter -> output.
1. Install Logstash
# switch to the es user and extract the package into the target directory
tar -xf logstash-6.4.0.tar.gz -C /usr/local/
That completes the Logstash installation.
2. Logstash overview
Logstash is an open-source log-management program that accepts data from multiple sources (input), filters out the data you want (filter), and stores it on other systems (output). It has three basic plugin types, input/filter/output; a minimal Logstash pipeline must contain an input and an output.
How Logstash works:
Logstash processes data in three stages: input -> filter -> output. Inputs produce data, filters modify it according to the rules you define, and outputs write it to the storage destination you define.
Inputs:
Data producers. Common inputs include:
- file: reads from a file on the filesystem, much like tail -0F
- syslog: listens on port 514 for syslog messages in RFC 3164 format
- redis: reads from a Redis server, using Redis channels and lists
- beats: lightweight agents that collect data themselves and forward it to Logstash; Filebeat is the common example
Filters:
A filter is effectively a processing pipeline: it filters events one by one according to the rules you define. Common filters:
- grok: parses unstructured text into a structured format
- mutate: rich basic-type handling, including type conversion, string handling, and field handling
- drop: discards a subset of events without further processing, e.g. debug events
- clone: copies an event; fields can be added or removed along the way
- geoip: adds geographic information (used by Kibana's front-end visualizations)
Outputs:
- elasticsearch: receives and stores the data and makes it available to the Kibana front end; a sketch combining all three stages follows this list
- stdout: standard output, printing events directly to the screen
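As a sketch of how the stages fit together, here is a pipeline that tails a log file and ships it to the cluster built above (the file path and index name are illustrative, not from the original setup):
input {
    file {
        path => "/var/log/messages"          # the log file to tail
        start_position => "beginning"        # read from the start on the first run
    }
}
output {
    elasticsearch {
        hosts => ["10.10.10.102:9200"]       # the console node
        index => "syslog-%{+YYYY.MM.dd}"     # one index per day
    }
}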
3. Logstash examples
bin/logstash -e 'input { stdin { } } output { stdout {} }'
We can now type some characters on the command line and watch Logstash's output:
[es@console logstash-6.4.0]$ bin/logstash -e 'input { stdin { } } output { stdout {} }'
Sending Logstash logs to /usr/local/logstash-6.4.0/logs which is now configured via log4j2.properties
[2018-09-14T22:33:52,155][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-14T22:33:54,402][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-09-14T22:34:00,577][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-14T22:34:00,931][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3f32496 run>"}
The stdin plugin is now waiting for input:
[2018-09-14T22:34:01,199][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
hello world
{
      "@version" => "1",
       "message" => "hello world",
    "@timestamp" => 2018-09-14T14:34:01.245Z,
          "host" => "console"
}
[2018-09-14T22:34:02,693][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Now run another command:
bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
Then type helloworld and look at what is printed:
[es@console logstash-6.4.0]$ bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
Sending Logstash logs to /usr/local/logstash-6.4.0/logs which is now configured via log4j2.properties
[2018-09-12T03:07:33,884][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-12T03:07:36,017][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-09-12T03:07:43,294][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-12T03:07:43,646][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7cefe25 run>"}
The stdin plugin is now waiting for input:
[2018-09-12T03:07:43,872][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
helloworld
{
          "host" => "console",
      "@version" => "1",
    "@timestamp" => 2018-09-11T19:07:43.813Z,
       "message" => "helloworld"
}
[2018-09-12T03:07:45,292][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
By reconfiguring the stdout output (adding the codec parameter), we changed Logstash's output format. Similarly, by adding or modifying inputs, outputs, and filters in your config file, you can format log data almost any way you like, tailoring the storage format to make querying convenient.
As mentioned earlier, a Logstash pipeline must have an input and an output; the example above reads from the terminal and writes back to it.
Data flows between threads as events. Don't call them lines, because Logstash can handle multi-line events.
input {
    # Input section; any of the inputs above can be used here. stdin{} reads from standard input, file{} reads from a file.
    # Input plugins: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
}
output {
    # Logstash's job is to transform the data; the example above is the simplest formatted output.
    # Output plugins: https://www.elastic.co/guide/en/logstash/current/output-plugins.html
}
Logstash configuration files and command-line options:
Logstash's defaults are good enough for our purposes. Since 5.0 there is a logstash.yml file, so command-line parameters can simply be written into that YAML file.
- --config.test_and_exit or -t: tests whether the configuration syntax is valid and exits; a very useful flag (older releases called it --configtest). See the example after this list.
- --path.logs or -l: Logstash logs to standard output by default; this sets a directory for its log files.
- --pipeline.workers or -w: the number of pipeline threads running filters and outputs; the default is usually fine.
- -f: points Logstash at a pipeline config file; keep your files anywhere and start Logstash with -f.
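For instance, to validate a pipeline file before starting it for real:
bin/logstash -f /root/conf/file.conf --config.test_and_exit
# prints the validation result and exits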
A simple Logstash run that reads its pipeline from a file:
vim file.conf       # file.conf can be placed anywhere
input {
    stdin { }
}
output {
    stdout { codec => rubydebug }
}

bin/logstash -f /root/conf/file.conf    # start Logstash with the file
4. Plugins
(1) The grok plugin
Grok is Logstash's most important plugin. You can define named patterns in grok and then reference them from grok parameters or other regular expressions.
The project ships with around 120 default patterns: https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
The first line defines a grok pattern using an ordinary regular expression; the second line defines another grok pattern in terms of the one just defined.
The pattern reference syntax is:
%{SYNTAX:SEMANTIC}
- SYNTAX: the name of the pattern the text is matched against; for example, 3.14 is matched by the NUMBER pattern and 55.1.1.2 by the IP pattern.
- SEMANTIC: the identifier given to the matched text; for example, when 3.14 is matched, SEMANTIC is the field that holds 3.14.
Matched data is of type string by default, but you can also convert it, like this:
%{NUMBER:num:int}
Currently only conversion to int and float is supported.
For example:
[es@console config]$ more file.conf
input {
    stdin { }
}
filter {
    grok {
        match => { "message" => "%{WORD} %{NUMBER:request_time:float} %{WORD}" }
    }
}
output {
    stdout { codec => rubydebug }
}
Then run Logstash:
[es@console logstash-6.4.0]$ bin/logstash -f /usr/local/logstash-6.4.0/config/file.conf
The result:
monkey 12.12 beta
{
         "message" => "monkey 12.12 beta",
        "@version" => "1",
      "@timestamp" => 2018-09-17T08:18:42.416Z,
            "host" => "console",
    "request_time" => 12.12
}
We have now matched the value we wanted and named it request_time.
In real production use, writing patterns one by one in the config file is impractical. It is better to keep all grok patterns in one place and reference them with the patterns_dir option:
grok {
    patterns_dir => "/root/conf/nginx"      # the file containing your custom grok patterns
    match => { "message" => "%{CDN_FORMAT}" }
    add_tag => ["CDN"]
}
In practice, the logs we collect contain plenty we do not need, so we can remove some field information and keep only the part we want.
grok {
    match => { "message" => "%{WORD} %{NUMBER:request_time:float} %{WORD}" }
    remove_field => [ "request_time" ]
    overwrite => [ "message" ]
}

as 12 as
{
    "@timestamp" => 2017-02-08T06:39:07.921Z,
      "@version" => "1",
          "host" => "0.0.0.0",
       "message" => "as 12 as"
}
The request_time field is gone.
For more on grok, see the official documentation: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
Most important of all: I strongly recommend everyone use the Grok Debugger to test their grok expressions.
(2) The kv plugin
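The kv filter splits key=value pairs in a field into separate event fields. A minimal sketch (the source field and separator here are illustrative):
filter {
    kv {
        source => "message"     # parse key=value pairs out of the message field
        field_split => "&"      # pairs separated by '&', as in a URL query string a=1&b=2
    }
}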
(3) The geoip plugin
geoip looks up the geographic location of an IP address, to tell where visitors to the site come from. Note that the grok pattern must capture a field named clientip for geoip's source option to read, as below:
[es@console config]$ more file.conf
input {
    stdin { }
}
filter {
    grok {
        match => { "message" => "%{IP:clientip} %{NUMBER:request_time:float} %{WORD}" }
    }
    geoip {
        source => "clientip"
        fields => [ "ip","city_name","country_name","location" ]
    }
}
output {
    stdout { codec => rubydebug }
}
Reference: https://www.cnblogs.com/blogjun/articles/8064646.html
V. Installing and Configuring Kibana
Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. You can use Kibana to search, view, and interact with the data stored in Elasticsearch indices, and with its charts, tables, and maps, Kibana makes advanced data analysis and visualization easy.
Kibana makes large volumes of data easy to understand. Its simple, browser-based interface lets you quickly build and share dynamic dashboards that reflect Elasticsearch queries in real time.
Put simply, the workflow is: a logstash agent monitors and filters the logs, a logstash indexer collects them and hands them to the full-text search service Elasticsearch, Elasticsearch supports custom searches, and Kibana combines those custom searches into a web display, as in the figure above.
1. Install Kibana
# create the installation directory
mkdir -p /usr/local/kibana-6.4.0
# extract the package (uploaded to /root/software)
cd /root/software
tar -xf kibana-6.4.0-linux-x86_64.tar.gz
# copy the extracted contents into the installation directory
cp -r kibana-6.4.0-linux-x86_64/* /usr/local/kibana-6.4.0
# change the owner and group of the installation directory
chown -R es:es /usr/local/kibana-6.4.0
2. Configure and start Kibana
Edit the Kibana configuration file kibana.yml. The effective settings after editing (the rest of the file is the stock commented defaults):
[root@console config]# more /usr/local/kibana-6.4.0/config/kibana.yml
server.port: 5601                              # Kibana's port
server.host: "10.10.10.102"                    # IP of the host Kibana runs on
server.name: "console"                         # the Kibana server's name, used for display purposes
elasticsearch.url: "http://10.10.10.102:9200"  # IP address and port of the Elasticsearch instance to query
Start Kibana:
cd /usr/local/kibana-6.4.0
./bin/kibana
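To keep Kibana running after you log out, one option is to start it with nohup in the background (a sketch):
cd /usr/local/kibana-6.4.0
nohup ./bin/kibana > /dev/null 2>&1 &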
After a successful start, open http://10.10.10.102:5601 in a browser to reach the Kibana UI.
Kibana's status and resource usage can be viewed at the following address:
http://10.10.10.102:5601/status