Deploying and Using the ELK Stack


1. Provision 8 virtual machines: es1-es5, kibana, logstash, and web, with IPs 192.168.1.61-68.

2. Configure the IP address and hostname on each machine, as sketched below.
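
A minimal sketch of this step on one node, assuming CentOS 7 with NetworkManager (the connection name eth0 is an assumption; run on every machine, substituting its own name and address, es1 shown here):

hostnamectl set-hostname es1
nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.1.61/24 connection.autoconnect yes
nmcli connection up eth0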

3. Deploy elasticsearch with ansible so that its web interface becomes reachable. The ansible playbook is below:

---
- hosts: es
  remote_user: root
  tasks:
    - name: copy yum repository file
      copy:
        src: local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: root
        group: root
        mode: 0644
    - name: install elasticsearch
      yum:
        name: java-1.8.0-openjdk,elasticsearch
        state: installed
    - name: deploy elasticsearch config
      template:
        src: elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: root
        group: root
        mode: 0644
      notify: reload elasticsearch
      tags: esconf
    - name: enable and start elasticsearch
      service:
        name: elasticsearch
        state: started
        enabled: yes
  handlers:
    - name: reload elasticsearch
      service:
        name: elasticsearch
        state: restarted
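With the playbook saved (assumed here as es.yml) and an inventory file that defines the es group, deployment is one command; the inventory layout and file names are assumptions:

[root@room9pc01 ~]# cat inventory        // assumed inventory layout
[es]
es[1:5]
[root@room9pc01 ~]# ansible-playbook -i inventory es.yml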

4. Below is the /etc/hosts file to configure on the machines:

192.168.1.61 es1
192.168.1.62 es2
192.168.1.63 es3
192.168.1.64 es4
192.168.1.65 es5
192.168.1.66 kibana
192.168.1.67 logstash
192.168.1.68 web
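
One way to push this file to every node is an ansible ad-hoc copy from the control node (a sketch; it assumes the file above is saved locally as hosts and that the inventory covers all machines):

[root@room9pc01 ~]# ansible all -m copy -a "src=hosts dest=/etc/hosts"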

5. The yum repository only needs to contain the required packages and their dependencies.
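
The local.repo pushed by the playbook might look like the following (a sketch; the repo id, name, and the baseurl pointing at the 192.168.1.254 FTP server are assumptions, the latter based on the server used for the plugin downloads later):

[local]
name=local elk packages
baseurl=ftp://192.168.1.254/elk
enabled=1
gpgcheck=0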

6. Below is the elasticsearch.yml template to deploy:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: nsd1810
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: {{ansible_hostname}}
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

7. Elasticsearch is now set up; verify it by visiting http://192.168.1.61:9200.
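
The same check can be scripted with curl; the cluster-health query should report all five nodes once discovery has settled:

[root@room9pc01 ~]# curl http://192.168.1.61:9200
[root@room9pc01 ~]# curl http://192.168.1.61:9200/_cluster/health?pretty        // expect "number_of_nodes": 5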

8. Install the plugins

A plugin is only usable on the machine it is installed on (here the plugins are installed on es5).

1) Plugins can be installed directly from a remote URI:

[root@es5 ~]# cd /usr/share/elasticsearch/bin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-head-master.zip        // install the head plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-kopf-master.zip        // install the kopf plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/bigdesk-master.zip                   // install the bigdesk plugin
[root@es5 bin]# ./plugin list        // list the installed plugins
Installed plugins in /usr/share/elasticsearch/plugins:
    - head
    - kopf
    - bigdesk

2) Access the head plugin

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/head

3) Access the kopf plugin

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/kopf

4) Access the bigdesk plugin

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/bigdesk

 

9. Install kibana

1) On another host, set the IP to 192.168.1.66, configure the yum repository, and change the hostname.

2) Install and configure kibana:

[root@kibana ~]# yum -y install kibana
[root@kibana ~]# rpm -qc kibana
/opt/kibana/config/kibana.yml
[root@kibana ~]# vim /opt/kibana/config/kibana.yml
2   server.port: 5601
    // with the port changed to 80 kibana still starts, but ss shows nothing listening on 80,
    // likely because the service runs as an unprivileged user that cannot bind ports below 1024; keep 5601
5   server.host: "0.0.0.0"              // address the server listens on
15  elasticsearch.url: http://192.168.1.61:9200        // where to query; any node in the cluster will do
23  kibana.index: ".kibana"             // the index kibana creates for itself
26  kibana.defaultAppId: "discover"     // page opened by default, discover
53  elasticsearch.pingTimeout: 1500     // ping probe timeout
57  elasticsearch.requestTimeout: 30000 // request timeout
64  elasticsearch.startupTimeout: 5000  // startup timeout
[root@kibana ~]# systemctl restart kibana
[root@kibana ~]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /usr/lib/systemd/system/kibana.service.
[root@kibana ~]# ss -antup | grep 5601        // check the listening port

3) Access kibana in a browser:

[root@kibana ~]# firefox 192.168.1.66:5601

4) Click Status to check the installation; all-green check marks mean it succeeded.

5) Viewed through the head plugin, a .kibana index now shows up (screenshot not reproduced):

[root@es5 ~]# firefox http://192.168.1.65:9200/_plugin/head/        # the plugins were installed on es5

10. Install the JDK and logstash

[root@logstash ~]# yum -y install java-1.8.0-openjdk
[root@logstash ~]# yum -y install logstash
[root@logstash ~]# java -version
11. Write your own logstash.conf; reference code follows below. The input, output, and filter plugins are documented on the elastic site at https://www.elastic.co/guide/en/logstash/current/index.html (settings marked "yes" there are required, and each plugin page lists an id plus examples alongside it).

Note that the reference config wires up several test inputs and outputs, and calls a predefined pattern (a macro) via grok, because hand-writing the regex would be complex and painful. The patterns shipped with logstash can be inspected like this:

[root@logstash ~]# cd /opt/logstash/vendor/bundle/ \
jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/
[root@logstash patterns]# vim grok-patterns        // search for COMBINEDAPACHELOG
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

To feed the config test data, write lines into /tmp/a.log, or send to the tcp/udp inputs with echo > /dev/tcp/192.168.1.67/8888. The syslog input works too: add the line below to /etc/rsyslog.conf on a client.

local0.info @192.168.1.67:514
// a single @ means udp, a double @@ means tcp

After that, a message such as logger -p local0.info -t nds "001 elk" is collected and parsed by logstash, and the analysed result can be seen in its output. The test commands are sketched after this paragraph, followed by the reference logstash.conf.
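
A sketch of the three test paths (the sample Apache combined-log line is made up for illustration):

// file input: append a combined-log-format line to a watched file
[root@logstash ~]# echo '127.0.0.1 - - [10/Jan/2019:10:00:00 +0800] "GET / HTTP/1.1" 200 12 "-" "curl/7.29.0"' >> /tmp/a.log
// tcp input: write straight to the listener on port 8888
[root@logstash ~]# echo "tcp test" > /dev/tcp/192.168.1.67/8888
// syslog input: emit a tagged message through rsyslog
[root@logstash ~]# logger -p local0.info -t nds "001 elk"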

input{
    stdin{ codec => "json" }
    beats{
        port => 5044
    }
    file {
        path => ["/tmp/a.log","/tmp/b.log"]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
    }
    tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
    }
    udp {
        host => "0.0.0.0"
        port => "8888"
        type => "udplog"
    }
    syslog {
        type => "syslog"
    }
}
filter{
    grok {
        match => ["message", "%{COMBINEDAPACHELOG}"]
    }
}
output{
    stdout{
        codec => "rubydebug"
    }
    elasticsearch {
        hosts => ["es1", "es2", "es3"]
        index => "weblog"
        flush_size => 2000
        idle_flush_time => 10
    }
}
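Before running it for real, the syntax can be verified first (--configtest is the logstash 2.x spelling; newer releases use -t):

[root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf --configtest
Configuration OK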

12. Install the Apache service and use filebeat to collect the Apache server's logs into elasticsearch.

1) Install filebeat on the host where Apache was installed earlier:

[root@web ~]# yum -y install filebeat
[root@web ~]# vim /etc/filebeat/filebeat.yml
paths:
    - /var/log/httpd/access_log        // path to the log; "- " (dash plus space) is yml list syntax
document_type: apachelog               // document type
#elasticsearch:                        // comment out the elasticsearch output
#  hosts: ["localhost:9200"]           // comment out
logstash:                              // uncomment the logstash output
  hosts: ["192.168.1.67:5044"]         // uncomment; IP of the logstash host
[root@web ~]# systemctl start filebeat

13. Then make sure the beats/port settings shown above are present in /etc/logstash/logstash.conf. (filebeat is a lightweight shipper for logstash: with it, the other servers can send their data to the logstash host automatically without installing logstash themselves.)

14. Once everything is configured, run the following on the logstash host to start parsing data:

[root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf

netstat -antup | grep 5044 then shows two entries for port 5044, the listener plus filebeat's established connection (netstat -lntup | grep 5044 shows only the listener).

15. After the web server has been accessed, logstash collects the data, the weblog index becomes visible on the elasticsearch (head) page, and searching for weblog in kibana shows a bar chart of the hits.

(weblog chart: screenshot not reproduced; yours will look similar, with different names)

(.kibana chart: screenshot not reproduced)

With too little data no pie chart can be drawn; refresh the web page a number of times, otherwise the charts may not appear at all.
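
Without the browser, a quick way to confirm the index is filling is a standard elasticsearch search query against any node:

[root@room9pc01 ~]# curl http://192.168.1.61:9200/weblog/_search?pretty        // returns the parsed Apache hits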

 

The logstash.conf above parses httpd logs. For nginx or any other service you would call that service's pattern instead, and an if statement can pick the right filter per event type. The code below is meant to offer some ideas:

input{
    stdin{ codec => "json" }
    beats{
        port => 5044
    }
    file {
        path => ["/tmp/a.log","/tmp/b.log"]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
    }
    tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
    }
    udp {
        host => "0.0.0.0"
        port => "8888"
        type => "udplog"
    }
    syslog {
        type => "syslog"
    }
}
filter{
    # the type tested here must match the type set by the shipper
    # (the filebeat config in step 12 sets document_type: apachelog)
    if [type] == "httplog"{
        grok {
            match => ["message", "%{COMBINEDAPACHELOG}"]
        }
    }
}
output{
    stdout{
        codec => "rubydebug"
    }
    if [type] == "httplog"{
        elasticsearch {
            hosts => ["es1", "es2", "es3"]
            index => "weblog"
            flush_size => 2000
            idle_flush_time => 10
        }
    }
}

The overall ELK flow: a client accesses the web server; filebeat on the web server ships the log data to logstash; logstash parses it and forwards it to elasticsearch for indexing and storage; kibana then pulls the data from elasticsearch and presents it as web pages and charts.

 

