1. Provision eight virtual machines: es1-es5, kibana, logstash, and web, with IPs 192.168.1.61-68.
2. Configure the IP address and hostname on each machine.
3. Deploy elasticsearch with ansible so its web interface is reachable. The ansible playbook is below:
---
- hosts: es
  remote_user: root
  tasks:
    - copy:
        src: local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: root
        group: root
        mode: 0644
    - name: install elasticsearch
      yum:
        name: java-1.8.0-openjdk,elasticsearch
        state: installed
    - template:
        src: elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: root
        group: root
        mode: 0644
      notify: reload elasticsearch
      tags: esconf
    - service:
        name: elasticsearch
        enabled: yes
        state: started    # start the service as well as enabling it
  handlers:
    - name: reload elasticsearch
      service:
        name: elasticsearch
        state: restarted
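The play targets a host group named `es`, so the control node needs an inventory that defines it. The original inventory is not shown above; a possible sketch (group layout assumed):

```ini
; /etc/ansible/hosts -- assumed inventory, the original file is not shown
[es]
es1
es2
es3
es4
es5
```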
4. The hosts file to edit:
192.168.1.61 es1
192.168.1.62 es2
192.168.1.63 es3
192.168.1.64 es4
192.168.1.65 es5
192.168.1.66 kibana
192.168.1.67 logstash
5. The yum repository only needs to provide the required packages and their dependencies.
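The playbook above copies a `local.repo` file into /etc/yum.repos.d/. Its contents are not shown in these notes; a hedged sketch pointing at the same FTP server used elsewhere in this lab (section names and baseurl are assumptions):

```ini
; local.repo -- illustrative sketch only; the real section names and baseurl may differ
[local_repo]
name=CentOS local repository
baseurl=ftp://192.168.1.254/centos
enabled=1
gpgcheck=0

[elk]
name=elk packages
baseurl=ftp://192.168.1.254/elk
enabled=1
gpgcheck=0
```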
6. The elasticsearch.yml to edit:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: nsd1810
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: {{ansible_hostname}}
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
7. elasticsearch is now up; verify by visiting http://192.168.1.61:9200.
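A healthy node answers GET / with a small JSON document describing itself. As a minimal sketch of checking that response in Python (the sample below is illustrative, not captured from a real cluster; field values such as the version number are assumptions):

```python
import json

# Illustrative sample of the JSON an Elasticsearch 2.x node returns for GET /;
# the values here are assumptions, not output from the lab cluster.
sample_response = '''
{
  "name": "es1",
  "cluster_name": "nsd1810",
  "version": {"number": "2.3.4", "lucene_version": "5.5.0"},
  "tagline": "You Know, for Search"
}
'''

info = json.loads(sample_response)

# The node belongs to the right cluster if cluster_name matches the
# value set in elasticsearch.yml above.
assert info["cluster_name"] == "nsd1810"
print(info["name"], info["cluster_name"], info["version"]["number"])
```

In practice you would fetch the same JSON with `curl http://192.168.1.61:9200` and read it by eye or with a script like this.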
8. Deploy plugins
A plugin can only be used on the machine where it is installed (here they are installed on es5).
1) Plugins can be installed directly from a remote URI:
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-head-master.zip      //install the head plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-kopf-master.zip      //install the kopf plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/bigdesk-master.zip      //install the bigdesk plugin
[root@es5 bin]# ./plugin list      //list the installed plugins
Installed plugins in /usr/share/elasticsearch/plugins:
    - head
    - kopf
    - bigdesk
2) Access the head plugin:
[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/head
3) Access the kopf plugin:
[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/kopf
4) Access the bigdesk plugin:
[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/bigdesk
9. Install kibana
1) On another host: set the IP to 192.168.1.66, configure the yum repository, and change the hostname.
2) Install kibana:
[root@kibana ~]# yum -y install kibana
[root@kibana ~]# rpm -qc kibana
/opt/kibana/config/kibana.yml
[root@kibana ~]# vim /opt/kibana/config/kibana.yml
2   server.port: 5601
    //if you change this to 80, kibana appears to start, but ss shows nothing listening on port 80; the service cannot use port 80, only 5601 works
5   server.host: "0.0.0.0"      //address the server listens on
15  elasticsearch.url: "http://192.168.1.61:9200"
    //where queries are sent; pick any node in the cluster
23  kibana.index: ".kibana"      //the index kibana creates for itself
26  kibana.defaultAppId: "discover"      //the default page when kibana opens (discover)
53  elasticsearch.pingTimeout: 1500      //ping timeout
57  elasticsearch.requestTimeout: 30000      //request timeout
64  elasticsearch.startupTimeout: 5000      //startup timeout
[root@kibana ~]# systemctl restart kibana
[root@kibana ~]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /usr/lib/systemd/system/kibana.service.
[root@kibana ~]# ss -antup | grep 5601      //check the listening port
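The settings edited above follow kibana.yml's simple `key: value` format, with `#` marking a commented-out default. A minimal sketch of pulling the active settings out of such a file (the sample text mirrors the edits above and is illustrative only):

```python
# Minimal parser for kibana.yml-style "key: value" lines; the sample
# mirrors the settings edited above and is illustrative only.
sample = """\
server.port: 5601
server.host: "0.0.0.0"
# elasticsearch.preserveHost: true
elasticsearch.url: "http://192.168.1.61:9200"
kibana.index: ".kibana"
"""

def active_settings(text):
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and commented-out defaults
        # split on the first colon only, so URLs keep their ports
        key, _, value = line.partition(":")
        settings[key.strip()] = value.strip().strip('"')
    return settings

conf = active_settings(sample)
print(conf["server.port"], conf["elasticsearch.url"])
```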
3) Access kibana in a browser:
[root@kibana ~]# firefox 192.168.1.66:5601
4) Click Status; if everything shows green check marks, the installation succeeded.
5) Viewed through the head plugin there is now a .kibana index, as shown:
(figure: the .kibana index displayed in the head plugin)

If there is too little data the pie chart cannot be drawn; refresh the page several times, and even then the chart may not appear with very little data.
The logstash.conf above parses httpd access logs. For nginx or other services you would call their own grok patterns instead, and an if statement can branch on the log type, as the following code sketches:
input {
  stdin { codec => "json" }
  beats {
    port => 5044
  }
  file {
    path => ["/tmp/a.log", "/tmp/b.log"]
    sincedb_path => "/var/lib/logstash/sincedb"
    start_position => "beginning"
    type => "testlog"
  }
  tcp {
    host => "0.0.0.0"
    port => "8888"
    type => "tcplog"
  }
  udp {
    host => "0.0.0.0"
    port => "8888"
    type => "udplog"
  }
  syslog {
    type => "syslog"
  }
}
filter {
  if [type] == "httplog" {
    grok {
      match => ["message", "%{COMBINEDAPACHELOG}"]
    }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
  if [type] == "httplog" {
    elasticsearch {
      hosts => ["es1", "es2", "es3"]
      index => "weblog"
      flush_size => 2000
      idle_flush_time => 10
    }
  }
}
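The `%{COMBINEDAPACHELOG}` grok pattern in the filter above splits each httpd log line into named fields such as clientip, verb, request, and response. A rough Python equivalent using a plain regex shows the kind of structure grok produces (the sample log line is made up, and this simplified pattern covers only the common case, not every variant the real grok pattern handles):

```python
import re

# Simplified stand-in for grok's COMBINEDAPACHELOG pattern; it handles the
# common Apache "combined" format only and is not the full grok definition.
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# Made-up sample line in Apache combined log format.
line = ('192.168.1.254 - - [10/Jan/2019:10:00:00 +0800] '
        '"GET /index.html HTTP/1.1" 200 4523 "-" "Mozilla/5.0"')

fields = COMBINED.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["request"], fields["response"])
```

Each named group becomes a separate field on the event, which is what lets kibana later filter and chart on clientip, response code, and so on.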
The overall ELK flow: clients access the web server; filebeat on the web server ships the log data to logstash; logstash parses it and forwards it to elasticsearch for indexing and storage; kibana then pulls data from elasticsearch and presents it as web pages and charts.


