Original post: http://www.cnblogs.com/jasonxuli/p/6397244.html
On https://www.elastic.co, Elasticsearch, Logstash (Filebeat), and Kibana each have their own tutorial, and following them is basically enough to get things running.
But that only gets them running ad hoc; to run everything as a service, install from the rpm packages.
System:
# cat /etc/issue
CentOS release 6.5 (Final)
ElasticSearch
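Download and install the rpm first (a minimal sketch; the 5.2.0 package is assumed here to match the Logstash and Kibana versions used below):

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.0.rpm    # version assumed
sudo rpm -vi elasticsearch-5.2.0.rpm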
Installation creates a user named elasticsearch.
Edit the configuration file:
> vim /etc/elasticsearch/elasticsearch.yml

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 127.0.0.1
network.host: 192.168.20.50
#
# Set a custom port for HTTP:
#
http.port: 9200
...
bootstrap.system_call_filter: false
! With the local IP (127.0.0.1), Elasticsearch runs in dev mode: it is only reachable from the local machine, and failed bootstrap checks are only reported as warnings.
! With a LAN IP it becomes reachable from other machines, but it starts in production mode and enforces the bootstrap checks, which may raise errors for unsuitable system settings.
For example:
ERROR: bootstrap checks failed
max file descriptors [65535] for elasticsearch process likely too low, increase to at least [65536]
memory locking requested for elasticsearch process but memory is not locked
max number of threads [1024] for user [jason] likely too low, increase to at least [2048]
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
These parameters need to be adjusted for each of those checks:
> vim /etc/security/limits.conf
...
elasticsearch hard nofile 65536    # for max file descriptors
elasticsearch soft nproc 2048      # for max number of threads

> vim /etc/sysctl.conf
...
vm.max_map_count=262144            # for max virtual memory areas

> vim /etc/elasticsearch/elasticsearch.yml
...
bootstrap.system_call_filter: false    # for "system call filters failed to install", see https://www.elastic.co/guide/en/elasticsearch/reference/current/system-call-filter-check.html
The memory locking error seems to go away once the settings above are changed.
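Note that the sysctl change only takes effect after a reload (or a reboot), and the limits.conf values apply to newly started sessions, so restart Elasticsearch afterwards. To apply and verify the sysctl setting:

sudo sysctl -p             # reload /etc/sysctl.conf
sysctl vm.max_map_count    # should now print vm.max_map_count = 262144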
sudo chkconfig --add elasticsearch    # configure Elasticsearch to start automatically when the system boots up
sudo -i service elasticsearch start
sudo -i service elasticsearch stop
Logs: /var/log/elasticsearch/
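To confirm the node is reachable on the configured LAN IP, a request to the HTTP root should return the cluster info JSON:

curl http://192.168.20.50:9200/    # returns name, cluster_name, version, etc.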
LogStash

Install:
rpm -vi logstash-5.2.0.rpm
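The package itself can be fetched from the artifacts.elastic.co download site (the URL pattern is assumed to match the Filebeat one shown below):

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-5.2.0.rpm    # URL assumed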
This example uses Filebeat to ship a sample Apache web log as Logstash's input; Logstash parses it and writes the data into Elasticsearch.
There is no need to modify logstash.yml yet; Logstash reads the files under /etc/logstash/conf.d/ as pipeline configuration files.
Create a new Logstash pipeline configuration file:
> vim /etc/logstash/conf.d/first-pipeline.conf

input {
    beats {
        port => "5043"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => [ "192.168.20.50:9200" ]
        index => "testlog-%{+YYYY.MM.dd}"
    }
}
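Before wiring up Filebeat, the pipeline file can be syntax-checked; a sketch assuming the default rpm layout (binary under /usr/share/logstash, settings under /etc/logstash):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
    -f /etc/logstash/conf.d/first-pipeline.conf --config.test_and_exit    # should report the configuration is valid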
grok
grok can parse unstructured log data. A site for testing grok filter patterns:
http://grokdebug.herokuapp.com/
The log-matching pattern above is:
%{COMBINEDAPACHELOG}
which is equivalent to:
%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}
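As a worked illustration, using the sample line from the verification section below, %{COMBINEDAPACHELOG} splits an entry into named fields roughly like this:

# input line
1.1.1.3 - - [04/Jan/2015:05:13:42 +0000] "GET /test.png HTTP/1.1" 200 203023 "http://test.com/" "Mozilla/5.0"

# main extracted fields
clientip    => 1.1.1.3
timestamp   => 04/Jan/2015:05:13:42 +0000
verb        => GET
request     => /test.png
httpversion => 1.1
response    => 200
bytes       => 203023
referrer    => "http://test.com/"
agent       => "Mozilla/5.0"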
Start:
sudo initctl start logstash    # run as a service, on systems that use Upstart (such as CentOS 6)
Next, install Filebeat to fetch the data as the input.
The test log first needs to be downloaded and unpacked, as shown below.
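The download URL is the one used in the official Logstash tutorial (assumed still valid):

curl -L -O https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz    # URL assumed
gzip -d logstash-tutorial.log.gz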
Filebeat
The Beats are open source data shippers that you install as agents on your servers to send different types of operational data to Elasticsearch. Beats can send data directly to Elasticsearch or send it to Elasticsearch via Logstash, which you can use to parse and transform the data.
Install:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.0-x86_64.rpm
sudo rpm -vi filebeat-5.2.0-x86_64.rpm
Configure:
> vim /etc/filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/logstash-tutorial.log    # the test file downloaded earlier
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
...
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["localhost:5043"]
Start (the rpm was already installed above):

sudo /etc/init.d/filebeat start
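The configuration can be sanity-checked before starting (the -configtest flag is assumed from the Filebeat 5.x CLI):

/usr/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml    # exits non-zero if the yml is invalid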
Kibana
Kibana 5.2.0 requires Elasticsearch 5.2.0 as well.
Edit the configuration:
> vim /etc/kibana/kibana.yml

server.host: "192.168.20.50"
elasticsearch.url: "http://192.168.20.50:9200"
> sudo chkconfig --add kibana    # start automatically at boot
> sudo -i service kibana start
> sudo -i service kibana stop
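Kibana serves its UI on the default port 5601, so once it is running the dashboard should be reachable at http://192.168.20.50:5601 . On first load it asks for an index pattern; testlog-* matches the indices created by the pipeline above.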
Verification
1. Check the Elasticsearch indices:
http://192.168.20.50:9200/_cat/indices?v
2. Given the index setting in first-pipeline.conf earlier, the index created should be testlog-2017.02.14, and its search API is:
http://192.168.20.50:9200/testlog-2017.02.14/_search?
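The same endpoint accepts Lucene query strings; for example, to fetch only the 200 responses, pretty-printed:

curl 'http://192.168.20.50:9200/testlog-2017.02.14/_search?q=response:200&pretty'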
3. On its first start, filebeat should ship the entire test log file for parsing.
After that, you can append new lines to the log and refresh the ES index URL; if the count for that index goes up, the new lines were parsed successfully. The delay is roughly a few seconds.
echo '1.1.1.3 - - [04/Jan/2015:05:13:42 +0000] "GET /test.png HTTP/1.1" 200 203023 "http://test.com/" "Mozilla/5.0"' >> /var/log/logstash-tutorial.log
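A quick way to watch that count is the _count API (index name as created above; the number should increase a few seconds after appending the line):

curl 'http://192.168.20.50:9200/testlog-2017.02.14/_count?pretty'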