Environment: CentOS 7.6
Packages:
elasticsearch-7.12.0-x86_64.rpm jdk-8u181-linux-x64.rpm kibana-7.12.0-x86_64.rpm logstash-7.12.0-x86_64.rpm
1. Install the JDK
[root@localhost ~]# rpm -ivh jdk-8u181-linux-x64.rpm
warning: jdk-8u181-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:jdk1.8-2000:1.8.0_181-fcs ################################# [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
[root@localhost ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
2. Install ELK
[root@localhost ~]# rpm -ivh elasticsearch-7.12.0-x86_64.rpm
warning: elasticsearch-7.12.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:7.12.0-1 ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore
[root@localhost ~]# rpm -ivh logstash-7.12.0-x86_64.rpm
warning: logstash-7.12.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:logstash-1:7.12.0-1 ################################# [100%]
Using bundled JDK: /usr/share/logstash/jdk
Using provided startup.options file: /etc/logstash/startup.options
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/pleaserun-0.0.31/lib/pleaserun/platform/base.rb:112: warning: constant ::Fixnum is deprecated
Successfully created system startup script for Logstash
[root@localhost ~]# rpm -ivh kibana-7.12.0-x86_64.rpm
warning: kibana-7.12.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:kibana-7.12.0-1 ################################# [100%]
Creating kibana group... OK
Creating kibana user... OK
Created Kibana keystore in /etc/kibana/kibana.keystore
3. Edit the configuration files
[root@localhost ~]# grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: elk
node.name: node-1
path.data: /elk/data
path.logs: /elk/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 10.197.10.207
cluster.initial_master_nodes: ["node-1"]
[root@localhost ~]# grep "^[a-zA-Z]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "10.197.10.207"
elasticsearch.hosts: ["http://10.197.10.207:9200"]
4. Start Elasticsearch and Kibana
[root@localhost ~]# /etc/init.d/kibana start
Starting kibana (via systemctl): [ OK ]
[root@localhost ~]# /etc/init.d/elasticsearch start
Starting elasticsearch (via systemctl): Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
[FAILED]
Elasticsearch fails to start here because the data and log directories set in elasticsearch.yml are not owned by the elasticsearch user; fix the ownership:
[root@localhost ~]# chown -R elasticsearch.elasticsearch /elk/
[root@localhost ~]# /etc/init.d/elasticsearch start
Starting elasticsearch (via systemctl): [ OK ]
Once started, Elasticsearch is reachable at http://10.197.10.207:9200
The cluster health can be checked at:
http://10.197.10.207:9200/_cluster/health?pretty=true
The endpoint returns JSON, so the fields can be parsed with Python — for
example the status field: green means the cluster is healthy, yellow means
replica shards are missing, and red means primary shards are missing.
An example Python script:
[root@localhost ~]# more es-cluster-monitor.py
#!/usr/bin/env python
#coding:utf-8
import json
import subprocess

# Fetch cluster health and print a value a monitoring system can alert on
obj = subprocess.Popen("curl -sXGET http://10.197.10.207:9200/_cluster/health?pretty=true",
                       shell=True, stdout=subprocess.PIPE)
data = obj.stdout.read()
status = json.loads(data).get("status")
if status == "green":
    print("50")
else:
    print("100")
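The script's status check can be exercised offline against a canned response (the sample JSON below is illustrative, not a live reply from this cluster):

```python
import json

# Canned example of a /_cluster/health reply (fields illustrative)
sample = '{"cluster_name": "elk", "status": "green", "number_of_nodes": 1}'

def health_value(body):
    """Mirror the script's logic: 50 when green, 100 otherwise."""
    status = json.loads(body).get("status")
    return "50" if status == "green" else "100"

print(health_value(sample))                  # 50
print(health_value('{"status": "red"}'))     # 100
```

The 50/100 outputs are arbitrary thresholds for a monitoring item; any alerting system that runs the script can trigger on the returned value.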
Kibana UI: http://10.197.10.207:5601
Status page:
http://10.197.10.207:5601/status
5. Set up Logstash
Logstash is an open-source data collection engine that can scale
horizontally. It has the largest plugin ecosystem of the ELK components,
and it can ingest data from many different sources and ship it to one or
more destinations.
Change the ownership to the logstash user and group, otherwise startup fails with permission errors in the log:
chown -R logstash.logstash /usr/share/logstash/
Start the service: systemctl start logstash
Enable at boot: systemctl enable logstash
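Beyond the -e one-liner tests that follow, pipelines normally live as files under /etc/logstash/conf.d/. A minimal sketch (the file name, log path, and index name here are examples, not from this setup):

```
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["10.197.10.207:9200"]
    index => "syslog_%{+YYYY.MM.dd}"
  }
}
```

Restart the service (systemctl restart logstash) to pick up a new pipeline file.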
# Test: standard input to standard output
[root@localhost ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug }}'
# Type 123 manually
123
{
# version
"@version" => "1",
# hostname
"host" => "m01",
# the input text
"message" => "123",
# timestamp
"@timestamp" => 2020-05-20T02:15:35.798Z
}
[root@localhost ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { file { path => "/tmp/test_%{+YYYY.MM.dd}.log" }}'
# Type: test output file
test output file
[INFO ] 2021-06-08 00:37:35.980 [[main]>worker4] file - Opening file {:path=>"/tmp/test_2021.06.08.log"}
[INFO ] 2021-06-08 00:37:47.112 [[main]>worker4] file - Closing file /tmp/test_2021.06.08.log
Verify by opening the file:
[root@localhost ~]# tailf /tmp/test_2021.06.08.log
{"host":"localhost.localdomain","@timestamp":"2021-06-08T04:37:35.738Z","message":"test output file","@version":"1"}
{"host":"localhost.localdomain","@timestamp":"2021-06-08T04:37:52.474Z","message":"","@version":"1"}
{"host":"localhost.localdomain","@timestamp":"2021-06-08T04:37:58.796Z","message":"","@version":"1"}
[root@localhost ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.197.10.207:9200"] index => "test_%{+YYYY-MM-dd}" } }'
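The %{+YYYY-MM-dd} pattern in the index option is expanded from each event's @timestamp (in UTC), so events are split into one index per day — roughly, in Python terms:

```python
from datetime import datetime, timezone

def index_for(ts):
    # Logstash expands index => "test_%{+YYYY-MM-dd}" from the event
    # @timestamp; strftime with an equivalent format shows the effect
    return ts.strftime("test_%Y-%m-%d")

print(index_for(datetime(2021, 6, 8, 0, 42, tzinfo=timezone.utc)))  # test_2021-06-08
```

Daily indices keep each index small and make retention easy (old days can be deleted whole).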
[root@localhost ~]# ll /elk/data/nodes/0/indices/
total 0
drwxr-xr-x 4 elasticsearch elasticsearch 29 Jun 7 23:40 3ZVteNkuSoO9XeXNzDrgbw
drwxr-xr-x 4 elasticsearch elasticsearch 29 Jun 7 23:40 4korcDPTQQarsao4UA8WvA
drwxr-xr-x 4 elasticsearch elasticsearch 29 Jun 8 00:42 bWvRgb6nSFaKayt7YQKetg
drwxr-xr-x 4 elasticsearch elasticsearch 29 Jun 7 23:40 tMJSl_dhR323D34NluNRjQ
drwxr-xr-x 4 elasticsearch elasticsearch 29 Jun 7 23:40 UFj6f3-URKyfoi_aBQFXSQ
drwxr-xr-x 4 elasticsearch elasticsearch 29 Jun 7 23:40 uZfc5OC1TJ29EEWj07IsGw
drwxr-xr-x 4 elasticsearch elasticsearch 29 Jun 7 23:40 xHYYR8AMSnKGtYZg12tYCQ
The data can then be visualized in Kibana:
Stack Management / Index patterns -> Create index pattern
Enter test* as the index pattern name, click Next step, then under
Time field select @timestamp and click Create index pattern.
The data is then visible.