Deployment Notes: ELK + Redis Log Analysis Platform Cluster on CentOS 7


Reposted from http://www.cnblogs.com/kevingrace/p/9104423.html

An earlier document covered the basics of the ELK architecture (see http://blog.oldboyedu.com/elk/ for reference). Common implementation options for a centralized log analysis system are:
- ELK+Redis
- ELK+Filebeat 
- ELK+Filebeat+Redis
- ELK+Filebeat+Kafka+ZooKeeper

ELK is often further refined into EFK, where the F stands for Filebeat. Filebeat is a lightweight data collection engine built from the source code of the original Logstash-forwarder. In other words, Filebeat is the successor to Logstash-forwarder and the first choice for the shipper side of the ELK Stack.

This document uses the ELK + Redis approach. Below is a brief record of deploying an ELK + Redis log analysis platform cluster; the overall architecture is as follows:

+ Elasticsearch is a distributed search and analytics engine; stability, horizontal scalability, and ease of management were its main design goals.
+ Logstash is a flexible pipeline for collecting, processing, and transporting data.
+ Kibana is a data visualization platform that lets you interact with data by turning it into rich, powerful visualizations.

Integrating the three, collection and processing, storage and analysis, and visualization, is what forms ELK.

Basic flow:
1) A Logstash shipper collects log entries and sends them to Redis.
2) Redis acts as a message queue here, so that logs are not lost if the ElasticSearch service has problems (see the quick check sketched after this list).
3) A Logstash indexer reads the log entries from Redis and sends them to ElasticSearch.
4) ElasticSearch stores the logs and makes them searchable.
5) Kibana is the visualization front end for ElasticSearch.
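Once the whole pipeline is running, a quick way to see Redis playing the queue role described in step 2 is to check the length of the list key that a shipper writes to. This is only a sketch: the VIP 192.168.10.217, db 1 and the key name nc-log below come from the example configuration later in this document, so adjust them to your own setup.

redis-cli -h 192.168.10.217 -n 1 llen nc-log        # number of events currently buffered, waiting to be indexed
redis-cli -h 192.168.10.217 -n 1 lrange nc-log 0 0  # peek at the oldest queued event without removing it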

1) Machine environment

Hostname        IP address          Services deployed
elk-node01      192.168.10.213      es01, redis01
elk-node02      192.168.10.214      es02, redis02 (VIP: 192.168.10.217)
elk-node03      192.168.10.215      es03, kibana, nginx

All three nodes run CentOS 7.4
[root@elk-node01 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

Set the hostname on each of the three nodes
[root@localhost ~]# hostname elk-node01
[root@localhost ~]# hostnamectl set-hostname elk-node01

Stop the firewall and disable SELinux on all three nodes
[root@elk-node01 ~]# systemctl stop firewalld.service
[root@elk-node01 ~]# systemctl disable firewalld.service
[root@elk-node01 ~]# firewall-cmd --state
not running

[root@elk-node01 ~]# setenforce 0
[root@elk-node01 ~]# getenforce
Disabled
[root@elk-node01 ~]# vim /etc/sysconfig/selinux
......
SELINUX=disabled

Add hosts entries on all three nodes
[root@elk-node01 ~]# cat /etc/hosts
......
192.168.10.213 elk-node01
192.168.10.214 elk-node02
192.168.10.215 elk-node03

Synchronize the system time on all three nodes
[root@elk-node01 ~]# yum install -y ntpdate
[root@elk-node01 ~]# ntpdate ntp1.aliyun.com

Deploy the Java 8 environment on all three nodes
Download: https://pan.baidu.com/s/1pLaAjPp
Extraction code: x27s

[root@elk-node01 ~]# rpm -ivh jdk-8u131-linux-x64.rpm --force
[root@elk-node01 ~]# vim /etc/profile
......
JAVA_HOME=/usr/java/jdk1.8.0_131
JAVA_BIN=/usr/java/jdk1.8.0_131/bin
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/bin:/sbin/
CLASSPATH=.:/lib/dt.jar:/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH

[root@elk-node01 ~]# source /etc/profile
[root@elk-node01 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

2) Deploy the ElasticSearch cluster

a) Install Elasticsearch (run on all three nodes; during deployment all three machines need working outbound internet access)
[root@elk-node01 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

[root@elk-node01 ~]# yum install -y elasticsearch
 
b) Configure the Elasticsearch cluster
Configuration on elk-node01
[root@elk-node01 ~]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
[root@elk-node01 ~]# cat /etc/elasticsearch/elasticsearch.yml|grep -v "#"
cluster.name: kevin-elk         #cluster name; must be identical on all three nodes
node.name: elk-node01           #node name, usually the local hostname; it must resolve, i.e. be bound in /etc/hosts on every node
path.data: /data/es-data        #data directory; it must be owned by the elasticsearch user
path.logs: /var/log/elasticsearch        #log path (this is the default)
network.host: 192.168.10.213        #address the service binds to, usually the local node IP; 0.0.0.0 also works
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]       #hosts in the cluster; the nodes discover each other and elect the master automatically

[root@elk-node01 ~]# mkdir -p /data/es-data
[root@elk-node01 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node01 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node01 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/

[root@elk-node01 ~]# systemctl daemon-reload
[root@elk-node01 ~]# systemctl enable elasticsearch
[root@elk-node01 ~]# systemctl start elasticsearch
[root@elk-node01 ~]# systemctl status elasticsearch
[root@elk-node01 ~]# lsof -i:9200
 
-------------------------------------------------------------------------------------
Configuration on elk-node02
[root@elk-node02 ~]# cat /etc/elasticsearch/elasticsearch.yml |grep -v "#"
cluster.name: kevin-elk
node.name: elk-node02
path.data: /data/es-data
path.logs: /var/log/elasticsearch
network.host: 192.168.10.214
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]

[root@elk-node02 ~]# mkdir -p /data/es-data
[root@elk-node02 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node02 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node02 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/

[root@elk-node02 ~]# systemctl daemon-reload
[root@elk-node02 ~]# systemctl enable elasticsearch
[root@elk-node02 ~]# systemctl start elasticsearch
[root@elk-node02 ~]# systemctl status elasticsearch
[root@elk-node02 ~]# lsof -i:9200
 
-------------------------------------------------------------------------------------
Configuration on elk-node03
[root@elk-node03 ~]# cat /etc/elasticsearch/elasticsearch.yml|grep -v "#"
cluster.name: kevin-elk
node.name: elk-node03
path.data: /data/es-data
path.logs: /var/log/elasticsearch
network.host: 192.168.10.215
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.213", "192.168.10.214", "192.168.10.215"]

[root@elk-node03 ~]# mkdir -p /data/es-data
[root@elk-node03 ~]# mkdir -p /var/log/elasticsearch/
[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch /data/es-data
[root@elk-node03 ~]# chown -R elasticsearch.elasticsearch /var/log/elasticsearch/

[root@elk-node03 ~]# systemctl daemon-reload
[root@elk-node03 ~]# systemctl enable elasticsearch
[root@elk-node03 ~]# systemctl start elasticsearch
[root@elk-node03 ~]# systemctl status elasticsearch
[root@elk-node03 ~]# lsof -i:9200
 
c) View Elasticsearch cluster information (the commands below can be run on any of the nodes)
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/nodes'
192.168.10.213 192.168.10.213 8 49 0.01 d * elk-node01         #the node marked with * is the elected master
192.168.10.214 192.168.10.214 8 49 0.00 d m elk-node02
192.168.10.215 192.168.10.215 8 59 0.00 d m elk-node03

Append ?v for verbose output
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/nodes?v'
host           ip             heap.percent ram.percent load node.role master name
192.168.10.213 192.168.10.213            8          49 0.00 d         *      elk-node01
192.168.10.214 192.168.10.214            8          49 0.06 d         m      elk-node02
192.168.10.215 192.168.10.215            8          59 0.00 d         m      elk-node03
 
Query the cluster state
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/state/nodes?pretty'
{
  "cluster_name" : "kevin-elk",
  "nodes" : {
    "1GGuoA9FT62vDw978HSBOA" : {
      "name" : "elk-node01",
      "transport_address" : "192.168.10.213:9300",
      "attributes" : { }
    },
    "EN8L2mP_RmipPLF9KM5j7Q" : {
      "name" : "elk-node02",
      "transport_address" : "192.168.10.214:9300",
      "attributes" : { }
    },
    "n75HL99KQ5GPqJDk6F2W2A" : {
      "name" : "elk-node03",
      "transport_address" : "192.168.10.215:9300",
      "attributes" : { }
    }
  }
}

Query the cluster's master
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/state/master_node?pretty'
{
  "cluster_name" : "kevin-elk",
  "master_node" : "1GGuoA9FT62vDw978HSBOA"
}

Or
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/master?v'
id                      host           ip             node
1GGuoA9FT62vDw978HSBOA 192.168.10.213 192.168.10.213 elk-node01

Query cluster health (three possible states: green, yellow, red; green means healthy)
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cat/health?v'
epoch      timestamp cluster   status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1527576950 14:55:50  kevin-elk green           3         3      0   0    0    0        0             0                  -                100.0%

Or
[root@elk-node01 ~]# curl -XGET 'http://192.168.10.213:9200/_cluster/health?pretty'
{
  "cluster_name" : "kevin-elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
 
 
d) Install Elasticsearch plugins online (run on all three nodes; the machines need outbound internet access)
Install the head plugin
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ......DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /usr/share/elasticsearch/plugins/head

Install the kopf plugin
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading ......DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed kopf into /usr/share/elasticsearch/plugins/kopf

Install the bigdesk plugin
[root@elk-node01 ~]# /usr/share/elasticsearch/bin/plugin install hlstudio/bigdesk
-> Installing hlstudio/bigdesk...
Trying https://github.com/hlstudio/bigdesk/archive/master.zip ...
Downloading ......DONE
Verifying https://github.com/hlstudio/bigdesk/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed bigdesk into /usr/share/elasticsearch/plugins/bigdesk

After installing the three plugins, remember to fix ownership of the plugins directory and restart the elasticsearch service
[root@elk-node01 ~]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node01 ~]# ll /usr/share/elasticsearch/plugins
total 4
drwxr-xr-x. 3 elasticsearch elasticsearch  124 May 29 14:58 bigdesk
drwxr-xr-x. 6 elasticsearch elasticsearch 4096 May 29 14:56 head
drwxr-xr-x. 8 elasticsearch elasticsearch  230 May 29 14:57 kopf
[root@elk-node01 ~]# systemctl restart elasticsearch
[root@elk-node01 ~]# lsof -i:9200                         #after the restart it takes a little while for port 9200 to come back up
COMMAND   PID          USER   FD   TYPE DEVICE SIZE/OFF  NODE NAME
java    31855 elasticsearch  107u  IPv6  87943      0t0  TCP elk-node01:wap-wsp (LISTEN)

The plugin status pages can then be viewed directly at http://<ip>:9200/_plugin/<plugin-name>
In the head cluster-management UI, the node marked with a star is the master;
since the plugins were installed on all three nodes, the plugin pages can be reached from any of them.

For example, using elk-node01's IP, the three plugins are reachable at http://192.168.10.213:9200/_plugin/head, http://192.168.10.213:9200/_plugin/kopf and http://192.168.10.213:9200/_plugin/bigdesk.
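Besides opening the plugin URLs in a browser, the installed plugins can also be listed from the command line. This quick check is not part of the original write-up, but the _cat/plugins endpoint is available in this Elasticsearch 2.x release:

curl 'http://192.168.10.213:9200/_cat/plugins?v'        # head, kopf and bigdesk should be listed for every node they were installed on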

3) Redis + Keepalived high-availability deployment notes

See this separate document: https://www.cnblogs.com/kevingrace/p/9001975.html
The deployment steps are omitted here.

[root@elk-node01 ~]# redis-cli -h 192.168.10.213 INFO|grep role
role:master
[root@elk-node01 ~]# redis-cli -h 192.168.10.214 INFO|grep role
role:slave
[root@elk-node01 ~]# redis-cli -h 192.168.10.217 INFO|grep role
role:master

[root@elk-node01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ae:01:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.213/24 brd 192.168.10.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.10.217/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::7562:4278:d71d:f862/64 scope link
       valid_lft forever preferred_lft forever

That is, the Redis master initially runs on elk-node01, which also holds the VIP 192.168.10.217.
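A quick way to sanity-check the failover is to stop Redis on the current master and confirm the VIP is still answered by the other node. This is only a hedged sketch: the unit name "redis" and the exact promotion behavior depend on the Redis + Keepalived setup described in the referenced document.

[root@elk-node01 ~]# systemctl stop redis                                        # stop the current master (unit name assumed)
[root@elk-node01 ~]# redis-cli -h 192.168.10.217 INFO replication | grep role    # the VIP should now be served by elk-node02 and still report role:master
[root@elk-node01 ~]# systemctl start redis                                       # bring the original master back afterwards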

4) Kibana and nginx reverse-proxy deployment (access control); performed on the elk-node03 node

a) Install and configure Kibana (official download page: https://www.elastic.co/downloads)
[root@elk-node03 ~]# cd /usr/local/src/
[root@elk-node03 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.6.6-linux-x86_64.tar.gz
[root@elk-node03 src]# tar -zvxf kibana-4.6.6-linux-x86_64.tar.gz

Because quite a few business systems are maintained here, access to each system's logs in Kibana should only be open to the people responsible for that system and closed to everyone else. Kibana access control is therefore needed,
and it is implemented here through nginx's authentication configuration.

Multiple Kibana instances can be run, one port per system: for example, the finance system's Kibana on port 5601 and the leasing system's Kibana on port 5602, with nginx configured as a proxy in front.
Each system's logs are then displayed only in the Kibana instance on its own port.

[root@elk-node03 src]# cp -r kibana-4.6.6-linux-x86_64 /usr/local/nc-5601-kibana
[root@elk-node03 src]# cp -r kibana-4.6.6-linux-x86_64 /usr/local/zl-5602-kibana
[root@elk-node03 src]# ll -d /usr/local/*-kibana
drwxr-xr-x. 11 root root 203 May 29 16:49 /usr/local/nc-5601-kibana
drwxr-xr-x. 11 root root 203 May 29 16:49 /usr/local/zl-5602-kibana

Edit the configuration files:
[root@elk-node03 src]# vim /usr/local/nc-5601-kibana/config/kibana.yml
......
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.10.213:9200"                 #address of the elasticsearch master node
kibana.index: ".nc-kibana"

[root@elk-node03 src]# vim /usr/local/zl-5602-kibana/config/kibana.yml
......
server.port: 5602
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.10.213:9200"
kibana.index: ".zl-kibana"

Install screen and start Kibana
[root@elk-node03 src]# yum -y install screen

[root@elk-node03 src]# screen
[root@elk-node03 src]# /usr/local/nc-5601-kibana/bin/kibana          #press ctrl+a then d to detach and leave it running in the background

[root@elk-node03 src]# screen
[root@elk-node03 src]# /usr/local/zl-5602-kibana/bin/kibana          #press ctrl+a then d to detach and leave it running in the background

[root@elk-node03 src]# lsof -i:5601
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF  NODE NAME
node    32627 root   13u  IPv4 1028042      0t0  TCP *:esmagent (LISTEN)

[root@elk-node03 src]# lsof -i:5602
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF  NODE NAME
node    32659 root   13u  IPv4 1029133      0t0  TCP *:a1-msc (LISTEN)
 
--------------------------------------------------------------------------------------
Next, configure nginx as a reverse proxy with access authentication
[root@elk-node03 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel
[root@elk-node03 ~]# cd /usr/local/src/
[root@elk-node03 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
[root@elk-node03 src]# tar -zvxf nginx-1.9.7.tar.gz
[root@elk-node03 src]# cd nginx-1.9.7
[root@elk-node03 nginx-1.9.7]# useradd www -M -s /sbin/nologin
[root@elk-node03 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@elk-node03 nginx-1.9.7]# make && make install

nginx configuration
[root@elk-node03 nginx-1.9.7]# cd /usr/local/nginx/conf/
[root@elk-node03 conf]# cp nginx.conf nginx.conf.bak
[root@elk-node03 conf]# cat nginx.conf
user  www;
worker_processes  8;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  65535;
}


http {
    include       mime.types;
    default_type  application/octet-stream;
    charset utf-8;

    ######
    ## set access log format
    ######
    log_format  main  '$http_x_forwarded_for $remote_addr $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_cookie" $host $request_time';

    #######
    ## http setting
    #######
    sendfile       on;
    tcp_nopush     on;
    tcp_nodelay    on;
    keepalive_timeout  65;
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
    proxy_temp_path /var/www/cache/tmp;

    fastcgi_connect_timeout 3000;
    fastcgi_send_timeout 3000;
    fastcgi_read_timeout 3000;
    fastcgi_buffer_size 256k;
    fastcgi_buffers 8 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;

    #
    client_header_timeout 600s;
    client_body_timeout 600s;
    # client_max_body_size 50m;
    client_max_body_size 100m;
    client_body_buffer_size 256k;

    gzip  on;
    gzip_min_length  1k;
    gzip_buffers     4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 9;
    gzip_types       text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary on;

    ## includes vhosts
    include vhosts/*.conf;
}


[root@elk-node03 conf]# mkdir vhosts
[root@elk-node03 conf]# cd vhosts/
[root@elk-node03 vhosts]# vim nc_kibana.conf
server {
    listen 15601;
    server_name localhost;

    location / {
      proxy_pass http://192.168.10.215:5601/;
      auth_basic "Access Authorized";
      auth_basic_user_file /usr/local/nginx/conf/nc_auth_password;
    }
}

[root@elk-node03 vhosts]# vim zl_kibana.conf
server {
    listen 15602;
    server_name localhost;

    location / {
      proxy_pass http://192.168.10.215:5602/;
      auth_basic "Access Authorized";
      auth_basic_user_file /usr/local/nginx/conf/zl_auth_password;
    }
}


[root@elk-node03 vhosts]# /usr/local/nginx/sbin/nginx
[root@elk-node03 vhosts]# /usr/local/nginx/sbin/nginx -s reload
[root@elk-node03 vhosts]# lsof -i:15601
[root@elk-node03 vhosts]# lsof -i:15602
---------------------------------------------------------------------------------------------
Set up authenticated access
Create an htpasswd-style password file (if the htpasswd command is missing, install it via "yum install -y *htpasswd*" or "yum install -y httpd"; the command is provided by the httpd-tools package)
[root@elk-node03 vhosts]# yum install -y *htpasswd*

Create the credentials for accessing the finance system's Kibana
[root@elk-node03 vhosts]# htpasswd -c /usr/local/nginx/conf/nc_auth_password nclog
New password:
Re-type new password:
Adding password for user nclog
[root@elk-node03 vhosts]# cat /usr/local/nginx/conf/nc_auth_password
nclog:$apr1$WLHsdsCP$PLLNJB/wxeQKy/OHp/7o2.

Create the credentials for accessing the leasing system's Kibana
[root@elk-node03 vhosts]# htpasswd -c /usr/local/nginx/conf/zl_auth_password zllog
New password:
Re-type new password:
Adding password for user zllog
[root@elk-node03 vhosts]# cat /usr/local/nginx/conf/zl_auth_password
zllog:$apr1$dRHpzdwt$yeJxnL5AAQh6A6MJFPCEM1

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tips for using the htpasswd command
1) To generate the password file for the first time, use the -c option followed by a username; the password cannot be passed on the command line and has to be typed twice interactively.
# htpasswd -c /usr/local/nginx/conf/nc_auth_password nclog

2) Once the file exists, add further users with the -b option, which allows the username and password directly on the command line.
   Do not use -c again at this point, or the previously created users will be overwritten.
# htpasswd -b /usr/local/nginx/conf/nc_auth_password kevin kevin@123

3) To delete a user, use the -D option.
# htpasswd -D /usr/local/nginx/conf/nc_auth_password kevin

4) To change a user's password, delete the user and create it again:
# htpasswd -D /usr/local/nginx/conf/nc_auth_password kevin
# htpasswd -b /usr/local/nginx/conf/nc_auth_password kevin keivn@#2312
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
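To confirm that the nginx proxy and the basic auth work end to end, a quick check (a sketch; substitute the password actually set above for <password>) is to request the proxied port with and without credentials:

[root@elk-node03 vhosts]# curl -I http://192.168.10.215:15601/                                                         # without credentials nginx should answer 401
[root@elk-node03 vhosts]# curl -s -o /dev/null -w '%{http_code}\n' -u nclog:'<password>' http://192.168.10.215:15601/  # with valid credentials a 200 from the proxied Kibana is expected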

5) Log collection on client machines (Logstash)

1) Install logstash
[root@elk-client ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

[root@elk-client ~]# yum install -y logstash
[root@elk-client ~]# ll -d /opt/logstash/
drwxr-xr-x. 5 logstash logstash 160 May 29 17:45 /opt/logstash/

2) Adjust the Java environment.
Some servers are stuck on Java 6 or 7 because of constraints in the business code they run, while newer Logstash releases require Java 8.
In that case, the only option is to configure a separate Java 8 environment used just by Logstash.
[root@elk-client ~]# java -version
java version "1.6.0_151"
OpenJDK Runtime Environment (rhel-2.6.11.0.el6_9-x86_64 u151-b00)
OpenJDK 64-Bit Server VM (build 24.151-b00, mixed mode)

Download jdk-8u172-linux-x64.tar.gz and put it under /usr/local/src
Download: https://pan.baidu.com/s/1z3L4Q24AuHA2r6KT6oT9vw
Extraction code: dprz

[root@elk-client ~]# cd /usr/local/src/
[root@elk-client src]# tar -zvxf jdk-8u172-linux-x64.tar.gz
[root@elk-client src]# mv jdk1.8.0_172 /usr/local/

Append the following two lines to the end of /etc/sysconfig/logstash:
[root@elk-client src]# vim /etc/sysconfig/logstash
.......
JAVA_CMD=/usr/local/jdk1.8.0_172/bin
JAVA_HOME=/usr/local/jdk1.8.0_172

Add the following line to /opt/logstash/bin/logstash.lib.sh:
[root@elk-client src]# vim /opt/logstash/bin/logstash.lib.sh
.......
export JAVA_HOME=/usr/local/jdk1.8.0_172

With this in place, logstash no longer reports Java version errors when collecting logs.
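A quick sanity check (not part of the original article) is to confirm that the standalone JDK runs and that both edited files now reference it:

[root@elk-client src]# /usr/local/jdk1.8.0_172/bin/java -version                                         # should report 1.8.0_172
[root@elk-client src]# grep -H jdk1.8.0_172 /etc/sysconfig/logstash /opt/logstash/bin/logstash.lib.sh    # both files should show the new JAVA_HOME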
 
3) Collect logs with logstash
------------------------------------------------------------
For example, collecting the finance system's logs
[root@elk-client ~]# mkdir /opt/nc
[root@elk-client ~]# cd /opt/nc
[root@elk-client nc]# vim redis-input.conf
input {
    file {
       path => "/data/nc-tomcat/logs/catalina.out"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^[a-zA-Z0-9]|[^ ]+"            #match log entries starting with a letter (upper or lower case), a digit, or other non-blank content
           negate => true
           what => "previous"
       }
    }
}

output {
    if [type] == "nc-log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "1"
          data_type => "list"
          key => "nc-log"
       }
     }
}

[root@elk-client nc]# vim file.conf
input {
     redis {
        type => "nc-log"
        host => "192.168.10.217"                   #VIP of the Redis HA pair
        port => "6379"
        db => "1"
        data_type => "list"
        key => "nc-log"
     }
}


output {
    if [type] == "nc-log" {
        elasticsearch {
           hosts => ["192.168.10.213:9200"]              #address of the elasticsearch cluster (master node)
           index => "nc-app01-nc-log-%{+YYYY.MM.dd}"
        }
    }
}

Verify that the logstash configuration files are valid
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/redis-input.conf --configtest
Configuration OK
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/file.conf --configtest
Configuration OK

Start the logstash processes
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/redis-input.conf &
[root@elk-client nc]# /opt/logstash/bin/logstash -f /opt/nc/file.conf &
[root@elk-client nc]# ps -ef|grep logstash
 
-------------------------------------------------------------------
Similarly, collecting the leasing system's logs
[root@elk-client ~]# mkdir /opt/zl
[root@elk-client ~]# cd /opt/zl
[root@elk-client zl]# vim redis-input.conf
input {
    file {
       path => "/data/zl-tomcat/logs/catalina.out"
       type => "zl-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^[a-zA-Z0-9]|[^ ]+"
           negate => true
           what => "previous"
       }
    }
}

output {
    if [type] == "zl-log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "2"
          data_type => "list"
          key => "zl-log"
       }
     }
}
[root@elk-client zl]# vim file.conf
input {
     redis {
        type => "zl-log"
        host => "192.168.10.217"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "zl-log"
     }
}


output {
    if [type] == "zl-log" {
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "zl-app01-zl-log-%{+YYYY.MM.dd}"
        }
    }
}

[root@elk-client zl]# /opt/logstash/bin/logstash -f /opt/zl/redis-input.conf --configtest
Configuration OK
[root@elk-client zl]# /opt/logstash/bin/logstash -f /opt/zl/file.conf --configtest
Configuration OK
[root@elk-client zl]# ps -ef|grep logstash

When new entries are written to the finance and leasing logs above, logstash picks them up and they are eventually displayed in each system's own Kibana.

The collected logs can be viewed in the head plugin (after the logstash processes are started, indices only appear in the head UI once new log data has actually been written).
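Besides the head UI, the newly created indices can also be listed from the command line. This check is not in the original write-up, but it only uses the standard _cat API:

[root@elk-client nc]# curl 'http://192.168.10.213:9200/_cat/indices?v' | grep -E 'nc-app01|zl-app01'    # the nc-*/zl-* indices appear once data has flowed in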

Add the finance system's log view in its Kibana


Add the leasing system's log view in its Kibana

======== Notes on Logstash's multiline plugin (matching multi-line logs) ========
When processing logs, besides access logs there are also runtime logs, which are mostly written by programs, for example via log4j. The biggest difference between runtime logs and access logs is that runtime logs span multiple lines: several consecutive lines together express one event. If they can be handled as multi-line events, splitting them into fields becomes easy. This is what Logstash's multiline plugin is for: matching multi-line log entries. First look at the following Java log:

[2016-05-20 11:54:24,106][INFO][cluster.metadata ] [node-1][.kibana] creating index,cause [api],template [],shards [1]/[1],mappings [config]
      
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
      at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
      at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
      at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
      at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
      at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
      at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
      at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
      at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
      at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
      at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)

Now look at how these log lines are displayed in Kibana:

As you can see, each "at ..." line actually belongs to the same event, yet Logstash displays them as separate entries, which makes the log hard to read. To solve this, the file plugin of the Logstash input stage can be used with its codec sub-option set to multiline. The official description of the multiline plugin is "Merges multiline messages into a single event", i.e. it merges multiple lines into a single event.

Looking at the Java log on the client machine, every individual event starts with a square bracket "[", so that bracket can be used as the marker and combined with the multiline plugin to merge the related lines. The plugin is used as follows; the essential meaning is "merge every line that does not start with [ into the previous event":

[root@elk-client zl]# vim redis-input.conf
input {
    file {
       path => "/data/zl-tomcat/logs/catalina.out"
       type => "zl-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => previous
       }
    }
}

output {
    if [type] == "zl-log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "2"
          data_type => "list"
          key => "zl-log"
       }
     }
}

Explanation:
pattern => "^\["       A regular expression used for matching. How multi-line logs are matched depends on the actual log format; here the marker is "[", but any regex that fits the logs can be used.
negate => true         negate controls how the pattern result is interpreted: the default false means lines that match the pattern trigger the action, while true means lines that do NOT match do. Here, with true, lines that do not start with "[" are the ones that get merged in the next step.
what => previous       Either previous or next: previous means the codec merges the matched content with the preceding lines, next means with the following lines.

After the plugin merges the lines, the information is much more readable in Kibana.

multiline settings
For the multiline plugin, three settings are the most important: negate, pattern, and what. A quick way to try them out is sketched after this list.
negate
- type: boolean
- default: false
Negates the regular expression, i.e. applies the merge behaviour to lines that do not match.

pattern
- required
- type: string
- no default
The regular expression to match.

what
- required
- must be previous or next
- no default
If the regular expression matches, does the event belong to the next or the previous event?
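A convenient way to experiment with pattern, negate and what before touching a production config (a hedged sketch, not from the original article) is a throwaway logstash run that reads from stdin and prints the merged events to stdout; paste a few log lines, then press Ctrl+D:

/opt/logstash/bin/logstash -e '
input {
  stdin {
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output { stdout { codec => rubydebug } }
'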

==============================================
Another example:

Look at the following Java log:
[root@elk-client ~]# tail -f /data/nc-tomcat/logs/catalina.out
........
........
$$callid=1527643542536-4261 $$thread=[WebContainer : 23] $$host=10.0.52.21 $$userid=1001A6100000000006KR $$ts=2018-05-30 09:25:42 $$remotecall=[nc.bs.dbcache.intf.ICacheVersionBS] $$debuglevel=ERROR  $$msg=<Select CacheTabName, CacheTabVersion From BD_cachetabversion where CacheTabVersion >= null order by CacheTabVersion desc>throws ORA-00942: 表或視圖不存在

$$callid=1527643542536-4261 $$thread=[WebContainer : 23] $$host=10.0.52.21 $$userid=1001A6100000000006KR $$ts=2018-05-30 09:25:42 $$remotecall=[nc.bs.dbcache.intf.ICacheVersionBS] $$debuglevel=ERROR  $$msg=sql original exception
java.sql.SQLException: ORA-00942: 表或視圖不存在

       at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
       at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
       at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
       at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
       at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
       at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
       at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
       at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
       at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
       at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
       at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
       at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)
       at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1203)
       at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecuteQuery(WSJdbcPreparedStatement.java:1110)
       at com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeQuery(WSJdbcPreparedStatement.java:712)
       at nc.jdbc.framework.crossdb.CrossDBPreparedStatement.executeQuery(CrossDBPreparedStatement.java:103)
       at nc.jdbc.framework.JdbcSession.executeQuery(JdbcSession.java:297)

As the log above shows, every individual event starts with "$", so that character can be used as the marker and combined with the multiline plugin to merge the related lines.
[root@elk-client nc]# vim redis-input.conf
input {
    file {
       path => "/data/nc-tomcat/logs/catalina.out"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\$"         #match log lines starting with $ (if each log line instead starts with a date such as "2018-05-30 11:42...", configure pattern => "^[0-9]" to match lines starting with a digit)
           negate => true           #apply to the lines that do NOT match the pattern
           what => "previous"       #merge those non-matching lines into the previous event
       }
    }
}

output {
    if [type] == "nc-log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "1"
          data_type => "list"
          key => "nc-log"
       }
     }
}

After this change, the merged multi-line events show up in Kibana (if a merged event is too long, click the small arrow on the entry and open the message field directly to see the full merged content).

=======================================================
In the example above, Chinese text in the logs collected by logstash shows up garbled in Kibana.

This is fixed by specifying the character encoding in the logstash collection config. Use the "file" command to check the character encoding of the log file in question:
1) If the command reports the log is UTF-8, set charset to UTF-8 in the logstash config. (In fact, if the file is UTF-8 the charset setting can simply be omitted and Chinese text in the logs will display correctly by default.)
2) If the command reports the log is not UTF-8, set charset to GB2312 in the logstash config.

The actual steps:

[root@elk-client ~]# file /data/nchome/nclogs/master/nc-log.log
/data/nchome/nclogs/master/nc-log.log: ISO-8859 English text, with very long lines, with CRLF, LF line terminators

The file command above shows that this log file's encoding is not UTF-8, so charset is set to GB2312 in the logstash configuration.
Following the earlier example, only the redis-input.conf file needs the charset setting added; file.conf does not need any change. As follows:
[root@elk-client nc]# vim redis-input.conf
input {
    file {
       path => "/data/nc-tomcat/logs/catalina.out"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"                     #add this line
           pattern => "^\$"
           negate => true
           what => "previous"
       }
    }
}

output {
    if [type] == "nc-log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "1"
          data_type => "list"
          key => "nc-log"
       }
     }
}

Restart the logstash processes, then log in to Kibana and the Chinese text displays correctly!

=============================================================
Collecting logs that live in directories named after the current date, as follows:

[root@elk-client ~]# cd /data/yxhome/yx_data/applog
[root@elk-client applog]# ls
20180528 20180529 20180530 20180531 20180601 20180602 20180603 20180604
[root@elk-client applog]# ls 20180604
cm.log  timsserver.log

The date-named directories under /data/yxhome/yx_data/applog are created at midnight
[root@elk-client ~]# ll -d /data/yxhome/yx_data/applog/20180603
drwxr-xr-x 2 root root 4096 Jun  3 00:00 /data/yxhome/yx_data/applog/20180603

The path setting under input -> file in a logstash config cannot embed `date +%Y%m%d` or $(date +%Y%m%d).
My approach: a small script symlinks each day's logs to a fixed path, and the logstash path setting points at the symlinked location.
[root@elk-client ~]# vim /mnt/yx_log_line.sh
#!/bin/bash
/bin/rm -f /mnt/yx_log/*
/bin/ln -s /data/yxhome/yx_data/applog/$(date +%Y%m%d)/cm.log /mnt/yx_log/cm.log
/bin/ln -s /data/yxhome/yx_data/applog/$(date +%Y%m%d)/timsserver.log /mnt/yx_log/timsserver.log

[root@elk-client ~]# chmod 755 /mnt/yx_log_line.sh
[root@elk-client ~]# /bin/bash -x /mnt/yx_log_line.sh
[root@elk-client ~]# ll /mnt/yx_log
total 0
lrwxrwxrwx 1 root root 43 Jun  4 14:29 cm.log -> /data/yxhome/yx_data/applog/20180604/cm.log
lrwxrwxrwx 1 root root 51 Jun  4 14:29 timsserver.log -> /data/yxhome/yx_data/applog/20180604/timsserver.log

[root@elk-client ~]# crontab -l
0 3 * * *  /bin/bash -x /mnt/yx_log_line.sh > /dev/null 2>&1

The logstash configuration is as follows (the collection settings for multiple logs are placed in a single pair of files):
[root@elk-client ~]# cat /opt/redis-input.conf
input {
    file {
       path => "/data/nchome/nclogs/master/nc-log.log"
       type => "nc-log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"
           pattern => "^\$"
           negate => true
           what => "previous"
       }
    }

    file {
       path => "/mnt/yx_log/timsserver.log"
       type => "yx-timsserver.log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"
           pattern => "^[0-9]"            #starts with a digit; this log actually begins with a 2018-style date, e.g. 2018-06-04 09:19:53,364:......
           negate => true
           what => "previous"
       }
    }

    file {
       path => "/mnt/yx_log/cm.log"
       type => "yx-cm.log"
       start_position => "beginning"
       codec => multiline {
           charset => "GB2312"
           pattern => "^[0-9]"
           negate => true
           what => "previous"
       }
    }
}

output {
    if [type] == "nc-log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "2"
          data_type => "list"
          key => "nc-log"
       }
     }

    if [type] == "yx-timsserver.log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "4"
          data_type => "list"
          key => "yx-timsserver.log"
       }
     }

    if [type] == "yx-cm.log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "5"
          data_type => "list"
          key => "yx-cm.log"
       }
     }

}
 
 
[root@elk-client ~]# cat /opt/file.conf
input {
     redis {
        type => "nc-log"
        host => "192.168.10.217"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "nc-log"
     }

     redis {
        type => "yx-timsserver.log"
        host => "192.168.10.217"
        port => "6379"
        db => "4"
        data_type => "list"
        key => "yx-timsserver.log"
     }

     redis {
        type => "yx-cm.log"
        host => "192.168.10.217"
        port => "6379"
        db => "5"
        data_type => "list"
        key => "yx-cm.log"
     }
}


output {
    if [type] == "nc-log" {
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "elk-client(10.0.52.21)-nc-log-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "yx-timsserver.log" {
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "elk-client(10.0.52.21)-yx-timsserver.log-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "yx-cm.log" {
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "elk-client(10.0.52.21)-yx-cm.log-%{+YYYY.MM.dd}"
        }
    }
}


First check that the configurations are valid
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf --configtest
Configuration OK
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/file.conf --configtest
Configuration OK
[root@elk-client ~]#

Then start them
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf &
[root@elk-client ~]# /opt/logstash/bin/logstash -f /opt/file.conf &
[root@elk-client ~]# ps -ef|grep logstash

Once new entries are written to the log files, the corresponding indices show up in elasticsearch's head plugin and can then be added to the Kibana UI.

==================== Collecting IDC firewall logs with ELK ====================

1) Use rsyslog to ship the data-center firewall's logs (firewall address 10.1.32.105) to a Linux server (call it Server A)
   For collecting firewall logs with rsyslog, see: http://www.cnblogs.com/kevingrace/p/5570411.html

For example, the logs collected on Server A land under:
[root@Server-A ~]# cd /data/fw_logs/10.1.32.105/
[root@Server-A 10.1.32.105]# ll
total 127796
-rw------- 1 root root 130855971 Jun 13 16:24 10.1.32.105_2018-06-13.log

rsyslog produces one log file per day, named after the current date.
A script can therefore symlink each day's file to a file with a fixed name.
[root@Server-A ~]# cat /data/fw_logs/log.sh
#!/bin/bash
/bin/unlink /data/fw_logs/firewall.log
/bin/ln -s /data/fw_logs/10.1.32.105/10.1.32.105_$(date +%Y-%m-%d).log /data/fw_logs/firewall.log

[root@Server-A ~]# sh /data/fw_logs/log.sh
[root@Server-A ~]# ll /data/fw_logs/firewall.log
lrwxrwxrwx 1 root root 52 Jun 13 15:17 /data/fw_logs/firewall.log -> /data/fw_logs/10.1.32.105/10.1.32.105_2018-06-13.log

Run it regularly from crontab
[root@Server-A ~]# crontab -l
0 1 * * *   /bin/bash -x /data/fw_logs/log.sh > /dev/null 2>&1
0 6 * * *   /bin/bash -x /data/fw_logs/log.sh > /dev/null 2>&1
 
2) Configure logstash on Server A
Installing logstash is omitted here (same as above)
[root@Server-A ~]# cat /opt/redis-input.conf
input {
    file {
       path => "/data/fw_logs/firewall.log"
       type => "firewall-log"
       start_position => "beginning"
       codec => multiline {
           pattern => "^[a-zA-Z0-9]|[^ ]+"
           negate => true
           what => previous
       }
    }
}

output {
    if [type] == "firewall-log" {
       redis {
          host => "192.168.10.217"
          port => "6379"
          db => "5"
          data_type => "list"
          key => "firewall-log"
       }
     }
}


[root@Server-A ~]# cat /opt/file.conf
input {
     redis {
        type => "firewall-log"
        host => "192.168.10.217"
        port => "6379"
        db => "5"
        data_type => "list"
        key => "firewall-log"
     }
}


output {
    if [type] == "firewall-log" {
        elasticsearch {
           hosts => ["192.168.10.213:9200"]
           index => "firewall-log-%{+YYYY.MM.dd}"
        }
    }
}

[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf --configtest
Configuration OK
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/file.conf --configtest
Configuration OK
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/redis-input.conf &
[root@Server-A ~]# /opt/logstash/bin/logstash -f /opt/file.conf &

Note:
If you are not careful, the index name in the logstash output can be invalid; Elasticsearch index names must be lowercase.
For example, changing "firewall-log-%{+YYYY.MM.dd}" above to "IDC-firewall-log-%{+YYYY.MM.dd}" makes logstash fail at startup with an "index name is invalid" error.

Then log in to the Kibana UI and add the firewall-log index for display.

