Kafka + ELK Log System in Practice


I. Architecture Diagram

 

II. Operations on the kafka-node1 Host

1. zookeeper-3.4.6

1. Install Java

yum list java*
yum -y install java-1.8.0-openjdk*

2. Download ZooKeeper

wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

3. Extract and configure

tar -zxvf zookeeper-3.4.6.tar.gz
mv zookeeper-3.4.6/ /usr/local/zookeeper
cd /usr/local/zookeeper/conf
cp -a zoo_sample.cfg  zoo.cfg
egrep -v "^$|#" zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
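These defaults translate into concrete timeouts: initLimit and syncLimit are measured in ticks of tickTime milliseconds. A minimal sketch (Python) of that arithmetic, using the values from the zoo.cfg above:

```python
# Derive ZooKeeper's effective timeouts from the zoo.cfg settings above.
tick_time_ms = 2000   # tickTime: base time unit in milliseconds
init_limit = 10       # initLimit: ticks a follower may take to connect and sync with the leader
sync_limit = 5        # syncLimit: ticks a follower may lag behind the leader

init_timeout_ms = init_limit * tick_time_ms
sync_timeout_ms = sync_limit * tick_time_ms

print(init_timeout_ms)  # 20000 ms = 20 s allowed for initial sync
print(sync_timeout_ms)  # 10000 ms = 10 s before a lagging follower is dropped
```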

4. Start

cd /usr/local/zookeeper/bin
./zkServer.sh start

5. Check the port

netstat -lntup|grep 2181
tcp        0      0 0.0.0.0:2181            0.0.0.0:*               LISTEN      5171/java 

2. kafka_2.11-1.0.0

1. Download the Kafka package from the official site and extract it, or download it directly with:

wget http://mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz

2. Extract and install

tar -zvxf kafka_2.11-1.0.0.tgz -C /usr/local/
cd /usr/local/kafka_2.11-1.0.0/

3. Edit the configuration file

cd /usr/local/kafka_2.11-1.0.0/config

egrep -v "^$|#" server.properties
broker.id=0
listeners=PLAINTEXT://192.168.0.126:9092  
host.name=192.168.0.126
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.126:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
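The settings above are plain key=value pairs, so sanity checks are easy to script. A minimal sketch (Python) that parses a properties snippet mirroring the config above and converts the retention and segment sizes into human units:

```python
# Parse Kafka server.properties-style key=value lines and derive human-readable units.
sample = """\
broker.id=0
log.retention.hours=168
log.segment.bytes=1073741824
zookeeper.connect=192.168.0.126:2181
"""

props = {}
for line in sample.splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        key, _, value = line.partition("=")
        props[key] = value

retention_days = int(props["log.retention.hours"]) / 24   # 168 h -> 7 days
segment_gib = int(props["log.segment.bytes"]) / 1024**3   # 1073741824 bytes -> 1 GiB

print(retention_days, segment_gib)  # 7.0 1.0
```

So the defaults keep logs for a week and roll a new segment file every gibibyte.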

4. Start

cd /usr/local/kafka_2.11-1.0.0
bin/kafka-server-start.sh config/server.properties

5. See the link below for more Kafka commands

Kafka shell: common operations commands

3. logstash-6.3.2

1. Install

vi /etc/yum.repos.d/logstash.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

yum install logstash -y

2. Configuration

1. The yml config file

cat /etc/logstash/logstash.yml |grep -v "#"
path.data: /var/lib/logstash
path.logs: /var/log/logstash

2. kafka-to-es.conf

cat /etc/logstash/conf.d/kafka-to-es.conf
input {
    kafka {
        bootstrap_servers => "192.168.0.125:9092"
        topics => ["logs"]
        codec => json
        }
}

output {
    elasticsearch {
        hosts => ["10.0.10.126:9200"]
        index => "logs-%{+YYYY.MM.dd}"
    }
    stdout{
        codec=>rubydebug
    }

}
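The index pattern logs-%{+YYYY.MM.dd} makes Logstash write to a new index every day, keyed by each event's timestamp. A sketch of the equivalent name derivation in Python (the strftime mapping is my assumption about how to reproduce the pattern outside Logstash):

```python
from datetime import datetime, timezone

def daily_index(prefix: str, ts: datetime) -> str:
    # Logstash's %{+YYYY.MM.dd} formats the event's @timestamp;
    # strftime("%Y.%m.%d") produces the same dotted date shape.
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

print(daily_index("logs", datetime(2018, 4, 11, tzinfo=timezone.utc)))  # logs-2018.04.11
```

Daily indices keep each index small and make retention simple: old days can be deleted as whole indices.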

3. Start

nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-to-es.conf &

4. KafkaOffsetMonitor

See the link below for details:

https://www.cnblogs.com/luoahong/articles/9492135.htm

III. Operations on the elk-node1 Host

1. elasticsearch

1. Preparation

Set up the yum repository, using the source provided on the official site:

https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html 

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create the yum repo file:

[root@elk-node1 yum.repos.d]# cat elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

2. Install the JDK

yum install -y java
java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)

3. Install Elasticsearch

yum -y install elasticsearch

4. Configure Elasticsearch

Edit the configuration file:

cat /etc/elasticsearch/elasticsearch.yml
cluster.name: myes   #ES cluster name
node.name: node-1  #node name
path.data: /data/elasticsearch #data directory (separate multiple directories with commas)
path.logs: /var/log/elasticsearch #log directory
bootstrap.memory_lock: true #lock ES memory so it is never moved to the swap partition
network.host: 192.168.0.126 #this machine's IP address
http.port: 9200 #HTTP port, default 9200

5. Set permissions on the data directory

mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch
#this is the directory where the data is stored; it has to be created by hand
By default ES discovery supports multicast and unicast: with multicast all nodes join one group, while unicast is one-to-one communication.
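For reference, unicast discovery in a 6.x cluster is configured by listing the peer nodes in elasticsearch.yml. A hedged fragment (the host list here is an assumption based on the single node used in this article):

```yaml
# elasticsearch.yml — unicast (Zen) discovery for ES 6.x
discovery.zen.ping.unicast.hosts: ["192.168.0.126"]
# For a multi-node cluster, also set the quorum to avoid split-brain:
# discovery.zen.minimum_master_nodes: 2   # (master-eligible nodes / 2) + 1
```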

6. Raise the ulimit for open files and processes

vi /etc/security/limits.conf
elasticsearch - nofile 65536
elasticsearch - nproc 2048

7. Set the JVM heap size

vi /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
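The usual guidance is to give Elasticsearch no more than half of physical RAM (leaving the rest for filesystem caches), and to stay under the roughly 32 GB threshold where the JVM loses compressed object pointers. A sketch of that rule of thumb (Python; the 31 GB cap is the commonly cited safe value, not an exact JVM constant):

```python
def recommended_heap_gb(total_ram_gb: float) -> int:
    """Rule of thumb: half of RAM, capped at 31 GB to keep compressed oops."""
    return int(min(total_ram_gb / 2, 31))

print(recommended_heap_gb(4))    # 2  -> matches the -Xms2g/-Xmx2g above on a 4 GB host
print(recommended_heap_gb(128))  # 31 -> capped even when plenty of RAM is available
```

Note that -Xms and -Xmx should be set to the same value so the heap never has to grow at runtime.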

8. Start

Start Elasticsearch:
systemctl start elasticsearch.service
netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      532/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      724/master          
tcp6       0      0 192.168.0.126:9200      :::*                    LISTEN      2125/java           
tcp6       0      0 192.168.0.126:9300      :::*                    LISTEN      2125/java           
tcp6       0      0 :::22                   :::*                    LISTEN      532/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      724/master   
Default port: 9200 (HTTP API); 9300 is used for inter-node transport.

2. kibana

1. Install

yum -y install kibana

2. Configuration

grep "^[a-zA-Z]" /etc/kibana/kibana.yml
server.port: 5601               #port, default 5601
server.host: "0.0.0.0"          #bind address
elasticsearch.url: "http://192.168.0.126:9200"  #ES address
kibana.index: ".kibana"         #Kibana is itself a small application and needs to store its own data (it is saved in a .kibana index that gets created in ES)
# elasticsearch.username: "user"    the ES security plugin is a paid feature, so these stay commented out
# elasticsearch.password: "pass"

3. Start

systemctl start kibana.service
netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      489/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      694/master          
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      3692/node           
tcp6       0      0 192.168.0.126:9200      :::*                    LISTEN      2828/java           
tcp6       0      0 :::8080                 :::*                    LISTEN      1317/java           
tcp6       0      0 192.168.0.126:9300      :::*                    LISTEN      2828/java           
tcp6       0      0 :::22                   :::*                    LISTEN      489/sshd            
tcp6       0      0 :::44697                :::*                    LISTEN      1317/java           
tcp6       0      0 ::1:25                  :::*                    LISTEN      694/master          
udp6       0      0 :::33848                :::*                                1317/java           
udp6       0      0 :::5353                 :::*                                1317/java    
#Port 5601 is listening, so Kibana started successfully

3. Kibana production screenshots

4. KafkaOffsetMonitor screenshots

IV. Pitfalls Encountered

1. Starting the elasticsearch service fails with: which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)

Solution:

Add the following to the file /etc/sysconfig/elasticsearch:

JAVA_HOME=/usr/local/jdk1.8.0_77

2. Kafka fails to start

[2018-04-11 16:27:31,185] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.Kaf
java.net.UnknownHostException: bj1-10-112-179-12: bj1-10-112-179-12: Name or service not known
	at java.net.InetAddress.getLocalHost(InetAddress.java:1475)
	at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:387)
	at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:385)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:385)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:92)
	at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: bj1-10-112-179-12: Name or service not known
	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1295)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1471)

The key line is: java.net.UnknownHostException: bj1-10-112-179-12: Name or service not known. The machine's own hostname, bj1-10-112-179-12 (shown by the hostname command), cannot be resolved.

Add 192.168.0.126 kafka-node1 to the /etc/hosts file (the name must match the machine's hostname):

[root@kafka-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.126 kafka-node1
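A quick way to confirm the fix is to check that the hostname actually appears in /etc/hosts. A minimal sketch (Python) that parses hosts-file lines; the sample string mirrors the file above:

```python
def hosts_entries(text: str) -> dict:
    """Map each hostname/alias in /etc/hosts-style text to its IP address."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and surrounding whitespace
        if not line:
            continue
        ip, *names = line.split()             # first field is the IP, the rest are names
        for name in names:
            mapping[name] = ip
    return mapping

sample = """\
127.0.0.1   localhost localhost.localdomain
192.168.0.126 kafka-node1
"""

entries = hosts_entries(sample)
print(entries.get("kafka-node1"))  # 192.168.0.126 -> the hostname now resolves locally
```

On the real machine you would read /etc/hosts and look up the output of the hostname command instead of the sample string.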

3. The consumer cannot consume (Kafka 1.0.1)

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

#(Java) org.apache.kafka warning
Connection to node 0 could not be established. Broker may not be available.


# (Node.js) kafka-node exception (thrown after calling producer.send)
{ TimeoutError: Request timed out after 30000ms
    at new TimeoutError (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\errors\TimeoutError.js:6:9)
    at Timeout.setTimeout [as _onTimeout] (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:737:14)
    at ontimeout (timers.js:466:11)
    at tryOnTimeout (timers.js:304:5)
    at Timer.listOnTimeout (timers.js:264:5) message: 'Request timed out after 30000ms' }

Change localhost:2181 to 192.168.0.126:2181, to match Kafka's configuration:

zookeeper.connect=192.168.0.126:2181

4. Remote hosts cannot connect to Kafka: add the following to the configuration file (server.properties)

listeners=PLAINTEXT://192.168.0.126:9092
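Relatedly, the address a broker hands out to clients is controlled by advertised.listeners (falling back to listeners when unset). A hedged fragment for this single-broker setup, using the article's IP:

```properties
# server.properties — make the broker reachable from remote clients
listeners=PLAINTEXT://192.168.0.126:9092
advertised.listeners=PLAINTEXT://192.168.0.126:9092
```

If the advertised address were left as a hostname or 127.0.0.1, remote producers and consumers would get the broker's metadata but then fail to connect back.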

