Kafka + ELK Logging System in Practice


I. Architecture Diagram

(architecture diagram not reproduced in this text)

II. Operations on the kafka-node1 Host

1. zookeeper-3.4.6

1. Install Java

yum list java*
yum -y install java-1.8.0-openjdk*

2. Download ZooKeeper

wget http://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

3. Extract and Configure

tar -zxvf zookeeper-3.4.6.tar.gz
mv zookeeper-3.4.6/ /usr/local/zookeeper
cd /usr/local/zookeeper/conf
cp -a zoo_sample.cfg zoo.cfg
egrep -v "^$|#" zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181

4. Start

cd /usr/local/zookeeper/bin
./zkServer.sh start

5. Check the Port

netstat -lntup|grep 2181
tcp        0      0 0.0.0.0:2181            0.0.0.0:*               LISTEN      5171/java 
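Beyond netstat, ZooKeeper answers built-in four-letter admin commands on its client port. A minimal health-check sketch, assuming this article's address (the `nc` timeout value is an illustrative choice, not from the original):

```shell
# Probe ZooKeeper with the "ruok" four-letter command.
# ZK_HOST is assumed to be kafka-node1's address from this article.
ZK_HOST=192.168.0.126
ZK_PORT=2181

# A healthy server answers "imok"; -w 2 limits the wait to 2 seconds.
reply=$(echo ruok | nc -w 2 "$ZK_HOST" "$ZK_PORT" 2>/dev/null)
if [ "$reply" = "imok" ]; then
    echo "zookeeper at $ZK_HOST:$ZK_PORT is healthy"
else
    echo "no imok reply from $ZK_HOST:$ZK_PORT" >&2
fi
echo "check finished"
```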

2. kafka_2.11-1.0.0

1. Download the Kafka package from the official site and extract it, or fetch it directly with the command below.

wget http://mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz

2. Extract and Install

tar -zvxf kafka_2.11-1.0.0.tgz -C /usr/local/
mv /usr/local/kafka_2.11-1.0.0 /usr/local/kafka
cd /usr/local/kafka/

3. Edit the Configuration File

cd /usr/local/kafka/config

egrep -v "^$|#" server.properties
broker.id=0
listeners=PLAINTEXT://192.168.0.126:9092  
host.name=192.168.0.126
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.126:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

4. Start

cd /usr/local/kafka
bin/kafka-server-start.sh config/server.properties

5. For more Kafka commands, see the link below:

Kafka shell: common operations commands
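Before wiring up Logstash, the broker can be smoke-tested from the shell. This sketch assumes the paths and addresses used in this article; the topic name logs matches the one consumed later:

```shell
# Smoke test the broker started above; paths and IPs follow this article's setup.
KAFKA_HOME=/usr/local/kafka
BROKER=192.168.0.126:9092
ZK=192.168.0.126:2181

cd "$KAFKA_HOME" || echo "kafka not installed at $KAFKA_HOME" >&2

# Create the "logs" topic that Logstash will consume later.
bin/kafka-topics.sh --create --zookeeper "$ZK" \
    --replication-factor 1 --partitions 1 --topic logs

# Produce one test message, then read it back with the new consumer API.
echo 'hello kafka' | bin/kafka-console-producer.sh --broker-list "$BROKER" --topic logs
bin/kafka-console-consumer.sh --bootstrap-server "$BROKER" \
    --topic logs --from-beginning --max-messages 1
echo "smoke test finished"
```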

3. logstash-6.3.2

1. Install

vi /etc/yum.repos.d/logstash.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

yum install logstash -y

2. Configure

1. The yml configuration file

cat /etc/logstash/logstash.yml |grep -v "#"
path.data: /var/lib/logstash
path.logs: /var/log/logstash

2. kafka-to-es.conf

cat /etc/logstash/conf.d/kafka-to-es.conf
input {
    kafka {
        bootstrap_servers => "192.168.0.126:9092"
        topics => ["logs"]
        codec => json
        }
}

output {
    elasticsearch {
        hosts => ["192.168.0.126:9200"]
        index => "logs-%{+YYYY.MM.dd}"
    }
    stdout{
        codec=>rubydebug
    }

}
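The index => "logs-%{+YYYY.MM.dd}" pattern above creates one index per day. The date portion expands roughly like this shell one-liner (Logstash actually formats the event's @timestamp in UTC, so the boundary can differ from the local date):

```shell
# Daily index name, e.g. logs-2018.04.11, built the same way
# Logstash's sprintf date reference %{+YYYY.MM.dd} does.
index="logs-$(date +%Y.%m.%d)"
echo "$index"
```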

3. Start

nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-to-es.conf &
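With Logstash running, the whole Kafka-to-Elasticsearch path can be exercised in one go: produce a JSON message into the logs topic, then search today's index. A sketch assuming this article's addresses and paths:

```shell
# End-to-end pipeline check; IPs and paths follow this article's setup.
BROKER=192.168.0.126:9092
ES=192.168.0.126:9200

# The Logstash input uses codec => json, so send a JSON document.
echo '{"app":"demo","message":"pipeline test"}' | \
    /usr/local/kafka/bin/kafka-console-producer.sh --broker-list "$BROKER" --topic logs

# Give Logstash a moment to flush, then search today's index.
sleep 2
curl -s -m 5 "http://$ES/logs-$(date +%Y.%m.%d)/_search?q=message:pipeline"
echo "pipeline check finished"
```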

4. KafkaOffsetMonitor

See the link below:

https://www.cnblogs.com/luoahong/articles/9492135.htm

III. Operations on the elk-node1 Host

1. elasticsearch

1. Preparation

Set up the yum repository using the one provided on the official site:

https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create the yum repo file:

[root@elk-node1 yum.repos.d]# cat elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

2. Install the JDK

yum install -y java
java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)

3. Install Elasticsearch

yum -y install elasticsearch

4. Configure Elasticsearch

Edit the configuration file:

cat /etc/elasticsearch/elasticsearch.yml
cluster.name: myes                  # ES cluster name
node.name: node-1                   # node name
path.data: /data/elasticsearch      # data directory (separate multiple paths with commas)
path.logs: /var/log/elasticsearch   # log directory
bootstrap.memory_lock: true         # lock ES memory so it is never swapped out
network.host: 192.168.0.126         # this host's IP address
http.port: 9200                     # port, default 9200

5. Set Data Directory Permissions

mkdir -p /data/elasticsearch   # this is the data directory; create it by hand
chown -R elasticsearch:elasticsearch /data/elasticsearch

By default ES discovery offers multicast and unicast: with multicast all nodes join one group, while unicast is one-to-one communication.

6. Set ulimit File Descriptors and Threads

vi /etc/security/limits.conf
elasticsearch - nofile 65536
elasticsearch - nproc 2048

7. Set the JVM Heap Size

vi /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
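The 2 GB figure above is just an example. A common rule of thumb (not from the original article) is to give the heap about half of physical RAM, capped near 31 GB so the JVM keeps compressed object pointers; the arithmetic in shell:

```shell
# Rule-of-thumb heap sizing: half of physical RAM, capped at 31 GB.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
heap_gb=$(( half_gb > 31 ? 31 : half_gb ))
[ "$heap_gb" -lt 1 ] && heap_gb=1   # never go below 1 GB

# These two lines would go into /etc/elasticsearch/jvm.options.
echo "-Xms${heap_gb}g"
echo "-Xmx${heap_gb}g"
```

Keeping -Xms and -Xmx equal avoids heap resizing pauses at runtime.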

8. Start

Start ES:
systemctl start elasticsearch.service
netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      532/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      724/master          
tcp6       0      0 192.168.0.126:9200      :::*                    LISTEN      2125/java           
tcp6       0      0 192.168.0.126:9300      :::*                    LISTEN      2125/java           
tcp6       0      0 :::22                   :::*                    LISTEN      532/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      724/master   
Ports 9200 (HTTP) and 9300 (transport) are listening, so ES started successfully.
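Besides checking the listening ports, cluster state can be confirmed through the REST API; a sketch using this article's bind address:

```shell
# Query cluster health over the REST API; the IP follows this article's setup.
ES=192.168.0.126:9200

# "status" should be green or yellow on a healthy single-node cluster
# (yellow just means replica shards have no second node to live on).
curl -s -m 5 "http://$ES/_cluster/health?pretty"
echo "health check finished"
```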

2. kibana

1. Install

yum -y install kibana

2. Configure

grep "^[a-z]" /etc/kibana/kibana.yml
server.port: 5601               # port, default 5601
server.host: "0.0.0.0"          # listen address
elasticsearch.url: "http://192.168.0.126:9200"  # ES address
kibana.index: ".kibana"         # Kibana stores its own data in a .kibana index it creates in ES
# elasticsearch.username: "user"    # the security features of the ES plugin are paid, so these stay commented out
# elasticsearch.password: "pass"

3. Start

systemctl start kibana.service
netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      489/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      694/master          
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      3692/node           
tcp6       0      0 192.168.0.126:9200      :::*                    LISTEN      2828/java           
tcp6       0      0 :::8080                 :::*                    LISTEN      1317/java           
tcp6       0      0 192.168.0.126:9300      :::*                    LISTEN      2828/java           
tcp6       0      0 :::22                   :::*                    LISTEN      489/sshd            
tcp6       0      0 :::44697                :::*                    LISTEN      1317/java           
tcp6       0      0 ::1:25                  :::*                    LISTEN      694/master          
udp6       0      0 :::33848                :::*                                1317/java           
udp6       0      0 :::5353                 :::*                                1317/java    
# Port 5601 is listening, so Kibana started successfully.

3. Kibana Production Screenshot

4. KafkaOffsetMonitor Screenshot

IV. Pitfalls Encountered

1. Starting the elasticsearch service fails with: which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)

Fix:

Add the following to /etc/sysconfig/elasticsearch:

JAVA_HOME=/usr/local/jdk1.8.0_77

2. Kafka Fails to Start

[2018-04-11 16:27:31,185] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.Kaf
java.net.UnknownHostException: bj1-10-112-179-12: bj1-10-112-179-12: Name or service not known
	at java.net.InetAddress.getLocalHost(InetAddress.java:1475)
	at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:387)
	at kafka.server.KafkaServer$$anonfun$3.apply(KafkaServer.scala:385)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:385)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:92)
	at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: bj1-10-112-179-12: Name or service not known
	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1295)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1471)

The key line is java.net.UnknownHostException: bj1-10-112-179-12: Name or service not known, meaning the machine's own hostname (which `hostname` reports as bj1-10-112-179-12) cannot be resolved.

Map the machine's hostname to its IP in /etc/hosts; here that means adding 192.168.0.126 kafka-node1:

[root@kafka-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.126 kafka-node1

3. The Consumer Cannot Consume (Kafka 1.0.1)

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

# (Java) org.apache.kafka warning
Connection to node 0 could not be established. Broker may not be available.


# (Node.js) kafka-node exception (thrown after producer.send)
{ TimeoutError: Request timed out after 30000ms
    at new TimeoutError (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\errors\TimeoutError.js:6:9)
    at Timeout.setTimeout [as _onTimeout] (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:737:14)
    at ontimeout (timers.js:466:11)
    at tryOnTimeout (timers.js:304:5)
    at Timer.listOnTimeout (timers.js:264:5) message: 'Request timed out after 30000ms' }

Change localhost:2181 to 192.168.0.126:2181, because Kafka is configured with:

zookeeper.connect=192.168.0.126:2181
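For reference, both consumer styles with the corrected address (a sketch assuming this article's setup; on Kafka 1.0 the --zookeeper consumer is deprecated in favor of --bootstrap-server):

```shell
BROKER=192.168.0.126:9092
ZK=192.168.0.126:2181
cd /usr/local/kafka 2>/dev/null || echo "kafka not installed here" >&2

# Old consumer: must point at the broker's real ZooKeeper address, not localhost.
bin/kafka-console-consumer.sh --zookeeper "$ZK" --topic test \
    --from-beginning --max-messages 1

# New consumer (preferred on Kafka 1.0+): talk to the broker directly.
bin/kafka-console-consumer.sh --bootstrap-server "$BROKER" --topic test \
    --from-beginning --max-messages 1
echo "consumer check finished"
```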

4. Remote hosts cannot connect to Kafka: add the following to the configuration file (server.properties):

listeners=PLAINTEXT://192.168.0.126:9092

