Kafka and Filebeat configuration


1. ZooKeeper configuration

Kafka depends on ZooKeeper, so ZooKeeper has to be running first; the downloaded Kafka tar package already bundles it.
You need to change dataDir: keeping it in /tmp is quite dangerous, since many systems purge /tmp on reboot.

dataDir=/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0

maxClientCnxns limits the number of connections per IP; 0 disables the limit. In production you may want to set a reasonable cap.
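
With dataDir fixed, ZooKeeper can be started with the script bundled in the Kafka tarball. A minimal sketch, assuming you run it from the Kafka installation directory (-daemon backgrounds the process):

# start the bundled ZooKeeper using the properties file edited above
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties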

2. Kafka configuration

There are quite a lot of configurable options; below is a summary of the ones you really must set.

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092


The listeners setting is referenced in many places. If left unset it falls back to the hostname, so it is recommended to configure it explicitly. advertised.listeners can normally just match listeners; it only differs in special setups, for example when the broker sits behind NAT or inside a container and clients must reach it through a different address.
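
A minimal sketch of that special case, a broker that binds on all interfaces but must advertise an externally reachable address (the hostname is an assumption):

# bind on all interfaces inside the container/VM
listeners=PLAINTEXT://0.0.0.0:9092
# the address clients outside actually use to reach this broker
advertised.listeners=PLAINTEXT://kafka.example.com:9092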

log.dirs=/tmp/kafka-logs

Do not leave this in /tmp either.
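
A sketch of a safer setting, assuming a dedicated data disk mounted at /data (the paths are assumptions); log.dirs also accepts a comma-separated list to spread partitions across disks:

log.dirs=/data/kafka-logs
# or, with several disks:
# log.dirs=/data1/kafka-logs,/data2/kafka-logs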

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

The default number of partitions for newly created topics; worth raising if you want more consumer parallelism out of the box.
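
You can also override the partition count per topic at creation time. A sketch using the stock CLI of ZooKeeper-based Kafka versions (the topic name access-logs is made up):

bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 3 --topic access-logs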

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

How long data is retained. The default is 168 hours, i.e. 7 days; adjust to your needs.
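
Retention can also be capped by size instead of (or in addition to) age. A sketch with illustrative values, not recommendations:

# keep at most 3 days of data
log.retention.hours=72
# additionally cap each partition at ~1 GB; the default -1 means no size limit
log.retention.bytes=1073741824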

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

The ZooKeeper settings must match your actual ZooKeeper deployment; nothing more to it.
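
For a clustered setup, a sketch assuming three ZooKeeper nodes zk1..zk3 and a /kafka chroot (hostnames and chroot are assumptions):

zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka

With the properties in place, the broker is started the same way as ZooKeeper:

bin/kafka-server-start.sh -daemon config/server.properties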

3. Filebeat configuration: input and Kafka output

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html

These options make it possible for Filebeat to decode logs structured as JSON messages. Filebeat processes the logs line by line, so the JSON decoding only works if there is one JSON object per line.
In other words, one JSON object per line, separated by newlines.

The decoding happens before line filtering and multiline. You can combine JSON decoding with filtering and multiline if you set the message_key option. This can be helpful in situations where the application logs are wrapped in JSON objects, as happens, for example, with Docker.

json:
  keys_under_root: true
  add_error_key: true
# message_key: log

keys_under_root
By default, the decoded JSON is placed under a "json" key in the output document. If you enable this setting, the keys are copied top level in the output document. The default is false.

add_error_key
If this setting is enabled, Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors, or when a message_key is defined in the configuration but cannot be used.

Set these two options. If everything gets logged to the same file and you need to filter out the JSON part, also set message_key; if the JSON logs live in their own file, you don't need it.
Filebeat is a lot more powerful than you might expect.
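
Putting it together, a minimal filebeat.yml sketch wiring a JSON log input to a Kafka output (the paths, broker address, and topic name are assumptions; option names follow the Filebeat reference):

filebeat.inputs:
- type: log
  paths:
    - /var/log/app/*.json      # one JSON object per line
  json.keys_under_root: true
  json.add_error_key: true
  # json.message_key: log      # only needed for wrapped logs you want to filter

output.kafka:
  hosts: ["kafka1:9092"]       # Filebeat discovers the rest of the cluster from these
  topic: "app-logs"
  required_acks: 1             # wait for the partition leader's ack
  compression: gzip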

