Integrating Kafka with Flume


Configure Flume by writing a kafka.conf file. The agent collects data from port 44444 and sends it to the Kafka topic first.

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink (it acts as the Kafka producer)
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

啟動flume采集數據

 bin/flume-ng agent -c conf/ -n a1 -f job/kafka.conf

Simulate producing data:

[atguigu@hadoop102 ~]$ nc localhost 44444
helloworld
OK
123
OK

Consume the data with a console consumer. This completes a pipeline in which Flume collects the data and then sends it on to Kafka.

[atguigu@hadoop102 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first
helloworld
123

Use a Flume interceptor to send data to different Kafka topics. To be continued...
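Until that follow-up is written, here is a minimal sketch of one way to do it, assuming Flume 1.9: a custom interceptor sets the event header named topic, which KafkaSink reads to override the configured topic (its allowTopicOverride option defaults to true). The package name, class name, and the routing rule on "hello" are all hypothetical, not from this post.

// A sketch of a custom Flume interceptor that routes events to different
// Kafka topics by setting the "topic" header read by KafkaSink.
// Package/class names and the routing rule are illustrative only.
package com.example.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

public class TopicRoutingInterceptor implements Interceptor {

    @Override
    public void initialize() { }

    @Override
    public Event intercept(Event event) {
        String body = new String(event.getBody(), StandardCharsets.UTF_8);
        Map<String, String> headers = event.getHeaders();
        // Hypothetical rule: events containing "hello" go to topic "first",
        // everything else goes to topic "second".
        if (body.contains("hello")) {
            headers.put("topic", "first");
        } else {
            headers.put("topic", "second");
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() { }

    public static class Builder implements Interceptor.Builder {
        @Override
        public Interceptor build() {
            return new TopicRoutingInterceptor();
        }

        @Override
        public void configure(Context context) { }
    }
}

Package the class into a jar, drop it into Flume's lib/ directory, and register it in kafka.conf (again with illustrative names):

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.example.flume.interceptor.TopicRoutingInterceptor$Builder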

Idea: if Kafka is used as the channel, an interceptor together with a ChannelSelector can route events to different Kafka Channels, one per topic.
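A minimal config sketch of that idea, assuming the same netcat source r1 and a header-setting interceptor like the one above; the channel names and the second topic are hypothetical. With Kafka Channels there is no sink at all: events go straight from the channel into Kafka.

# Two Kafka Channels, one per topic; no sink is needed
a1.sources = r1
a1.channels = c1 c2

# The interceptor (see above) sets a "topic" header; the multiplexing
# selector uses that header to pick a channel
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = topic
a1.sources.r1.selector.mapping.first = c1
a1.sources.r1.selector.mapping.second = c2
a1.sources.r1.channels = c1 c2

a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.channels.c1.kafka.topic = first
a1.channels.c1.parseAsFlumeEvent = false

a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c2.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.channels.c2.kafka.topic = second
a1.channels.c2.parseAsFlumeEvent = false

Setting parseAsFlumeEvent = false writes only the event body to Kafka, so downstream consumers see plain messages rather than Avro-serialized Flume events.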

