Flume configuration using Kafka as the channel


# This configuration uses a Kafka topic as the channel. Compared with the other
# channel types (file and memory), it combines both speed and durability.


# Define a kafka channel

a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = kafka-1:9092,kafka-2:9092,kafka-3:9092
a1.channels.c1.kafka.topic = channel1
a1.channels.c1.kafka.consumer.group.id = flume-consumer
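The channel topic should exist before the agent starts (unless topic auto-creation is enabled on the brokers). A possible way to create it on the three-broker cluster above, assuming Kafka 2.2+; the partition and replication counts here are illustrative values, not taken from the original config:

```shell
# Create the channel topic up front. 3 partitions / replication factor 2
# are example values -- tune them for your cluster.
kafka-topics.sh --create \
  --bootstrap-server kafka-1:9092 \
  --topic channel1 \
  --partitions 3 \
  --replication-factor 2
```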
# Define a spooldir source
a1.sources.src.type = spooldir
a1.sources.src.spoolDir = /home/flumeSpool
a1.sources.src.deletePolicy = immediate
# Give each event a UUID key so data is spread across all partitions of the Kafka topic

a1.sources.src.interceptors = i2
a1.sources.src.interceptors.i2.type = org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
a1.sources.src.interceptors.i2.headerName = key
a1.sources.src.interceptors.i2.preserveExisting = false

# Define a kafka sink

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = web_nginx_log
a1.sinks.k1.brokerList = kafka-1:9092,kafka-2:9092,kafka-3:9092
 
 
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
 
 

# Finally, now that we've defined all of our components, tell a1 which ones to use
a1.channels = c1
a1.sources = src
a1.sinks = k1

# Bind the source and sink to the channel
a1.sources.src.channels = c1
a1.sinks.k1.channel = c1

 

