Integrating Flume with HDFS (with Kerberos authentication enabled on HDFS)


When sinking to HDFS, modify flume-env.sh to add the HDFS dependency libraries to the classpath:

  FLUME_CLASSPATH="/root/TDH-Client/hadoop/hadoop/*:/root/TDH-Client/hadoop/hadoop-hdfs/*:/root/TDH-Client/hadoop/hadoop/lib/*"
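The same classpath line can be appended to flume-env.sh from the shell. A minimal sketch follows; the `mktemp` target stands in for the real conf/flume-env.sh, and the jar paths are the TDH client locations from this article, so adjust both to your own installation:

```shell
# Append the TDH client jars to Flume's classpath.
# FLUME_ENV is a temporary stand-in for conf/flume-env.sh.
FLUME_ENV=$(mktemp)
cat >> "$FLUME_ENV" <<'EOF'
FLUME_CLASSPATH="/root/TDH-Client/hadoop/hadoop/*:/root/TDH-Client/hadoop/hadoop-hdfs/*:/root/TDH-Client/hadoop/hadoop/lib/*"
EOF
# Confirm the line landed in the env file.
grep 'FLUME_CLASSPATH' "$FLUME_ENV"
```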
 
Example:
a1.sources=r1
a1.sinks=k2
a1.channels=c2
 
a1.sources.r1.type=avro
a1.sources.r1.channels=c2
a1.sources.r1.bind=172.20.237.105
a1.sources.r1.port=8888
 
# Data from r1 flows through c2 to sink k2, which writes it to HDFS
a1.sinks.k2.channel = c2
a1.sinks.k2.type=hdfs
a1.sinks.k2.hdfs.kerberosKeytab=/etc/hdfs1/conf/hdfs.keytab
a1.sinks.k2.hdfs.kerberosPrincipal=hdfs/gz237-105@TDH
# Storage location on HDFS (hdfs.path is mandatory for the HDFS sink;
# the value below is a placeholder — substitute your own NameNode and directory)
a1.sinks.k2.hdfs.path=hdfs://namenode:8020/flume/%Y-%m-%d
a1.sinks.k2.hdfs.filePrefix=log-%Y-%m-%d
a1.sinks.k2.hdfs.useLocalTimeStamp = true
a1.sinks.k2.hdfs.writeFormat = text
a1.sinks.k2.hdfs.fileType=DataStream
a1.sinks.k2.hdfs.inUseSuffix=.log
# Roll to a new file every 60 s, every 10240 bytes, or every 100 events,
# whichever comes first; setting any of these to 0 disables that trigger
a1.sinks.k2.hdfs.rollInterval = 60
a1.sinks.k2.hdfs.rollSize = 10240
a1.sinks.k2.hdfs.rollCount = 100
# Close idle files after 60 s without writes
a1.sinks.k2.hdfs.idleTimeout = 60
 
a1.channels.c2.type=memory
a1.channels.c2.capacity=100000
a1.channels.c2.transactionCapacity=10000
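Flume aborts at startup when a sink or source references a channel that is not declared in `a1.channels`, so it is worth checking the wiring before launching the agent. A small, hypothetical Python sanity check of that rule (`check_wiring` and its behavior are this sketch's assumptions, not part of Flume):

```python
# Sketch: verify that every channel referenced by a source or sink in a
# Flume properties file is declared in the agent's channel list.
def check_wiring(conf_text, agent="a1"):
    props = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    # Channels declared for this agent, e.g. "a1.channels=c2"
    declared = set(props.get(f"{agent}.channels", "").split())
    problems = []
    for key, value in props.items():
        # Source wiring uses ".channels", sink wiring uses ".channel"
        if key.endswith(".channel") or key.endswith(".channels"):
            for ch in value.split():
                if ch not in declared:
                    problems.append(
                        f"{key} references undefined channel {ch!r}")
    return problems
```

Once the file passes a check like this, the agent is started in the usual way, e.g. `flume-ng agent --conf conf --conf-file example.conf --name a1`.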

