Kafka is a very widely used piece of software, and running it as a cluster is the most common deployment. This article walks through building a Kafka cluster from a practical standpoint and gradually refines it to the point of real use.
1. Concepts
Kafka has no central master node: the brokers in a cluster are peers, with no primary/secondary distinction. Because a Kafka cluster relies on ZooKeeper for coordination, the machine count is driven by ZooKeeper's quorum requirement of 2N+1 nodes (N >= 1), so the minimum cluster size is 3 machines; for example, a 3-node ensemble tolerates one failure, while a 5-node ensemble tolerates two. Also, the Kafka distribution ships with ZooKeeper, so there is no need to download and install it separately; the bundled scripts are sufficient.
2. Preparation
Three machines:
192.168.102.128
192.168.102.132
192.168.102.133
3. Installation
Configure the 128 machine first. Download and extract Kafka, go into its home directory, open the zookeeper.properties file under config, and change its configuration as follows:
dataDir=/tmp/zookeeper
dataLogDir=/tmp/zookeeper/log
clientPort=2181
maxClientCnxns=0
admin.enableServer=false
tickTime=2000
initLimit=10
syncLimit=5
# address of each ZooKeeper server, keyed by server id
server.0=192.168.102.128:2888:3888
server.1=192.168.102.132:2888:3888
server.2=192.168.102.133:2888:3888
Here, port 2888 is ZooKeeper's peer communication port and 3888 is the leader-election port. Next, create a myid file in ZooKeeper's data directory and write that server's id value into it (it is convenient to keep it consistent with Kafka's broker.id), for example:
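A minimal sketch of this step for the 128 machine (server.0 in the configuration above); the dataDir path comes from zookeeper.properties, and the echoed value must match that machine's server.N entry:

# on 192.168.102.128 (server.0)
mkdir -p /tmp/zookeeper
echo 0 > /tmp/zookeeper/myid
# on 192.168.102.132 write 1, and on 192.168.102.133 write 2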
Then edit Kafka's configuration file, server.properties:
broker.id=0
listeners=PLAINTEXT://192.168.102.128:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=192.168.102.128:2181,192.168.102.132:2181,192.168.102.133:2181
Save the configuration files when done. After the 128 machine is set up, install and configure the 132 and 133 machines in the same way; only ZooKeeper's server id (the myid value), Kafka's broker.id, and the listeners value need to change, as sketched below.
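For example, on the 132 machine the values that differ would be (keeping the ZooKeeper id aligned with broker.id as suggested above):

# /tmp/zookeeper/myid on 192.168.102.132
1

# server.properties on 192.168.102.132
broker.id=1
listeners=PLAINTEXT://192.168.102.132:9092

On the 133 machine use 2 and 192.168.102.133 respectively; log.dirs and zookeeper.connect stay identical on all three nodes.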
4. Startup notes
(1) Start ZooKeeper first, then Kafka.
(2) It is best to start both ZooKeeper and Kafka in the background and write their logs to dedicated files. The following startup commands have been verified:
nohup bin/zookeeper-server-start.sh config/zookeeper.properties > /usr/local/kafka2.12/myabc.log 2>&1 &
nohup bin/kafka-server-start.sh config/server.properties > /usr/local/kafka2.12/kafka.log 2>&1 &
# use tail -f <logfile> to follow the log output
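Once all three nodes are up, it is worth verifying the cluster before running any client code. A minimal check, assuming a Kafka 2.x distribution (where kafka-topics.sh accepts --bootstrap-server; on older releases use --zookeeper 192.168.102.128:2181 instead), using the topic name from the test code below:

bin/kafka-topics.sh --create --bootstrap-server 192.168.102.128:9092 --replication-factor 3 --partitions 3 --topic kafkaTopic1
bin/kafka-topics.sh --describe --bootstrap-server 192.168.102.128:9092 --topic kafkaTopic1

If the describe output lists all three broker ids in the Replicas and Isr columns, the cluster is healthy.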
5. Producer and consumer test (not for production use)
Test setup: the 128 machine serves as the producer node and 133 as the consumer node.
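The test classes below only need the Kafka client library on the classpath. If you build with Maven, a dependency along these lines works (the version here is an assumption and should be aligned with your broker):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.3.0</version> <!-- assumed version; match your broker -->
</dependency>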
The producer code is as follows:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Topic
private static final String topic = "kafkaTopic1";

public static void aaa() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "192.168.102.128:9092");
    // acks=0: fire-and-forget; the producer does not wait for any broker acknowledgment
    props.put("acks", "0");
    // group.id is a consumer property and is ignored by the producer
    props.put("group.id", "ABC");
    props.put("retries", "0");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // producer instance
    KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
    // send nine test messages
    for (int i = 1; i < 10; i++) {
        System.out.println("-------------- producing --------------");
        // note the parentheses: "value:" + i + 1 would concatenate the character '1' instead of adding
        producer.send(new ProducerRecord<String, String>(topic, "key:" + i, "value:" + (i + 1)));
        System.out.println("key:" + i + " value:" + (i + 1));
    }
    producer.close();
}
The consumer code:
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

private static final String topic = "kafkaTopic1";

public static void aaaa() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "192.168.102.133:9092");
    props.put("group.id", "ABC");
    // commit offsets automatically, once per second
    props.put("enable.auto.commit", "true");
    props.put("auto.commit.interval.ms", "1000");
    // start from the earliest offset when this group has no committed offset yet
    props.put("auto.offset.reset", "earliest");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
    consumer.subscribe(Arrays.asList(topic));
    while (true) {
        // poll with a 1-second timeout
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s%n",
                    record.offset(), record.key(), record.value());
        }
    }
}
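Neither snippet is a complete compilation unit. To actually run the test, wrap each method in a class with a main entry point (the class names below are invented for illustration), start the consumer on 133 first so it is already polling, then run the producer on 128:

// on 192.168.102.133 -- start first
public class ConsumerTest {
    // ... fields and aaaa() exactly as above ...
    public static void main(String[] args) { aaaa(); }
}

// on 192.168.102.128
public class ProducerTest {
    // ... fields and aaa() exactly as above ...
    public static void main(String[] args) { aaa(); }
}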
Producer log output: (omitted)
Consumer log output: (omitted)