Before installing, first take a look at the architecture diagram below.
Kafka Basic Architecture and Terminology
Basic components of Kafka
Kafka cluster: the Kafka message queue itself (the component that stores messages)
Zookeeper: the registry (a Kafka cluster relies on Zookeeper to store the cluster's metadata, which keeps the system available)
Producer: the program or code that puts data onto the queue
Consumer: the program or code that takes data off the queue
Components of a Kafka cluster:
Broker: a broker is one Kafka instance. Each server runs one or more Kafka instances; for simplicity, assume each broker corresponds to one server. Every broker within a Kafka cluster has a unique id, such as broker-0 and broker-1 in the diagram.
Topic: the subject of a message, which you can think of as a message category; Kafka's data is stored in topics. Multiple topics can be created on each broker.
Partition: a subdivision of a topic. Each topic can have multiple partitions, which spread the load and increase Kafka's throughput. Data is never duplicated across the partitions of the same topic, and on disk each partition is simply a directory (see the topic sketch just after this list).
Replication: every partition has multiple replicas, which act as standbys. When the leader partition fails, a follower is elected to take over as the new leader. In Kafka the default maximum number of replicas is 10, the number of replicas cannot exceed the number of brokers, and a follower is always on a different machine from its leader: any single machine stores at most one replica of a given partition.
Message: the body of each message that is sent.
Consumer Group: multiple consumers can be combined into a consumer group. By Kafka's design, the data in a given partition can be consumed by only one consumer within a consumer group, but consumers in the same group can consume different partitions of the same topic in parallel, which again raises Kafka's throughput.
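To see partitions and replicas in practice, Kafka's bundled topic tooling can create and inspect a topic. A minimal sketch, assuming a broker is already reachable at 10.9.44.11:9092 as set up in the installation steps below; the topic name demo-topic is made up for illustration:

# Create a topic with three partitions; the replication factor cannot exceed
# the number of brokers, so a single-broker setup is limited to 1
kafka-topics.sh --create --bootstrap-server 10.9.44.11:9092 --replication-factor 1 --partitions 3 --topic demo-topic
# List each partition together with its leader, replicas, and in-sync replicas (ISR)
kafka-topics.sh --describe --bootstrap-server 10.9.44.11:9092 --topic demo-topic

On Kafka versions before 2.2, replace --bootstrap-server 10.9.44.11:9092 with --zookeeper 10.9.44.11:2181.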
Installing Zookeeper
# Pull the Zookeeper image with Docker
docker pull wurstmeister/zookeeper:latest
# Create the Zookeeper container
docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper:latest
Configuration details
- -v /etc/localtime:/etc/localtime # sync the container's time with the host machine's time
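Once the container is up, a quick way to check that Zookeeper is responding is its four-letter-word commands. A minimal sketch, assuming nc (netcat) is installed on the host; note that newer Zookeeper releases (3.5+) require these commands to be whitelisted via 4lw.commands.whitelist, while the 3.4.x line this image is based on answers them by default:

# "ruok" asks the server whether it is running; a healthy server replies "imok"
echo ruok | nc 127.0.0.1 2181
# "stat" prints the server version and connection statistics
echo stat | nc 127.0.0.1 2181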
Installing Kafka
# Pull the Kafka image with Docker
docker pull wurstmeister/kafka:latest
# Create the Kafka container
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -t wurstmeister/kafka:latest
Configuration details
- -e KAFKA_BROKER_ID=0 # every Kafka instance in a cluster identifies itself by a unique BROKER_ID
- -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 # the Zookeeper connection string Kafka registers under; an optional chroot path such as 10.9.44.11:2181/kafka can be appended to keep all of Kafka's znodes under one directory
- -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 # the address and port Kafka registers with Zookeeper and advertises to clients
- -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 # the address and port Kafka listens on (a quick smoke test follows below)
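To verify the installation end to end, produce and consume a test message from inside the container. A minimal smoke-test sketch, assuming the wurstmeister/kafka image keeps Kafka's command-line scripts on the PATH; the topic name test is just an example:

# Open a shell inside the Kafka container
docker exec -it kafka bash
# Create a single-partition test topic
kafka-topics.sh --create --bootstrap-server 10.9.44.11:9092 --replication-factor 1 --partitions 1 --topic test
# Start a console producer, type a few lines, then exit with Ctrl+C
kafka-console-producer.sh --broker-list 10.9.44.11:9092 --topic test
# In a second shell, read the messages back from the beginning
kafka-console-consumer.sh --bootstrap-server 10.9.44.11:9092 --topic test --from-beginning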
Full server.properties configuration file
Path: /etc/kafka/
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# A broker is one deployed Kafka instance. In a Kafka cluster every broker
# must have a broker.id, and that id must be a unique integer.
broker.id=10

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# The number of threads handling network requests (default: 3)
num.network.threads=3

# The number of threads doing disk I/O (default: 8)
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server (default: 100 KB)
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server (default: 100 KB)
socket.receive.buffer.bytes=102400

# The maximum size of a single request that the socket server will accept,
# as protection against OOM (Out of memory); default: 100 MB
socket.request.max.bytes=104857600

############################# Log Basics #############################
# (In Kafka, the stored data itself is called the "log".)

# A comma separated list of directories under which to store log files
log.dirs=/home/uplooking/data/kafka

# The default number of log partitions per topic (default: 1). More partitions allow
# greater parallelism for consumption, but this will also result in more files across
# the brokers. (A partition splits one topic's data into several chunks that are
# stored separately, i.e. distributed storage.)
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup
# and flushing at shutdown. Increasing this value is recommended when the data
# dirs are located on a RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync()
# to sync the OS cache lazily. The following configurations control the flush of
# data to disk. There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush
#       does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small
#       flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a
# period of time or every N messages (or both). This can be done globally and
# overridden on a per-topic basis.
# Kafka's flush policy is based only on message count and time interval; there is
# no size-based option. You can configure either one or both (both by default).

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has
# accumulated. A segment will be deleted whenever *either* of these criteria is met
# (time-based or size-based). Deletion always happens from the end of the log.

# The minimum age of a log file to be eligible for deletion
# (time-based policy; default: keep data for 7 days)
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long
# as the remaining segments don't drop below log.retention.bytes (1 GB here).
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log
# segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted
# according to the retention policies (5 minutes)
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=uplooking01:2181,uplooking02:2181,uplooking03:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
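In practice you rarely edit server.properties inside the container by hand: the wurstmeister/kafka image maps every KAFKA_* environment variable onto the corresponding server.properties key (KAFKA_LOG_RETENTION_HOURS becomes log.retention.hours, KAFKA_NUM_PARTITIONS becomes num.partitions, and so on). A sketch of overriding two of the defaults above at container start; the chosen values are examples only:

# Each KAFKA_* variable below is translated into a server.properties entry by the image
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_LOG_RETENTION_HOURS=168 \
  -e KAFKA_NUM_PARTITIONS=3 \
  wurstmeister/kafka:latest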
This article is compiled from: https://www.cnblogs.com/panpanwelcome/p/12580506.html, https://blog.csdn.net/qq_22041375/article/details/106180415, and https://www.cnblogs.com/toutou/p/linux_install_kafka.html