Part One: Environment Preparation
- Physical host: Windows 7 64-bit
- VMware running 3 virtual machines with CentOS 6.8, IPs 192.168.17.[129-131]
- JDK 1.7 installed and configured
- Passwordless SSH login configured between the virtual machines (hostname resolution is sketched below)
- clustershell installed so configuration and commands can be run uniformly across all cluster nodes
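Later commands refer to the nodes by hostname (e.g. kafka02), so each node should be able to resolve the others' names. A minimal /etc/hosts sketch, with hostnames kafka01-kafka03 being my assumption rather than something fixed by the original setup:
192.168.17.129 kafka01
192.168.17.130 kafka02
192.168.17.131 kafka03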
1 : A quick note on how passwordless login and clustershell are set up and used
1.1 : Configure passwordless login (cluster nodes can log in to one another using just the target IP or hostname, with no password prompt)
1.1.1 : Generate the public and private key pair:
ssh-keygen -t rsa
1.1.2 : Check the generated key files:
ls /root/.ssh
1.1.3 : Copy the public key to the other machines:
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.17.129
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.17.130
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.17.131
1.1.4 : Test that the nodes can reach each other
Log in from one node to another and run a command to verify, for example:
ssh root@192.168.17.130
hostname
1.2 : Installing clustershell
Note: I installed the CentOS 6.6 minimal edition (no GUI). Running yum install clustershell there reports "no package", because the package in the default yum repositories has not been updated for a long time, so epel-release is used instead.
Install command:
sudo yum install epel-release
Then run yum install clustershell again and it will install from the EPEL repository.
1.2.1 : Configure clustershell groups
vim /etc/clustershell/groups
Add a line in the form "group_name: server IPs or hostnames", for example:
kafka:192.168.17.129 192.168.17.130 192.168.17.131
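A quick way to confirm the group works is to run a trivial command across it (the -b flag gathers identical output from the nodes):
clush -g kafka -b hostname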
Part Two: Downloading Zookeeper and Kafka
The Zookeeper and Kafka versions used in this article are 3.4.8 and 0.10.1.0 respectively.
1 : First, download both from the official sites.
Put the archives in a directory of your choice; here I use /opt/kafka.
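For reference, both archives can be fetched with wget; these Apache archive URLs follow the usual naming pattern for those versions, but verify them against the official download pages:
cd /opt/kafka
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz
wget https://archive.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz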
Then use clush to copy the archives to the other server nodes:
clush -g kafka -c /opt/kafka
2 : Use clush to extract the Zookeeper and Kafka archives on all nodes:
clush -g kafka "tar zxvf /opt/kafka/zookeeper-3.4.8.tar.gz -C /opt/kafka"
clush -g kafka "tar zxvf /opt/kafka/kafka_2.11-0.10.1.0.tgz -C /opt/kafka"
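The later Zookeeper commands use the shorter path /opt/kafka/zookeeper rather than the versioned directory name. One way to make those paths resolve (my own assumption; the original may simply have renamed the directory) is a symlink on every node:
clush -g kafka "ln -s /opt/kafka/zookeeper-3.4.8 /opt/kafka/zookeeper"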
3 : Copy zoo_sample.cfg to zoo.cfg (the default Zookeeper configuration file)
Edit zoo.cfg as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181    ## default zk port
## node IPs and ports
server.1=192.168.17.129:2888:3888
server.2=192.168.17.130:2888:3888
server.3=192.168.17.131:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
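The same zoo.cfg must exist on all three nodes. A sketch of creating it from the sample and pushing it out with clush (paths assume the /opt/kafka/zookeeper symlink suggested above):
cp /opt/kafka/zookeeper/conf/zoo_sample.cfg /opt/kafka/zookeeper/conf/zoo.cfg
vim /opt/kafka/zookeeper/conf/zoo.cfg
clush -g kafka -c /opt/kafka/zookeeper/conf/zoo.cfg --dest /opt/kafka/zookeeper/conf/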
4 : Create /tmp/zookeeper to store Zookeeper data
mkdir /tmp/zookeeper
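Since every node needs this directory, it can be created across the whole group in one go:
clush -g kafka mkdir -p /tmp/zookeeper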
5 : In each node's /tmp/zookeeper directory, create a myid file whose content is that node's id (1, 2, or 3), for example on the first node:
echo "1" > /tmp/zookeeper/myid
clush -g kafka "service iptables status"clush -g kafka "service iptables stop"
7 : Start Zookeeper on all nodes:
clush -g kafka /opt/kafka/zookeeper/bin/zkServer.sh start /opt/kafka/zookeeper/conf/zoo.cfg
8 : Verify that Zookeeper is listening on port 2181 on every node:
clush -g kafka lsof -i:2181
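Besides checking the port, zkServer.sh status reports each node's role, so you can see which one was elected leader:
clush -g kafka /opt/kafka/zookeeper/bin/zkServer.sh status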
9 : Connect with the Zookeeper CLI and create a test znode to confirm the ensemble works:
bin/zkCli.sh -server 192.168.17.130:2181
create /test hello
Part Three: Kafka Installation and Deployment
1 : In config/server.properties on each node, point zookeeper.connect at the Zookeeper ensemble (a fuller per-node sketch follows below):
zookeeper.connect=192.168.17.129:2181,192.168.17.130:2181,192.168.17.131:2181
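zookeeper.connect is not the only setting that matters: each broker also needs a unique broker.id, and you will usually want log.dirs and the listener port set explicitly. A minimal per-node sketch for Kafka 0.10.x; values other than zookeeper.connect are my assumptions, adjust them to your environment:
# config/server.properties on 192.168.17.129 (use broker.id=2 and 3 on the other two nodes)
broker.id=1
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=192.168.17.129:2181,192.168.17.130:2181,192.168.17.131:2181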
2 : Start Kafka on every node:
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties
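To confirm the brokers came up everywhere, the same kind of port check used for Zookeeper can be applied to the Kafka port:
clush -g kafka lsof -i:9092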
3 : Create a test topic:
bin/kafka-topics.sh --zookeeper 192.168.17.129:2181 --topic topicTest --create --partitions 3 --replication-factor 2
4 : Describe the topic:
[root@Kafka01 kafka_2.11-0.10.1.0]# bin/kafka-topics.sh --zookeeper 192.168.17.129:2181 --topic topicTest --describe
5 : Start a console consumer:
bin/kafka-console-consumer.sh --zookeeper 192.168.17.130:2181 --topic topicTest
6 : Start a console producer (on another node or terminal) and send a few messages:
bin/kafka-console-producer.sh --broker-list kafka02:9092 --topic topicTest
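Messages typed into the producer console should show up in the consumer console. The topic list can also be double-checked from any node:
bin/kafka-topics.sh --zookeeper 192.168.17.129:2181 --list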
