Deploying Kafka on Kubernetes with Helm


https://blog.frognew.com/2019/07/use-helm-install-kafka-on-k8s.html

 

1. Configure the Helm chart repo

The Kafka helm chart is still incubating, so before using it you need to add the incubator repo: helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator

If you are located in mainland China, use the mirror repositories provided by Azure instead:

helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator

helm repo list
NAME     	URL                                               
stable   	http://mirror.azure.cn/kubernetes/charts          
local    	http://127.0.0.1:8879/charts                      
incubator	http://mirror.azure.cn/kubernetes/charts-incubator
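
After adding the repos, refresh the local chart index and confirm that the kafka chart is visible (Helm 2 syntax, matching the helm install --name usage later in this post):

helm repo update
helm search incubator/kafka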

2. Create Local PVs for Kafka and Zookeeper

2.1 Create the Local PVs for Kafka

The deployment environment here is a local test environment, so Local Persistent Volumes are chosen for storage. First, create a StorageClass for local storage on the k8s cluster, local-storage.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
kubectl apply -f local-storage.yaml 
storageclass.storage.k8s.io/local-storage created
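
You can check that the StorageClass is in place:

kubectl get storageclass local-storage
# should list local-storage with provisioner kubernetes.io/no-provisioner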

Here 3 kafka broker nodes will be deployed on the two k8s nodes node1 and node2, so first create the Local PVs for these 3 kafka brokers on node1 and node2, kafka-local-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
kubectl apply -f kafka-local-pv.yaml

Matching the Local PVs created above, create the directory /home/kafka/data-0 on node1, and the directories /home/kafka/data-1 and /home/kafka/data-2 on node2:

# node1
mkdir -p /home/kafka/data-0

# node2
mkdir -p /home/kafka/data-1
mkdir -p /home/kafka/data-2

2.2 Create the Local PVs for Zookeeper

Here 3 zookeeper nodes will be deployed on the two k8s nodes node1 and node2, so first create the Local PVs for these 3 zookeeper nodes on node1 and node2, zookeeper-local-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
kubectl apply -f zookeeper-local-pv.yaml

Matching the Local PVs created above, create the directory /home/kafka/zkdata-0 on node1, and the directories /home/kafka/zkdata-1 and /home/kafka/zkdata-2 on node2:

# node1
mkdir -p /home/kafka/zkdata-0

# node2
mkdir -p /home/kafka/zkdata-1
mkdir -p /home/kafka/zkdata-2
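
At this point all six Local PVs should exist and still be in the Available state; because of volumeBindingMode: WaitForFirstConsumer they will only be bound once the kafka and zookeeper pods are scheduled. A quick check:

kubectl get pv
# expect datadir-kafka-0/1/2 and data-kafka-zookeeper-0/1/2 with STATUS Available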

3. Deploy Kafka

Write the values file for the kafka chart, kafka-values.yaml:

replicas: 3
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
persistence:
  storageClass: local-storage
  size: 5Gi
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
  replicaCount: 3
  image:
    repository: gcr.azk8s.cn/google_samples/k8szk
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
  • The installation needs docker images such as gcr.io/google_samples/k8szk:v3, so the image repository above is switched to Azure's GCR Proxy Cache: gcr.azk8s.cn

Install the chart:
    helm install --name kafka --namespace kafka -f kafka-values.yaml incubator/kafka 
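
If the installation succeeds, the release and the resources it created can be inspected at any time with:

helm status kafka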

Finally, confirm that all the pods are in the Running state:

kubectl get pod -n kafka -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
kafka-0             1/1     Running   0          12m     10.244.0.61   node1   <none>           <none>
kafka-1             1/1     Running   0          6m3s    10.244.1.12   node2   <none>           <none>
kafka-2             1/1     Running   0          2m26s   10.244.1.13   node2   <none>           <none>
kafka-zookeeper-0   1/1     Running   0          12m     10.244.1.9    node2   <none>           <none>
kafka-zookeeper-1   1/1     Running   0          11m     10.244.1.10   node2   <none>           <none>
kafka-zookeeper-2   1/1     Running   0          11m     10.244.1.11   node2   <none>           <none>

kubectl get statefulset -n kafka
NAME              READY   AGE
kafka             3/3     22m
kafka-zookeeper   3/3     22m

kubectl get service -n kafka
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kafka                      ClusterIP   10.102.8.192    <none>        9092/TCP                     31m
kafka-headless             ClusterIP   None            <none>        9092/TCP                     31m
kafka-zookeeper            ClusterIP   10.110.43.203   <none>        2181/TCP                     31m
kafka-zookeeper-headless   ClusterIP   None            <none>        2181/TCP,3888/TCP,2888/TCP   31m

As you can see, the current kafka helm chart deploys kafka and zookeeper as StatefulSets, and through the Local PVs kafka-0 was scheduled onto node1 while kafka-1 and kafka-2 were scheduled onto node2.
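
Since StatefulSet PVCs are named <claim-template>-<pod>, the claims datadir-kafka-0/1/2 and data-kafka-zookeeper-0/1/2 should now be bound to the Local PVs of the same names created in step 2, which can be confirmed with:

kubectl get pvc -n kafka
# every PVC should show STATUS Bound and the matching PV in the VOLUME column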

4. Testing after installation

Run the following client Pod in the k8s cluster to access the kafka brokers for testing:

apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: kafka
spec:
  containers:
  - name: kafka
    image: confluentinc/cp-kafka:5.0.1
    command:
    - sh
    - -c
    - "exec tail -f /dev/null"

Create the Pod and enter the testclient container:

kubectl apply -f testclient.yaml
kubectl -n kafka exec testclient -it sh

List the kafka-related commands:

ls /usr/bin/ | grep kafka
kafka-acls
kafka-broker-api-versions
kafka-configs
kafka-console-consumer
kafka-console-producer
kafka-consumer-groups
kafka-consumer-perf-test
kafka-delegation-tokens
kafka-delete-records
kafka-dump-log
kafka-log-dirs
kafka-mirror-maker
kafka-preferred-replica-election
kafka-producer-perf-test
kafka-reassign-partitions
kafka-replica-verification
kafka-run-class
kafka-server-start
kafka-server-stop
kafka-streams-application-reset
kafka-topics
kafka-verifiable-consumer
kafka-verifiable-producer

Create a Topic test1:

kafka-topics --zookeeper kafka-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1

List the Topics:

kafka-topics --zookeeper kafka-zookeeper:2181 --list
test1
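
As an end-to-end check, you can also produce and consume a few messages from inside the testclient container, going through the kafka ClusterIP service listed earlier; a minimal sketch:

# shell 1: produce a few lines to test1 (Ctrl+C to stop)
kafka-console-producer --broker-list kafka:9092 --topic test1

# shell 2: read the topic from the beginning
kafka-console-consumer --bootstrap-server kafka:9092 --topic test1 --from-beginning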

5. Summary

The kafka deployed on k8s here, via the incubator/kafka chart from the official Helm repositories, uses the image confluentinc/cp-kafka:5.0.1, i.e. the Kafka distribution provided by Confluent. Confluent Platform Kafka (CP Kafka for short) offers some advanced features that Apache Kafka does not, such as cross-datacenter replication, a Schema Registry, and cluster monitoring tools. CP Kafka currently comes in a free edition and an enterprise edition; besides the standard Apache Kafka components, the free edition also includes the Schema Registry and the REST Proxy.

Confluent Platform and Apache Kafka Compatibility gives the version mapping between Confluent Kafka and Apache Kafka, from which one can see that the CP 5.0.1 installed here corresponds to Apache Kafka 2.0.x.

Enter one of the broker containers and take a look:

ls /usr/share/java/kafka | grep kafka
kafka-clients-2.0.1-cp1.jar
kafka-log4j-appender-2.0.1-cp1.jar
kafka-streams-2.0.1-cp1.jar
kafka-streams-examples-2.0.1-cp1.jar
kafka-streams-scala_2.11-2.0.1-cp1.jar
kafka-streams-test-utils-2.0.1-cp1.jar
kafka-tools-2.0.1-cp1.jar
kafka.jar
kafka_2.11-2.0.1-cp1-javadoc.jar
kafka_2.11-2.0.1-cp1-scaladoc.jar
kafka_2.11-2.0.1-cp1-sources.jar
kafka_2.11-2.0.1-cp1-test-sources.jar
kafka_2.11-2.0.1-cp1-test.jar
kafka_2.11-2.0.1-cp1.jar

As you can see, the corresponding apache kafka version number is 2.11-2.0.1: the leading 2.11 is the version of the Scala compiler (Kafka's server-side code is written in Scala), and the trailing 2.0.1 is the Kafka version. In other words, CP Kafka 5.0.1 is based on Apache Kafka 2.0.1.

