Integrating Kafka authentication (SASL/PLAIN) into a Kubernetes environment


1. Prerequisites

0- Set up a k8s + Docker lab environment

 

1- K8s fundamentals

 

2- Helm fundamentals

 

3- Understand Kafka's authentication mechanism on a standalone node
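For item 3, the core of SASL/PLAIN on a standalone broker is a handful of server settings plus a JAAS file. A minimal sketch using the standard Kafka property names (the port is illustrative, not necessarily the one used in this deployment):

```properties
# server.properties: accept SASL/PLAIN clients on a plaintext listener
listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
```

The broker JVM is then pointed at a JAAS file (like the kafka_server_jaas.conf shown later in this article) via the java.security.auth.login.config system property.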

2. Installing the k8s microservices (Kafka and ZooKeeper) with Helm

1) Install the microservices

helm repo add gs-all repoUrl    (to remove it again: helm repo remove gs-all)
export NAMESPACE=eric-schema-registry-sr-install
export TLS=false
kubectl create ns $NAMESPACE
helm install eric-data-coordinator-zk gs-all/eric-data-coordinator-zk --namespace=$NAMESPACE --devel --wait --timeout 20000s --set global.security.tls.enabled=$TLS --set replicas=1 --set persistence.persistentVolumeClaim.enabled=false
helm install eric-data-message-bus-kf gs-all/eric-data-message-bus-kf --namespace=$NAMESPACE --devel --wait --timeout 20000s --set global.security.tls.enabled=$TLS --set replicaCount=3 --set persistence.persistentVolumeClaim.enabled=false

 

2) Confirm the installation

## kubectl get pods -n eric-schema-registry-sr-install
NAME                         READY   STATUS    RESTARTS   AGE
eric-data-coordinator-zk-0   1/1     Running   57         98d
eric-data-message-bus-kf-0   1/1     Running   46         75d
eric-data-message-bus-kf-1   1/1     Running   43         75d
eric-data-message-bus-kf-2   1/1     Running   43         75d

## kubectl describe namespace eric-schema-registry-sr-install
Name:         eric-schema-registry-sr-install
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.


## kubectl describe all
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Families:       <none>
IP:                10.96.0.1
IPs:               <none>
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.65.4:6443
Session Affinity:  None
Events:            <none>

 

3) Enter the container and check the files

List all containers:

docker ps -a

 

Enter the k8s pod:

kubectl exec -it eric-data-message-bus-kf-0 -n eric-schema-registry-sr-install --  /bin/sh
Defaulted container "messagebuskf" out of: messagebuskf, checkzkready (init)
sh-4.4$

 

Enter the Docker container: docker exec -it <containerId|containerName> bash (or /bin/bash)

docker exec -it k8s_messagebuskf_eric-data-message-bus-kf-0_eric-schema-registry-sr-install_a71d1214-c7bb-4e19-ab36-9832e26896a5_46 bash 

bash-4.4$ cd /etc/confluent/docker/
bash-4.4$ ls
 configure  entrypoint        kafka.properties.template      kafka_server_jaas.conf.properties  log4j.properties.template  monitorcertZK.sh  renewcertZK.sh  tools-log4j.properties
 ensure     initcontainer.sh  kafkaPartitionReassignment.sh  launch                             monitorcertKF.sh           renewcertKF.sh    run             tools-log4j.properties.template

docker exec -it k8s_messagebuskf_eric-data-message-bus-kf-0_eric-schema-registry-sr-install_a71d1214-c7bb-4e19-ab36-9832e26896a5_46 bash
bash-4.4$

 

4) Copy files into the container (pod)

docker ps
docker cp k8s_messagebuskf_eric-data-message-bus-kf-0_eric-schema-registry-sr-install_b0c9eb0f-881b-4081-9931-b9fc0b314bb9_5:/etc/kafka /mnt/c/repo/k8skafka

docker cp messagebuskf:/etc/kafka /mnt/c/repo/k8skafka    (docker cp does not expand wildcards; copy the directory)

  kubectl -n eric-schema-registry-sr-install cp /mnt/c/repo/k8skafka/kafka/kafka_server_jaas.conf eric-data-message-bus-kf-1:/etc/kafka

docker cp k8s_messagebuskf_eric-data-message-bus-kf-0_eric-schema-registry-sr-install_b0c9eb0f-881b-4081-9931-b9fc0b314bb9_5:/usr/bin /mnt/c/repo/k8skafka
docker cp /mnt/c/repo/k8skafka/kafka_server_jaas.conf k8s_messagebuskf_eric-data-message-bus-kf-0_eric-schema-registry-sr-install_b0c9eb0f-881b-4081-9931-b9fc0b314bb9_5:/etc/kafka/

 

5) Confirm the node and the services in the k8s cluster

kubectl get node
NAME             STATUS   ROLES    AGE    VERSION
docker-desktop   Ready    master   161d   v1.19.7
kubectl get services -n eric-schema-registry-sr-install
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
eric-data-coordinator-zk                    ClusterIP   10.105.141.43    <none>        2181/TCP,8080/TCP,21007/TCP   98d
eric-data-coordinator-zk-ensemble-service   ClusterIP   None             <none>        2888/TCP,3888/TCP             98d
eric-data-message-bus-kf                    ClusterIP   None             <none>        9092/TCP                      75d
eric-data-message-bus-kf-0-nodeport         NodePort    10.111.66.146    <none>        9092:31090/TCP                75d
eric-data-message-bus-kf-1-nodeport         NodePort    10.108.230.113   <none>        9092:31091/TCP                75d
eric-data-message-bus-kf-2-nodeport         NodePort    10.98.161.13     <none>        9092:31092/TCP                75d
eric-data-message-bus-kf-client             ClusterIP   10.102.191.90    <none>        9092/TCP                      75d
zookeeper-nodeport                          NodePort    10.105.151.183   <none>        2181:32181/TCP                83d

3. Understanding the files in the Kafka microservice's Helm chart

Study the pod's Kafka-related environment variables, how the Helm chart controls them through values and templates, and how the container follows the chain /etc/confluent/docker/run --> configure --> launch, working from top to bottom, from the general to the specific. So, to enable or configure a Kafka feature inside the container, modify the relevant values in the chart's values.yaml (adding new ones where needed), apply them with helm install or helm upgrade, and let them drive the container's Kafka environment variables and generated configuration files.
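As an illustration of that chain (the key and variable names below are hypothetical, not the chart's actual ones): a value in values.yaml is referenced by a template, which renders it into a container environment variable that the /etc/confluent/docker scripts then turn into entries in the generated .properties files.

```yaml
# values.yaml (hypothetical key)
security:
  sasl:
    enabled: true

# templates/kafka-ss.yaml (hypothetical fragment): the value becomes an env var
# env:
#   - name: KAFKA_SASL_ENABLED
#     value: {{ .Values.security.sasl.enabled | quote }}
```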


 

4. Modify values and chart files to enable Helm support for Kafka SASL/PLAIN

1) The Kafka microservice's Helm chart

Find the chart archive eric-data-message-bus-kf-1.17.0-28.tgz in the local Helm repo and unpack it.

 

2) Modify the corresponding files in the chart

values.yaml ==> templates (kafka-ss.yaml) ==> pod env variables ==> /etc/confluent/docker/ (files) ==> /etc/kafka/*.properties. (The pod env variables and /etc/kafka/*.properties control /usr/bin/kafka.)

 

3) Modify values.yaml, adding or changing values as needed. To decide which value to change:

Examine templates/kafka-ss.yaml and search for sasl.


So, inside the k8s cluster, first set eric-data-message-bus-kf.sasl to true. Then check templates/_helpers.tpl to see how eric-data-message-bus-kf.sasl maps onto values.yaml.

security:
#     policyBinding:
#       create: false
#     policyReferenceMap:
#       default-restricted-security-policy: "default-restricted-security-policy"
#     tls:
#       enabled: true
    sasl:
      enabled: true

kafka-ss.yaml

from : 

260 {{- else }}
261 port: {{ template "eric-data-message-bus-kf.plaintextPort" . }}
262 {{- end }}

    to:

260 {{- else }}
261 port: {{ template "eric-data-message-bus-kf.saslPlaintextPort" . }}
262 {{- end }}

After making the changes, copy the files into the container with docker cp, then redeploy the k8s microservice with helm install / helm upgrade.

helm upgrade eric-data-message-bus-kf . --reuse-values --set global.security.sasl.enabled=true --set global.security.tls.enabled=false -n eric-schema-registry-sr-install

(This command did not take effect, so values.yaml was edited directly and the chart re-installed:)

helm uninstall eric-data-message-bus-kf -n eric-schema-registry-sr-install

helm install eric-data-message-bus-kf /home/ehunjng/helm-study/eric-data-message-bus-kf/ --namespace=$NAMESPACE --devel --wait --timeout 20000s --set global.security.tls.enabled=$TLS --set replicaCount=3 --set persistence.persistentVolumeClaim.enabled=false

 

Confirm the state after redeployment:

kubectl get pods -n $NAMESPACE
kubectl logs eric-data-message-bus-kf-0 -n $NAMESPACE
kubectl describe pods eric-data-message-bus-kf-2 -n $NAMESPACE

 

Verification:

Start a producer and a consumer in different containers and check that they can communicate.

kubectl exec -it eric-data-message-bus-kf-0 -n eric-schema-registry-sr-install --  /bin/sh
/usr/bin/kafka-console-producer.sh --broker-list localhost:9091 --topic test0730 --producer.config /etc/kafka/producer.properties

kubectl exec -it eric-data-message-bus-kf-1 -n eric-schema-registry-sr-install --  /bin/sh
 /usr/bin/kafka-console-consumer.sh --bootstrap-server localhost:9091 --topic test0730 --consumer.config /etc/kafka/consumer.properties
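The producer.config / consumer.config files passed above need matching SASL client settings. A minimal sketch of such a properties file, using the standard Kafka client property names and the admin credentials from the chart's JAAS file (shown in the verification steps later):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="h7801XHzaC";
```

Alternatively, the credentials can come from a client JAAS file passed to the JVM instead of the sasl.jaas.config property.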

 

5. Accessing Kafka from outside k8s with a username and password

1) First, the k8s cluster must expose the service externally. To expose it via NodePort with SASL enabled, add the following to values.yaml:

# options required for external access via nodeport
"advertised.listeners": EXTERNAL://127.0.0.1:$((31090 + ${KAFKA_BROKER_ID}))
"listener.security.protocol.map": SASL_PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
nodeport:
  enabled: true
  servicePort: 9092
  firstListenerPort: 31090
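The $((31090 + ${KAFKA_BROKER_ID})) expression above simply offsets each broker's advertised external port from firstListenerPort, which is why it lines up with the per-broker NodePort services (31090/31091/31092) listed earlier. A quick sketch:

```shell
# Broker ordinal 2 advertises port 31090 + 2 = 31092, matching the
# eric-data-message-bus-kf-2-nodeport service.
KAFKA_BROKER_ID=2
echo "EXTERNAL://127.0.0.1:$((31090 + KAFKA_BROKER_ID))"
# prints EXTERNAL://127.0.0.1:31092
```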

 

2) Next, referring to cp-helm-charts/statefulset.yaml at master · confluentinc/cp-helm-charts · GitHub,

modify kafka-ss.yaml:

command:
        - sh
        - -exc
        - |
          export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
          export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_NAME}.{{ template "cp-kafka.fullname" . }}-headless.${POD_NAMESPACE}:9092{{ include "cp-kafka.configuration.advertised.listeners" . }} && \
          exec /etc/confluent/docker/run
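The ${HOSTNAME##*-} expansion in the command above strips everything up to and including the last '-', so the StatefulSet pod's ordinal becomes the broker id:

```shell
# StatefulSet pods are named <statefulset-name>-<ordinal>;
# ##*- removes the longest prefix ending in '-', leaving the ordinal.
HOSTNAME=eric-data-message-bus-kf-2
KAFKA_BROKER_ID=${HOSTNAME##*-}
echo "$KAFKA_BROKER_ID"   # prints 2
```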

 

3) Finally, after the changes, re-install with Helm as before, or run:

helm upgrade --install eric-data-message-bus-kf .  --reuse-values --set global.security.sasl.enabled=true --set global.security.tls.enabled=false -n eric-schema-registry-sr-install

 

References used during configuration:

Kafka parameter reference:

https://blog.csdn.net/lidelin10/article/details/105316252

kafka/KafkaConfig.scala at trunk · apache/kafka (github.com)

Quickly deploying Kafka on K8S (accessible from outside the cluster): https://www.cnblogs.com/bolingcavalry/p/13917562.html

 

4) Verification

① Install Kafka on the client and enable SASL support.

② Enter the container and look up the username and password:

bash-4.4$ cat /etc/kafka/kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="h7801XHzaC"
  user_admin="h7801XHzaC";
};
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="h7801XHzaC";
};
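Instead of baking credentials into the .properties files, the console clients can also pick up a client JAAS file through the JVM's java.security.auth.login.config system property via KAFKA_OPTS, which the Kafka launch scripts append to the JVM command line. A sketch (the file path is hypothetical):

```shell
# Export before running the console producer/consumer; the path must point
# at a JAAS file containing a KafkaClient section like the one above.
export KAFKA_OPTS="-Djava.security.auth.login.config=$HOME/kafka_client_jaas.conf"
echo "$KAFKA_OPTS"
```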

 

③ When the client uses a wrong username/password:

ehunjng@CN-00005131:~/kafka_2.12-2.4.0$ bin/kafka-console-producer.sh --broker-list 127.0.0.1:31090 -topic kafkatest0804 --producer.config config/producer.properties
>[2021-10-27 14:36:31,697] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:31090) failed authentication due to: Authentication failed: Invalid username or password (org.apache.kafka.clients.NetworkClient)
[2021-10-27 14:36:32,106] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:31090) failed authentication due to: Authentication failed: Invalid username or password (org.apache.kafka.clients.NetworkClient)
[2021-10-27 14:36:32,872] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:31090) failed authentication due to: Authentication failed: Invalid username or password (org.apache.kafka.clients.NetworkClient)
[2021-10-27 14:36:34,145] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:31090) failed authentication due to: Authentication failed: Invalid username or password (org.apache.kafka.clients.NetworkClient)
[2021-10-27 14:36:35,367] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:31090) failed authentication due to: Authentication failed: Invalid username or password (org.apache.kafka.clients.NetworkClient)
^Cehunjng@CN-00005131:~/kafka_2.12-2.4.0$ cat kafka_client_jaas.conf
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka"
  password="kafkapasswd";
};

 

④ When the client uses the correct username/password:

Check the NodePort that exposes the service externally:
kubectl get services -n eric-schema-registry-sr-install
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
eric-data-coordinator-zk                    ClusterIP   10.105.141.43    <none>        2181/TCP,8080/TCP,21007/TCP   98d
eric-data-coordinator-zk-ensemble-service   ClusterIP   None             <none>        2888/TCP,3888/TCP             98d
eric-data-message-bus-kf                    ClusterIP   None             <none>        9092/TCP                      75d
eric-data-message-bus-kf-0-nodeport         NodePort    10.111.66.146    <none>        9092:31090/TCP                75d
eric-data-message-bus-kf-1-nodeport         NodePort    10.108.230.113   <none>        9092:31091/TCP                75d
eric-data-message-bus-kf-2-nodeport         NodePort    10.98.161.13     <none>        9092:31092/TCP                75d
eric-data-message-bus-kf-client             ClusterIP   10.102.191.90    <none>        9092/TCP                      75d
zookeeper-nodeport                          NodePort    10.105.151.183   <none>        2181:32181/TCP                84d


Set the correct username and password:
cat kafka_client_jaas.conf
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="h7801XHzaC";
};

Start the producer, using pod-0 as the message broker:

bin/kafka-console-producer.sh --broker-list 127.0.0.1:31090 -topic kafkatestExternal1027 --producer.config config/producer.properties
>external1027-1
>external1027-2
>external1027-3
>external1027-4
>

Start the consumer, using pod-2 as the message broker:

bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:31092 --topic kafkatestExternal1027 --from-beginning --consumer.config config/consumer.properties
external1027-1
external1027-2
external1027-3
external1027-4

