1. Build the RabbitMQ image
RabbitMQ provides an Autocluster plugin that can form a RabbitMQ cluster automatically. Below we build our own RabbitMQ image on top of the official Docker image and add the autocluster plugin, so the image can be used on Kubernetes.
First, download the autocluster and rabbitmq_aws plugins from the project's releases page; I used 0.8.0, the latest version at the time.
mkdir -p rabbitmq/plugins
cd rabbitmq/plugins
wget https://github.com/rabbitmq/rabbitmq-autocluster/releases/download/0.8.0/autocluster-0.8.0.ez
wget https://github.com/rabbitmq/rabbitmq-autocluster/releases/download/0.8.0/rabbitmq_aws-0.8.0.ez
cd ..
My Dockerfile looks like this:
FROM rabbitmq:3.6.11-management-alpine
LABEL maintainer="yangyuhang"
RUN apk update && apk add ca-certificates && \
apk add tzdata && \
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
echo "Asia/Shanghai" > /etc/timezone
ADD plugins/*.ez /opt/rabbitmq/plugins/
RUN rabbitmq-plugins enable --offline autocluster
- rabbitmq:3.6.11-management-alpine is used as the base image
- the autocluster plugin is added and enabled offline
Build the image and push it to a personal Alibaba Cloud registry:
docker build -t registry.cn-zhangjiakou.aliyuncs.com/beibei_dtstack/cashier_rabbitmq:base .
docker push registry.cn-zhangjiakou.aliyuncs.com/beibei_dtstack/cashier_rabbitmq:base
2. Deploy the RabbitMQ cluster as a StatefulSet
Before deploying the cluster, we need a StorageClass as the persistence backend for the cluster's data. This example uses NFS as the backing store: set up NFS first and make sure every node in the Kubernetes cluster can mount it. With NFS in place, create a StorageClass from the following YAML:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: first-storage
parameters:
  archiveOnDelete: "false"
provisioner: nfs-first-storage
reclaimPolicy: Retain
volumeBindingMode: Immediate
Then create a PersistentVolumeClaim (PVC) as the backing storage for the RabbitMQ cluster:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq
  namespace: cashier
spec:
  accessModes:
    - ReadWriteMany          # readable and writable by multiple nodes
  resources:
    requests:
      storage: 5Gi
  storageClassName: first-storage  # use the StorageClass created above
  volumeMode: Filesystem
Apply both files with kubectl apply -f to finish setting up the backing storage.
When building the RabbitMQ Docker image earlier we added the autocluster plugin. It performs service discovery against one of several backends, automatically discovering RabbitMQ nodes and joining them into the cluster. autocluster currently supports the following backends:
- AWS EC2 tags
- AWS Autoscaling Groups
- Kubernetes
- DNS A records
- Consul
- etcd
Kubernetes is on the list. When Kubernetes is used as the rabbitmq-autocluster backend, autocluster calls the Kubernetes API Server to fetch the endpoints of the RabbitMQ Service. From these it learns about the RabbitMQ Pods in the Kubernetes cluster, so it can join them into the RabbitMQ cluster. In other words, autocluster must access the Kubernetes API Server from inside the RabbitMQ Pods.
Since TLS authentication and RBAC are enabled on the Kubernetes API Server, a Pod needs a Kubernetes Service Account to call it. A Service Account is the account that programs inside a Pod use to access the Kubernetes API; it gives them an identity for those calls. Below we create a ServiceAccount for the rabbitmq Pods and authorize it on the endpoints resource with a Role and RoleBinding.
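Concretely, the plugin authenticates with the ServiceAccount token that Kubernetes mounts into every Pod at a well-known path. A minimal sketch of the request it effectively makes, assuming the Service and namespace names used in this article (the curl line is commented out because it only works from inside a Pod in the cluster):

```shell
# Standard in-Pod mount point for the ServiceAccount credentials:
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=cashier
SERVICE=rabbitmq
# Endpoints URL on the API server that autocluster queries for Pod addresses:
URL="https://kubernetes.default.svc/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE}"
echo "$URL"
# From inside a Pod running with the rabbitmq ServiceAccount you could
# verify access manually (requires a live cluster):
# curl --cacert "$SA_DIR/ca.crt" \
#      -H "Authorization: Bearer $(cat "$SA_DIR/token")" \
#      "$URL"
```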
Note that our deployment lives in the cashier namespace. Create the following rabbitmq.rbac.yaml file:
---
apiVersion: v1
kind: ServiceAccount  # identity the rabbitmq Pods use to call the API server
metadata:
  name: rabbitmq
  namespace: cashier
---
kind: Role  # grants read access to endpoints
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rabbitmq
  namespace: cashier
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
kind: RoleBinding  # bind the Role to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rabbitmq
  namespace: cashier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rabbitmq
subjects:
- kind: ServiceAccount
  name: rabbitmq
  namespace: cashier
Create the rabbitmq ServiceAccount together with the Role and RoleBinding on Kubernetes:
kubectl create -f rabbitmq.rbac.yaml
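Once applied, the grant can be checked with kubectl auth can-i (commented out here because it needs a live cluster). The ServiceAccount's full subject name follows the fixed system:serviceaccount:&lt;namespace&gt;:&lt;name&gt; scheme:

```shell
NAMESPACE=cashier
SA=rabbitmq
# Full RBAC subject name for the ServiceAccount:
SUBJECT="system:serviceaccount:${NAMESPACE}:${SA}"
echo "$SUBJECT"
# Requires a live cluster; should print "yes":
# kubectl auth can-i get endpoints -n "$NAMESPACE" --as="$SUBJECT"
```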
Then create the Services for accessing the RabbitMQ cluster in rabbitmq.service.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-management
  namespace: cashier
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 15672
    name: http
    nodePort: 32001  # management web UI from outside the cluster: http://<node-ip>:32001
  - port: 5672
    name: amqp
    nodePort: 32002
  selector:
    app: rabbitmq
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  namespace: cashier
  labels:
    app: rabbitmq
spec:
  clusterIP: None
  ports:
  - port: 5672
    name: amqp
  selector:
    app: rabbitmq
Create the RabbitMQ Services on Kubernetes:
kubectl create -f rabbitmq.service.yaml
Next, deploy the RabbitMQ cluster as a StatefulSet. Create rabbitmq.statefulset.yaml:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: rabbitmq
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: rabbitmq
  name: rabbitmq
  namespace: cashier
spec:
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: rabbitmq
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: rabbitmq
  serviceName: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: rabbitmq
    spec:
      containers:
      - env:
        - name: RABBITMQ_DEFAULT_USER
          valueFrom:
            secretKeyRef:
              key: rabbitDefaulUser
              name: devsecret  # username and password live in a Secret object
        - name: RABBITMQ_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              key: rabbitDefaultPass
              name: devsecret
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              key: erlang.cookie
              name: devsecret
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: K8S_SERVICE_NAME
          value: "rabbitmq"
        - name: RABBITMQ_USE_LONGNAME
          value: 'true'
        - name: RABBITMQ_NODENAME
          value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME)"
        - name: RABBITMQ_NODE_TYPE
          value: "disc"
        - name: AUTOCLUSTER_TYPE
          value: "k8s"
        - name: AUTOCLUSTER_DELAY
          value: '10'
        - name: AUTOCLUSTER_CLEANUP
          value: 'true'
        - name: CLEANUP_WARN_ONLY
          value: 'false'
        - name: K8S_ADDRESS_TYPE
          value: "hostname"
        - name: K8S_HOSTNAME_SUFFIX
          value: ".$(K8S_SERVICE_NAME)"
        image: "registry.cn-zhangjiakou.aliyuncs.com/beibei_dtstack/cashier_rabbitmq:base"
        imagePullPolicy: IfNotPresent
        name: rabbitmq
        ports:
        - containerPort: 5672
          name: amqp
          protocol: TCP
        resources:
          limits:
            cpu: 250m
            memory: 512Mi
          requests:
            cpu: 150m
            memory: 256Mi
        volumeMounts:
        - mountPath: /var/lib/rabbitmq
          name: rabbitmq-volume
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: aliyunsecret
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccountName: rabbitmq
      volumes:
      - name: rabbitmq-volume
        persistentVolumeClaim:
          claimName: rabbitmq  # bind the PVC created earlier
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
Before creating the StatefulSet we also need a Secret object holding the RabbitMQ username, password, and erlang.cookie. The steps are:
# First generate an erlang.cookie file:
echo $(openssl rand -base64 32) > erlang.cookie
# Then base64-encode the file's contents and put them into a Secret manifest:
apiVersion: v1
kind: Secret
metadata:
  name: devsecret
  namespace: cashier
type: Opaque
data:
  rabbitDefaulUser: "cmFiYml0dXNlcgo="
  rabbitDefaultPass: "cmFiYml0cGFzcwo="
  erlang.cookie: "ClmQ9uk2OYk/e+F6wxQEj49rcWT0XzJFWvWIC8RHOiA="
- A Secret must not store plaintext; every value under data has to be base64-encoded. Then create it:
kubectl create -f secret.yaml
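A quick sketch of producing the base64 values for the Secret (the plaintext strings here are placeholders, not the real credentials). Note that echo appends a newline, which is why the manifest values above decode to strings ending in \n; printf avoids that:

```shell
# Encode credentials for the Secret's data fields (placeholders, not real):
USER_B64=$(printf 'rabbituser' | base64)
PASS_B64=$(printf 'rabbitpass' | base64)
echo "rabbitDefaulUser: \"$USER_B64\""
echo "rabbitDefaultPass: \"$PASS_B64\""
# Decoding works the same way in reverse:
printf '%s' "$USER_B64" | base64 -d
```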
- The environment variables RABBITMQ_USE_LONGNAME, RABBITMQ_NODENAME, AUTOCLUSTER_TYPE, AUTOCLUSTER_DELAY, K8S_ADDRESS_TYPE, AUTOCLUSTER_CLEANUP, etc. configure the autocluster plugin; see the RabbitMQ Autocluster documentation for details.
- RABBITMQ_ERLANG_COOKIE sets the Erlang cookie. RabbitMQ clustering is built on Erlang OTP, and Erlang nodes authorize inter-node communication with a shared Erlang cookie; here it is mounted from the devsecret Secret created above.
- RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS set the RabbitMQ administrator username and password, also mounted from the devsecret Secret.
- RABBITMQ_NODE_TYPE sets every cluster node to type disc, i.e. a disk node.
To run the RabbitMQ cluster on Kubernetes, the RabbitMQ nodes, that is the StatefulSet's Pods, must be able to reach one another. The nodes are named in the rabbit@hostdomainname form:
rabbit@rabbitmq-0.rabbitmq (rabbit@rabbitmq-0.rabbitmq.cashier.svc.cluster.local)
rabbit@rabbitmq-1.rabbitmq (rabbit@rabbitmq-1.rabbitmq.cashier.svc.cluster.local)
rabbit@rabbitmq-2.rabbitmq (rabbit@rabbitmq-2.rabbitmq.cashier.svc.cluster.local)
These are long node names, which is why RABBITMQ_USE_LONGNAME is set to true. For the nodes to reach one another via the domain names rabbitmq-0.rabbitmq, rabbitmq-1.rabbitmq, and rabbitmq-2.rabbitmq, the rabbitmq Service must be a Headless Service, so the clusterIP: None setting above is mandatory.
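The $(VAR) references in RABBITMQ_NODENAME are expanded by Kubernetes before the container starts. A small sketch of what the first Pod ends up with, using the names from this deployment:

```shell
MY_POD_NAME=rabbitmq-0        # injected via the Downward API
K8S_SERVICE_NAME=rabbitmq     # the headless Service's name
# What Kubernetes substitutes into RABBITMQ_NODENAME for this Pod:
NODENAME="rabbit@${MY_POD_NAME}.${K8S_SERVICE_NAME}"
# Fully qualified in-cluster DNS name of the same Pod:
FQDN="${MY_POD_NAME}.${K8S_SERVICE_NAME}.cashier.svc.cluster.local"
echo "$NODENAME ($FQDN)"
```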
Create the StatefulSet on Kubernetes:
kubectl create -f rabbitmq.statefulset.yaml
[root@k8s-master1 ~]# kubectl get statefulset rabbitmq -n cashier
NAME READY AGE
rabbitmq 3/3 5h15m
Finally, open the RabbitMQ Management UI to confirm that the three RabbitMQ nodes have formed a cluster.