Clustered ZooKeeper installation
Step 1: Add the Helm chart repository
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
Step 2: Fetch the ZooKeeper chart
helm fetch incubator/zookeeper
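helm fetch saves the chart as a tarball (e.g. zookeeper-x.y.z.tgz) in the current directory; to edit values.yaml, it can be unpacked first (a sketch; the version suffix varies):
tar -zxvf zookeeper-*.tgz
# the persistence settings shown below live in ./zookeeper/values.yaml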
Step 3: Edit the persistence settings in the chart's values.yaml
...
persistence:
  enabled: true
  ## zookeeper data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 5Gi
...
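Once the StorageClass referenced above exists (see the notes that follow), the chart can be installed from the unpacked directory; a hedged sketch (the release name zookeeper and the namespace xxxxxx are placeholders):
helm install zookeeper ./zookeeper -n xxxxxx                        # Helm 3
# helm install --name zookeeper ./zookeeper --namespace xxxxxx     # Helm 2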
Notes:
1. If you already have storage, you can skip the steps below and simply set storageClass to your existing StorageClass.
List the StorageClasses and substitute the corresponding NAME:
kubectl get sc -n <namespace>
[root@k8s-master zookeeper]# kubectl get sc -n xxxxxx
NAME         PROVISIONER                                           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/moldy-seagull-nfs-client-provisioner    Delete          Immediate           true                   16d
2. If you do not have storage yet, pay attention to the storage type and address when running the steps below.
Set up the storage (the storageClass name comes from kubectl get sc -n <namespace>; if no shared StorageClass exists, install one as follows).
1. Cluster version: if it is 1.19+
# Fill in the storage address for x.x.x.x, e.g. for NFS shared storage use the server IP 192.168.8.158
helm install --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner
If you see the error
Error: failed to download "stable/nfs-client-provisioner" (hint: running `helm repo update` may help)
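the archived stable repository can be (re-)added before retrying; a hedged sketch (with Helm 3 a release name is also required; nfs-client-provisioner is just an example name):
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner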
2. If the cluster version is below 1.19, apply the following YAML files:
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
$ kubectl create -f nfs-client.yaml
Note the storage address in nfs-client.yaml!
nfs-client-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
If you use this provisioner, set storageClass in the chart's values.yaml to course-nfs-storage (or rename this StorageClass to match the value configured there).
nfs-client.yaml
Change the value of the NFS_SERVER env variable (spec.containers.env) to your actual NFS server; the 192.168.8.158 address below is only an example.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.8.158
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.8.158
            path: /data/k8s
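To verify that the provisioner works, a throw-away PVC can be created against the new StorageClass; a minimal sketch (the file name test-claim.yaml and the claim name test-claim are hypothetical):
test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: course-nfs-storage   # must match the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
kubectl apply -f test-claim.yaml -n xxxxxx followed by kubectl get pvc -n xxxxxx should show the claim as Bound; remove it afterwards with kubectl delete -f test-claim.yaml -n xxxxxx.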
Standalone (non-clustered) ZooKeeper installation
Note: adjust the storage addresses in zookeeper.yaml to your environment (there are three PVs to change).
kubectl apply -f zookeeper.yaml -n xxxxx
zookeeper.yaml
## Create the Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      protocol: TCP
      targetPort: 2181
      name: zookeeper-2181
      nodePort: 30000
    - port: 2888
      protocol: TCP
      targetPort: 2888
      name: zookeeper-2888
    - port: 3888
      protocol: TCP
      targetPort: 3888
      name: zookeeper-3888
  selector:
    name: zookeeper
---
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
  labels:
    pv: zookeeper-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ## Note: adjust the PV's NFS server and path to your environment
  nfs: # NFS settings
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-data-pv
---
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv
  labels:
    pv: zookeeper-datalog-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ## Note: adjust the PV's NFS server and path to your environment
  nfs: # NFS settings
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-datalog-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-datalog-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-logs-pv
  labels:
    pv: zookeeper-logs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ## Note: adjust the PV's NFS server and path to your environment
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-logs-pv
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper
  template:
    metadata:
      labels:
        name: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.4.13
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /logs
              name: zookeeper-logs
            - mountPath: /data
              name: zookeeper-data
            - mountPath: /datalog
              name: zookeeper-datalog
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
      volumes:
        - name: zookeeper-logs
          persistentVolumeClaim:
            claimName: zookeeper-logs-pvc
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-data-pvc
        - name: zookeeper-datalog
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc
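After applying, the pod and Service can be checked, and ZooKeeper reached through the NodePort; a hedged sketch (the namespace and node IP are placeholders, and nc is assumed to be available):
kubectl get pods -n xxxxx -l name=zookeeper
kubectl get svc zookeeper -n xxxxx
# from outside the cluster ZooKeeper answers on <node-ip>:30000, e.g.:
echo ruok | nc <node-ip> 30000    # a healthy server replies "imok"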
Install nimbus
Step 1: Install the nimbus configuration ConfigMap
Note: zookeeper in nimbus-cm.yaml is the name of the ZooKeeper Service.
kubectl apply -f nimbus-cm.yaml -n xxxxxx
nimbus-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nimbus-cm
data:
  storm.yaml: |
    # DataSource
    storm.zookeeper.servers: [zookeeper]
    nimbus.seeds: [nimbus]
    storm.log.dir: "/logs"
    storm.local.dir: "/data"
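As noted above, zookeeper in storm.zookeeper.servers is the ZooKeeper Service name and resolves only within the same namespace; if nimbus is deployed in a different namespace, the fully qualified in-cluster name would be needed instead, a hedged sketch (xxxxxx being the ZooKeeper namespace):
    storm.zookeeper.servers: [zookeeper.xxxxxx.svc.cluster.local]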
Step 2: Install the Deployment
kubectl apply -f nimbus.yaml -n xxxxxx
nimbus.yaml
Note: when creating the PVs, adjust the storage address to your environment.
## Create the Service
apiVersion: v1
kind: Service
metadata:
  name: nimbus
  labels:
    name: nimbus
spec:
  ports:
    - port: 6627
      protocol: TCP
      targetPort: 6627
      name: nimbus-6627
  selector:
    name: storm-nimbus
---
## Create the PVs; adjust the NFS server and path to your environment
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-data-pv
  labels:
    pv: storm-nimbus-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-logs-pv
  labels:
    pv: storm-nimbus-logs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-logs-pv
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storm-nimbus
  labels:
    name: storm-nimbus
spec:
  replicas: 1
  selector:
    matchLabels:
      name: storm-nimbus
  template:
    metadata:
      labels:
        name: storm-nimbus
    spec:
      hostname: nimbus
      imagePullSecrets:
        - name: e6-aliyun-image
      containers:
        - name: storm-nimbus
          image: storm:1.2.2
          imagePullPolicy: Always
          command:
            - storm
            - nimbus
          #args:
          #- nimbus
          volumeMounts:
            - mountPath: /conf/
              name: configmap-volume
            - mountPath: /logs
              name: storm-nimbus-logs
            - mountPath: /data
              name: storm-nimbus-data
          ports:
            - containerPort: 6627
      volumes:
        - name: storm-nimbus-logs
          persistentVolumeClaim:
            claimName: storm-nimbus-logs-pvc
        - name: storm-nimbus-data
          persistentVolumeClaim:
            claimName: storm-nimbus-data-pvc
        - name: configmap-volume
          configMap:
            name: nimbus-cm
      # hostNetwork: true
      # dnsPolicy: ClusterFirstWithHostNet
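Once the pod is Running, nimbus can be checked; a hedged sketch (newer kubectl versions accept the deployment/ prefix for logs and exec, otherwise use the pod name; the storm CLI is part of the storm:1.2.2 image):
kubectl get pods -n xxxxxx -l name=storm-nimbus
kubectl logs deployment/storm-nimbus -n xxxxxx
kubectl exec deployment/storm-nimbus -n xxxxxx -- storm list    # lists the topologies known to nimbus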
Install nimbus-ui
Step 1: Install the Deployment
kubectl create deployment stormui --image=adejonge/storm-ui -n xxxxxx
Step 2: Expose it as a Service
kubectl expose deployment stormui --port=8080 --type=NodePort -n xxxxxxx
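kubectl expose with --type=NodePort assigns a random NodePort; to find it and reach the UI (namespace placeholder as above):
kubectl get svc stormui -n xxxxxx
# the PORT(S) column shows 8080:<nodePort>/TCP; the UI is then reachable at http://<node-ip>:<nodePort>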
Step 3: Create the ConfigMap
Install zk-ui
Installation:
kubectl apply -f zookeeper-program-ui.yaml -n xxxxxxx
Configuration file:
zookeeper-program-ui.yaml
## Create the Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  type: NodePort
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 9090
      name: zookeeper-ui-9090
      nodePort: 30012
  selector:
    name: zookeeper-ui
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper-ui
  template:
    metadata:
      labels:
        name: zookeeper-ui
    spec:
      containers:
        - name: zookeeper-ui
          image: maauso/zkui
          imagePullPolicy: Always
          env:
            - name: ZKLIST
              value: 192.168.8.158:30000
          ports:
            - containerPort: 9090
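The ZKLIST value above points zkui at the node IP and the 30000 NodePort of the ZooKeeper Service; if zk-ui runs in the same namespace as ZooKeeper, the in-cluster Service address could be used instead, a hedged sketch (assumes the same namespace):
          env:
            - name: ZKLIST
              value: zookeeper:2181   # ZooKeeper Service name and client port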