K8S Resource Management Summary and Dashboard Creation
K8S core resource management methods:
- Imperative management: based on the many kubectl commands
- Declarative management: based on k8s resource manifests
- GUI management: based on the K8S dashboard
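To make the difference concrete, here is a minimal sketch; the deployment name, image and namespace are only illustrative, not part of this installation:

# Imperative: each action is a separate kubectl command
kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
kubectl scale deployment nginx-dp --replicas=2 -n kube-public

# Declarative: the desired state lives in a manifest and is applied as a whole
kubectl apply -f nginx-dp.yaml
kubectl get deployment nginx-dp -n kube-public -o yaml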
The four core K8S add-ons
CNI network plugin: if the cluster is small enough, a network plugin is not strictly needed; adjusting the iptables rules and adding a few routes is sufficient.
Use the right technology, at the right time, to solve the right problem.
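For a very small cluster, "a few routes" literally means static routes between the nodes' container networks. A hedged sketch, assuming two nodes 10.4.7.21 and 10.4.7.22 whose docker0 networks are 172.7.21.0/24 and 172.7.22.0/24 (these addresses are assumptions, adjust to your environment):

# On 10.4.7.21: reach the other node's containers via that node
ip route add 172.7.22.0/24 via 10.4.7.22 dev eth0
# On 10.4.7.22: the symmetric route back
ip route add 172.7.21.0/24 via 10.4.7.21 dev eth0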
-
CNI network plugin
- flannel (fine for clusters up to roughly 100 machines)
- Three commonly used working modes
- NAT
- VxLAN
- Optimized SNAT rules (for container-to-container traffic inside the cluster, the peer sees the real container IP instead of the host IP; see the sketch after this list)
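The SNAT optimization boils down to not masquerading traffic whose destination is another container network. A hedged sketch using iptables, where the 172.7.21.0/24 node subnet and the 172.7.0.0/16 container supernet are assumptions:

# Replace the default MASQUERADE rule so container-to-container traffic keeps the real source IP
iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE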
-
K8S service discovery
- Cluster network: clusterIP
- Service resource: Service Name
- CoreDNS: automatically associates the Service Name with its Cluster IP
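A quick way to see this association is to resolve a Service name directly against CoreDNS. A hedged check; the cluster DNS address 192.168.0.2 is an assumption, adjust to your cluster:

# The built-in "kubernetes" Service should resolve to its ClusterIP
dig -t A kubernetes.default.svc.cluster.local @192.168.0.2 +short
# Any Service gets a name of the form <service>.<namespace>.svc.cluster.local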
-
K8S service exposure
- Ingress resource: a core resource dedicated to exposing layer-7 (http/https) applications outside the K8S cluster
- ingress controller: essentially a simplified nginx (schedules the traffic) plus a Go program (dynamically picks up the Ingress yaml rules)
- Traefik: one piece of software that implements the ingress controller
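Once an Ingress rule exists (such as the dashboard one created below), the layer-7 path can be tested by sending the matching Host header to the ingress entry point. A hedged example, assuming Traefik runs in kube-system and is reachable behind the VIP 10.4.7.10 as in earlier parts of this series:

# List the Ingress rules the controller is expected to route
kubectl get ingress -n kube-system
# Hit the entry point with the Host header from the Ingress rule
curl -I -H "Host: dashboard.od.com" http://10.4.7.10/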
-
GUI graphical management plugin
Installing the dashboard
On the operations host hdss7-200:
[root@hdss7-200 k8s-yaml]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
[root@hdss7-200 k8s-yaml]# docker images | grep dashboard
k8scn/kubernetes-dashboard-amd64 v1.8.3 fcac9aa03fd6 2 years ago 102MB
[root@hdss7-200 k8s-yaml]# docker tag fcac9aa03fd6 harbor.od.com/public/dashboard:v1.8.3
[root@hdss7-200 k8s-yaml]# docker push !$
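The manifests below are served over http://k8s-yaml.od.com, so they need to land in the directory nginx exposes on hdss7-200. The exact path is an assumption based on earlier parts of this series; adjust if yours differs:

# Assumed location of the yaml repository served as k8s-yaml.od.com
mkdir -p /data/k8s-yaml/dashboard
cd /data/k8s-yaml/dashboard
# Create rbac.yaml, dp.yaml, svc.yaml and ingress.yaml here with the contents below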
rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
- --auto-generate-certificates
# Automatically generates the TLS certificates the dashboard serves with
livenessProbe:
# Container liveness probe
# Container readiness probe
# The goal is to judge whether the container started correctly in the K8S orchestration environment, or whether it exited abnormally while running
# Readiness probe
# After the container is pulled up, the readiness probe keeps probing it until it meets the defined requirements; only then is it considered ready, i.e. in the running state
port: 8443
# Checks whether anything is listening on 8443; if so, the container is considered healthy
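Once the Deployment is running, the configured probe can be inspected on the live Pod. A hedged check; <pod-name> must be filled in from your own cluster:

# Find the dashboard Pod, then show its liveness probe settings
kubectl -n kube-system get pods | grep kubernetes-dashboard
kubectl -n kube-system describe pod <pod-name> | grep -i -A2 liveness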
svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.od.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
Create the resources
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dp.yaml
deployment.apps/kubernetes-dashboard created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
service/kubernetes-dashboard created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml
ingress.extensions/kubernetes-dashboard created
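Before opening the browser, it is worth confirming that the Pod, Service and Ingress actually exist:

kubectl get pods -n kube-system -o wide | grep dashboard
kubectl get svc -n kube-system | grep dashboard
kubectl get ingress -n kube-system | grep dashboard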
View the web page