Question 1: Role-Based Access Control (RBAC)
Create a ClusterRole named deployment-clusterrole that only allows the create verb on Deployments, DaemonSets, and StatefulSets.
In the namespace app-team1, create a ServiceAccount named cicd-token, and bind the ClusterRole from the previous step to that ServiceAccount.
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,ds,sts
kubectl create serviceaccount cicd-token -n app-team1
# Bind within the namespace using a RoleBinding (a RoleBinding may reference a ClusterRole):
kubectl create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
# Reference ClusterRole manifest from the Kubernetes docs (adapt verbs/resources as needed):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-token
  namespace: app-team1
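To verify the binding works, impersonation with kubectl auth can-i is a quick check (the rolebinding name above is my own choice; any name passes as long as the binding is correct):
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # expect "yes"
kubectl auth can-i create pods --as=system:serviceaccount:app-team1:cicd-token -n app-team1          # expect "no"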
Question 2: Node maintenance - marking a node unschedulable
Mark the node k8s-node-1 as unschedulable, then evict all Pods running on it so they are rescheduled elsewhere.
# drain cordons the node first, so a separate kubectl cordon is not required
$ kubectl drain k8s-node-1 --ignore-daemonsets --delete-emptydir-data --force
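A quick check that the node is now unschedulable and empty (exact output varies by cluster):
$ kubectl get node k8s-node-1                      # STATUS should show Ready,SchedulingDisabled
$ kubectl get pods -A -o wide | grep k8s-node-1    # only DaemonSet-managed Pods should remain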
Question 3: Kubernetes version upgrade
The existing Kubernetes cluster is running version 1.21.0. Upgrade all control plane components on the master node, and only on the master node, to version 1.22.0. Also upgrade kubelet and kubectl on the master node.
# Put the master into maintenance mode
$ kubectl config use-context mk8s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 11d v1.21.0
k8s-node01 Ready <none> 8d v1.21.0
k8s-node02 Ready <none> 11d v1.21.0
$ kubectl cordon k8s-master
# Evict the Pods
$ kubectl drain k8s-master --delete-emptydir-data --ignore-daemonsets --force
# As the question instructs, ssh to the master node
$ ssh k8s-master
$ apt update
$ apt-cache policy kubeadm | grep 1.22.0   # check which package versions are available
$ apt-get install kubeadm=1.22.0-00
# Verify the upgrade plan
$ kubeadm upgrade plan
# Output like the following confirms the target version can be applied:
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.22.0
# Upgrade the master node; --etcd-upgrade=false leaves etcd untouched
$ kubeadm upgrade apply v1.22.0 --etcd-upgrade=false
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
# Upgrade kubelet and kubectl
$ apt-get install -y kubelet=1.22.0-00 kubectl=1.22.0-00
$ systemctl daemon-reload
$ systemctl restart kubelet
$ kubectl uncordon k8s-master
node/k8s-master uncordoned
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 11d v1.22.0
k8s-node01 Ready <none> 8d v1.21.0
k8s-node02 Ready <none> 11d v1.21.0
Question 4: etcd backup and restore
Create a snapshot of the etcd instance at https://127.0.0.1:2379 and save it to /srv/data/etcd-snapshot.db. If the snapshot command hangs, press ctrl+c to abort and retry.
Then restore an existing snapshot: /var/lib/backup/etcd-snapshot-previous.db
The certificates for running etcdctl are located at:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  snapshot save /srv/data/etcd-snapshot.db
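Before moving on, it is worth confirming the snapshot file is valid; snapshot status reads the local file, so no endpoint or certificates are needed:
ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db --write-out=table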
# snapshot restore is a local file operation; the restored data lands in --data-dir
# (the target directory below is my own example; afterwards point etcd's data volume at it,
# or stop etcd first and restore into its original data dir)
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore
Question 5: NetworkPolicy
Create a NetworkPolicy named all-port-from-namespace that allows Pods in the internal namespace to reach port 9000 of Pods in that same namespace.
Pods outside the internal namespace must not be allowed access,
and access to Pods that are not listening on port 9000 must not be allowed.
# Reference NetworkPolicy manifest from the Kubernetes docs:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
    ports:
    - protocol: TCP
      port: 6379
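Adapted to the task, a sketch of the answer; this assumes the cluster sets the standard kubernetes.io/metadata.name label on namespaces (if the internal namespace carries a different label, select on that instead):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: all-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal
    ports:
    - protocol: TCP
      port: 9000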
Question 6: Layer-4 load balancing with a Service
Reconfigure the existing deployment front-end: in the container named nginx, add a port configuration named http exposing port 80. Then create a Service named front-end-svc that exposes the deployment's http port, with the Service type set to NodePort.
kubectl edit deployment front-end
# inside the Pod template, add a ports section to the container named nginx:
spec:
  template:
    spec:
      containers:
      - name: nginx
        # ...existing container fields...
        ports:
        - name: http
          containerPort: 80
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    app: MyApp        # must match the Pod labels of the front-end deployment
  ports:
  - port: 80
    targetPort: http  # references the named container port added above
    nodePort: 30007   # optional; omit to let Kubernetes pick one
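Equivalently, kubectl expose can generate the Service and copy the deployment's selector automatically, which avoids the MyApp placeholder above:
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort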
Question 7: Layer-7 load balancing with Ingress
In the ing-internal namespace, create an Ingress named pong that proxies to the Service hi on port 5678 under the path /hi.
Verification: curl -kL <INTERNAL_IP>/hi should return hi
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
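To run the verification, the address to curl can usually be read from the Ingress itself (how the address is exposed depends on the controller installation):
kubectl get ingress pong -n ing-internal   # the ADDRESS column, once populated
curl -kL <INTERNAL_IP>/hi                  # <INTERNAL_IP> comes from the exam environment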
Question 8: Scaling a deployment
Scale the deployment named loadbalancer to 6 replicas.
kubectl scale --replicas=6 deployment/loadbalancer
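A quick confirmation that all replicas came up:
kubectl get deployment loadbalancer   # READY should show 6/6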
Question 9: Scheduling a Pod to a specific node
Create a Pod named nginx-kusc00401 with image nginx, scheduled to a node carrying the label disk=spinning.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning
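To confirm the scheduler honored the selector:
kubectl get pod nginx-kusc00401 -o wide   # the NODE column should show a disk=spinning node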
Question 10: Checking node health
Count the nodes that are Ready, excluding nodes that carry a NoSchedule taint, and write the number to /opt/KUSC00402/kusc00402.txt.
kubectl get nodes | awk '$2 == "Ready"' | wc -l             # count nodes whose STATUS is exactly Ready
kubectl describe nodes | grep Taints | grep -c NoSchedule   # count nodes whose taint line shows NoSchedule
# subtract the second number from the first, then write the result:
echo <count> > /opt/KUSC00402/kusc00402.txt
Question 11: A Pod with multiple containers
Create a Pod named kucc1 containing one container per listed image (the question may ask for 1-4; here it is four): nginx, redis, memcached, and consul.
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
Question 12: PersistentVolume
Create a PV named app-config with capacity 2Gi and access mode ReadWriteMany. The volume type is hostPath and the path is /srv/app-config.
# Reference local-volume PV manifest from the Kubernetes docs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
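The docs template above uses a local volume, but the task asks for hostPath, so the answer looks more like this minimal sketch:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /srv/app-config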
Question 13: PersistentVolumeClaim
Create a PVC named pv-volume with storageClass csi-hostpath-sc and size 10Mi.
Then create a Pod named web-server with image nginx that mounts the PVC at /usr/share/nginx/html, with access mode ReadWriteOnce. Afterwards, use kubectl edit or kubectl patch to grow the PVC to 70Mi, and record the change.
# Reference PVC manifest from the Kubernetes docs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}
# Reference Pod manifest mounting a PVC:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
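Adapted to the task, a sketch of the two manifests (the container name and the volume name inside the Pod are my own choices; only the Pod name, image, mount path, and PVC details are fixed by the question):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: pv-volume
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume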
# grow the claim and record the change (--record is deprecated but still accepted):
kubectl edit pvc pv-volume --record
# change spec.resources.requests.storage to 70Mi
Question 14: Monitoring Pod logs
Check the logs of the Pod named foobar, filter out the lines containing unable-access-website, and write them to /opt/KUTR00101/foobar.
kubectl logs foobar | grep unable-access-website > /opt/KUTR00101/foobar   # no -f: following the stream would block instead of exiting
Question 15: Sidecar container
Add a sidecar named busybox with image busybox to the existing Pod legacy-app. The sidecar's startup command is /bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log'.
The sidecar and the existing container share a volume named logs, mounted at /var/log/.
# Reference manifest from the docs: a Pod writing logs that a sidecar can tail.
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
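Since containers cannot be added to a running Pod in place, export the spec, delete the Pod, and re-create it. The snippet below is the part appended to .spec.containers; it assumes a volume named logs already exists under .spec.volumes (if not, add it, e.g. as emptyDir: {}):
kubectl get pod legacy-app -o yaml > legacy-app.yaml
# edit legacy-app.yaml, appending under .spec.containers:
  - name: busybox
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
kubectl delete pod legacy-app
kubectl apply -f legacy-app.yaml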
Question 16: Monitoring Pod metrics
Find the Pods labeled name=cpu-user, pick the one with the highest CPU usage, and write its name into the existing file /opt/KUTR00401/KUTR00401.txt. (Note that no namespace is specified, so use -A to search all namespaces.)
kubectl top pod -A --selector='name=cpu-user' --sort-by='cpu'
echo <pod-name> >> /opt/KUTR00401/KUTR00401.txt   # replace <pod-name> with the NAME in the first row of the output above
Question 17: Cluster troubleshooting - kubelet failure
A node named wk8s-node-0 is in NotReady state. Bring it back to a normal state, and make sure the fix is applied automatically on boot.
kubectl describe node wk8s-node-0   # inspect the node first
ssh wk8s-node-0
systemctl status kubelet            # typically the kubelet is stopped
systemctl enable --now kubelet      # start it and enable it at boot, satisfying the persistence requirement
exit
kubectl get node wk8s-node-0        # should report Ready after a short wait