CKA Exam Question Notes
1. Set up a Kubernetes cluster with kubeadm
Reference: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Add the Aliyun Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list kubelet kubeadm kubectl --showduplicates | sort -r
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Configure the cgroup driver used by the kubelet.
Docker configuration:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
Kubelet configuration:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
Restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
Check the service status:
systemctl status kubelet
Pull the images:
kubeadm config images pull
Initialize the cluster:
kubeadm init \
  --apiserver-advertise-address=10.255.20.137 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=1.19.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all \
  --upload-certs
Set up the kubeconfig file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join worker nodes:
kubeadm join 10.255.20.137:6443 \
--token 3zayke.4f0va2516jbg1wdy \
--discovery-token-ca-cert-hash sha256:4e64aed48f750b79a7969ca45a6d999c51275fa67071730423a5934e8f6e85d1 \
--ignore-preflight-errors=all \
--v=5
wget https://docs.projectcalico.org/v3.11/manifests/calico.yaml
Edit the pod CIDR in the yaml: the default is 192.168.0.0/16; change it to the CIDR used at init time, e.g. 10.244.0.0/16.
kubectl apply -f calico.yaml
Check the cluster nodes:
kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 38m v1.19.0
node1 Ready <none> 23m v1.19.0
node2 Ready <none> 26m v1.19.0
Check whether the master node is tainted:
kubectl describe node master|grep -A1 Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
If the cluster has only a single master node, remove the taint so pods can be scheduled on it:
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
Verify that all pods are running:
kubectl get pod -A
# Set a taint
$ kubectl taint nodes node1 key1=value1:NoSchedule
# Look for the Taints field in the node description
$ kubectl describe node node1 | grep Taints
# Remove a taint
$ kubectl taint nodes node1 key1:NoSchedule-

NoSchedule: Kubernetes will not schedule pods onto a node with this taint.
PreferNoSchedule: Kubernetes will try to avoid scheduling pods onto a node with this taint.
NoExecute: Kubernetes will not schedule pods onto a node with this taint, and will also evict pods already running on it.
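The counterpart to a taint is a toleration on the pod. A minimal sketch of a pod that tolerates the key1=value1:NoSchedule taint set above (the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo   # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "key1"           # must match the taint's key
    operator: "Equal"
    value: "value1"       # must match the taint's value
    effect: "NoSchedule"  # must match the taint's effect
```

A toleration only permits scheduling onto the tainted node; it does not force the pod there (combine with nodeSelector or affinity for that).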
Configure kubectl command-line completion:
yum -y install bash-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
2. Create a test pod
Reference: https://kubernetes.io/zh/docs/concepts/workloads/pods/
Reference: https://kubernetes.io/zh/docs/tasks/inject-data-application/define-command-argument-container/
kubectl run nginx --image=nginx --dry-run=client -oyaml | tee -a pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
kubectl create -f pod.yaml
View the pod's details:
kubectl describe pod nginx
Check the pod's status:
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 100s
3. Create a namespace, and create a pod in that namespace
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/namespaces/
Reference: https://kubernetes.io/zh/docs/tasks/inject-data-application/define-command-argument-container/
Generate the namespace yaml:
kubectl create namespace cka --dry-run=client -oyaml > pod_cka.yaml
Add a document separator:
echo '---' >> pod_cka.yaml
Append the nginx pod yaml (note -n cka so the pod lands in the new namespace):
kubectl run nginx --image=nginx -n cka --dry-run=client -oyaml >> pod_cka.yaml
Create the resources:
kubectl create -f pod_cka.yaml
The resulting yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: cka
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: cka
spec:
  containers:
  - image: nginx
    name: nginx
  dnsPolicy: ClusterFirst
  restartPolicy: Always
4. Create a deployment and expose it as a Service
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/connect-applications-service/
Method 1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
kubectl create -f deployment.yaml
kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
Method 2:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
kubectl create -f service_deploy.yaml
5. List pods with specific labels in a namespace
Reference: https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/labels/
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
        environment: production
        tier: frontend
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 5701
        resources: {}
kubectl create -f deployment.yaml
kubectl get pods -l environment=production,tier=frontend
6. View a pod's logs and write the Error lines to a file
Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "ERROR: $i: $(date)"; i=$((i+1)); sleep 1; done']
kubectl create -f pod_logs.yaml
kubectl logs counter | grep "ERROR" > /tmp/counter_ERR.log
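The container's log-generating loop can be tried locally with a bounded count (three iterations here instead of the pod's infinite loop):

```shell
# Bounded version of the counter container's shell loop
i=0
while [ $i -lt 3 ]; do
  echo "ERROR: $i: $(date)"
  i=$((i+1))
done
# Emits three ERROR lines, one per iteration
```

Each line matches the pattern grepped by the `kubectl logs counter | grep "ERROR"` command above.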
7. Find the pod with the highest CPU usage among pods with a given label, and record it to a file
Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
Deploy metrics-server. Reference: https://github.com/kubernetes-sigs/metrics-server/tree/master
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Adjust the manifest to match the cluster:
--kubelet-preferred-address-types — the priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]); the default "InternalIP" is usually fine.
--kubelet-insecure-tls — do not verify the CA of serving certificates presented by kubelets. For testing purposes only; otherwise a proper CA certificate must be created.
diff components.yaml components-new.yaml
136c136,137
<         image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
---
>         - --kubelet-insecure-tls
>         image: phperall/metrics-server:v0.4.1
kubectl create -f components-new.yaml
kubectl get pod -n kube-system -l k8s-app=metrics-server
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-798b77bcf9-8wf4m   1/1     Running   0          2m44s
kubectl get pod -l app=my-dep | grep -v NAME | awk '{print $1}' | xargs -I {} kubectl top pod {} | grep -v NAME | sort -n -k 2
(The last line of the sorted output is the highest CPU consumer; redirect it to the required file with >.)
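The sort step can be checked offline against sample `kubectl top pod` output (the pod names and values below are made up): `sort -n -k 2` sorts numerically on the CPU column, so the highest consumer ends up last.

```shell
# Sample "kubectl top"-style lines: NAME CPU MEMORY
printf 'pod-a 12m 30Mi\npod-b 250m 60Mi\npod-c 5m 10Mi\n' | sort -n -k 2
# Ascending CPU order: pod-c (5m), pod-a (12m), pod-b (250m)
```

`sort -n` reads the leading numeric part of fields like "250m", which is why the millicore suffix does not break the ordering.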
8. Configure the kubelet on a node to run a static pod
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/static-pod/
cat <<EOF > /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
EOF
kubectl get pod -l role=myrole
NAME               READY   STATUS    RESTARTS   AGE
static-web-node2   1/1     Running   0          5m13s
9. Add an init container to a pod; the init container creates an empty file, and if that file is not detected the pod exits
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Reference: https://kubernetes.io/zh/docs/concepts/workloads/pods/init-containers/
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: liveness
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'touch /tmp/healthy; sleep 2; rm -rf /tmp/healthy; sleep 3']
kubectl create -f init_pod.yaml
kubectl get pod -w
10. Create a deployment with 3 replicas, perform a rolling image update and record it, then roll back to the previous version
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create:
kubectl create -f deployment.yaml
Update the image:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
Check the rollout status:
kubectl rollout status deployment.v1.apps/nginx-deployment
View the rollout history:
kubectl rollout history deployment.v1.apps/nginx-deployment
View a specific revision:
kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
Roll back to a specific revision:
kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
11. Scale a web deployment to 3 replicas
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
kubectl scale deployment.v1.apps/nginx-deployment --replicas=3
12. Create a pod running 4 containers: nginx, redis, memcached, and consul
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/assign-pods-nodes/
apiVersion: v1
kind: Pod
metadata:
  name: multiple-pod
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  - name: redis
    image: redis
    imagePullPolicy: IfNotPresent
  - name: memcached
    image: memcached
    imagePullPolicy: IfNotPresent
  - name: consul
    image: consul
    imagePullPolicy: IfNotPresent
kubectl create -f multiple_pod.yaml
kubectl get pod
13. Generate a deployment yaml file and save it to /opt/deploy.yaml
Reference: kubectl create deployment -h
kubectl create deployment my-dep --image=busybox --dry-run=client -oyaml > /opt/deploy.yaml
14. Create a pod and schedule it to a node with a specific label
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/assign-pods-nodes/
kubectl get nodes
kubectl label nodes node1 disktype=ssd
kubectl get nodes --show-labels
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
kubectl create -f ssd_pod.yaml
kubectl get pod -o wide
15. Ensure a pod runs on every node
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/daemonset/
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox
  labels:
    k8s-app: my-app
spec:
  selector:
    matchLabels:
      name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh','-c','sleep 3600']
kubectl create -f daemonset.yaml
kubectl get pod -owide
16. Count the nodes in Ready state, excluding nodes tainted with NoSchedule, and write the result to /opt/node.txt
Reference: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/taint-and-toleration/
kubectl get node | awk '$2=="Ready"{print $1}' | xargs -I {} kubectl describe node {} | grep Taint | grep -vc NoSchedule > /opt/node.txt
(Matching on the STATUS column exactly avoids counting NotReady nodes, which a plain /Ready/ pattern would also match.)
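The counting stage of this pipeline can be sanity-checked on sample `kubectl describe node` output (taint values below are made up): `grep -vc NoSchedule` counts the Taints lines that do NOT contain NoSchedule.

```shell
# Three sample Taints lines: one tainted node, two untainted
printf 'Taints: node-role.kubernetes.io/master:NoSchedule\nTaints: <none>\nTaints: <none>\n' \
  | grep Taint | grep -vc NoSchedule
# prints 2 (the two <none> lines)
```

`-v` inverts the match and `-c` counts matching lines, so the combination counts lines without NoSchedule.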
17. Mark a node unschedulable and reschedule the pods already running on it
Reference: https://kubernetes.io/zh/docs/reference/kubectl/overview/
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/safely-drain-node/
kubectl cordon node1
kubectl drain node1 --ignore-daemonsets
Remove the unschedulable mark:
kubectl patch node NodeName -p "{\"spec\":{\"unschedulable\":false}}"
or simply:
kubectl uncordon NodeName
18. Create a service for a pod, accessible via ClusterIP
Reference: https://kubernetes.io/zh/docs/tasks/access-application-cluster/service-access-application-cluster/
kubectl run nginx --image=nginx
kubectl expose pod nginx --port=80 --target-port=80
(The default service type is ClusterIP; --type=NodePort would additionally expose it on every node.)
19. Create a deployment and service with any name, then resolve the service from a busybox container with nslookup
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/connect-applications-service/
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/dns-debugging-resolution/
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/dns-pod-service/
Create the deployment:
kubectl create deployment nginx-dns --image=nginx
Expose the deployment via a service:
kubectl expose deployment nginx-dns --name=nginx-dns --port=80
Create the busybox pod:
kubectl run bs-dns --image=busybox:1.28.4 -- busybox sleep 36000
or:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: bs-dns
  name: bs-dns
spec:
  containers:
  - image: busybox:1.28.4
    name: bs-dns
    command: ['sh','-c','busybox sleep 3600']
Resolve the service:
kubectl exec -it bs-dns -- nslookup nginx-dns
20. List all pods associated with a given service in a namespace, and write the pod names to /opt/pod.txt (filter by label)
Reference: kubectl get svc -h
kubectl get svc my-test --show-labels
kubectl get pod -l app=my-test -o name > /opt/pod.txt
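Note that `-o name` prints resource-prefixed names like `pod/my-test-xxxx`. If bare pod names are required, the prefix can be stripped with sed (the pod names below are hypothetical):

```shell
# Sample "kubectl get pod -o name" output with the pod/ prefix removed
printf 'pod/my-test-6d4f97f7b-abcde\npod/my-test-6d4f97f7b-fghij\n' | sed 's|^pod/||'
# prints:
# my-test-6d4f97f7b-abcde
# my-test-6d4f97f7b-fghij
```

Using `|` as the sed delimiter avoids having to escape the `/` in the prefix.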
21. Create a secret and two pods: pod1 mounts the secret at /etc/foo, pod2 references the secret through an environment variable named ABC
Reference: https://kubernetes.io/zh/docs/tasks/inject-data-application/distribute-credentials-secure/
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
# The username/password values are base64 encoded.
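The encoded values in the manifest above can be produced and checked with base64 (note -n, which suppresses the trailing newline that would otherwise be encoded too):

```shell
# Encode the plaintext credentials
echo -n 'my-app' | base64         # bXktYXBw
echo -n '39528$vdg7Jb' | base64   # Mzk1MjgkdmRnN0pi
# Decode to verify
echo 'bXktYXBw' | base64 -d       # my-app
```

Single quotes keep the `$` in the password literal; double quotes would trigger shell variable expansion.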
kubectl create -f secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    # name must match the volume name below
    - name: secret-volume
      mountPath: /etc/foo
  # The secret data is exposed to containers in the pod through a volume.
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
kubectl create -f test_secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-single-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: ABC
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: username
    - name: ABCD
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: password
kubectl create -f test_secret3.yaml
kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $ABC'
my-app
kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $ABCD'
39528$vdg7Jb
22. Create a pod backed by a PersistentVolume
Reference: https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
Provide the PV via hostPath (first create /mnt/data on all node hosts):
sudo mkdir /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
Create the PV (typically done by operations):
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    app: my-pv
  name: my-pv
spec:
  #storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data/
kubectl create -f pv.yaml
Create the PVC (typically done by developers):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    type: local
  name: my-pvc
spec:
  #storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
kubectl create -f pvc.yaml
Create a pod that uses the PVC:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-pod
  name: my-pod
spec:
  volumes:
  - name: test-pvc
    persistentVolumeClaim:
      claimName: my-pvc
  containers:
  - image: nginx
    name: my-pod
    volumeMounts:
    - name: test-pvc
      mountPath: "/usr/share/nginx/html"
kubectl create -f my-pod.yaml
Test:
kubectl exec -i -t my-pod -- curl localhost
Hello from Kubernetes storage
23. Create a pod with a volume mounted, without using a persistent volume
Reference: https://kubernetes.io/zh/docs/concepts/storage/volumes/
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
kubectl create -f pod.yaml
24. Sort PVs by name and by capacity, saving the output to /opt/pv
kubectl get pv --sort-by=.metadata.name > /opt/pv
kubectl get pv --sort-by=.spec.capacity.storage >> /opt/pv
25. Add a node using a bootstrap token (binary deployment)
Reference: https://www.cnblogs.com/hlc-123/articles/14163603.html
26. Back up and restore the etcd database (kubeadm)
Reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
Back up etcd:
yum -y install etcd
Locate the etcd certificates:
ls /etc/kubernetes/pki/etcd/
ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key  peer.crt  peer.key  server.crt  server.key
Check the etcdctl help:
ETCDCTL_API=3 etcdctl -h
  snapshot save     Stores an etcd node backend snapshot to a given file
  snapshot restore  Restores an etcd member snapshot to an etcd directory
  snapshot status   Gets backend snapshot status of a given file
  --cacert=""                   verify certificates of TLS-enabled secure servers using this CA bundle
  --cert=""                     identify secure client using this TLS certificate file
  --endpoints=[127.0.0.1:2379]  gRPC endpoints
  --key=""                      identify secure client using this TLS key file
Take the backup:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot save snapshotdb \
  --cacert="/etc/kubernetes/pki/etcd/ca.crt" \
  --cert="/etc/kubernetes/pki/etcd/server.crt" \
  --key="/etc/kubernetes/pki/etcd/server.key"
Snapshot saved at snapshotdb
Check the backup file:
ls snapshotdb
Check the snapshot status:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot status snapshotdb \
  --cacert="/etc/kubernetes/pki/etcd/ca.crt" \
  --cert="/etc/kubernetes/pki/etcd/server.crt" \
  --key="/etc/kubernetes/pki/etcd/server.key"
593dda57, 480925, 1532, 3.3 MB
Restore etcd:
mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests.bak
mv /var/lib/etcd/ /var/lib/etcd.bak
ETCDCTL_API=3 etcdctl snapshot restore snapshotdb --data-dir=/var/lib/etcd
2020-12-20 15:30:29.156579 I | mvcc: restore compact to 479751
2020-12-20 15:30:29.176899 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Restart etcd and the apiserver:
mv /etc/kubernetes/manifests.bak/ /etc/kubernetes/manifests
27. Given a Kubernetes cluster, troubleshoot failing control-plane components
kubectl get cs
systemctl start xxx
systemctl enable xxx
28. How to fix a worker node in NotReady state?
ssh k8s-node1
systemctl start kubelet
systemctl enable kubelet
29. Upgrade the control-plane node's kubelet and kubectl from 1.18 to 1.19, without upgrading the worker nodes
Reference: https://v1-19.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
Upgrade the control-plane node.
Check the current version:
kubectl version
List available package versions:
yum list --showduplicates kubeadm --disableexcludes=kubernetes
Check the kubeadm version:
kubeadm version
Drain the pods off the node and mark it unschedulable:
kubectl drain master --ignore-daemonsets
kubectl get node
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   2d8h   v1.19.0
node1    Ready                      <none>   2d8h   v1.19.0
node2    Ready                      <none>   2d8h   v1.19.0
Verify that the environment can be upgraded and get the target version:
kubeadm upgrade plan
Upgrade:
kubeadm upgrade apply v1.19.3
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.3". Enjoy!
Make the node schedulable again:
kubectl uncordon master
node/master uncordoned
kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   2d9h   v1.19.0
node1    Ready    <none>   2d8h   v1.19.0
node2    Ready    <none>   2d8h   v1.19.0
Upgrade kubelet and kubectl:
yum -y install kubelet-1.19.3 kubectl-1.19.3
systemctl daemon-reload
systemctl restart kubelet
Check the nodes again:
kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   2d9h   v1.19.3
node1    Ready    <none>   2d8h   v1.19.0
node2    Ready    <none>   2d8h   v1.19.0
30. Create an Ingress
Reference: https://kubernetes.io/docs/concepts/services-networking/ingress/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
kubectl create -f ingress.yaml
kubectl get ingress
31. Add a sidecar container to a pod to read the application container's logs
Reference: https://kubernetes.io/docs/concepts/cluster-administration/logging/
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/1.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
32. Create a ClusterRole binding for a service account
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
# Create the service account
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant it permissions
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
33. All pods in the default namespace can access each other and pods in other namespaces, but pods in other namespaces cannot access pods in default
Reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/
Create test pods:
# kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
nginx-pod   1/1     Running   0          5h19m   10.244.166.143   node1   <none>           <none>
apiVersion: v1
kind: Pod
metadata:
  namespace: kube-system
  labels:
    run: busybox
  name: busybox1
spec:
  containers:
  - image: busybox:1.24
    name: busybox1
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c','sleep 36000']
  restartPolicy: Always
kubectl create -f busybox.yaml
kubectl exec -it -n kube-system busybox1 -- sh
ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
64 bytes from 10.244.166.143: seq=0 ttl=63 time=0.673 ms
64 bytes from 10.244.166.143: seq=1 ttl=63 time=0.199 ms
64 bytes from 10.244.166.143: seq=2 ttl=63 time=0.419 ms
kubectl exec -it my-dep-busybox-68bb779f99-2lk5c -- sh
/ # ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
64 bytes from 10.244.166.143: seq=0 ttl=62 time=4.566 ms
64 bytes from 10.244.166.143: seq=1 ttl=62 time=1.458 ms
Create the network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
kubectl create -f network_policy.yaml
After the policy is applied, traffic from other namespaces no longer gets through (pods within the default namespace can still communicate):
kubectl exec -it -n kube-system busybox1 -- sh
/ #
/ # ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
kubectl exec -it my-dep-busybox-68bb779f99-2lk5c -- sh
/ # ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
64 bytes from 10.244.166.143: seq=0 ttl=62 time=4.566 ms
64 bytes from 10.244.166.143: seq=1 ttl=62 time=1.458 ms