CKA Exam Question Notes
1. Set up a Kubernetes cluster with kubeadm
Reference: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Add the Aliyun Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list kubelet kubeadm kubectl --showduplicates | sort -r
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Configure the cgroup driver used by the kubelet.
Docker configuration:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
kubelet configuration:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
Restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
Check the service status
systemctl status kubelet
Pull the control-plane images
kubeadm config images pull
Initialize the cluster
kubeadm init \
  --apiserver-advertise-address=10.255.20.137 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=1.19.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all \
  --upload-certs
Set up the kubeconfig for kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the worker nodes
kubeadm join 10.255.20.137:6443 \
--token 3zayke.4f0va2516jbg1wdy \
--discovery-token-ca-cert-hash sha256:4e64aed48f750b79a7969ca45a6d999c51275fa67071730423a5934e8f6e85d1 \
--ignore-preflight-errors=all \
--v=5
wget https://docs.projectcalico.org/v3.11/manifests/calico.yaml
Edit the pod CIDR in the YAML: the default is 192.168.0.0/16; change it to the CIDR used at init time, e.g. 10.244.0.0/16.
kubectl apply -f calico.yaml
Check the cluster nodes
kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 38m v1.19.0
node1 Ready <none> 23m v1.19.0
node2 Ready <none> 26m v1.19.0
Check whether the master node is tainted
kubectl describe node master|grep -A1 Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
If the cluster has only a single master node, remove the taint so pods can be scheduled on it
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
Verify that all pods are running
kubectl get pod -A
# Set a taint
kubectl taint nodes node1 key1=value1:NoSchedule
# Check a node's taints by looking at the Taints field in its description
kubectl describe node node1 | grep Taints
# Remove a taint
kubectl taint nodes node1 key1:NoSchedule-
Taint effects:
NoSchedule: Kubernetes will not schedule new Pods onto a Node carrying this taint.
PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto a Node carrying this taint.
NoExecute: Kubernetes will not schedule new Pods onto the Node and will evict Pods already running on it.
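For reference, a pod can still land on a tainted node if it declares a matching toleration. A minimal sketch, assuming the key1=value1:NoSchedule taint set above (the pod name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"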
Configure kubectl command-line completion:
yum -y install bash-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
2. Create a test pod
Reference: https://kubernetes.io/zh/docs/concepts/workloads/pods/
Reference: https://kubernetes.io/zh/docs/tasks/inject-data-application/define-command-argument-container/
kubectl run nginx --image=nginx --dry-run=client -oyaml | tee -a pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
kubectl create -f pod.yaml
View the pod details
kubectl describe pod nginx
Check the pod status
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 100s
3. Create a new namespace and create a pod in it
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/namespaces/
Reference: https://kubernetes.io/zh/docs/tasks/inject-data-application/define-command-argument-container/
Generate the namespace YAML:
kubectl create namespace cka --dry-run=client -oyaml > pod_cka.yaml
Add a document separator:
echo '---' >> pod_cka.yaml
Append the nginx pod YAML (use -n cka so the pod is created in the new namespace):
kubectl run nginx --image=nginx -n cka --dry-run=client -oyaml >> pod_cka.yaml
Create both resources:
kubectl create -f pod_cka.yaml
The resulting YAML file:
apiVersion: v1
kind: Namespace
metadata:
  name: cka
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: cka
spec:
  containers:
  - image: nginx
    name: nginx
  dnsPolicy: ClusterFirst
  restartPolicy: Always
4. Create a deployment and expose it as a Service
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/connect-applications-service/
Method 1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
kubectl create -f deployment.yaml
kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
Method 2:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
kubectl create -f service_deploy.yaml
5. List the pods in a namespace that carry specific labels
Reference: https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/labels/
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
        environment: production
        tier: frontend
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 5701
        resources: {}
kubectl create -f deployment.yaml
kubectl get pods -l environment=production,tier=frontend
6. View a pod's logs and write the lines containing Error to a specified file
Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "ERROR: $i: $(date)"; i=$((i+1)); sleep 1; done']
kubectl create -f pod_logs.yaml
kubectl logs counter | grep "ERROR" > /tmp/counter_ERR.log
7. Find the pod with the highest CPU usage among pods with a given label and record it in a specified file
Reference: https://github.com/kubernetes-sigs/metrics-server/tree/master
Deploy metrics-server:
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Adjust the manifest to the cluster. Two flags matter here:
--kubelet-preferred-address-types - The priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]). The default InternalIP is usually fine.
--kubelet-insecure-tls - Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only; otherwise provision a proper CA certificate.
diff components.yaml components-new.yaml
136c136,137
<         image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
---
>         - --kubelet-insecure-tls
>         image: phperall/metrics-server:v0.4.1
kubectl create -f components-new.yaml
kubectl get pod -n kube-system -l k8s-app=metrics-server
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-798b77bcf9-8wf4m   1/1     Running   0          2m44s
kubectl get pod -l app=my-dep |grep -v NAME|awk '{print $1}'|xargs -I {} kubectl top pod {}|grep -v NAME|sort -n -k 2
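A shorter alternative: newer kubectl releases support --sort-by on kubectl top pod (verify with kubectl top pod -h on the exam cluster). The output file path below is a placeholder for whatever path the task specifies:
kubectl top pod -l app=my-dep --sort-by=cpu
# or keep the plain-sort approach and write the name of the top consumer to the target file
kubectl top pod -l app=my-dep | grep -v NAME | sort -k2 -nr | head -1 | awk '{print $1}' > /opt/cpu_pod.txt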
8. Configure the kubelet on a node to run a static pod
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/static-pod/
cat <<EOF >/etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
EOF
kubectl get pod -l role=myrole
NAME               READY   STATUS    RESTARTS   AGE
static-web-node2   1/1     Running   0          5m13s
9. Add an init container to a pod; the init container creates an empty file, and if that file is not detected the pod exits
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Reference: https://kubernetes.io/zh/docs/concepts/workloads/pods/init-containers/
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: liveness
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'touch /tmp/healthy; sleep 2; rm -rf /tmp/healthy; sleep 3']
kubectl create -f init_pod.yaml
kubectl get pod -w
10. Create a deployment with 3 replicas, roll out a new image version and record the change, then roll back to the previous revision
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create it:
kubectl create -f deployment.yaml
Update the image and record the change:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
Watch the rollout status:
kubectl rollout status deployment.v1.apps/nginx-deployment
View the rollout history:
kubectl rollout history deployment.v1.apps/nginx-deployment
View the details of a specific revision:
kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
Roll back to a specific revision:
kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
11. Scale the web deployment to 3 replicas
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
kubectl scale deployment.v1.apps/nginx-deployment --replicas=3
12. Create a pod running 4 containers: nginx, redis, memcached, and consul
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/assign-pods-nodes/
apiVersion: v1
kind: Pod
metadata:
  name: multiple-pod
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  - name: redis
    image: redis
    imagePullPolicy: IfNotPresent
  - name: memcached
    image: memcached
    imagePullPolicy: IfNotPresent
  - name: consul
    image: consul
    imagePullPolicy: IfNotPresent
kubectl create -f multiple_pod.yaml
kubectl get pod
13. Generate a deployment YAML file and save it to /opt/deploy.yaml
Reference: kubectl create deployment -h
kubectl create deployment my-dep --image=busybox --dry-run=client -oyaml >> /opt/deploy.yaml
14. Create a pod and schedule it onto a node with a specific label
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/assign-pods-nodes/
kubectl get nodes
kubectl label nodes node1 disktype=ssd
kubectl get nodes --show-labels
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
kubectl create -f ssd_pod.yaml
kubectl get pod -o wide
15. Ensure that a pod runs on every node
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/daemonset/
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox
  labels:
    k8s-app: my-app
spec:
  selector:
    matchLabels:
      name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh','-c','sleep 3600']
kubectl create -f daemonset.yaml
kubectl get pod -owide
16. Count the nodes that are Ready, excluding nodes tainted with NoSchedule, and write the result to /opt/node.txt
Reference: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/taint-and-toleration/
kubectl get node | grep -w Ready | awk '{print $1}' | xargs -I {} kubectl describe node {} | grep Taints | grep -vc NoSchedule > /opt/node.txt
17. Mark a node as unschedulable and reschedule the pods already running on it
Reference: https://kubernetes.io/zh/docs/reference/kubectl/overview/
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/safely-drain-node/
kubectl cordon node1
kubectl drain node1 --ignore-daemonsets
Remove the mark (make the node schedulable again):
kubectl patch node NodeName -p "{\"spec\":{\"unschedulable\":false}}"
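Alternatively, kubectl uncordon does the same thing:
kubectl uncordon NodeName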
18. Create a service for a pod so it can be accessed via its ClusterIP
Reference: https://kubernetes.io/zh/docs/tasks/access-application-cluster/service-access-application-cluster/
kubectl run nginx --image=nginx
kubectl expose pod nginx --port=80 --target-port=80
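To check access on the ClusterIP (a quick verification; run the curl from a cluster node or from another pod):
kubectl get svc nginx
CLUSTER_IP=$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')
curl http://$CLUSTER_IP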
19. Create a deployment and a service with any name, then resolve the service with nslookup from a busybox container
Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/connect-applications-service/
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/dns-debugging-resolution/
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/dns-pod-service/
Create the deployment:
kubectl create deployment nginx-dns --image=nginx
Expose the deployment with a service:
kubectl expose deployment nginx-dns --name=nginx-dns --port=80
Create the busybox pod:
kubectl run bs-dns --image=busybox:1.28.4 -- busybox sleep 36000
or use a manifest:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: bs-dns
  name: bs-dns
spec:
  containers:
  - image: busybox:1.28.4
    name: bs-dns
    command: ['sh','-c','busybox sleep 3600']
Resolve the service:
kubectl exec -it bs-dns -- nslookup nginx-dns
20. List all pods associated with a given service in a namespace and write the pod names to /opt/pod.txt (select by label)
Reference: kubectl get svc -h
kubectl get svc my-test --show-labels
More reliably, read the service's selector and use it for the pod query:
kubectl describe svc my-test | grep Selector
kubectl get pod -l app=my-test -o name > /opt/pod.txt
21. Create a secret and two pods: pod1 mounts the secret at /etc/foo, and pod2 references the secret through an environment variable named ABC
Reference: https://kubernetes.io/zh/docs/tasks/inject-data-application/distribute-credentials-secure/
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi
# The username/password values are base64 encoded.
kubectl create -f secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    # name must match the volume name below
    - name: secret-volume
      mountPath: /etc/foo
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
kubectl create -f test_secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-single-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: ABC
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: username
    - name: ABCD
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: password
kubectl create -f test_secret3.yaml
kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $ABC'
my-app
kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $ABCD'
39528$vdg7Jb
22. Create a Pod that uses a PersistentVolume (PV/PVC)
Reference: https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
Provide the PV via hostPath (first create the /mnt/data directory on all nodes):
sudo mkdir /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
Create the PV (usually done by the cluster administrator):
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    app: my-pv
  name: my-pv
spec:
  #storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data/
kubectl create -f pv.yaml
Create the PVC (usually done by the application developer):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    type: local
  name: my-pvc
spec:
  #storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
kubectl create -f pvc.yaml
Create a pod that uses the PVC:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-pod
  name: my-pod
spec:
  volumes:
  - name: test-pvc
    persistentVolumeClaim:
      claimName: my-pvc
  containers:
  - image: nginx
    name: my-pod
    volumeMounts:
    - name: test-pvc
      mountPath: "/usr/share/nginx/html"
kubectl create -f my-pod.yaml
Test it:
kubectl exec -i -t my-pod -- curl localhost
Hello from Kubernetes storage
23. Create a pod with a data volume mounted; persistent volumes are not allowed
Reference: https://kubernetes.io/zh/docs/concepts/storage/volumes/
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
kubectl create -f pod.yaml
24. Sort the PVs by name and by capacity, and save the output to the /opt/pv file
kubectl get pv --sort-by=.metadata.name > /opt/pv
kubectl get pv --sort-by=.spec.capacity.storage >> /opt/pv
25. Add a Node using a Bootstrap Token (binary installation)
Reference: https://www.cnblogs.com/hlc-123/articles/14163603.html
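The reference above covers the full procedure; the outline below is only a rough sketch of the TLS bootstrap flow for a binary-installed kubelet (token values, the apiserver address, and file paths are placeholders, not taken from the source):
# On the control plane: create a bootstrap token secret (token format: <id>.<secret>)
TOKEN_ID=abcdef
TOKEN_SECRET=0123456789abcdef
kubectl -n kube-system create secret generic bootstrap-token-${TOKEN_ID} \
  --type='bootstrap.kubernetes.io/token' \
  --from-literal=token-id=${TOKEN_ID} \
  --from-literal=token-secret=${TOKEN_SECRET} \
  --from-literal=usage-bootstrap-authentication=true \
  --from-literal=usage-bootstrap-signing=true
# Allow bootstrapping kubelets to create CSRs
kubectl create clusterrolebinding create-csrs-for-bootstrapping \
  --clusterrole=system:node-bootstrapper \
  --group=system:bootstrappers
# On the new node: build the bootstrap kubeconfig that the kubelet uses via --bootstrap-kubeconfig
kubectl config set-cluster bootstrap \
  --server=https://<apiserver>:6443 \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN_ID}.${TOKEN_SECRET} --kubeconfig=bootstrap.kubeconfig
kubectl config set-context bootstrap --user=kubelet-bootstrap \
  --cluster=bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context bootstrap --kubeconfig=bootstrap.kubeconfig
# Start the kubelet on the node, then approve its CSR from the control plane
kubectl get csr
kubectl certificate approve <csr-name>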
26. Back up and restore the etcd database (kubeadm cluster)
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
Back up etcd.
Install the etcdctl client:
yum -y install etcd
Locate the etcd certificates:
ls /etc/kubernetes/pki/etcd/
ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key  peer.crt  peer.key  server.crt  server.key
Check the etcdctl help:
ETCDCTL_API=3 etcdctl -h
snapshot save      Stores an etcd node backend snapshot to a given file
snapshot restore   Restores an etcd member snapshot to an etcd directory
snapshot status    Gets backend snapshot status of a given file
--cacert=""        verify certificates of TLS-enabled secure servers using this CA bundle
--cert=""          identify secure client using this TLS certificate file
--endpoints=[127.0.0.1:2379]   gRPC endpoints
--key=""           identify secure client using this TLS key file
Take the backup:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot save snapshotdb \
  --cacert="/etc/kubernetes/pki/etcd/ca.crt" \
  --cert="/etc/kubernetes/pki/etcd/server.crt" \
  --key="/etc/kubernetes/pki/etcd/server.key"
Snapshot saved at snapshotdb
Check the backup file:
ls snapshotdb
Check the snapshot status:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot status snapshotdb \
  --cacert="/etc/kubernetes/pki/etcd/ca.crt" \
  --cert="/etc/kubernetes/pki/etcd/server.crt" \
  --key="/etc/kubernetes/pki/etcd/server.key"
593dda57, 480925, 1532, 3.3 MB
Restore etcd.
Stop the static pods and back up the old data directory:
mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests.bak
mv /var/lib/etcd/ /var/lib/etcd.bak
Restore the snapshot:
ETCDCTL_API=3 etcdctl snapshot restore snapshotdb --data-dir=/var/lib/etcd
2020-12-20 15:30:29.156579 I | mvcc: restore compact to 479751
2020-12-20 15:30:29.176899 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Start etcd and the apiserver again by restoring the static pod manifests:
mv /etc/kubernetes/manifests.bak/ /etc/kubernetes/manifests
27. Given a Kubernetes cluster, troubleshoot problems with the control-plane components
kubectl get cs
systemctl start xxx
systemctl enable xxx
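On a kubeadm cluster the control-plane components run as static pods under /etc/kubernetes/manifests, so a typical inspection (a sketch; the scheduler pod name below is illustrative) looks like:
kubectl get pod -n kube-system
kubectl logs -n kube-system kube-scheduler-master
ls /etc/kubernetes/manifests/
systemctl status kubelet
journalctl -u kubelet -f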
28. How do you fix a worker node in NotReady state?
ssh k8s-node1
systemctl start kubelet
systemctl enable kubelet
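If the kubelet is already running, a few other common checks (a sketch):
journalctl -u kubelet --no-pager | tail -50    # kubelet errors: certificates, cgroup driver, CNI
systemctl status docker                        # container runtime health
kubectl describe node k8s-node1                # the Conditions section explains why the node is NotReady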
29. Upgrade the control-plane node's kubelet and kubectl components from 1.18 to 1.19; do not upgrade the worker nodes
Reference: https://v1-19.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
Upgrade the control-plane node.
Check the current version:
kubectl version
Check the available package versions:
yum list --showduplicates kubeadm --disableexcludes=kubernetes
Install the target kubeadm version, then check the version again:
yum -y install kubeadm-1.19.3 --disableexcludes=kubernetes
kubeadm version
Drain the pods from the node and mark it unschedulable:
kubectl drain master --ignore-daemonsets
kubectl get node
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   2d8h   v1.19.0
node1    Ready                      <none>   2d8h   v1.19.0
node2    Ready                      <none>   2d8h   v1.19.0
Check that the cluster can be upgraded and get the available upgrade versions:
kubeadm upgrade plan
Upgrade:
kubeadm upgrade apply v1.19.3
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.3". Enjoy!
Mark the node schedulable again:
kubectl uncordon master
node/master uncordoned
kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   2d9h   v1.19.0
node1    Ready    <none>   2d8h   v1.19.0
node2    Ready    <none>   2d8h   v1.19.0
Upgrade kubelet and kubectl:
yum -y install kubelet-1.19.3 kubectl-1.19.3
systemctl daemon-reload
systemctl restart kubelet
Check the node versions again:
kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   2d9h   v1.19.3
node1    Ready    <none>   2d8h   v1.19.0
node2    Ready    <none>   2d8h   v1.19.0
30. Create an Ingress
Reference: https://kubernetes.io/docs/concepts/services-networking/ingress/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
kubectl create -f ingress.yaml
kubectl get ingress
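The spec above assumes an ingress controller is running, a backend service named service1 exists, and the TLS secret testsecret-tls has been created beforehand. A sketch of creating that secret (certificate and key file names are placeholders):
kubectl create secret tls testsecret-tls --cert=tls.crt --key=tls.key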
31. Add a sidecar container to a Pod that reads the application container's logs
Reference: https://kubernetes.io/docs/concepts/cluster-administration/logging/
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/1.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
32. Create a ClusterRole and bind it to a service account
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
# Create the service account
kubectl create serviceaccount dashboard-admin -n kube-system
# Bind it to the built-in cluster-admin ClusterRole
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
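The commands above reuse the built-in cluster-admin role. If the task requires creating a new ClusterRole, a sketch (the role name, verbs, resources, and service account name are illustrative):
kubectl create clusterrole deployment-clusterrole --verb=create,get,list --resource=deployments,statefulsets,daemonsets
kubectl create serviceaccount cicd-token -n kube-system
kubectl create clusterrolebinding cicd-binding --clusterrole=deployment-clusterrole --serviceaccount=kube-system:cicd-token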
33. All pods in the default namespace can access each other and pods in other namespaces, but pods in other namespaces cannot access pods in the default namespace
Reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/
Create test pods:
kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
nginx-pod   1/1     Running   0          5h19m   10.244.166.143   node1   <none>           <none>
apiVersion: v1
kind: Pod
metadata:
  namespace: kube-system
  labels:
    run: busybox
  name: busybox1
spec:
  containers:
  - image: busybox:1.24
    name: busybox1
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c','sleep 36000']
  restartPolicy: Always
kubectl create -f busybox.yaml
kubectl exec -it -n kube-system busybox1 -- sh
ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
64 bytes from 10.244.166.143: seq=0 ttl=63 time=0.673 ms
64 bytes from 10.244.166.143: seq=1 ttl=63 time=0.199 ms
64 bytes from 10.244.166.143: seq=2 ttl=63 time=0.419 ms
kubectl exec -it my-dep-busybox-68bb779f99-2lk5c -- sh
/ # ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
64 bytes from 10.244.166.143: seq=0 ttl=62 time=4.566 ms
64 bytes from 10.244.166.143: seq=1 ttl=62 time=1.458 ms
Create the network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
kubectl create -f network_policy.yaml
After the policy is created, pods in other namespaces can no longer reach the default namespace (pods within the default namespace can still communicate):
kubectl exec -it -n kube-system busybox1 -- sh
/ # ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
kubectl exec -it my-dep-busybox-68bb779f99-2lk5c -- sh
/ # ping 10.244.166.143
PING 10.244.166.143 (10.244.166.143): 56 data bytes
64 bytes from 10.244.166.143: seq=0 ttl=62 time=4.566 ms
64 bytes from 10.244.166.143: seq=1 ttl=62 time=1.458 ms