Taint effects:
- NoSchedule: if a Node carries a taint whose effect is NoSchedule and a Pod has no matching toleration, Kubernetes will not schedule that Pod onto the Node.
- PreferNoSchedule: if a Node carries a taint whose effect is PreferNoSchedule, Kubernetes tries to avoid scheduling the Pod onto that Node, but this is a preference, not a guarantee.
- NoExecute: if a Node carries a taint whose effect is NoExecute, Pods already running on the Node that do not tolerate it are evicted, and new Pods without a matching toleration cannot be scheduled onto it.
Taint effect values:
- The effect must be one of NoSchedule, PreferNoSchedule, or NoExecute.
Taint properties:
- A taint is an attribute of a Node in a Kubernetes cluster; the matching toleration is declared on a Pod.
- The taint effect is one of the three values above.
Taint composition:
- Three parts: a key, an optional value, and an effect, written as
<key>=<value>:<effect>
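In kubectl, a taint of this form is attached with the following general command (the angle-bracketed fields are placeholders to be filled in):

kubectl taint nodes <node-name> <key>=<value>:<effect>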
1. Set a single taint and a single toleration
kubectl taint nodes master1 node-role.kubernetes.io/master=:NoSchedule
kubectl taint node node1 key1=value1:NoSchedule # with a value
kubectl taint node master1 key2=:PreferNoSchedule # without a value
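The toleration side is declared in the Pod spec. A minimal sketch that tolerates the key1=value1:NoSchedule taint set above (the Pod name and image are illustrative placeholders, not from the original):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "key1"
    operator: "Equal"         # key and value must both match the taint
    value: "value1"
    effect: "NoSchedule"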
2. Set multiple taints and multiple tolerations
kubectl taint nodes node1 key1=value1:NoSchedule
kubectl taint nodes node1 key1=value1:NoExecute
kubectl taint nodes node1 key2=value2:NoSchedule
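A Pod is only scheduled onto (and keeps running on) node1 if it tolerates all three taints above; tolerating only some of them is not enough. A sketch of the matching tolerations section of a Pod spec (tolerationSeconds is optional and only valid with NoExecute; 3600 is an illustrative value):

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600     # if the taint remains, evict this Pod after 1h
- key: "key2"
  operator: "Equal"
  value: "value2"
  effect: "NoSchedule"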
3. View a node's taint status
[root@master1 ~]# kubectl describe nodes master1
Name:               master1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"36:51:e1:31:e5:9e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.200.3
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 13 Jan 2021 06:04:10 -0500
Taints:             node-role.kubernetes.io/master:NoSchedule   # the taint set on this node
Unschedulable:      false
Lease:
  HolderIdentity:  master1
  AcquireTime:     <unset>
  RenewTime:       Thu, 14 Jan 2021 01:14:07 -0500
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 13 Jan 2021 06:12:43 -0500   Wed, 13 Jan 2021 06:12:43 -0500   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Thu, 14 Jan 2021 01:11:17 -0500   Wed, 13 Jan 2021 06:50:32 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 14 Jan 2021 01:11:17 -0500   Wed, 13 Jan 2021 06:50:32 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 14 Jan 2021 01:11:17 -0500   Wed, 13 Jan 2021 06:50:32 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 14 Jan 2021 01:11:17 -0500   Wed, 13 Jan 2021 06:50:32 -0500   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.200.3
  Hostname:    master1
Capacity:
  cpu:                4
  ephemeral-storage:  17394Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2897500Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  16415037823
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2795100Ki
  pods:               110
System Info:
  Machine ID:                 feb4edfea2404d3c8ad028ca4593bb32
  System UUID:                C6F44D56-0F24-6114-23E7-8DF6CD4E4CFE
  Boot ID:                    afcc0ef6-d767-4b97-9a7b-9b2500757f2e
  Kernel Version:             3.10.0-862.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.0
  Kubelet Version:            v1.18.2
  Kube-Proxy Version:         v1.18.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (6 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  etcd-master1                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19h
  kube-system  kube-apiserver-master1            250m (6%)     0 (0%)      0 (0%)           0 (0%)         19h
  kube-system  kube-controller-manager-master1   200m (5%)     0 (0%)      0 (0%)           0 (0%)         19h
  kube-system  kube-flannel-ds-wzf7w             100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      19h
  kube-system  kube-proxy-7h5sb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19h
  kube-system  kube-scheduler-master1            100m (2%)     0 (0%)      0 (0%)           0 (0%)         19h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (16%)  100m (2%)
  memory             50Mi (1%)   50Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
4. Filter which nodes have taints and what they are
[root@master1 ~]# kubectl describe node master1 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@master1 ~]# kubectl describe node master2 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@master1 ~]# kubectl describe node master3 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
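To list every node's taints in a single command instead of grepping node by node, a jsonpath query is one possible variant (not part of the original output above):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'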
5. Returned result with and without a taint
Taints: node-role.kubernetes.io/master:NoSchedule # taint present
Taints: <none> # no taint
6. Remove a taint so Pods can be scheduled onto the node
kubectl taint node master1 node-role.kubernetes.io/master:NoSchedule- # the trailing "-" removes the taint
kubectl taint nodes master1 key:NoSchedule- # remove by key and effect; no value needed
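Afterwards, the grep from step 4 should confirm the taint is gone:

[root@master1 ~]# kubectl describe node master1 | grep Taints
Taints: <none>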