Kubernetes Resource Reservation (Part 1)


Node Allocatable Resources

Allocatable

Besides Kubernetes daemons such as the kubelet and the container runtime, and besides user pods, a Kubernetes node typically runs many OS system daemons. Kubernetes assumes that all of a node's available compute resources (its Capacity) can be used by user pods. In reality, system daemons consume a significant amount of resources, and their availability is critical to the stability of the system. To address this, the proposal introduces the concept of Allocatable, which identifies the amount of compute resources available to user pods. Concretely, the kubelet provides a few knobs for reserving resources for OS system daemons and Kubernetes daemons.

Kubernetes nodes can be scheduled up to their Capacity, and by default pods can consume all of a node's available capacity. This is problematic, because nodes usually also run quite a few system daemons that power the OS and Kubernetes itself. Unless resources are set aside for those daemons, they end up competing with pods for resources and cause node resource starvation. The kubelet exposes a feature named Node Allocatable that helps reserve compute resources for system daemons. Kubernetes recommends that cluster administrators configure Node Allocatable according to the workload density on each node.

            Node Capacity
    ---------------------------
    |     kube-reserved       |
    |-------------------------|
    |     system-reserved     |
    |-------------------------|
    |    eviction-threshold   |
    |-------------------------|
    |      allocatable        |
    |   (available for pods)  |
    ---------------------------
Node Capacity is the total amount of resources on the Kubernetes node.
kube-reserved is the amount of resources reserved for Kubernetes components (e.g., the Docker daemon, kubelet, kube-proxy).
system-reserved is the amount of resources reserved for system daemons, i.e., processes not managed by Kubernetes, generally covering everything in the raw container under /system.
eviction-threshold is the node's eviction threshold, configured through kubelet startup flags (--eviction-hard and friends). When a threshold is crossed, the kubelet evicts pods according to their Kubernetes QoS class.
allocatable is the amount of resources that pods on the node can actually be allocated; the formula is: [Allocatable] = [Node Capacity] - [Kube-Reserved] - [System-Reserved] - [Hard-Eviction-Threshold]
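The formula above can be sketched in a few lines of Python (a simple illustration with hypothetical values; units normalized to MiB for clarity):

```python
def node_allocatable(capacity_mib, kube_reserved_mib=0,
                     system_reserved_mib=0, hard_eviction_mib=0):
    """[Allocatable] = [Node Capacity] - [Kube-Reserved]
                       - [System-Reserved] - [Hard-Eviction-Threshold]"""
    return (capacity_mib - kube_reserved_mib
            - system_reserved_mib - hard_eviction_mib)

# A hypothetical node with 8Gi of memory, reserving 500Mi for kube
# components, 500Mi for system daemons, and a 100Mi hard eviction threshold:
print(node_allocatable(8 * 1024, kube_reserved_mib=500,
                       system_reserved_mib=500, hard_eviction_mib=100))
# → 7092 (MiB available for pods)
```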

Allocatable on a Kubernetes node is defined as the amount of compute resources available to pods. The scheduler never over-commits Allocatable. CPU, memory, and ephemeral-storage are currently supported.

Kube-Reserved

--kube-reserved configures resource reservations for Kubernetes components. Setting it requires restarting the kubelet (it is passed on the kubelet command line), so it cannot be changed while the kubelet is running (a future version may fix this and support hot reloading).

--kube-reserved=cpu=500m,memory=500Mi

Early on, only CPU and memory reservations were supported, but Kubernetes will gradually support limits on more resources, such as local storage and I/O weights, to make node resource allocation more reliable.

System-Reserved

In the initial implementation, system-reserved works the same way as kube-reserved; both reserve resources, but they mean different things: kube-reserved reserves resources for Kubernetes components, while system-reserved reserves resources for system processes that are not Kubernetes components.

Eviction Thresholds

To improve node reliability, the kubelet evicts pods whenever the node runs low on memory or local storage; evictions and Node Allocatable together help improve node stability.

Since version 1.5, evictions are based on the node's overall capacity usage, and the kubelet evicts pods based on QoS and user-defined eviction thresholds. See the documentation for more details.

Since version 1.6, Allocatable is enforced by default across all pods on a node via cgroups: taken together, pods cannot exceed Allocatable. Enforcement covers memory and CPU through cgroup resource limits.

Official example for reference

    1. Node Capacity is 32Gi, kube-reserved is 2Gi, system-reserved is 1Gi, eviction-hard is set to <100Mi
    2. For this node, the effective Node Allocatable is 28.9Gi only; i.e. if kube and system components use up all their reservation, the memory available for pods is only 28.9Gi and kubelet will evict pods once overall usage of pods crosses that threshold.
    3. If we enforce Node Allocatable (28.9Gi) via top level cgroups, then pods can never exceed 28.9Gi, in which case evictions will not be performed unless kernel memory consumption is above 100Mi.
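Plugging the example's numbers into the Allocatable formula confirms the 28.9Gi figure (a quick check in Python; values in MiB):

```python
GI = 1024  # MiB per Gi

capacity = 32 * GI        # Node Capacity: 32Gi
kube_reserved = 2 * GI    # kube-reserved: 2Gi
system_reserved = 1 * GI  # system-reserved: 1Gi
eviction_hard = 100       # eviction-hard: 100Mi

allocatable = capacity - kube_reserved - system_reserved - eviction_hard
print(allocatable, round(allocatable / GI, 1))
# → 29596 28.9  (i.e., 29596 MiB ≈ 28.9Gi)
```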

Caveats

Be very careful when reserving resources for the system via system-reserved: if the reservation is not enough for the OS processes to run normally, the outcome is fatal (hello, OOM killer). That does not make it unusable, but the precondition is that you know your system's resource usage inside and out; only then can you choose the right values.
systemd-logind SSH session resources live under /user.slice, and their usage is not accounted to the node; any limits apply under /user.slice. Ideally these sessions would be placed under the top-level system-reserved cgroup.

Configuration

Prerequisites

  1. Enable cgroupsPerQOS; the default is true, i.e., enabled.
  2. Configure the cgroup driver. The kubelet manages the cgroup hierarchy on the host through a cgroup driver, configured via --cgroup-driver (the cgroup driver here must match the container runtime's). See the reference link.
  • cgroupfs is the default driver; it manages cgroup sandboxes by operating directly on the cgroup filesystem on the host.
  • systemd is an optional driver; it manages cgroup sandboxes using transient slices supported by the init system.

Configuration parameters

  • --enforce-node-allocatable=[pods][,][kube-reserved][,][system-reserved]

The default value is pods.

If --cgroups-per-qos=false, this value must be empty; otherwise the kubelet fails to start.

If kube-reserved and/or system-reserved are specified, the corresponding flags --kube-reserved-cgroup and/or --system-reserved-cgroup must also be set; otherwise the reservation is not enforced.

Listing any of pods, kube-reserved, or system-reserved enables enforcement of the corresponding reservation.

  • --cgroups-per-qos=true

Enables the QoS-based cgroup hierarchy (from the official docs: "Enable QoS based Cgroup hierarchy: top level cgroups for QoS Classes And all Burstable and BestEffort pods are brought up under their specific top level QoS cgroup. Dynamic Kubelet Config (beta): This field should not be updated without a full node reboot. It is safest to keep this value the same as the local config. Default: true"). When true, the kubelet creates the top-level QoS and pod cgroups. (Default: true.) (Deprecated: set this in the config file specified by --config. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

To better enforce Node Allocatable at node scope, you must enable the new cgroup hierarchy via the --cgroups-per-qos flag. It defaults to true. When enabled, the kubelet creates all end-user pods under a cgroup hierarchy that it manages.

For more detail, see the recommended cgroup configuration below.

The corresponding layout on the system looks like this:

<root@HK-K8S-WN1 /sys/fs/cgroup/memory># pwd
/sys/fs/cgroup/memory
<root@HK-K8S-WN1 /sys/fs/cgroup/memory># ls -l
total 0
drwxr-xr-x   2 root root 0 Jul 20 22:31 assist
-rw-r--r--   1 root root 0 Aug 14 14:37 cgroup.clone_children
--w--w--w-   1 root root 0 Aug 14 15:37 cgroup.event_control
-rw-r--r--   1 root root 0 Aug 14 15:37 cgroup.procs
-r--r--r--   1 root root 0 Aug 14 15:37 cgroup.sane_behavior
drwxr-xr-x   5 root root 0 Mar 16 14:04 kubepods.slice
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.failcnt
--w-------   1 root root 0 Aug 14 15:37 memory.force_empty
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.failcnt
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.limit_in_bytes
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.max_usage_in_bytes
-r--r--r--   1 root root 0 Aug 14 15:37 memory.kmem.slabinfo
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.failcnt
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.limit_in_bytes
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.max_usage_in_bytes
-r--r--r--   1 root root 0 Aug 14 14:37 memory.kmem.tcp.usage_in_bytes
-r--r--r--   1 root root 0 Aug 14 14:37 memory.kmem.usage_in_bytes
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.limit_in_bytes
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.max_usage_in_bytes
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.memsw.failcnt
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.memsw.limit_in_bytes
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.memsw.max_usage_in_bytes
-r--r--r--   1 root root 0 Aug 14 14:37 memory.memsw.usage_in_bytes
-rw-r--r--   1 root root 0 Aug 14 15:37 memory.move_charge_at_immigrate
-r--r--r--   1 root root 0 Aug 14 15:37 memory.numa_stat
-rw-r--r--   1 root root 0 Aug 14 15:37 memory.oom_control
----------   1 root root 0 Aug 14 15:37 memory.pressure_level
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.soft_limit_in_bytes
-r--r--r--   1 root root 0 Aug 14 14:37 memory.stat
-rw-r--r--   1 root root 0 Aug 14 15:37 memory.swappiness
-r--r--r--   1 root root 0 Aug 14 14:37 memory.usage_in_bytes
-rw-r--r--   1 root root 0 Aug 14 14:37 memory.use_hierarchy
-rw-r--r--   1 root root 0 Aug 14 15:37 notify_on_release
-rw-r--r--   1 root root 0 Aug 14 15:37 release_agent
drwxr-xr-x 111 root root 0 Mar 16 14:01 system.slice
-rw-r--r--   1 root root 0 Aug 14 15:37 tasks
drwxr-xr-x   2 root root 0 Mar 16 14:01 user.slice
<root@HK-K8S-WN1 /sys/fs/cgroup/memory># cd kubepods.slice/
<root@HK-K8S-WN1 /sys/fs/cgroup/memory/kubepods.slice># ls
cgroup.clone_children                                   memory.failcnt                  memory.kmem.tcp.failcnt             memory.max_usage_in_bytes        memory.numa_stat            memory.usage_in_bytes
cgroup.event_control                                    memory.force_empty              memory.kmem.tcp.limit_in_bytes      memory.memsw.failcnt             memory.oom_control          memory.use_hierarchy
cgroup.procs                                            memory.kmem.failcnt             memory.kmem.tcp.max_usage_in_bytes  memory.memsw.limit_in_bytes      memory.pressure_level       notify_on_release
kubepods-besteffort.slice                               memory.kmem.limit_in_bytes      memory.kmem.tcp.usage_in_bytes      memory.memsw.max_usage_in_bytes  memory.soft_limit_in_bytes  tasks
kubepods-burstable.slice                                memory.kmem.max_usage_in_bytes  memory.kmem.usage_in_bytes          memory.memsw.usage_in_bytes      memory.stat
kubepods-pod1d216a46_5b77_4afe_8df7_c7a3af8f737e.slice  memory.kmem.slabinfo            memory.limit_in_bytes               memory.move_charge_at_immigrate  memory.swappiness
<root@HK-K8S-WN1 /sys/fs/cgroup/memory/kubepods.slice># 
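In the listing above, Burstable and BestEffort pods live under their QoS slices (kubepods-burstable.slice, kubepods-besteffort.slice), while Guaranteed pods (like the kubepods-pod1d21… slice) sit directly under kubepods.slice. The QoS class is derived from the pod's requests and limits; a simplified sketch of that classification (it assumes requests are set explicitly and ignores edge cases such as init containers):

```python
def qos_class(containers):
    """Simplified Kubernetes QoS classification.

    `containers` is a list of dicts like
    {"requests": {"cpu": "100m", "memory": "128Mi"},
     "limits":   {"cpu": "100m", "memory": "128Mi"}}.
    """
    # No requests and no limits anywhere -> BestEffort
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    # Every container has cpu & memory limits equal to its requests -> Guaranteed
    if all(c.get("limits", {}).get(r) is not None
           and c.get("limits", {}).get(r) == c.get("requests", {}).get(r)
           for c in containers for r in ("cpu", "memory")):
        return "Guaranteed"
    # Everything else -> Burstable
    return "Burstable"

print(qos_class([{"requests": {}, "limits": {}}]))               # BestEffort
print(qos_class([{"requests": {"cpu": "1", "memory": "1Gi"},
                  "limits":   {"cpu": "1", "memory": "1Gi"}}]))  # Guaranteed
print(qos_class([{"requests": {"cpu": "1"}, "limits": {}}]))     # Burstable
```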
  • --kube-reserved

Resources reserved for the Kubernetes system, expressed as a set of ResourceName=ResourceQuantity pairs (e.g., cpu=200m,memory=500Mi,ephemeral-storage=1Gi). CPU, memory, and local ephemeral storage for the root filesystem are currently supported. See http://kubernetes.io/docs/user-guide/compute-resources for more information. (Default: none.) (Deprecated: set this in the config file specified by --config. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
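The ResourceName=ResourceQuantity format can be split apart like this (a small illustration, not the kubelet's actual parser):

```python
def parse_reserved(flag_value):
    """Parse 'cpu=200m,memory=500Mi,ephemeral-storage=1Gi' into a dict."""
    result = {}
    for pair in flag_value.split(","):
        name, _, quantity = pair.partition("=")
        result[name.strip()] = quantity.strip()
    return result

print(parse_reserved("cpu=200m,memory=500Mi,ephemeral-storage=1Gi"))
# → {'cpu': '200m', 'memory': '500Mi', 'ephemeral-storage': '1Gi'}
```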

  • --kube-reserved-cgroup

The absolute path of the top-level cgroup used to enforce the compute resource reservation for Kubernetes components configured via --kube-reserved; the two flags are set as a pair.

Example: /kube.slice

  • --system-reserved

Resources reserved for the system, expressed as a set of ResourceName=ResourceQuantity pairs (e.g., cpu=200m,memory=500Mi,ephemeral-storage=1Gi). Officially only CPU and memory are documented, but a local storage quota can be configured as well. See http://kubernetes.io/docs/user-guide/compute-resources for more information. (Default: none.) (Deprecated: set this in the config file specified by --config. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

  • --system-reserved-cgroup

Simply put, this is the top-level cgroup for the resources reserved via --system-reserved for system processes that are not Kubernetes components; its absolute path must be set when that reservation is enforced. (Default: ''.) (Deprecated: set this in the config file specified by --config. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

Example: /system.slice

  • --eviction-hard

Hard eviction thresholds: when one is crossed, the kubelet immediately evicts pods from the node, and which pods go first is decided by their Kubernetes QoS class. Example: memory.available<1Gi (available memory below 1Gi). (Default: imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%.) (Deprecated: set this in the config file specified by --config. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

Configuration walkthrough

Rule: if --enforce-node-allocatable (enforceNodeAllocatable) includes system-reserved or kube-reserved, the corresponding cgroup parameters (systemReservedCgroup and kubeReservedCgroup) must be configured. Conversely, if enforceNodeAllocatable does not include them, the reservations take no real effect: kubectl describe nodes will show a reduced Allocatable value, but the Kubernetes components and OS daemons are not actually constrained; in other words, they can request and consume resources without limit.
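The dependency between enforceNodeAllocatable and the corresponding cgroup settings can be expressed as a small validation check (an illustration of the rule, not the kubelet's actual code):

```python
def validate(enforce, kube_reserved_cgroup=None, system_reserved_cgroup=None):
    """`enforce` is the enforceNodeAllocatable list, e.g. ["pods", "kube-reserved"]."""
    errors = []
    if "kube-reserved" in enforce and not kube_reserved_cgroup:
        errors.append("kubeReservedCgroup must be set when enforcing kube-reserved")
    if "system-reserved" in enforce and not system_reserved_cgroup:
        errors.append("systemReservedCgroup must be set when enforcing system-reserved")
    return errors

print(validate(["pods", "kube-reserved"]))
# → one error: the cgroup path is missing
print(validate(["pods", "kube-reserved"], kube_reserved_cgroup="/kube.slice"))
# → []  (valid combination)
```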

Explanation: the reason Kubernetes does not enforce kubeReserved and systemReserved by default is that their resource usage is hard to control; a bad setting has severe consequences and can directly prevent the whole platform, or even the OS, from running normally, which would be fatal. Kubernetes does, however, keep the default evictionHard eviction policy:

"evictionHard": {
      "imagefs.available": "15%",
      "memory.available": "100Mi",
      "nodefs.available": "10%",
      "nodefs.inodesFree": "5%"
    }
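Eviction thresholds mix absolute quantities (100Mi) and percentages (15%); a sketch of how such a threshold resolves to bytes against a capacity (hypothetical numbers, illustrative only):

```python
def resolve_threshold(threshold, capacity_bytes):
    """Resolve a threshold like '100Mi' or '10%' to bytes against a capacity."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    if threshold.endswith("%"):
        return int(capacity_bytes * float(threshold[:-1]) / 100)
    for suffix, mult in units.items():
        if threshold.endswith(suffix):
            return int(threshold[:-len(suffix)]) * mult
    return int(threshold)

cap = 100 * 1024**3  # a hypothetical 100Gi filesystem
print(resolve_threshold("10%", cap))    # nodefs.available<10% resolved to bytes
print(resolve_threshold("100Mi", cap))  # memory.available<100Mi resolved to bytes
```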
  • Verifying the dependencies between these parameters

Raise the kubelet log verbosity, as follows:

<root@HK-K8S-WN4 /var/lib/kubelet># cat kubeadm-flags.env 
#KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"
# --cgroup-driver=systemd was removed from the line above: it still works in newer versions but errors at startup, and should be configured in config.yaml instead; see below
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --v=5"

Configure kubeReserved (the configuration lives under /var/lib/kubelet; config.yaml is generated there by default at install time, and the kubelet starts with --config=/var/lib/kubelet/config.yaml), as follows:

Modify or add the needed settings on top of the original file. You may wonder why there is no ephemeral-storage setting. The answer: disks are cheap these days; is a quota really necessary? Unlike CPU and memory, a disk quota is not all that meaningful right now, and I/O cannot yet be weighted either. And to repeat the point above: better not to set a systemReserved limit, lest you burn yourself.

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.10.0.10
clusterDomain: nflow.so
# Only enable the kube-reserved reservation
enforceNodeAllocatable:
- pods
- kube-reserved
# - system-reserved
# For verification only: enforceNodeAllocatable enables kube-reserved alone,
# but both kubeReserved & systemReserved are configured below, to see whether
# the kubelet errors out at startup
systemReserved:
  cpu: 200m
  memory: 2000Mi
kubeReserved:
  cpu: 200m
  memory: 500Mi
kubeReservedCgroup: /kube.slice
systemReservedCgroup: /system.slice
cgroupDriver: systemd
maxPods: 64
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 4m0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

Here is the kubelet startup log; focus on the important lines:

Aug 15 18:55:05 HK-K8S-WN4 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372383   21987 flags.go:33] FLAG: --add-dir-header="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372440   21987 flags.go:33] FLAG: --address="0.0.0.0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372453   21987 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372463   21987 flags.go:33] FLAG: --alsologtostderr="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372468   21987 flags.go:33] FLAG: --anonymous-auth="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372475   21987 flags.go:33] FLAG: --application-metrics-count-limit="100"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372480   21987 flags.go:33] FLAG: --authentication-token-webhook="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372485   21987 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372491   21987 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372497   21987 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372503   21987 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372508   21987 flags.go:33] FLAG: --azure-container-registry-config=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372513   21987 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372519   21987 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372523   21987 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372529   21987 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372534   21987 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372539   21987 flags.go:33] FLAG: --cgroup-root=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372544   21987 flags.go:33] FLAG: --cgroups-per-qos="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372549   21987 flags.go:33] FLAG: --chaos-chance="0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372558   21987 flags.go:33] FLAG: --client-ca-file=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372563   21987 flags.go:33] FLAG: --cloud-config=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372567   21987 flags.go:33] FLAG: --cloud-provider=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372572   21987 flags.go:33] FLAG: --cluster-dns="[]"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372579   21987 flags.go:33] FLAG: --cluster-domain=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372595   21987 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372601   21987 flags.go:33] FLAG: --cni-cache-dir="/var/lib/cni/cache"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372607   21987 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372612   21987 flags.go:33] FLAG: --config="/var/lib/kubelet/config.yaml"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372618   21987 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372624   21987 flags.go:33] FLAG: --container-log-max-files="5"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372631   21987 flags.go:33] FLAG: --container-log-max-size="10Mi"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372636   21987 flags.go:33] FLAG: --container-runtime="docker"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372642   21987 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372648   21987 flags.go:33] FLAG: --containerd="/run/containerd/containerd.sock"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372654   21987 flags.go:33] FLAG: --contention-profiling="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372659   21987 flags.go:33] FLAG: --cpu-cfs-quota="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372664   21987 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372670   21987 flags.go:33] FLAG: --cpu-manager-policy="none"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372677   21987 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372683   21987 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372690   21987 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372696   21987 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372702   21987 flags.go:33] FLAG: --docker-only="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372707   21987 flags.go:33] FLAG: --docker-root="/var/lib/docker"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372713   21987 flags.go:33] FLAG: --docker-tls="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372718   21987 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372723   21987 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372728   21987 flags.go:33] FLAG: --docker-tls-key="key.pem"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372734   21987 flags.go:33] FLAG: --dynamic-config-dir=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372741   21987 flags.go:33] FLAG: --enable-cadvisor-json-endpoints="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372746   21987 flags.go:33] FLAG: --enable-controller-attach-detach="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372753   21987 flags.go:33] FLAG: --enable-debugging-handlers="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372759   21987 flags.go:33] FLAG: --enable-load-reader="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372764   21987 flags.go:33] FLAG: --enable-server="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372768   21987 flags.go:33] FLAG: --enforce-node-allocatable="[pods]"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372780   21987 flags.go:33] FLAG: --event-burst="10"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372785   21987 flags.go:33] FLAG: --event-qps="5"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372790   21987 flags.go:33] FLAG: --event-storage-age-limit="default=0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372795   21987 flags.go:33] FLAG: --event-storage-event-limit="default=0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372800   21987 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372816   21987 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372821   21987 flags.go:33] FLAG: --eviction-minimum-reclaim=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372829   21987 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372834   21987 flags.go:33] FLAG: --eviction-soft=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372839   21987 flags.go:33] FLAG: --eviction-soft-grace-period=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372844   21987 flags.go:33] FLAG: --exit-on-lock-contention="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372849   21987 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372855   21987 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372861   21987 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372866   21987 flags.go:33] FLAG: --experimental-dockershim="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372871   21987 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372877   21987 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372881   21987 flags.go:33] FLAG: --experimental-mounter-path=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372885   21987 flags.go:33] FLAG: --fail-swap-on="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372890   21987 flags.go:33] FLAG: --feature-gates=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372897   21987 flags.go:33] FLAG: --file-check-frequency="20s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372903   21987 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372908   21987 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372913   21987 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372919   21987 flags.go:33] FLAG: --healthz-port="10248"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372924   21987 flags.go:33] FLAG: --help="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372930   21987 flags.go:33] FLAG: --hostname-override=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372935   21987 flags.go:33] FLAG: --housekeeping-interval="10s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372942   21987 flags.go:33] FLAG: --http-check-frequency="20s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372948   21987 flags.go:33] FLAG: --image-gc-high-threshold="85"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372953   21987 flags.go:33] FLAG: --image-gc-low-threshold="80"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372958   21987 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372964   21987 flags.go:33] FLAG: --image-service-endpoint=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372969   21987 flags.go:33] FLAG: --iptables-drop-bit="15"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372975   21987 flags.go:33] FLAG: --iptables-masquerade-bit="14"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372981   21987 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372987   21987 flags.go:33] FLAG: --kube-api-burst="10"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372992   21987 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.372998   21987 flags.go:33] FLAG: --kube-api-qps="5"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373003   21987 flags.go:33] FLAG: --kube-reserved=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373009   21987 flags.go:33] FLAG: --kube-reserved-cgroup=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373014   21987 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373020   21987 flags.go:33] FLAG: --kubelet-cgroups=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373027   21987 flags.go:33] FLAG: --lock-file=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373032   21987 flags.go:33] FLAG: --log-backtrace-at=":0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373039   21987 flags.go:33] FLAG: --log-cadvisor-usage="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373044   21987 flags.go:33] FLAG: --log-dir=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373050   21987 flags.go:33] FLAG: --log-file=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373055   21987 flags.go:33] FLAG: --log-file-max-size="1800"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373060   21987 flags.go:33] FLAG: --log-flush-frequency="5s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373066   21987 flags.go:33] FLAG: --logtostderr="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373071   21987 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373078   21987 flags.go:33] FLAG: --make-iptables-util-chains="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373083   21987 flags.go:33] FLAG: --manifest-url=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373088   21987 flags.go:33] FLAG: --manifest-url-header=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373097   21987 flags.go:33] FLAG: --master-service-namespace="default"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373103   21987 flags.go:33] FLAG: --max-open-files="1000000"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373110   21987 flags.go:33] FLAG: --max-pods="110"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373116   21987 flags.go:33] FLAG: --maximum-dead-containers="-1"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373121   21987 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373128   21987 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373134   21987 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373139   21987 flags.go:33] FLAG: --network-plugin="cni"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373144   21987 flags.go:33] FLAG: --network-plugin-mtu="0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373149   21987 flags.go:33] FLAG: --node-ip=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373155   21987 flags.go:33] FLAG: --node-labels=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373163   21987 flags.go:33] FLAG: --node-status-max-images="50"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373168   21987 flags.go:33] FLAG: --node-status-update-frequency="10s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373183   21987 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373188   21987 flags.go:33] FLAG: --oom-score-adj="-999"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373194   21987 flags.go:33] FLAG: --pod-cidr=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373199   21987 flags.go:33] FLAG: --pod-infra-container-image="k8s.gcr.io/pause:3.2"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373205   21987 flags.go:33] FLAG: --pod-manifest-path=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373210   21987 flags.go:33] FLAG: --pod-max-pids="-1"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373215   21987 flags.go:33] FLAG: --pods-per-core="0"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373221   21987 flags.go:33] FLAG: --port="10250"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373227   21987 flags.go:33] FLAG: --protect-kernel-defaults="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373232   21987 flags.go:33] FLAG: --provider-id=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373237   21987 flags.go:33] FLAG: --qos-reserved=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373243   21987 flags.go:33] FLAG: --read-only-port="10255"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373249   21987 flags.go:33] FLAG: --really-crash-for-testing="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373256   21987 flags.go:33] FLAG: --redirect-container-streaming="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373261   21987 flags.go:33] FLAG: --register-node="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373266   21987 flags.go:33] FLAG: --register-schedulable="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373271   21987 flags.go:33] FLAG: --register-with-taints=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373279   21987 flags.go:33] FLAG: --registry-burst="10"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373285   21987 flags.go:33] FLAG: --registry-qps="5"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373290   21987 flags.go:33] FLAG: --reserved-cpus=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373295   21987 flags.go:33] FLAG: --resolv-conf="/etc/resolv.conf"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373300   21987 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373306   21987 flags.go:33] FLAG: --rotate-certificates="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373312   21987 flags.go:33] FLAG: --rotate-server-certificates="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373319   21987 flags.go:33] FLAG: --runonce="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373325   21987 flags.go:33] FLAG: --runtime-cgroups=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373331   21987 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373336   21987 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373342   21987 flags.go:33] FLAG: --serialize-image-pulls="true"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373347   21987 flags.go:33] FLAG: --skip-headers="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373352   21987 flags.go:33] FLAG: --skip-log-headers="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373359   21987 flags.go:33] FLAG: --stderrthreshold="2"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373364   21987 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373369   21987 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373375   21987 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373380   21987 flags.go:33] FLAG: --storage-driver-password="root"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373386   21987 flags.go:33] FLAG: --storage-driver-secure="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373392   21987 flags.go:33] FLAG: --storage-driver-table="stats"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373398   21987 flags.go:33] FLAG: --storage-driver-user="root"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373403   21987 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373410   21987 flags.go:33] FLAG: --sync-frequency="1m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373415   21987 flags.go:33] FLAG: --system-cgroups=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373420   21987 flags.go:33] FLAG: --system-reserved=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373426   21987 flags.go:33] FLAG: --system-reserved-cgroup=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373431   21987 flags.go:33] FLAG: --tls-cert-file=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373436   21987 flags.go:33] FLAG: --tls-cipher-suites="[]"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373444   21987 flags.go:33] FLAG: --tls-min-version=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373449   21987 flags.go:33] FLAG: --tls-private-key-file=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373454   21987 flags.go:33] FLAG: --topology-manager-policy="none"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373459   21987 flags.go:33] FLAG: --v="5"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373465   21987 flags.go:33] FLAG: --version="false"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373474   21987 flags.go:33] FLAG: --vmodule=""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373480   21987 flags.go:33] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373487   21987 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.373542   21987 feature_gate.go:243] feature gates: &{map[]}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.376643   21987 feature_gate.go:243] feature gates: &{map[]}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.376722   21987 feature_gate.go:243] feature gates: &{map[]}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.386413   21987 mount_linux.go:178] Detected OS with systemd
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.386524   21987 server.go:272] KubeletConfiguration: config.KubeletConfiguration{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, StaticPodPath:"/etc/kubernetes/manifests", SyncFrequency:v1.Duration{Duration:60000000000}, FileCheckFrequency:v1.Duration{Duration:20000000000}, HTTPCheckFrequency:v1.Duration{Duration:20000000000}, StaticPodURL:"", StaticPodURLHeader:map[string][]string(nil), Address:"0.0.0.0", Port:10250, ReadOnlyPort:0, TLSCertFile:"/var/lib/kubelet/pki/kubelet.crt", TLSPrivateKeyFile:"/var/lib/kubelet/pki/kubelet.key", TLSCipherSuites:[]string(nil), TLSMinVersion:"", RotateCertificates:true, ServerTLSBootstrap:false, Authentication:config.KubeletAuthentication{X509:config.KubeletX509Authentication{ClientCAFile:"/etc/kubernetes/pki/ca.crt"}, Webhook:config.KubeletWebhookAuthentication{Enabled:true, CacheTTL:v1.Duration{Duration:120000000000}}, Anonymous:config.KubeletAnonymousAuthentication{Enabled:false}}, Authorization:config.KubeletAuthorization{Mode:"Webhook", Webhook:config.KubeletWebhookAuthorization{CacheAuthorizedTTL:v1.Duration{Duration:300000000000}, CacheUnauthorizedTTL:v1.Duration{Duration:30000000000}}}, RegistryPullQPS:5, RegistryBurst:10, EventRecordQPS:5, EventBurst:10, EnableDebuggingHandlers:true, EnableContentionProfiling:false, HealthzPort:10248, HealthzBindAddress:"127.0.0.1", OOMScoreAdj:-999, ClusterDomain:"nflow.so", ClusterDNS:[]string{"10.10.0.10"}, StreamingConnectionIdleTimeout:v1.Duration{Duration:14400000000000}, NodeStatusUpdateFrequency:v1.Duration{Duration:10000000000}, NodeStatusReportFrequency:v1.Duration{Duration:300000000000}, NodeLeaseDurationSeconds:40, ImageMinimumGCAge:v1.Duration{Duration:120000000000}, ImageGCHighThresholdPercent:85, ImageGCLowThresholdPercent:80, VolumeStatsAggPeriod:v1.Duration{Duration:60000000000}, KubeletCgroups:"", SystemCgroups:"", CgroupRoot:"", CgroupsPerQOS:true, CgroupDriver:"systemd", CPUManagerPolicy:"none", 
CPUManagerReconcilePeriod:v1.Duration{Duration:10000000000}, TopologyManagerPolicy:"none", QOSReserved:map[string]string(nil), RuntimeRequestTimeout:v1.Duration{Duration:120000000000}, HairpinMode:"promiscuous-bridge", MaxPods:64, PodCIDR:"", PodPidsLimit:-1, ResolverConfig:"/etc/resolv.conf", CPUCFSQuota:true, CPUCFSQuotaPeriod:v1.Duration{Duration:100000000}, MaxOpenFiles:1000000, ContentType:"application/vnd.kubernetes.protobuf", KubeAPIQPS:5, KubeAPIBurst:10, SerializeImagePulls:true, EvictionHard:map[string]string{"imagefs.available":"15%", "memory.available":"100Mi", "nodefs.available":"10%", "nodefs.inodesFree":"5%"}, EvictionSoft:map[string]string(nil), EvictionSoftGracePeriod:map[string]string(nil), EvictionPressureTransitionPeriod:v1.Duration{Duration:240000000000}, EvictionMaxPodGracePeriod:0, EvictionMinimumReclaim:map[string]string(nil), PodsPerCore:0, EnableControllerAttachDetach:true, ProtectKernelDefaults:false, MakeIPTablesUtilChains:true, IPTablesMasqueradeBit:14, IPTablesDropBit:15, FeatureGates:map[string]bool(nil), FailSwapOn:true, ContainerLogMaxSize:"10Mi", ContainerLogMaxFiles:5, ConfigMapAndSecretChangeDetectionStrategy:"Watch", AllowedUnsafeSysctls:[]string(nil), SystemReserved:map[string]string{"cpu":"200m", "memory":"2000Mi"}, KubeReserved:map[string]string{"cpu":"200m", "memory":"500Mi"}, SystemReservedCgroup:"/system.slice", KubeReservedCgroup:"/kube.slice", EnforceNodeAllocatable:[]string{"pods", "kube-reserved"}, ReservedSystemCPUs:"", ShowHiddenMetricsForVersion:""}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.386672   21987 server.go:417] Version: v1.18.5
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.391547   21987 feature_gate.go:243] feature gates: &{map[]}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.391880   21987 feature_gate.go:243] feature gates: &{map[]}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.392288   21987 plugins.go:100] No cloud provider specified.
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.392314   21987 server.go:537] No cloud provider specified: "" from the config file: ""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.392332   21987 server.go:838] Client rotation is on, will bootstrap in background
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430340   21987 bootstrap.go:84] Current kubeconfig file contents are still valid, no bootstrap necessary
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430451   21987 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430889   21987 server.go:865] Starting client certificate rotation.
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.430909   21987 certificate_manager.go:282] Certificate rotation is enabled.
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.431619   21987 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432074   21987 manager.go:146] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432271   21987 plugin.go:40] CRI-O not connected: Get http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info: dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432432   21987 certificate_manager.go:553] Certificate expiration is 2022-08-15 06:58:46 +0000 UTC, rotation deadline is 2022-05-13 05:07:22.359802602 +0000 UTC
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432453   21987 certificate_manager.go:288] Waiting 6498h12m16.927353019s for next certificate rotation
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.432505   21987 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.481957   21987 fs.go:125] Filesystem UUIDs: map[8770013a-4455-4a77-b023-04d04fa388c8:/dev/vda1]
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.481987   21987 fs.go:126] Filesystem partitions: map[/data/docker/containers/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7/mounts/shm:{mountpoint:/data/docker/containers/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7/mounts/shm major:0 minor:111 fsType:tmpfs blockSize:0} /data/docker/containers/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619/mounts/shm:{mountpoint:/data/docker/containers/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619/mounts/shm major:0 minor:65 fsType:tmpfs blockSize:0} /data/docker/containers/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54/mounts/shm:{mountpoint:/data/docker/containers/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54/mounts/shm major:0 minor:54 fsType:tmpfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:23 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:253 minor:1 fsType:ext4 blockSize:0} /run:{mountpoint:/run major:0 minor:25 fsType:tmpfs blockSize:0} /run/user/501:{mountpoint:/run/user/501 major:0 minor:45 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:26 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~projected/hubble-tls:{mountpoint:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~projected/hubble-tls major:0 minor:50 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/cilium-token-8lbcl:{mountpoint:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/cilium-token-8lbcl major:0 minor:49 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/clustermesh-secrets:{mountpoint:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/clustermesh-secrets major:0 minor:48 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75aa6245-eac7-46ee-9d13-7b521071074d/volumes/kubernetes.io~secret/kube-router-token-btqcq:{mountpoint:/var/lib/kubelet/pods/75aa6245-eac7-46ee-9d13-7b521071074d/volumes/kubernetes.io~secret/kube-router-token-btqcq major:0 minor:47 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/volumes/kubernetes.io~secret/csi-admin-token-r4x5b:{mountpoint:/var/lib/kubelet/pods/9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/volumes/kubernetes.io~secret/csi-admin-token-r4x5b major:0 minor:108 fsType:tmpfs blockSize:0} overlay_0-109:{mountpoint:/data/docker/overlay2/2f44e5a0ca90fd122c19d0ab3ac6ac03d2e167427f5bd1e48438acf20547c3fb/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-120:{mountpoint:/data/docker/overlay2/bd2e48aed6772c7cf9773d0d21db98089931d6e1f8fdcf45b9ae3a2e9c5d3a1b/merged major:0 minor:120 fsType:overlay blockSize:0} overlay_0-129:{mountpoint:/data/docker/overlay2/540c822eef37596e6b6872ccf20080018f3b4fe1d92e6d4a5be7badae61e9c2f/merged major:0 minor:129 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/data/docker/overlay2/4516e81acee49b86624c247cd4c59b64bebda52499bce9a8736ab6d4f04a17e4/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-63:{mountpoint:/data/docker/overlay2/ad70382b0bc0ded5ab59552be7675ebd13765c429c49db2d628723076f027de4/merged major:0 minor:63 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/data/docker/overlay2/4a6714d6f732dd7d4fdbf47578cbb8c04047631f41d97c3c1ad44389a0e35d8e/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/data/docker/overlay2/5900995ba47e551caf1fbc873a7351601c56431d9a940c72c1175fdbc74815c8/merged major:0 minor:91 fsType:overlay blockSize:0}]
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.483815   21987 manager.go:193] Machine: {NumCores:2 CpuFrequency:2499988 MemoryCapacity:4126961664 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:20191225111607875619293640639763 SystemUUID:c886c8fa-a1e6-45b5-9c9a-ed72dd7ca192 BootID:88e6fa2c-7d81-4449-bd0c-2d06877e2746 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:23 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:26 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/75aa6245-eac7-46ee-9d13-7b521071074d/volumes/kubernetes.io~secret/kube-router-token-btqcq DeviceMajor:0 DeviceMinor:47 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/cilium-token-8lbcl DeviceMajor:0 DeviceMinor:49 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/data/docker/containers/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7/mounts/shm DeviceMajor:0 DeviceMinor:111 Capacity:67108864 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~secret/clustermesh-secrets DeviceMajor:0 DeviceMinor:48 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/data/docker/containers/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619/mounts/shm DeviceMajor:0 DeviceMinor:65 Capacity:67108864 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:overlay_0-129 DeviceMajor:0 DeviceMinor:129 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:overlay_0-120 DeviceMajor:0 DeviceMinor:120 
Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:25 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/dev/vda1 DeviceMajor:253 DeviceMinor:1 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/data/docker/containers/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54/mounts/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-63 DeviceMajor:0 DeviceMinor:63 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true} {Device:/var/lib/kubelet/pods/9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/volumes/kubernetes.io~secret/csi-admin-token-r4x5b DeviceMajor:0 DeviceMinor:108 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:/run/user/501 DeviceMajor:0 DeviceMinor:45 Capacity:412696576 Type:vfs Inodes:503779 HasInodes:true} {Device:/var/lib/kubelet/pods/410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/volumes/kubernetes.io~projected/hubble-tls DeviceMajor:0 DeviceMinor:50 Capacity:2063478784 Type:vfs Inodes:503779 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:316933124096 Type:vfs Inodes:19660800 HasInodes:true}] DiskMap:map[253:0:{Name:vda Major:253 Minor:0 Size:322122547200 Scheduler:mq-deadline}] NetworkDevices:[{Name:cilium_host MacAddress:ee:6b:ad:ca:17:06 Speed:10000 Mtu:1500} {Name:cilium_net MacAddress:76:b1:f7:1a:fd:b2 Speed:10000 Mtu:1500} {Name:eth0 MacAddress:00:16:3e:02:92:2d Speed:-1 Mtu:1500} {Name:kube-bridge MacAddress:aa:31:02:a9:46:d6 Speed:-1 Mtu:1500} {Name:lxc_health MacAddress:aa:14:67:c3:fc:bc Speed:10000 Mtu:1500}] Topology:[{Id:0 Memory:4126961664 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:1048576 Type:Unified 
Level:2}]}] Caches:[{Size:34603008 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.501454   21987 manager.go:199] Version: {KernelVersion:5.11.1-1.el7.elrepo.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:19.03.12 DockerAPIVersion:1.40 CadvisorVersion: CadvisorRevision:}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.501551   21987 server.go:471] Sending events to api server.
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.501602   21987 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502100   21987 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502116   21987 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName:/kube.slice SystemReservedCgroupName:/system.slice ReservedSystemCPUs: EnforceNodeAllocatable:map[kube-reserved:{} pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:2097152000 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502277   21987 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502288   21987 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502296   21987 container_manager_linux.go:306] Creating device plugin manager: true
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502305   21987 manager.go:133] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502341   21987 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502395   21987 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.502410   21987 client.go:92] Start docker client with request timeout=2m0s
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: W0815 18:55:05.513729   21987 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.513754   21987 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.540226   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.568437   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.568464   21987 plugins.go:166] Loaded network plugin "cni"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.568513   21987 docker_service.go:253] Docker cri networking managed by cni
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.587883   21987 docker_service.go:258] Docker Info: &{ID:EFHU:36UO:VNUN:HLL2:RFUN:56FC:YVPO:Y4J6:VMVW:X2FK:675K:LKGX Containers:8 ContainersRunning:7 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-15T18:55:05.570729171+08:00 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.11.1-1.el7.elrepo.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000773b20 NCPU:2 MemTotal:4126961664 GenericResources:[] DockerRootDir:/data/docker HTTPProxy: HTTPSProxy: NoProxy: Name:HK-K8S-WN4 Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.587981   21987 docker_service.go:271] Setting cgroupDriver to systemd
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.588072   21987 kubelet.go:367] RemoteRuntimeEndpoint: "unix:///var/run/dockershim.sock", RemoteImageEndpoint: "unix:///var/run/dockershim.sock"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.588084   21987 kubelet.go:370] Starting the GRPC server for the docker CRI shim.
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.588112   21987 docker_server.go:59] Start dockershim grpc server
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.611906   21987 container_manager_linux.go:870] attempting to apply oom_score_adj of -999 to pid 1025
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.611931   21987 oom_linux.go:65] attempting to set "/proc/1025/oom_score_adj" to "-999"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.627934   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628614   21987 remote_runtime.go:51] Connecting to runtime service unix:///var/run/dockershim.sock
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628672   21987 remote_runtime.go:59] parsed scheme: ""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628683   21987 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628719   21987 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628730   21987 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628762   21987 remote_image.go:41] Connecting to image service unix:///var/run/dockershim.sock
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628778   21987 remote_image.go:50] parsed scheme: ""
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628783   21987 remote_image.go:50] scheme "" not registered, fallback to default scheme
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628792   21987 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628797   21987 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628816   21987 server.go:1072] Using root directory: /var/lib/kubelet
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628829   21987 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628847   21987 file.go:68] Watching path "/etc/kubernetes/manifests"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.628859   21987 kubelet.go:317] Watching apiserver
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.629142   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000258060, {CONNECTING <nil>}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.629325   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0002581e0, {CONNECTING <nil>}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.631755   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000258060, {READY <nil>}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.631794   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc0002581e0, {READY <nil>}
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.631885   21987 config.go:303] Setting pods for source file
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634401   21987 reflector.go:175] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634421   21987 reflector.go:211] Listing and watching *v1.Pod from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634753   21987 reflector.go:175] Starting reflector *v1.Service (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:517
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634766   21987 reflector.go:211] Listing and watching *v1.Service from k8s.io/kubernetes/pkg/kubelet/kubelet.go:517
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.634997   21987 reflector.go:175] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:526
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.635017   21987 reflector.go:211] Listing and watching *v1.Node from k8s.io/kubernetes/pkg/kubelet/kubelet.go:526
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.637246   21987 plugins.go:64] Registering credential provider: .dockercfg
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643276   21987 config.go:303] Setting pods for source api
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643323   21987 config.go:412] Receiving a new pod "kube-router-7zjsg_kube-system(75aa6245-eac7-46ee-9d13-7b521071074d)"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643343   21987 config.go:412] Receiving a new pod "csi-plugin-nssx2_kube-system(9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5)"
Aug 15 18:55:05 HK-K8S-WN4 kubelet[21987]: I0815 18:55:05.643354   21987 config.go:412] Receiving a new pod "cilium-44mzg_kube-system(410f3b52-3fa2-4a75-8811-bd2b4e60b1bd)"
Aug 15 18:55:10 HK-K8S-WN4 kubelet[21987]: I0815 18:55:10.655929   21987 cni.go:206] Using CNI configuration file /etc/cni/net.d/05-cilium.conf
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: E0815 18:55:11.773874   21987 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.773930   21987 azure_credentials.go:158] Azure config unspecified, disabling
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784323   21987 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.12, apiVersion: 1.40.0
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784559   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/aws-ebs"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784576   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/gce-pd"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784601   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/cinder"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784612   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/azure-disk"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784621   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/azure-file"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784631   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/vsphere-volume"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784648   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/empty-dir"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784657   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/git-repo"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784670   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/host-path"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784680   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/nfs"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784690   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/secret"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784699   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/iscsi"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784712   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/glusterfs"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784728   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/rbd"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784738   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/quobyte"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784750   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/cephfs"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784762   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/downward-api"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784772   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/fc"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784781   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/flocker"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784793   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/configmap"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784817   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/projected"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784844   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/portworx-volume"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784860   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/scaleio"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784871   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/local-volume"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784881   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/storageos"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.784916   21987 plugins.go:628] Loaded volume plugin "kubernetes.io/csi"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.785047   21987 server.go:1126] Started kubelet
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.785066   21987 healthz.go:120] No default health checks specified. Installing the ping handler.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.785076   21987 healthz.go:124] Installing health checkers for (/healthz): "ping"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: E0815 18:55:11.785218   21987 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.786490   21987 config.go:100] Looking for [api file], have seen map[]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.787140   21987 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"hk-k8s-wn4", UID:"hk-k8s-wn4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.787451   21987 server.go:145] Starting to listen on 0.0.0.0:10250
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.787525   21987 healthz.go:124] Installing health checkers for (/healthz): "ping","log","syncloop"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.790564   21987 server.go:393] Adding debug handlers to kubelet server.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.796426   21987 hostutil_linux.go:209] Directory /var/lib/kubelet is already on a shared mount
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.796527   21987 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.796551   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-MARK-DROP -t nat]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.797458   21987 csi_plugin.go:280] Initializing migrated drivers on CSINode
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806105   21987 volume_manager.go:263] The desired_state_of_world populator starts
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806122   21987 volume_manager.go:265] Starting Kubelet Volume Manager
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806408   21987 reflector.go:175] Starting reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:135
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806421   21987 reflector.go:211] Listing and watching *v1.CSIDriver from k8s.io/client-go/informers/factory.go:135
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.806647   21987 desired_state_of_world_populator.go:139] Desired state populator starts to run
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.814855   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.816187   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-FIREWALL -t filter]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.817438   21987 kubelet.go:1287] Container garbage collection succeeded
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822136   21987 image_gc_manager.go:231] Pod kube-system/csi-plugin-nssx2, container csi-plugin uses image registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin@sha256:37aa7701b108f291acac92b554b1cf53eacc9f1302440e0ec49ec0c77535106e(sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4)
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822152   21987 image_gc_manager.go:231] Pod kube-system/csi-plugin-nssx2, container disk-driver-registrar uses image registry.cn-hangzhou.aliyuncs.com/acs/csi-node-driver-registrar@sha256:273175c272162d480d06849e09e6e3cdb0245239e3a82df6630df3bc059c6571(sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a)
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822164   21987 image_gc_manager.go:231] Pod kube-system/cilium-44mzg, container cilium-agent uses image sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67(sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67)
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822191   21987 image_gc_manager.go:231] Pod kube-system/cilium-44mzg, container clean-cilium-state uses image quay.io/cilium/cilium@sha256:97daafddef3b6180b7dbfa7f45e07c673ee50441dc271b75779a689be22b3882(sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67)
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822198   21987 image_gc_manager.go:231] Pod kube-system/kube-router-7zjsg, container kube-router uses image cloudnativelabs/kube-router@sha256:31a87823700700c6ca3271fc72b413c682f890cb1e21b223fc2fabfdcf636f2f(sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8)
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822208   21987 image_gc_manager.go:242] Adding image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822219   21987 image_gc_manager.go:247] Image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822229   21987 image_gc_manager.go:255] Setting Image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822245   21987 image_gc_manager.go:259] Image ID sha256:32e7524455b959260146ba0b1d515bf4fc5f71413ea7154239cd3b164cf377f8 has size 97879110
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822252   21987 image_gc_manager.go:242] Adding image ID sha256:6d6859d1a42a2395a8eacc41c718a039210b377f922d19076ebbdd74aa047e89 to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822260   21987 image_gc_manager.go:247] Image ID sha256:6d6859d1a42a2395a8eacc41c718a039210b377f922d19076ebbdd74aa047e89 is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822268   21987 image_gc_manager.go:259] Image ID sha256:6d6859d1a42a2395a8eacc41c718a039210b377f922d19076ebbdd74aa047e89 has size 169410912
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822275   21987 image_gc_manager.go:242] Adding image ID sha256:c19ae228f0699185488b6a1c0debb9c6b79672181356ad455c9a7924a41a01bb to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822284   21987 image_gc_manager.go:247] Image ID sha256:c19ae228f0699185488b6a1c0debb9c6b79672181356ad455c9a7924a41a01bb is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822291   21987 image_gc_manager.go:259] Image ID sha256:c19ae228f0699185488b6a1c0debb9c6b79672181356ad455c9a7924a41a01bb has size 25967786
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822298   21987 image_gc_manager.go:242] Adding image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822307   21987 image_gc_manager.go:247] Image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822317   21987 image_gc_manager.go:255] Setting Image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822327   21987 image_gc_manager.go:259] Image ID sha256:aaf366dbd941c0565213f676e90715df61f8ecc14ed2b87a3ac23d57a8e28b67 has size 433944370
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822333   21987 image_gc_manager.go:242] Adding image ID sha256:d771cc9785a13659c0e9363cae1d238bb58114a3340f805564daaed2494475f8 to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822341   21987 image_gc_manager.go:247] Image ID sha256:d771cc9785a13659c0e9363cae1d238bb58114a3340f805564daaed2494475f8 is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822349   21987 image_gc_manager.go:259] Image ID sha256:d771cc9785a13659c0e9363cae1d238bb58114a3340f805564daaed2494475f8 has size 9988981
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822356   21987 image_gc_manager.go:242] Adding image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822365   21987 image_gc_manager.go:247] Image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822374   21987 image_gc_manager.go:255] Setting Image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822383   21987 image_gc_manager.go:259] Image ID sha256:03c3f08975c9bd6b7614eea5ffb475a3408cb3ae30b72f0a1cf560b8a70c17d4 has size 440410596
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822391   21987 image_gc_manager.go:242] Adding image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822400   21987 image_gc_manager.go:247] Image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822409   21987 image_gc_manager.go:255] Setting Image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822418   21987 image_gc_manager.go:259] Image ID sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c has size 682696
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822425   21987 image_gc_manager.go:242] Adding image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a to currentImages
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822433   21987 image_gc_manager.go:247] Image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a is new
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822441   21987 image_gc_manager.go:255] Setting Image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a lastUsed to 2021-08-15 18:55:11.822203539 +0800 CST m=+6.498045011
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.822451   21987 image_gc_manager.go:259] Image ID sha256:c2103589e99f907333422ae78702360ad258a8f0366c20e341c9e0c53743e78a has size 17057647
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.824391   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.825707   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-FIREWALL -t filter -m comment --comment block incoming localnet connections --dst 127.0.0.0/8 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.826971   21987 iptables.go:442] running iptables: iptables [-w -C OUTPUT -t filter -j KUBE-FIREWALL]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.828038   21987 iptables.go:442] running iptables: iptables [-w -C INPUT -t filter -j KUBE-FIREWALL]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830020   21987 kubelet.go:2184] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830132   21987 clientconn.go:106] parsed scheme: "unix"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830145   21987 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830236   21987 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830249   21987 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830412   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00021c3e0, {CONNECTING <nil>}
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.830914   21987 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00021c3e0, {READY <nil>}
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.833893   21987 factory.go:137] Registering containerd factory
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.834021   21987 factory.go:122] Registration of the crio container factory failed: Get http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info: dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.834957   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-MARK-MASQ -t nat]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.838765   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-POSTROUTING -t nat]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.840624   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.842706   21987 iptables.go:442] running iptables: iptables [-w -C POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.844552   21987 kubelet_network_linux.go:136] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.844573   21987 iptables.go:442] running iptables: iptables [-w -C KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846705   21987 status_manager.go:158] Starting to sync pod status with apiserver
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846733   21987 kubelet.go:1821] Starting kubelet main sync loop.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: E0815 18:55:11.846780   21987 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846806   21987 generic.go:191] GenericPLEG: Relisting
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.846891   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-KUBELET-CANARY -t mangle]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.847396   21987 reflector.go:175] Starting reflector *v1beta1.RuntimeClass (0s) from k8s.io/client-go/informers/factory.go:135
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.847412   21987 reflector.go:211] Listing and watching *v1beta1.RuntimeClass from k8s.io/client-go/informers/factory.go:135
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.849229   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-KUBELET-CANARY -t nat]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.852559   21987 iptables.go:442] running iptables: iptables [-w -N KUBE-KUBELET-CANARY -t filter]
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854368   21987 generic.go:155] GenericPLEG: 410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/edede45c07948ccf6481c412f4baa01c2bfa35e2a3112f3f1c3e4fff6bce02e4: non-existent -> running
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854387   21987 generic.go:155] GenericPLEG: 410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/96eea5f3e430c09edf3eb45335a76ddd8bae3ecb40ca3fa9ecc68079d49d37d5: non-existent -> exited
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854395   21987 generic.go:155] GenericPLEG: 410f3b52-3fa2-4a75-8811-bd2b4e60b1bd/bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619: non-existent -> running
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854403   21987 generic.go:155] GenericPLEG: 75aa6245-eac7-46ee-9d13-7b521071074d/6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed: non-existent -> running
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854411   21987 generic.go:155] GenericPLEG: 75aa6245-eac7-46ee-9d13-7b521071074d/e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54: non-existent -> running
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854420   21987 generic.go:155] GenericPLEG: 9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/2050ed38bda21ca20bc0b43a1e4bced9b91ff1822f25ac50faf276628d641aa3: non-existent -> running
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854427   21987 generic.go:155] GenericPLEG: 9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/4177ed13c65dd6d4aaa1199c071d336449095d197c827719b18b5523129b9829: non-existent -> running
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.854434   21987 generic.go:155] GenericPLEG: 9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5/7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7: non-existent -> running
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.857642   21987 kuberuntime_manager.go:930] getSandboxIDByPodUID got sandbox IDs ["bc5d2ea198a9e1c423ad5867290fc99db482d24d8729cd8a42ce69de5b2e3619"] for pod "cilium-44mzg_kube-system(410f3b52-3fa2-4a75-8811-bd2b4e60b1bd)"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.862818   21987 factory.go:356] Registering Docker factory
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.862838   21987 factory.go:54] Registering systemd factory
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.863114   21987 factory.go:101] Registering Raw factory
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.863403   21987 manager.go:1158] Started watching for new ooms in manager
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865549   21987 nvidia.go:53] No NVIDIA devices found.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865566   21987 factory.go:177] Factory "containerd" was unable to handle container "/"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865600   21987 factory.go:177] Factory "docker" was unable to handle container "/"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865612   21987 factory.go:166] Error trying to work out if we can handle /: / not handled by systemd handler
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865618   21987 factory.go:177] Factory "systemd" was unable to handle container "/"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865629   21987 factory.go:173] Using factory "raw" for container "/"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.865922   21987 manager.go:950] Added container: "/" (aliases: [], namespace: "")
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.866152   21987 handler.go:325] Added event &{/ 2021-08-15 18:48:02.761929273 +0800 CST containerCreation {<nil>}}
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.866199   21987 manager.go:272] Starting recovery of all containers
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.871132   21987 container.go:467] Start housekeeping for container "/"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.875459   21987 generic.go:386] PLEG: Write status for cilium-44mzg/kube-system: &container.PodStatus{ID:"410f3b52-3fa2-4a75-8811-bd2b4e60b1bd", Name:"cilium-44mzg", Namespace:"kube-system", IPs:[]string{}, ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc0005828c0), (*container.ContainerStatus)(0xc0005829a0)}, SandboxStatuses:[]*v1alpha2.PodSandboxStatus{(*v1alpha2.PodSandboxStatus)(0xc00018ec00)}} (err: <nil>)
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.876396   21987 kuberuntime_manager.go:930] getSandboxIDByPodUID got sandbox IDs ["e26bef869e39e8cd04c94f62f585ef454d2293afc9a7ba398d562379a1cfac54"] for pod "kube-router-7zjsg_kube-system(75aa6245-eac7-46ee-9d13-7b521071074d)"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878700   21987 factory.go:177] Factory "containerd" was unable to handle container "/system.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878716   21987 factory.go:177] Factory "docker" was unable to handle container "/system.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878728   21987 factory.go:166] Error trying to work out if we can handle /system.slice: /system.slice not handled by systemd handler
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878735   21987 factory.go:177] Factory "systemd" was unable to handle container "/system.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878745   21987 factory.go:170] Factory "raw" can handle container "/system.slice", but ignoring.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878757   21987 manager.go:908] ignoring container "/system.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878763   21987 factory.go:177] Factory "containerd" was unable to handle container "/system.slice/chronyd.service"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878770   21987 factory.go:177] Factory "docker" was unable to handle container "/system.slice/chronyd.service"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878778   21987 factory.go:166] Error trying to work out if we can handle /system.slice/chronyd.service: /system.slice/chronyd.service not handled by systemd handler
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878784   21987 factory.go:177] Factory "systemd" was unable to handle container "/system.slice/chronyd.service"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878793   21987 factory.go:170] Factory "raw" can handle container "/system.slice/chronyd.service", but ignoring.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878803   21987 manager.go:908] ignoring container "/system.slice/chronyd.service"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878810   21987 factory.go:177] Factory "containerd" was unable to handle container "/aegis"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878816   21987 factory.go:177] Factory "docker" was unable to handle container "/aegis"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878825   21987 factory.go:166] Error trying to work out if we can handle /aegis: /aegis not handled by systemd handler
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878830   21987 factory.go:177] Factory "systemd" was unable to handle container "/aegis"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878838   21987 factory.go:170] Factory "raw" can handle container "/aegis", but ignoring.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878847   21987 manager.go:908] ignoring container "/aegis"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878853   21987 factory.go:177] Factory "containerd" was unable to handle container "/system.slice/run-user-501.mount"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878860   21987 factory.go:177] Factory "docker" was unable to handle container "/system.slice/run-user-501.mount"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878867   21987 factory.go:170] Factory "systemd" can handle container "/system.slice/run-user-501.mount", but ignoring.
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.878878   21987 manager.go:908] ignoring container "/system.slice/run-user-501.mount"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.884381   21987 factory.go:166] Error trying to work out if we can handle /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope: failed to load container: container "6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed" in namespace "k8s.io": not found
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.884396   21987 factory.go:177] Factory "containerd" was unable to handle container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.884650   21987 generic.go:386] PLEG: Write status for kube-router-7zjsg/kube-system: &container.PodStatus{ID:"75aa6245-eac7-46ee-9d13-7b521071074d", Name:"kube-router-7zjsg", Namespace:"kube-system", IPs:[]string{}, ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc000582c40)}, SandboxStatuses:[]*v1alpha2.PodSandboxStatus{(*v1alpha2.PodSandboxStatus)(0xc000f4e000)}} (err: <nil>)
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.886339   21987 factory.go:173] Using factory "docker" for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.886714   21987 kuberuntime_manager.go:930] getSandboxIDByPodUID got sandbox IDs ["7e0790e192702a3f04ee841347556628862876a8c443061db6043b539ba0c2d7"] for pod "csi-plugin-nssx2_kube-system(9d01c4c7-6c73-4bd0-8cf2-468b4a70d0f5)"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.888434   21987 manager.go:950] Added container: "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope" (aliases: [k8s_kube-router_kube-router-7zjsg_kube-system_75aa6245-eac7-46ee-9d13-7b521071074d_0 6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed], namespace: "docker")
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889039   21987 handler.go:325] Added event &{/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75aa6245_eac7_46ee_9d13_7b521071074d.slice/docker-6a6ffc658ca662a5687233778315e696ffa2feda94cc5cca5c9b5d7c44396fed.scope 2021-08-15 07:04:08.260947258 +0000 UTC containerCreation {<nil>}}
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889071   21987 factory.go:177] Factory "containerd" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889081   21987 factory.go:177] Factory "docker" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889092   21987 factory.go:166] Error trying to work out if we can handle /kubepods.slice/kubepods-besteffort.slice: /kubepods.slice/kubepods-besteffort.slice not handled by systemd handler
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889098   21987 factory.go:177] Factory "systemd" was unable to handle container "/kubepods.slice/kubepods-besteffort.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889107   21987 factory.go:173] Using factory "raw" for container "/kubepods.slice/kubepods-besteffort.slice"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889386   21987 manager.go:950] Added container: "/kubepods.slice/kubepods-besteffort.slice" (aliases: [], namespace: "")
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889646   21987 handler.go:325] Added event &{/kubepods.slice/kubepods-besteffort.slice 2021-08-15 18:47:53.833242529 +0800 CST containerCreation {<nil>}}
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889670   21987 factory.go:177] Factory "containerd" was unable to handle container "/assist"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889678   21987 factory.go:177] Factory "docker" was unable to handle container "/assist"
Aug 15 18:55:11 HK-K8S-WN4 kubelet[21987]: I0815 18:55:11.889688   21987 factory.go:166] Error trying to work out if we can handle /assist: /assist not handled by systemd handler
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.107535   21987 config.go:100] Looking for [api file], have seen map[]
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108392   21987 cpu_manager.go:184] [cpumanager] starting with none policy
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108402   21987 cpu_manager.go:185] [cpumanager] reconciling every 10s
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108418   21987 state_mem.go:36] [cpumanager] initializing new in-memory state store
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108640   21987 state_mem.go:88] [cpumanager] updated default cpuset: ""
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108652   21987 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108664   21987 state_checkpoint.go:136] [cpumanager] state checkpoint: restored state from checkpoint
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108671   21987 state_checkpoint.go:137] [cpumanager] state checkpoint: defaultCPUSet:
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.108679   21987 policy_none.go:43] [cpumanager] none policy: Start
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.109957   21987 node_container_manager_linux.go:75] Attempting to enforce Node Allocatable with config: {KubeReservedCgroupName:/kube.slice SystemReservedCgroupName:/system.slice ReservedSystemCPUs: EnforceNodeAllocatable:map[kube-reserved:{} pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:2097152000 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]}
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.110066   21987 node_container_manager_linux.go:121] Enforcing kube reserved on cgroup "/kube.slice" with limits: map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.110090   21987 node_container_manager_linux.go:141] Enforcing limits on cgroup ["kube"] with 824649664312 cpu shares, 824649664304 bytes of memory, and 0 processes
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: I0815 18:55:12.110149   21987 cgroup_manager_linux.go:276] The Cgroup [kube] has some missing paths: [/sys/fs/cgroup/cpu,cpuacct/kube.slice /sys/fs/cgroup/systemd/kube.slice /sys/fs/cgroup/pids/kube.slice /sys/fs/cgroup/hugetlb/kube.slice /sys/fs/cgroup/cpu,cpuacct/kube.slice]
Aug 15 18:55:12 HK-K8S-WN4 kubelet[21987]: F0815 18:55:12.110207   21987 kubelet.go:1383] Failed to start ContainerManager Failed to enforce Kube Reserved Cgroup Limits on "/kube.slice": ["kube"] cgroup does not exist

Based on the error message above, manually create the missing cgroups:

cgroup_manager_linux.go:276] The Cgroup [kube] has some missing paths: [/sys/fs/cgroup/cpu,cpuacct/kube.slice /sys/fs/cgroup/systemd/kube.slice /sys/fs/cgroup/pids/kube.slice /sys/fs/cgroup/hugetlb/kube.slice /sys/fs/cgroup/cpu,cpuacct/kube.slice]

mkdir -p /sys/fs/cgroup/systemd/kube.slice
mkdir -p /sys/fs/cgroup/pids/kube.slice
mkdir -p /sys/fs/cgroup/hugetlb/kube.slice
mkdir -p /sys/fs/cgroup/cpu,cpuacct/kube.slice

Then start the kubelet again.

Verify the kube-reserved resource limits:

# Memory limit
<root@HK-K8S-WN4 /sys/fs/cgroup/memory/kube.slice># cat memory.limit_in_bytes
524288000
# CPU limit
<root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cpu.shares
204
<root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cpuacct.stat
user 0
system 0
<root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cgroup.procs
<root@HK-K8S-WN4 /sys/fs/cgroup/cpu/kube.slice># cat cpu.cfs_period_us
100000
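The numbers above follow directly from the reserved values: the kubelet converts a CPU reservation in millicores to cgroup shares as millicores × 1024 / 1000, and a memory reservation in Mi to bytes. A quick sanity check (a sketch; the 200m / 500Mi figures are the reservations configured on this node):

```shell
# cpu.shares for a 200m CPU reservation: millicores * 1024 / 1000
millicores=200
echo $(( millicores * 1024 / 1000 ))   # -> 204, matching cpu.shares above

# memory.limit_in_bytes for a 500Mi memory reservation
mi=500
echo $(( mi * 1024 * 1024 ))           # -> 524288000, matching the value above
```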

Reference configuration

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.10.0.10
clusterDomain: nflow.so
enforceNodeAllocatable:
  - pods
  - kube-reserved
  - system-reserved
systemReserved:
  cpu: 200m
  memory: 2000Mi
kubeReserved:
  cpu: 200m
  memory: 500Mi
kubeReservedCgroup: /kube.slice
systemReservedCgroup: /system.slice
evictionHard:
  memory.available: "500Mi"
  imagefs.available: "15%"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
evictionMinimumReclaim:
  memory.available: "300Mi"
  nodefs.available: "500Mi"
  imagefs.available: "2Gi"
cgroupDriver: systemd
maxPods: 64
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 4m0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
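With the reservations above, the node's schedulable capacity follows the formula from the beginning of the article: [Allocatable] = [Node Capacity] - [Kube-Reserved] - [System-Reserved] - [Hard-Eviction-Threshold]. A sketch for memory on a hypothetical 16Gi node (the node size is an assumption; the reserved values are taken from this config):

```shell
# Allocatable memory = Capacity - KubeReserved - SystemReserved - HardEviction (Mi)
capacity=16384         # hypothetical 16Gi node
kube_reserved=500      # kubeReserved.memory
system_reserved=2000   # systemReserved.memory
hard_eviction=500      # evictionHard memory.available
echo "$(( capacity - kube_reserved - system_reserved - hard_eviction ))Mi"  # -> 13384Mi
```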

kubelet.service (note: when the node reboots, the cgroup slices created above are cleared; the unit below works around this):

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/cpuset/system.slice
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/hugetlb/system.slice
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/systemd/kube.slice
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/pids/kube.slice
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/hugetlb/kube.slice
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/cpu,cpuacct/kube.slice
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/cpuset/kube.slice
ExecStartPre=-/bin/mkdir /sys/fs/cgroup/memory/kube.slice

ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
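The list of ExecStartPre lines can also be collapsed into a small helper script invoked before the kubelet starts. A sketch, where `make_kube_slices` and the controller list are assumptions of this article rather than anything the kubelet provides; `mkdir -p` keeps the calls idempotent across reboots:

```shell
#!/bin/sh
# make_kube_slices: create kube.slice under each cgroup v1 controller.
# Hypothetical helper; pass the cgroup root (normally /sys/fs/cgroup) as $1.
make_kube_slices() {
  root="$1"
  for ctrl in systemd pids hugetlb cpu,cpuacct cpuset memory; do
    mkdir -p "${root}/${ctrl}/kube.slice"
  done
}

# On a real node:
# make_kube_slices /sys/fs/cgroup
```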


Recommended cgroup configuration

The recommended cgroup layout for a Kubernetes node is as follows:

  • Place all OS daemons under a single top-level SystemReserved cgroup, e.g.

    /sys/fs/cgroup/memory/system.slice

  • Place the kubelet & container runtime under a single top-level KubeReserved cgroup (when enabling the kube-reserved policy, the kubeReservedCgroup must be specified in the kubelet's config.yaml and created manually on the node). The official rationale for putting the container runtime under the KubeReserved cgroup is twofold:
  1. The container runtime on a Kubernetes node is expected to be managed by the kubelet (in other words, if Kubernetes is the platform managing the container runtime, the runtime should fall under the kubelet's control). Containers created manually are not managed by the kubelet and therefore should not be limited by the KubeReserved cgroup.
  2. Resource consumption inside the KubeReserved cgroup scales with the number of pods running on the node: more pods means more resources consumed.

The cgroup hierarchy below is recommended; dedicated cgroups make it possible to track the kubelet & runtime resource usage independently:

/ (Cgroup Root)
.
+..systemreserved or system.slice (Specified via `--system-reserved-cgroup`; `SystemReserved` enforced here *optionally* by kubelet)
.        .tasks(sshd,udev,etc)
.
+..podruntime or podruntime.slice (Specified via `--kube-reserved-cgroup`; `KubeReserved` enforced here *optionally* by kubelet)
.     .
.     +..kubelet
.     .      .tasks(kubelet)
.     .
.     +..runtime
.            .tasks(docker-engine, containerd)
.
+..kubepods or kubepods.slice (Node Allocatable enforced here by Kubelet)
.     .
.     +..PodGuaranteed
.     .      .
.     .      +..Container1
.     .      .      .tasks(container processes)
.     .      .
.     .      +..PodOverhead
.     .      .      .tasks(per-pod processes)
.     .      ...
.     .
.     +..Burstable
.     .      .
.     .      +..PodBurstable
.     .      .      .
.     .      .      +..Container1
.     .      .      .      .tasks(container processes)
.     .      .      +..Container2
.     .      .      .      .tasks(container processes)
.     .      .      ...
.     .      ...
.     .
.     +..Besteffort
.     .      .
.     .      +..PodBesteffort
.     .      .      .
.     .      .      +..Container1
.     .      .      .      .tasks(container processes)
.     .      .      +..Container2
.     .      .      .      .tasks(container processes)
.     .      .      ...
.     .      ...

SystemReservedCgroup & KubeReservedCgroup must be created manually. (According to the official docs, if the kubelet creates cgroups for itself and the docker daemon, it also creates the KubeReservedCgroup automatically; this has not been verified here.)

If the kubepods cgroup does not exist, the kubelet creates it automatically:

  1. If the cgroup driver is set to systemd, the kubelet creates a kubepods.slice via systemd
  2. By default, the kubelet directly runs mkdir /kubepods through cgroupfs
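A quick way to see which of the two variants the kubelet created on a node is to probe both names; `probe_kubepods` below is a hypothetical helper, not a kubelet command:

```shell
#!/bin/sh
# probe_kubepods: print the pod cgroup root found under the given cgroup root.
# Checks the systemd name (kubepods.slice) first, then the cgroupfs name (kubepods).
probe_kubepods() {
  for name in kubepods.slice kubepods; do
    if [ -d "$1/memory/$name" ]; then
      echo "$1/memory/$name"
      return 0
    fi
  done
  return 1
}

# On a real node: probe_kubepods /sys/fs/cgroup
```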

If the kubelet itself runs containerized, its container runtime's cgroups live under the KubeReservedCgroup.

