Notes from a Kubernetes cluster worker node whose /var/log/messages system log kept reporting the following errors:
```
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9036_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9036_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9036_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9036_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9042_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9042_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9042_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9042_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9107_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9107_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9107_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9107_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9114_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9114_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9114_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9114_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9128_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9128_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9128_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9128_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9142_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9142_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Created slice libcontainer_9142_systemd_test_default.slice.
Jan 7 09:54:20 worker02 systemd: Removed slice libcontainer_9142_systemd_test_default.slice.
Jan 7 09:54:21 worker02 systemd: Created slice libcontainer_9148_systemd_test_default.slice.
```
Cause:
These messages are a side effect of setting the cgroup-driver to systemd: runc creates and immediately removes a temporary libcontainer_<pid>_systemd_test_default.slice as a capability check, and systemd logs every create and remove. The messages are noisy but harmless and do not affect container metrics. References:
https://github.com/opencontainers/runc/blob/master/libcontainer/cgroups/systemd/apply_systemd.go#L123
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.2.0/troubleshoot/cgroup_driver.html
https://www.ibm.com/support/pages/recurring-messages-complain-scope-libcontainer-nnnnn-has-no-pids-refusing
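Before changing anything, it can be worth confirming that the node really is using the systemd cgroup driver. A minimal check, assuming Docker as the container runtime and a kubeadm-style kubelet config path:

```bash
# Cgroup driver reported by the container runtime (Docker assumed here)
docker info 2>/dev/null | grep -i "cgroup driver"

# Cgroup driver configured for kubelet (default kubeadm config path; adjust if yours differs)
grep -i cgroupDriver /var/lib/kubelet/config.yaml
```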
Solutions:
## Method 1:
```bash
cat <<\EOF >/etc/rsyslog.d/ignore-systemd-session-slice.conf
if ($programname == "systemd") and ($msg contains "_systemd_test_default.slice" or $msg contains "systemd-test-default-dependencies.scope") then {
  stop
}
EOF
systemctl restart rsyslog.service
```

## Method 2:
```bash
cat <<\EOF >/etc/rsyslog.d/ignore-systemd-session-slice.conf
:rawmsg, contains, "libcontainer" ~
EOF
systemctl restart rsyslog.service
```
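After adding either filter, a quick sanity check confirms that rsyslog accepts the new rule and that the noise actually stops; this is just a verification sketch:

```bash
# Validate the rsyslog configuration before relying on it
rsyslogd -N1

# Restart rsyslog, then watch the log: the libcontainer_*_systemd_test_default.slice
# create/remove messages should no longer appear
systemctl restart rsyslog.service
tail -f /var/log/messages | grep systemd_test_default
```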
The recommended approach is the following:
Fix it at the root by changing the kubelet startup arguments:
KUBELET_EXTRA_ARGS=--kubelet-cgroups=/system.slice/kubelet.service --runtime-cgroups=/system.slice/docker.service
Edit the configuration file used to start the kubelet service and set KUBELET_EXTRA_ARGS accordingly (the commented-out line below shows a fuller example with additional options):
```
cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=
#KUBELET_EXTRA_ARGS=--kubelet-cgroups=/system.slice/kubelet.service --runtime-cgroups=/system.slice/docker.service --feature-gates=LocalStorageCapacityIsolation=true \
#  --kube-reserved-cgroup=/kubepods.slice --kube-reserved=cpu=500m,memory=500Mi,ephemeral-storage=1Gi \
#  --system-reserved-cgroup=/system.slice --system-reserved=cpu=500m,memory=500Mi,ephemeral-storage=1Gi \
#  --eviction-hard=memory.available<500Mi,nodefs.available<10%
#  --max-pods=200
```
Restart the kubelet service:
systemctl restart kubelet.service
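To confirm the flags took effect, check which cgroup the kubelet and dockerd processes ended up in after the restart. A rough check, assuming Docker as the runtime:

```bash
# kubelet should now show /system.slice/kubelet.service as its cgroup
cat /proc/$(pgrep -x kubelet)/cgroup | head -n 5

# dockerd should show /system.slice/docker.service
cat /proc/$(pgrep -x dockerd)/cgroup | head -n 5
```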
Note added on 2020.04.28
The /var/log/messages system log on a Kubernetes cluster worker node reported the following errors:
```
Apr 28 11:25:45 worker02 kernel: nfs4_reclaim_open_state: 6 callbacks suppressed
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
Apr 28 11:25:45 worker02 kernel: NFS: nfs4_reclaim_open_state: Lock reclaim failed!
```
Cause: file handles are not released, which results in a deadlock.
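Before restarting anything, it can help to see which processes still hold files or locks on the NFS mount; the mount point below is only a placeholder:

```bash
# Processes with files open under the NFS mount point (placeholder path)
lsof /mnt/nfs-data 2>/dev/null | head

# File locks currently known to the kernel (NFS locks show up here too)
cat /proc/locks

# NFS client statistics, including lock-related operations
nfsstat -c
```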
Solution: tune the kernel parameters on the worker nodes and restart the nfs-server service.
If possible, use another file system such as GlusterFS or Ceph as the backing storage; they beat NFS in performance, stability, and high availability.
The settings below are for reference only; tune them to match your own servers:
1. Adjust the /etc/sysctl.conf configuration file
```
cat /etc/sysctl.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
fs.aio-max-nr = 1048576
fs.file-max = 76724600
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 262144   # The default setting of the socket receive buffer in bytes.
net.core.rmem_max = 4194304      # The maximum receive socket buffer size in bytes.
net.core.wmem_default = 262144   # The default setting (in bytes) of the socket send buffer.
net.core.wmem_max = 4194304      # The maximum send socket buffer size in bytes.
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1      # Enable SYN cookies: when the SYN backlog overflows, cookies are used instead, which mitigates small-scale SYN floods.
net.ipv4.tcp_timestamps = 1      # Helps reduce TIME_WAIT sockets.
net.ipv4.tcp_tw_recycle = 0      # 1 enables fast recycling of TIME-WAIT sockets, but that breaks connections behind NAT; keep it disabled on servers.
net.ipv4.tcp_tw_reuse = 1        # Allow TIME-WAIT sockets to be reused for new TCP connections.
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.nf_conntrack_max = 1200000
net.netfilter.nf_conntrack_max = 1200000
vm.dirty_background_bytes = 409600000  # Once dirty pages reach this amount, the background writeback process (pdflush or its successor) flushes pages dirtied more than (dirty_expire_centisecs/100) seconds ago to disk.
vm.dirty_expire_centisecs = 3000       # Dirty pages older than this are flushed to disk; 3000 means 30 seconds.
vm.dirty_ratio = 95                    # If background flushing is too slow and dirty pages exceed 95% of memory, user processes that write to disk (fsync, fdatasync, etc.) must flush dirty pages themselves.
                                       # This effectively keeps user processes from being stalled by flushing; very useful with multiple instances per host and per-instance cgroup IOPS limits.
vm.dirty_writeback_centisecs = 100     # Wake-up interval of the background flusher; 100 means 1 second.
vm.mmap_min_addr = 65536
vm.overcommit_memory = 0               # Allow a modest amount of memory overcommit; 1 means the kernel always assumes enough memory is available (acceptable on low-memory test machines).
vm.overcommit_ratio = 90               # Only used when overcommit_memory = 2, to calculate how much memory may be committed.
vm.swappiness = 0                      # Minimize use of swap.
vm.zone_reclaim_mode = 0               # Disable NUMA zone reclaim (or disable NUMA in the kernel).
net.ipv4.ip_local_port_range = 40000 65535  # Range of automatically assigned local TCP/UDP ports.
fs.nr_open = 20480000                  # Upper limit on file handles a single process may open.
net.ipv4.tcp_max_syn_backlog = 16384
net.core.somaxconn = 16384
```
2. Run sysctl -p to make the parameters take effect immediately
sysctl -p
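A few spot checks after sysctl -p confirm the values are really active. If it complains about net.netfilter.nf_conntrack_max, the nf_conntrack module may simply not be loaded yet.

```bash
# Spot-check a few of the tuned parameters
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog fs.nr_open vm.swappiness
```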
3. Raise the maximum number of open files: vi /etc/security/limits.conf
```
* soft nofile 1024000
* hard nofile 1024000
* soft nproc unlimited
* hard nproc unlimited
* soft core unlimited
* hard core unlimited
* soft memlock unlimited
* hard memlock unlimited
```
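Keep in mind that limits.conf only applies to new login sessions (services started by systemd take their limits from LimitNOFILE and related settings in the unit file instead). A quick check from a fresh shell:

```bash
# Run in a new login session after editing limits.conf
ulimit -n   # open files (nofile)
ulimit -u   # max user processes (nproc)
```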
4. Restart the nfs-server service on the server side
systemctl restart nfs-server.service
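After the restart, it is worth confirming that the exports are served again and the worker nodes have recovered; hosts and paths here are placeholders:

```bash
# On the NFS server: service health and exported directories
systemctl status nfs-server.service --no-pager
showmount -e localhost

# On each worker node: the NFS mounts should respond again
mount -t nfs4
df -hT | grep nfs
```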