Installing Kubernetes 1.15.3 with kubeadm and enabling IPVS


Part 1: Pre-installation preparation

Machine list

Hostname          IP
node-1 (master)   1.1.1.101
node-2 (node)     1.1.1.102
node-3 (node)     1.1.1.103
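If the node names above are not resolvable through DNS, one common approach is to map them in /etc/hosts on every machine (a sketch built from the table above; adjust to your environment):

```
# /etc/hosts entries (assumption: no DNS records exist for these names)
1.1.1.101  node-1
1.1.1.102  node-2
1.1.1.103  node-3
```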

Set the time zone

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
timedatectl set-timezone Asia/Shanghai

Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config

Disable the swap partition

swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab

By default, the kubelet refuses to run on a host with an active swap partition. When planning new machines, consider not creating a swap partition at install time. For hosts that already have one, you can configure the kubelet to ignore the no-swap restriction; otherwise it will not start.

[root@node-1 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

Load the br_netfilter module and create /etc/sysctl.d/k8s.conf

lsmod | grep br_netfilter
modprobe br_netfilter

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

IPVS preparation

Since IPVS has been merged into the mainline kernel, enabling IPVS for kube-proxy requires the following kernel modules to be loaded first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the kernel modules have been loaded correctly.

# Check the loaded modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Or
cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4

Next, make sure the ipset package is installed on every node. To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm.

yum install ipset ipvsadm -y

If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration requests IPVS mode.

Prepare the Kubernetes yum repository

[root@node-1 yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1

Prepare the docker-ce repository

wget  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Part 2: Cluster deployment

Install docker-ce and kubeadm

yum install docker-ce kubelet kubeadm kubectl

Start docker and kubelet and enable them at boot

Note: at this point the kubelet cannot start successfully, and /var/log/messages will show errors. This is expected; it will recover once initialization has been run.

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

Pre-download the images on each node

[root@node-2 opt]# cat k8s-image-download.sh
#!/bin/bash
# liyongjian5179@163.com
# download k8s 1.15.3 images
# get image-list by 'kubeadm config images list --kubernetes-version=v1.15.3'
# gcr.azk8s.cn/google-containers == k8s.gcr.io

if [ $# -ne 1 ];then
    echo "usage: ./`basename $0` KUBERNETES-VERSION"
    exit 1
fi
version=$1

images=`kubeadm config images list --kubernetes-version=${version} |awk -F'/' '{print $2}'`

for imageName in ${images[@]};do
    docker pull gcr.azk8s.cn/google-containers/$imageName
    docker tag  gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
    docker rmi  gcr.azk8s.cn/google-containers/$imageName
done

[root@node-2 opt]# ./k8s-image-download.sh 1.15.3

Run on the master node

kubeadm init --kubernetes-version=v1.15.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU --apiserver-advertise-address=1.1.1.101

  • --apiserver-advertise-address: which of the master's IP addresses to use for communication with the other cluster nodes.
  • --service-cidr: the Service network range, i.e. the address block used for load-balancing VIPs.
  • --pod-network-cidr: the pod network range, i.e. the address block used for pod IPs.
  • --image-repository: the default registry is k8s.gcr.io, which is generally unreachable from mainland China; it can be pointed at the Aliyun mirror registry.aliyuncs.com/google_containers instead.
  • --kubernetes-version=v1.15.3: the version to install.
  • --ignore-preflight-errors=: ignore specific preflight errors. For example, to suppress [ERROR NumCPU] and [ERROR Swap], add --ignore-preflight-errors=NumCPU and --ignore-preflight-errors=Swap.

If the host has multiple network interfaces, it is best to specify the apiserver advertise address explicitly:

[root@node-1 ~]# kubeadm init --kubernetes-version=v1.15.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU --apiserver-advertise-address=1.1.1.101
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
	[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 1.1.1.101]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-1 localhost] and IPs [1.1.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-1 localhost] and IPs [1.1.1.101 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503724 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: z1609x.bg2tkrsrfwlrl3rb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 1.1.1.101:6443 --token z1609x.bg2tkrsrfwlrl3rb \
    --discovery-token-ca-cert-hash sha256:0753a3d2f04c6c34c5ad88d4be3bc508b1e5b9d00908b29442f7068645521703

The initialization runs through the 15 steps below; each phase's output is prefixed with the [phase name]:

  1. [init]: initialize with the specified version.
  2. [preflight]: pre-initialization checks and download of the required Docker images.
  3. [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; without this file the kubelet cannot start, which is why it failed to start before initialization.
  4. [certs]: generate the certificates Kubernetes uses, stored in /etc/kubernetes/pki.
  5. [kubeconfig]: generate the kubeconfig files in /etc/kubernetes; the components use them to communicate with each other.
  6. [control-plane]: install the master components from the YAML files in the /etc/kubernetes/manifests directory.
  7. [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
  8. [wait-control-plane]: wait for the master components deployed in the control-plane phase to start.
  9. [apiclient]: check the health of the master component services.
  10. [upload-config]: upload the configuration that was used.
  11. [kubelet]: configure the kubelet via a ConfigMap.
  12. [patchnode]: record CNI information on the node, stored as annotations.
  13. [mark-control-plane]: label the current node with the master role and taint it as unschedulable, so that by default no pods are scheduled onto the master.
  14. [bootstrap-token]: generate the token; record it, as it is needed later when adding nodes with kubeadm join.
  15. [addons]: install the CoreDNS and kube-proxy add-ons.

By default, kubectl looks for a config file in the .kube directory under the invoking user's home directory. Here we copy the admin.conf generated in the [kubeconfig] step of initialization to .kube/config:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the network plugin

Here we install flannel:

[root@node-1 ~]# mkdir k8s
[root@node-1 ~]# cd k8s/ && wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2019-08-27 11:51:28--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12487 (12K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[=========================================================================>] 12,487      --.-K/s   in 0s

2019-08-27 11:51:29 (52.9 MB/s) - ‘kube-flannel.yml’ saved [12487/12487]

If the nodes have multiple network interfaces then, per flannel issue 39701, you currently need to pass the --iface flag in kube-flannel.yml to name the cluster's internal interface; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments.

[root@node-1 k8s]# vim kube-flannel.yml
Find the /opt/bin/flanneld line and, below it, add
"- --iface=eth1" to pin flannel to the eth1 interface (for hosts with several NICs).
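The same edit can be scripted. The sketch below works on a trimmed excerpt of the manifest rather than the real file, and eth1 is an assumed interface name; adapt the pattern and path to your copy of kube-flannel.yml:

```shell
# Work on a trimmed excerpt of the flanneld container args (illustrative only)
cat > /tmp/flannel-snippet.yml <<'EOF'
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF
# Insert "- --iface=eth1" after --kube-subnet-mgr, reusing the same indentation
sed -i 's/^\( *\)- --kube-subnet-mgr$/&\n\1- --iface=eth1/' /tmp/flannel-snippet.yml
grep -- '--iface' /tmp/flannel-snippet.yml
# → "        - --iface=eth1"
```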

Start flannel

[root@node-1 k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Join node-2 to the cluster

[root@node-2 ~]# kubeadm join 1.1.1.101:6443 --token z1609x.bg2tkrsrfwlrl3rb \
>     --discovery-token-ca-cert-hash sha256:0753a3d2f04c6c34c5ad88d4be3bc508b1e5b9d00908b29442f7068645521703
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Join node-3 to the cluster

[root@node-3 opt]# kubeadm join 1.1.1.101:6443 --token z1609x.bg2tkrsrfwlrl3rb \
>     --discovery-token-ca-cert-hash sha256:0753a3d2f04c6c34c5ad88d4be3bc508b1e5b9d00908b29442f7068645521703
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster health

Confirm that every component is in the Healthy state:

[root@node-1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

If cluster initialization runs into problems, you can clean up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

Check the node status; all nodes are now Ready:

[root@node-1 k8s]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   99m   v1.15.3
node-2   Ready    <none>   23m   v1.15.3
node-3   Ready    <none>   23m   v1.15.3

Check that every cluster pod is in the Running state:

[root@node-1 k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-5c98db65d4-2n6vq         1/1     Running   0          100m    10.244.0.2   node-1   <none>           <none>
kube-system   coredns-5c98db65d4-6lpr2         1/1     Running   0          100m    10.244.1.2   node-3   <none>           <none>
kube-system   etcd-node-1                      1/1     Running   0          99m     1.1.1.101    node-1   <none>           <none>
kube-system   kube-apiserver-node-1            1/1     Running   0          99m     1.1.1.101    node-1   <none>           <none>
kube-system   kube-controller-manager-node-1   1/1     Running   0          99m     1.1.1.101    node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-clrjj      1/1     Running   0          2m19s   1.1.1.102    node-2   <none>           <none>
kube-system   kube-flannel-ds-amd64-hth2m      1/1     Running   0          2m19s   1.1.1.101    node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-wmqnf      1/1     Running   0          2m19s   1.1.1.103    node-3   <none>           <none>
kube-system   kube-proxy-8kgdr                 1/1     Running   0          25m     1.1.1.102    node-2   <none>           <none>
kube-system   kube-proxy-g456v                 1/1     Running   0          25m     1.1.1.103    node-3   <none>           <none>
kube-system   kube-proxy-gdtqx                 1/1     Running   0          100m    1.1.1.101    node-1   <none>           <none>
kube-system   kube-scheduler-node-1            1/1     Running   0          99m     1.1.1.101    node-1   <none>           <none>

Test whether DNS resolution works

[root@node-1 k8s]# kubectl run -it busybox --image=radial/busyboxplus:curl
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@busybox-c86c47799-qr9wq:/ ]$ nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@busybox-c86c47799-qr9wq:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Removing a node

On the master node, run: kubectl drain node-3 --delete-local-data --force --ignore-daemonsets

When the node reaches the unschedulable state (SchedulingDisabled), run kubectl delete node node-3:

[root@node-1 k8s]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
node-1   Ready    master   125m   v1.15.3
node-2   Ready    <none>   49m    v1.15.3
node-3   Ready    <none>   49m    v1.15.3
[root@node-1 k8s]# kubectl drain node-3 --delete-local-data --force --ignore-daemonsets
node/node-3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-wmqnf, kube-system/kube-proxy-g456v
evicting pod "coredns-5c98db65d4-6lpr2"
evicting pod "nginx-deploy-7689897d8d-kfc7v"
pod/nginx-deploy-7689897d8d-kfc7v evicted
pod/coredns-5c98db65d4-6lpr2 evicted
node/node-3 evicted
[root@node-1 k8s]# kubectl get nodes
NAME     STATUS                     ROLES    AGE    VERSION
node-1   Ready                      master   125m   v1.15.3
node-2   Ready                      <none>   50m    v1.15.3
node-3   Ready,SchedulingDisabled   <none>   50m    v1.15.3
[root@node-1 k8s]# kubectl delete node node-3
node "node-3" deleted
[root@node-1 k8s]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
node-1   Ready    master   126m   v1.15.3
node-2   Ready    <none>   50m    v1.15.3

Run on node-3:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
systemctl stop kubelet

Enable IPVS for kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

[root@node-1 k8s]# kubectl edit cm kube-proxy -n kube-system
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
...

configmap/kube-proxy edited

In Kubernetes, you can simply delete these three pods; they will be recreated automatically.

[root@node-1 k8s]# kubectl get pods -n kube-system|grep proxy
kube-proxy-8kgdr                 1/1     Running   0          79m
kube-proxy-dq8zz                 1/1     Running   0          24m
kube-proxy-gdtqx                 1/1     Running   0          155m

Batch-delete the kube-proxy pods

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
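To preview which pod names that pipeline would act on without deleting anything, you can run the text-processing half on sample output (a dry run; the pod names below are illustrative):

```shell
# Same grep/awk selection as above, but printing the matched names instead of deleting
printf 'kube-proxy-8kgdr   1/1   Running   0   79m\ncoredns-5c98db65d4-2n6vq   1/1   Running   0   100m\n' \
  | grep kube-proxy | awk '{print $1}'
# → kube-proxy-8kgdr
```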

Because the kube-proxy configuration was changed through the ConfigMap, nodes added later will use IPVS mode directly.

Check the logs

[root@node-1 k8s]# kubectl get pods -n kube-system|grep proxy
kube-proxy-84mgz                 1/1     Running   0          16s
kube-proxy-r8sxj                 1/1     Running   0          15s
kube-proxy-wjdmp                 1/1     Running   0          12s
[root@node-1 k8s]# kubectl logs kube-proxy-84mgz -n kube-system
I0827 04:59:16.916862       1 server_others.go:170] Using ipvs Proxier.
W0827 04:59:16.917140       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0827 04:59:16.917748       1 server.go:534] Version: v1.15.3
I0827 04:59:16.927407       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0827 04:59:16.929217       1 config.go:187] Starting service config controller
I0827 04:59:16.929236       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0827 04:59:16.929561       1 config.go:96] Starting endpoints config controller
I0827 04:59:16.929577       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0827 04:59:17.029899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0827 04:59:17.029954       1 controller_utils.go:1036] Caches are synced for service config controller

The log line "Using ipvs Proxier" confirms that IPVS mode is enabled.

Verify with ipvsadm: the previously created Services now have corresponding LVS virtual servers.

[root@node-1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 1.1.1.101:6443               Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.2.8:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.2.8:9153              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.2.8:53                Masq    1      0          0

Appendix: bootstrapping the master node from a configuration file

On the master node, export the default kubeadm initialization configuration:

[root@node-1 ~]# kubeadm config print init-defaults > kubeadm.yaml

Then adjust it to your needs: for example, change the imageRepository value, set the kube-proxy mode to ipvs, and, since we plan to install the flannel network plugin here, set networking.podSubnet to 10.244.0.0/16:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.1.1.101
  bindPort: 6443
  #controlPlaneEndpoint: 1.1.1.100  # if a load balancer sits in front, put the VIP address here
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS  # DNS type
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: gcr.azk8s.cn/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode

Run:

kubeadm init --config kubeadm.yaml

If you lose the join command printed above, you can regenerate it with kubeadm token create --print-join-command.
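The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate. The sketch below demonstrates the openssl pipeline on a throwaway self-signed certificate; on a real master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA cert just to demonstrate the hash pipeline
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# SHA-256 of the DER-encoded public key -- the value kubeadm prints as sha256:<hash>
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```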

