Installing Kubernetes 1.14.2 with kubeadm


Installing with kubeadm is undoubtedly a good choice. Reference: https://www.cnblogs.com/benjamin77/p/9783797.html

1. Environment Preparation

1.1 System Configuration

The OS is CentOS Linux release 7.6.

[root@k8s-master ~]# cat /etc/hosts
192.168.1.134 k8s-master
192.168.1.135 k8s-node1
192.168.1.136 k8s-node2

Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable system swap (required since Kubernetes 1.8)
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
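The `grep -v` approach above deletes the swap line from /etc/fstab outright. A gentler variant comments it out instead, so it can be restored later. The sketch below runs against a temporary copy with an illustrative fstab; on a real host you would target /etc/fstab itself.

```shell
#!/bin/sh
# Sketch: comment out (rather than delete) swap entries, on a copy of fstab.
# The file contents and temp path are illustrative only.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /       xfs   defaults 0 0
/dev/mapper/centos-swap swap    swap  defaults 0 0
EOF

# Prefix active swap lines with '#'; already-commented lines are untouched.
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/' "$FSTAB"
grep swap "$FSTAB"
```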
 

Add the kernel parameter file /etc/sysctl.d/k8s.conf:

cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Run the following to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

Synchronize the time

ntpdate -u ntp.api.bz

Optionally, upgrade the kernel to the latest version (an offline kernel install package has been prepared; see "CentOS 7 kernel upgrade").

All machines need the following system parameters set in /etc/sysctl.d/k8s.conf:

# https://github.com/moby/moby/issues/31208
# ipvsadm -l --timeout
# Fixes long-lived connection timeouts in IPVS mode; any value below 900 works
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches=89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
net.bridge.bridge-nf-call-arptables = 1
vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
EOF
sysctl --system
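Before running `sysctl --system`, it can be worth linting the drop-in for malformed lines, since a typo silently skips a setting. The sketch below checks a temporary file with an abbreviated key list; the real file is /etc/sysctl.d/k8s.conf.

```shell
#!/bin/sh
# Sketch: sanity-check that every non-comment line in a sysctl drop-in is a
# well-formed "key = value" pair. Temp file and key list are illustrative.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF

bad=0
while IFS= read -r line; do
  case "$line" in
    ''|'#'*) continue ;;   # skip blank lines and comments
  esac
  # Require key=value, with optional whitespace around '='
  echo "$line" | grep -Eq '^[a-z0-9._-]+[[:space:]]*=[[:space:]]*[^[:space:]]+' || bad=$((bad+1))
done < "$CONF"
echo "malformed lines: $bad"
```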

Enable services at boot

# Start docker (and force the iptables FORWARD policy to ACCEPT)
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# Enable kubelet at boot
systemctl enable kubelet

# Only needed for an HA control plane that uses keepalived/haproxy:
systemctl enable keepalived
systemctl enable haproxy

Set up passwordless SSH login

# 1. Press Enter three times and the key pair is generated
ssh-keygen
# 2. Copy the key to the other nodes
ssh-copy-id -i ~/.ssh/id_rsa.pub  username@192.168.x.xxx
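With several nodes, the copy step can be looped. The sketch below only prints the command for each worker (the IPs are the node addresses from /etc/hosts above); drop the `echo` inside the function to actually run them.

```shell
#!/bin/sh
# Sketch: print the ssh-copy-id command for each worker node in NODES.
# Echo-only so it can be reviewed first; remove "echo" to execute.
NODES="192.168.1.135 192.168.1.136"

gen_copy_cmds() {
  for host in $NODES; do
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host"
  done
}
gen_copy_cmds
```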

1.2 Install Docker (all nodes)

yum install -y yum-utils device-mapper-persistent-data lvm2
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker.repo

List the installable versions:

[root@k8s-master ~]# yum list docker-ce.x86_64  --showduplicates |sort -r
# yum makecache fast
# yum install docker-ce -y
[root@k8s-node1 ~]# docker -v
Docker version 18.06.1-ce, build e68fc7a

[root@k8s-node1 ~]# sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
[root@k8s-node1 ~]# systemctl daemon-reload ;systemctl start docker ;systemctl enable docker

2. Deploying Kubernetes with kubeadm

2.1 Install kubelet and kubeadm

Create the Kubernetes yum repository file /etc/yum.repos.d/kubernetes.repo with the following content (the latest version is installed by default; at the time of writing that was 1.14.2):

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
yum makecache fast
yum install -y kubelet kubeadm kubectl
Or pin a specific version: yum -y install kubelet-1.14.2-0.x86_64 kubeadm-1.14.2-0.x86_64 kubectl-1.14.2-0.x86_64

  

Disable swap:

[root@k8s-node1 yum.repos.d]# swapoff  -a
[root@k8s-node1 yum.repos.d]# sysctl -p /etc/sysctl.d/k8s.conf 

Comment out the swap entry in /etc/fstab, then run:

mount -a

echo "KUBELET_EXTRA_ARGS=--fail-swap-on=false" > /etc/sysconfig/kubelet 
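The `--fail-swap-on=false` flag stops kubelet from refusing to start when swap is still enabled. As a sanity check, the sketch below writes the same drop-in line to a temporary file and reads it back; on a real node the target path is /etc/sysconfig/kubelet.

```shell
#!/bin/sh
# Sketch: write the kubelet sysconfig drop-in to a temp file for illustration.
# On a real node, replace "$SYSCONF" with /etc/sysconfig/kubelet.
SYSCONF=$(mktemp)
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' > "$SYSCONF"
cat "$SYSCONF"
```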

 

2.2 Initialize the Cluster with kubeadm init

Enable the kubelet service at boot on all nodes:

systemctl enable kubelet.service

Initializing the cluster with kubeadm will fail at first with the error below. During initialization, kubeadm looks for the Kubernetes component images locally and, if they are missing, pulls them from Google's registry (k8s.gcr.io), which is unreachable without a proxy in mainland China. Without one, the images must already exist locally.

We can instead pull the component images from a Docker Hub mirror and re-tag them.

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.0.0.11 --ignore-preflight-errors=Swap

Run the following script:

[root@k8s-master ~]# cat k8s.sh

docker pull mirrorgooglecontainers/kube-apiserver:v1.14.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.2
docker pull mirrorgooglecontainers/kube-proxy:v1.14.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

docker tag mirrorgooglecontainers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

docker rmi mirrorgooglecontainers/kube-proxy:v1.14.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.2
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1
docker rmi mirrorgooglecontainers/pause:3.1

 

bash k8s.sh
[root@k8s-master ~]# docker images
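The repetitive pull/tag/rmi sequence in k8s.sh can also be written as a loop over image:tag pairs. This sketch only prints the commands so they can be reviewed first (remove the `echo`s to execute); the image list matches the script above, with coredns handled separately since it comes from its own repository.

```shell
#!/bin/sh
# Sketch: generate the mirror/pull/tag/cleanup commands for the k8s.gcr.io
# images, using the mirrorgooglecontainers mirror. Echo-only.
IMAGES="kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 \
kube-scheduler:v1.14.2 kube-proxy:v1.14.2 pause:3.1 etcd:3.3.10"

mirror_cmds() {
  for img in $IMAGES; do
    echo "docker pull mirrorgooglecontainers/$img"
    echo "docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img"
    echo "docker rmi mirrorgooglecontainers/$img"
  done
  # coredns lives under coredns/, not mirrorgooglecontainers/
  echo "docker pull coredns/coredns:1.3.1"
  echo "docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1"
  echo "docker rmi coredns/coredns:1.3.1"
}
mirror_cmds
```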

 

The detailed steps are as follows.

Check the kubeadm version:

[root@k8s-master yum.repos.d]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2019-05-30T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

 

Run the initialization again on the master node:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.0.0.11 --ignore-preflight-errors=Swap

The output is as follows:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.0.0.11 --ignore-preflight-errors=Swap

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.153:6443 --token il8ser.l9jyab9xa6m2t971 \
--discovery-token-ca-cert-hash sha256:492ec57cb9723ae8a71c2b9668bb7a86f0333c530ea5db10540882d6a6463efc

Follow the instructions in the output above:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

mkdir -p ~/k8s/ && cd ~/k8s

 

[root@k8s-master k8s]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Edit kube-flannel.yml and change --iface to your own NIC:
# vim kube-flannel.yml
#     args:
#     - --ip-masq
#     - --kube-subnet-mgr
#     - --iface=eth0
# kubectl apply -f kube-flannel.yml

Check the cluster status

If cluster initialization runs into problems, clean up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

2.3 Install the Pod Network

# The master is NotReady at this point because no network plugin is installed yet
[root@k8s-master k8s]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   27m   v1.12.1

Next, install the flannel network add-on:

 

Check component health:

[root@k8s-master k8s]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

[root@k8s-master k8s]# kubectl describe node k8s-master
Name:               k8s-master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=k8s-master
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 17 Oct 2018 21:24:01 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false


The output above shows that the flannel image is pulled first; once every pod in the kube-system namespace is Running, the master becomes Ready.

 

2.4 Letting the Master Carry Workloads

For safety, Pods are not scheduled onto the master node; in other words, the master does not carry workloads. This is because the master carries the node-role.kubernetes.io/master:NoSchedule taint:

[root@k8s-master k8s]# kubectl describe node k8s-master | grep Taint     
Taints:             node-role.kubernetes.io/master:NoSchedule

# To revert later so the master no longer carries workloads:

kubectl taint node k8s-master node-role.kubernetes.io/master="":NoSchedule


Remove the taint so k8s-master can carry workloads:

[root@k8s-master k8s]# kubectl taint nodes k8s-master node-role.kubernetes.io/master-
node/k8s-master untainted

[root@k8s-master k8s]# kubectl describe node k8s-master | grep Taint 
Taints: <none>

2.5 Test DNS

kubectl run curl --image=radial/busyboxplus:curl -it
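Once inside the pod's shell, cluster DNS can be verified; given the --service-cidr above (10.96.0.0/12), the `kubernetes` service should resolve to 10.96.0.1. The sketch below is echo-only (a live cluster is required to actually run these); type them at the pod's prompt.

```shell
#!/bin/sh
# Sketch: DNS checks to run inside the busyboxplus pod. Echo-only here,
# since they need a running cluster; run each at the pod's prompt.
dns_checks() {
  echo "nslookup kubernetes.default"
  echo "nslookup kube-dns.kube-system"
  echo "curl -k https://kubernetes.default.svc.cluster.local"
}
dns_checks
```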


2.6 Add Worker Nodes to the Cluster

Next we add the node1 and node2 hosts to the Kubernetes cluster. On node1 and node2, run:

 

[root@k8s-node1 ~]# kubeadm join 10.0.0.11:6443 --token i4us8x.pw2f3botcnipng8e --discovery-token-ca-cert-hash sha256:d16ac747c2312ae829aa29a3596f733f920ca3d372d9f1b34d33c938be067e51
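If the token has expired (the default TTL is 24 hours) or the join command was lost, it can be regenerated on the master with `kubeadm token create --print-join-command`. As a small illustration, the token and CA-cert hash can also be parsed back out of a saved join line (the values below are the ones used above):

```shell
#!/bin/sh
# Sketch: extract the token and discovery hash from a saved join command.
JOIN='kubeadm join 10.0.0.11:6443 --token i4us8x.pw2f3botcnipng8e --discovery-token-ca-cert-hash sha256:d16ac747c2312ae829aa29a3596f733f920ca3d372d9f1b34d33c938be067e51'

token=$(echo "$JOIN" | sed 's/.*--token \([^ ]*\).*/\1/')
hash=$(echo "$JOIN" | sed 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/')
echo "token=$token"
echo "hash=$hash"
```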

Check the nodes:

The cause is that k8s-node1 also needs to pull the images, so run the image-pull script above on it. Then run kubeadm reset on both nodes and redo the join.

 

To remove the node1 node from the cluster, run on the master:

kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1

 

 

