Deploying a Single Kubernetes 1.15.1 Cluster with kubeadm and the Flannel Network


Overview

This lab runs in VMware on Windows.
The system configuration and initialization steps are performed on all hosts.
All container images are replaced with mirrors pullable from within China.
The pod network uses flannel.

Lab environment

| Hostname | IP address | Role | OS | CPU/MEM | NIC/mode | Platform |
| --- | --- | --- | --- | --- | --- | --- |
| k8s-master01 | 192.168.181.158 | master | CentOS 7.6 | 2C/2G | x1/NAT | VMware |
| k8s-node01 | 192.168.181.159 | node | CentOS 7.6 | 2C/2G | x1/NAT | VMware |
| k8s-node02 | 192.168.181.160 | node | CentOS 7.6 | 2C/2G | x1/NAT | VMware |

Initial configuration

These basic steps are required on all three hosts.

Configure the history format

```
cat >> /etc/bashrc << "EOF"
# history actions record, include action time, user, login ip
HISTFILESIZE=4000
HISTSIZE=4000
USER_IP=`who -u am i 2>/dev/null | awk '{print $NF}' | sed -e 's/[()]//g'`
if [ -z $USER_IP ]
then
  USER_IP=`hostname`
fi
HISTTIMEFORMAT="%F %T $USER_IP:`whoami` "
export HISTTIMEFORMAT
EOF
```

Install common utilities

```
yum install -y net-tools iproute lrzsz vim bash-completion wget tree bridge-utils unzip bind-utils git gcc
```

Set the hostnames

```
hostnamectl set-hostname k8s-master01   # on the master
hostnamectl set-hostname k8s-node01     # on node01
hostnamectl set-hostname k8s-node02     # on node02
```

Configure static IPs

Configure a static IP on each host. While experimenting with the calico network solution earlier, the host IP changed after configuration, so static addressing is used here.

```
cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="41e83853-95e3-4b09-861b-e36dd3ead61b"
DEVICE="ens33"
ONBOOT="yes"
# Set according to each host's IP
IPADDR="192.168.181.158"
PREFIX="24"
GATEWAY="192.168.181.2"
DNS1="202.96.128.166"
IPV6_PRIVACY="no"
```

Restart networking

```
systemctl restart network
```

Edit /etc/hosts

```
cat >> /etc/hosts << EOF
192.168.181.158 k8s-master01
192.168.181.159 k8s-node01
192.168.181.160 k8s-node02
EOF
```

Disable SELinux

```
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
```

Time synchronization

```
# Install the chrony service; CentOS 7.6 ships it by default, install it if missing
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
```

Disable the firewall

```
systemctl stop firewalld
systemctl disable firewalld
```

Disable swap

```
# Comment out the swap entry in /etc/fstab (line 11 here; adjust for your file) and turn swap off
sed -i '11s/\/dev/# \/dev/g' /etc/fstab
swapoff -a
```
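
The hard-coded line number above is fragile if your fstab differs. A line-number-independent alternative (a sketch; shown against a copy so the real fstab is untouched) comments out any uncommented entry whose mount type field is swap:

```shell
# Copy fstab and comment out every uncommented swap entry in the copy.
cp /etc/fstab /tmp/fstab.test
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /tmp/fstab.test
```

Review /tmp/fstab.test, then apply the same sed to /etc/fstab if it looks right.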

Configure yum repositories

```
mkdir /etc/yum.repos.d/ori
mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/ori/
cat > /etc/yum.repos.d/CentOS-Base.repo << "EOF"
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF
```

Install and configure the EPEL repository

```
yum install -y epel-release
cat > /etc/yum.repos.d/epel.repo <<"EOF"
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
EOF
yum clean all
yum makecache
```

Upgrade the kernel

Check the current release and kernel:

```
[root@k8s-master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@k8s-master01 ~]# uname -r
3.10.0-957.el7.x86_64
```

Enable the ELRepo repository:

```
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
```

List the available kernel packages:

```
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
```

Install the latest mainline kernel:

```
yum --enablerepo=elrepo-kernel install -y kernel-ml kernel-ml-devel kernel-ml-headers
```

List the installed kernels:

```
[root@k8s-master01 ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (5.1.14-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-8d615a05e5de49a08ca0e56b285958f7) 7 (Core)
```

Set the default boot kernel, i.e. entry 0 above:

```
grub2-set-default 0
sed -i 's/saved/0/g' /etc/default/grub
```

Disable NUMA:

```
sed -i 's/quiet/quiet numa=off/g' /etc/default/grub
```

Regenerate the grub2 configuration and reboot:

```
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
```

Configure the IPVS kernel modules

By default, kube-proxy runs in iptables mode in a kubeadm-deployed cluster.

Note that kernels 4.19 and later removed the nf_conntrack_ipv4 module; upstream Kubernetes recommends loading nf_conntrack instead, otherwise module loading fails with an error that nf_conntrack_ipv4 cannot be found.

```
yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
```
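
If you skip the kernel upgrade and stay on an older kernel, the module name differs per the note above. A small helper sketch (not part of the original procedure) that picks the right conntrack module name for the running kernel:

```shell
# Kernels >= 4.19 dropped nf_conntrack_ipv4 in favor of nf_conntrack.
pick_conntrack_module() {
  local major=${1%%.*}
  local rest=${1#*.}
  local minor=${rest%%.*}
  if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
    echo nf_conntrack
  else
    echo nf_conntrack_ipv4
  fi
}
pick_conntrack_module "$(uname -r)"
```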

Configure kernel parameters

```
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```

Raise the open-file limit

```
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
```

Install docker

```
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
```

Adjust the docker configuration and enable a registry mirror

```
[ ! -d /etc/docker ] && mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
# Start docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
```
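
An optional sanity check (a sketch, not part of the original procedure): validate daemon.json before restarting docker, since a malformed file stops dockerd from starting. This uses python3's stdlib JSON parser; on a stock CentOS 7 host substitute `python` (python2) or any other JSON validator:

```shell
check_daemon_json() {
  # Print whether the given file parses as JSON.
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid JSON"
  else
    echo "INVALID JSON"
  fi
}
if [ -f /etc/docker/daemon.json ]; then
  check_daemon_json /etc/docker/daemon.json
fi
```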

At this point it is a good idea to take a VM snapshot to save the state.

Install kubelet, kubeadm, and kubectl

kubelet runs on every node in the cluster and is responsible for starting Pods and containers.
kubeadm is used to initialize the cluster.
kubectl is the Kubernetes command-line tool; with it you can deploy and manage applications, inspect resources, and create, delete, and update components.

```
# Add the Aliyun yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# The latest version is installed by default, 1.15.1 at the time of writing
yum install -y kubeadm kubelet kubectl
systemctl enable kubelet && systemctl start kubelet
```

Enable kubectl auto-completion

```
# Install and configure bash-completion
yum install -y bash-completion
echo 'source /usr/share/bash-completion/bash_completion' >> /etc/profile
source /etc/profile
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
```

Take another snapshot here; it makes the later flannel network testing easier to redo.

Initialize the master

`kubeadm config print init-defaults` prints the default configuration used for cluster initialization.

Here the command-line approach is used for initialization. Note that the default image registry is hosted overseas and is unreachable from within China, so the Aliyun mirror registry is specified instead.

Note that the network solution used here is flannel, so the pod CIDR must match flannel's default (10.244.0.0/16).

```
# --kubernetes-version must match the kubelet/kubectl versions installed earlier
[root@k8s-master01 ~]# kubeadm init --apiserver-advertise-address 192.168.181.158 --kubernetes-version="v1.15.1" --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers | tee kubeadm-init.log
```
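
The same initialization can equivalently be expressed as a kubeadm config file, which is easier to keep in version control (a sketch using the v1beta2 API that kubeadm 1.15 accepts; the field values mirror the flags above):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.181.158
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
```

Saved as kubeadm-config.yaml, it would be applied with `kubeadm init --config kubeadm-config.yaml | tee kubeadm-init.log`.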

When initialization completes, the bottom of the output shows the command for joining nodes to the master; copy and run it on the other two nodes to join them.

Configure the kubectl command

Whether on the master or a node, the following configuration is required before kubectl commands will work.
Configuration for root:

```
cat << EOF >> ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source ~/.bashrc
```

Configuration for a regular user:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

After the cluster is fully configured, apply the configuration above on both master and node hosts so kubectl works everywhere. For node hosts, copy /etc/kubernetes/admin.conf from the master to the local machine first.

Check the cluster status

After configuration, check from any host:

```
kubectl get nodes
kubectl get pod -n kube-system
kubectl get cs
```

Since no network plugin is installed yet, coredns stays in the Pending state and the nodes show NotReady.
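
For scripting the wait, a small helper sketch (not from the original article) that succeeds only when every node listed by `kubectl get nodes` reports STATUS "Ready"; it reads the command output on stdin:

```shell
nodes_ready() {
  # Skip the header row; fail if any STATUS column is not "Ready".
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}
# Example against canned output (before the network plugin, nodes are NotReady):
printf 'NAME STATUS ROLES AGE VERSION\nk8s-master01 NotReady master 5m v1.15.1\n' | nodes_ready \
  && echo "all nodes ready" || echo "waiting for nodes"
```

On a live cluster you would pipe in the real output: `kubectl get nodes | nodes_ready`.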

Install the flannel network

Kubernetes supports multiple network solutions; here we start with flannel.

Note that the default flannel manifest pulls images from registries abroad, which fails from within China. Many online articles miss this step, and their flannel deployments fail as a result.

```
# Install flannel on the master
[root@k8s-master ~]# mkdir k8s
wget -P k8s/ https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i 's#quay.io#quay-mirror.qiniu.com#g' k8s/kube-flannel.yml
kubectl apply -f k8s/kube-flannel.yml
```

Join the nodes

Nodes join the master with the command printed by the init output (also saved in kubeadm-init.log):

```
kubeadm join 192.168.181.158:6443 --token l3ofhh.ebsctxgnlub8mwei \
    --discovery-token-ca-cert-hash sha256:c9bbe567f213051ebed76b0ac217f231356a4a6078245b01498f83ce8b9a73c1
```

Remove a node

```
# Run on the node being removed (k8s-node2)
kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node2
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

# Run on k8s-master01
kubectl delete node k8s-node2

# To re-join afterwards, repeat the earlier steps for adding a node and configuring kubectl

# If cluster initialization runs into problems (e.g. CNI issues), clean up k8s-node2
# with the commands below; if that does not resolve it, run the same commands on
# k8s-master01 as well
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1

## Restart kubelet
systemctl restart kubelet
## Restart docker
systemctl restart docker
```

Inspecting cluster information

```
kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods --all-namespaces
# View logs
journalctl --since 12:00:00 -u kubelet
```

Test DNS

```
kubectl run curl --image=radial/busyboxplus:curl -it
# Inside the pod, resolve the default DNS name; this must succeed, otherwise
# later pods will fail to get IPs assigned
nslookup kubernetes.default
```

Enable IPVS mode for kube-proxy

```
kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
kubectl apply -f kube-proxy-configmap.yaml
rm -f kube-proxy-configmap.yaml
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
```

Alternatively, edit the ConfigMap directly: in kube-system/kube-proxy's config.conf, set mode: "ipvs".

```
kubectl edit configmap kube-proxy -n kube-system
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
```

View the IPVS configuration

```
yum install -y ipvsadm
ipvsadm -ln
```

 

Reference:

https://www.cnblogs.com/AutoSmart/p/11260829.html




