For the full series, see: https://www.cnblogs.com/uncleyong/p/16721826.html
Reference: official docs
kubeadm is the official open-source tool for quickly bootstrapping a k8s cluster; for non-ops people learning k8s, installing with kubeadm is comparatively simple.
Creating a cluster with kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Creating a highly available cluster with kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
Prerequisites
One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
2 GiB or more of RAM per machine--any less leaves little room for your apps.
At least 2 CPUs on the machine that you use as a control-plane node.
Full network connectivity among all machines in the cluster. You can use either a public or a private network.
Network planning
VMs (host machine): 192.168.117.x
The VMs' NAT network uses vmnet8
service: 10.96.0.0/12
pod: 172.16.0.0/12
Reference: https://www.cnblogs.com/uncleyong/p/6959568.html
Cluster planning
For learning purposes, a non-HA cluster is enough: 1 master and 2 nodes
master: 192.168.117.171, hostname k8s-master01
node1: 192.168.117.172, hostname k8s-node01
node2: 192.168.117.173, hostname k8s-node02
Note: set the IPs according to your actual VMware network
Below is the IP address range
Create the virtual machines
Create the VMs from the OVF template file (if you need the OVF template, contact me on WeChat)
The OVF template already has the relevant configuration done: https://www.cnblogs.com/uncleyong/p/15471002.html
Create the master
Open the ovf file directly in VMware
Enter the new VM's name and storage path
Adjust memory and CPU
Set it to 4C8G
Check the current IP: ip a
Change the IP: vim /etc/sysconfig/network-scripts/ifcfg-ens33
192.168.117.171
Restart the network: systemctl restart network
Create node1
Set it to 3C8G
Change the IP: vim /etc/sysconfig/network-scripts/ifcfg-ens33
192.168.117.172
Restart the network: systemctl restart network
Create node2
Set it to 3C8G
Change the IP: vim /etc/sysconfig/network-scripts/ifcfg-ens33
192.168.117.173
Restart the network: systemctl restart network
Connect to master, node1, and node2 with Xshell
VM storage directory
Set the hostnames
Run on each of the 3 VMs:
171: hostnamectl set-hostname k8s-master01
172: hostnamectl set-hostname k8s-node01
173: hostnamectl set-hostname k8s-node02
Disconnect and reconnect in Xshell to see the new hostname
Or, without disconnecting, run the command: bash
Confirm:
Of course, you can also do it in one step: hostnamectl set-hostname k8s-master01 && bash
Configure hosts
On all nodes: vim /etc/hosts
192.168.117.171 k8s-master01
192.168.117.172 k8s-node01
192.168.117.173 k8s-node02
Note: keep the existing entries below
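Instead of editing /etc/hosts by hand on each node, the three entries can be appended with a small script. A sketch (the target file is passed as a parameter; on the real nodes it would be /etc/hosts):

```shell
# Sketch: append the cluster's host entries to a hosts file.
# Pass the file path as $1 (on the real nodes this would be /etc/hosts).
append_cluster_hosts() {
  local hosts_file="$1"
  cat >> "$hosts_file" <<'EOF'
192.168.117.171 k8s-master01
192.168.117.172 k8s-node01
192.168.117.173 k8s-node02
EOF
}

# On every node: append_cluster_hosts /etc/hosts
```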
Configure the yum repo needed by k8s
On all nodes:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Delete the lines containing the internal Aliyun mirror hosts:
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Clear the yum cache: yum clean all
Build the yum cache: yum makecache fast
Update packages: yum -y update
Passwordless login from master01 to the other nodes
On master01, run: ssh-keygen -t rsa
Press Enter at every prompt
Send the public key to the other nodes:
for i in k8s-master01 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Verify
ssh k8s-node02 connects to the k8s-node02 node without asking for a password
exit returns to the master01 node
Install the k8s components
The latest version is 1.22
List the patch versions: yum list kubeadm.x86_64 --showduplicates | sort -r | grep 1.22
We install the latest: 1.22.3
Run on all nodes: yum install kubeadm-1.22.3 kubelet-1.22.3 kubectl-1.22.3 -y
On all nodes, configure kubelet to use Aliyun's pause image:
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
EOF
Enable kubelet at boot on all nodes: systemctl enable kubelet
Check that it is enabled at boot: systemctl is-enabled kubelet
Cluster initialization
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
On the master node, create the kubeadm-config.yaml config file (change the IP inside to your own; mine is 192.168.117.171; download the file from the netdisk)
Pull the images on the master node: kubeadm config images pull --config /root/kubeadm-config.yaml
Initialize master01: kubeadm init --config /root/kubeadm-config.yaml --upload-certs
Initialization succeeds and generates the token that the other nodes will use to join:
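The netdisk file itself is not reproduced here. For reference, a minimal kubeadm-config.yaml for v1.22.3 using the service/pod CIDRs planned above might look roughly like this; the advertiseAddress, version, image repository, and CIDRs come from this post, but the exact field layout is my assumption, not the netdisk file:

```shell
# Hedged sketch of /root/kubeadm-config.yaml -- NOT the exact netdisk file.
# v1beta3 is the kubeadm config API version shipped with k8s 1.22.
cat > /root/kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.117.171
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.16.0.0/12
EOF
```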
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.117.171:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:af8e08cec375af6b26a33ae55b5093c1c8c2361eb93dcfd415ce30c72a863f66 \
	--control-plane --certificate-key 3ee5335688b1d714274f826f05ec73443aa926edf03d1aa07e51bb4390ee0dd3

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.117.171:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:af8e08cec375af6b26a33ae55b5093c1c8c2361eb93dcfd415ce30c72a863f66
The generated config and certificate files
ls /etc/kubernetes/
ls /etc/kubernetes/manifests/
ls /etc/kubernetes/pki/
On master01, configure the environment variable for accessing the k8s cluster:
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
Apply the change:
source /root/.bashrc
Node status: kubectl get node
Pod status (all the system components run as containers):
kubectl get po -n kube-system
Because calico has not been installed yet, coredns cannot be scheduled to a node, which is why coredns above is in Pending status
Join the worker nodes to the cluster
Run on the worker nodes:
kubeadm join 192.168.117.171:6443 --token 7t2weq.bjbawausm0jaxury \
	--discovery-token-ca-cert-hash sha256:af8e08cec375af6b26a33ae55b5093c1c8c2361eb93dcfd415ce30c72a863f66
Run it on node01
Run it on node02
Check the cluster status: kubectl get node
Now run: kubectl get po -A -owide
The IPs shown are all host IPs, because those pods share the host's network; coredns does not share the host network, so its IP column shows <none> until the network plugin assigns it an IP
Install calico
https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
Where to get the manifest: https://docs.projectcalico.org/manifests/calico.yaml
curl https://docs.projectcalico.org/manifests/calico.yaml -O
Modify:
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/12"
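In the stock calico.yaml, the CALICO_IPV4POOL_CIDR variable is commented out with a default value of "192.168.0.0/16"; un-commenting it and setting the pod CIDR can be scripted instead of edited by hand. A sketch, assuming the stock manifest's commented-out form:

```shell
# Sketch: uncomment CALICO_IPV4POOL_CIDR in a calico manifest and set the pod CIDR.
# Assumes the stock commented-out lines ('# - name: ...' / '#   value: "192.168.0.0/16"').
set_calico_cidr() {  # usage: set_calico_cidr <file> <cidr>
  local file="$1" cidr="$2"
  sed -i \
    -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
    -e "s|#   value: \"192.168.0.0/16\"|  value: \"$cidr\"|" \
    "$file"
}

# set_calico_cidr calico.yaml 172.16.0.0/12
```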
kubectl apply -f calico.yaml (the calico.yaml file and the images it references can be downloaded from the netdisk)
kubectl get po -n kube-system -owide
Note: one pod below is Pending because the master node has a taint
kubectl get node
Copy front-proxy-ca.crt from master01 to all worker nodes
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt
Install metrics server and dashboard
https://www.cnblogs.com/uncleyong/p/15701535.html
Other configuration
Remove the taint so pods can be scheduled on the master node
In a kubeadm-installed k8s cluster, the master node does not allow pods to be scheduled by default
kubectl describe node |grep NoSchedule -C 5
Remove it: kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
Run kubectl get po -A -owide again: the Pending pod is gone, and you can see it was scheduled onto the master node
Switch kube-proxy to the better-performing ipvs mode
On the master, run: kubectl edit cm kube-proxy -n kube-system
Search for mode and change it to: mode: "ipvs"
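kubectl edit opens an interactive editor; the same change can be made non-interactively, and the running kube-proxy pods have to be recreated for the new mode to take effect. A sketch -- set_proxy_mode is a pure text transform on the ConfigMap YAML (it assumes the default 'mode: ""' line), shown as a function so it can be tested offline:

```shell
# Sketch: switch the kube-proxy mode without an interactive editor.
# Assumes the ConfigMap still has the default line: mode: ""
set_proxy_mode() {  # usage: set_proxy_mode <mode>; reads YAML on stdin
  sed "s/mode: \"\"/mode: \"$1\"/"
}

# On the master:
# kubectl -n kube-system get cm kube-proxy -o yaml | set_proxy_mode ipvs | kubectl apply -f -
# Recreate the kube-proxy pods so they pick up the new mode:
# kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```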
k8s cluster verification
1. Check node status
2. Check container status
3. Verify calico
A pod can ping a pod in the same namespace on the same node
A pod can ping a pod in a different namespace on the same node
A pod can ping a pod in the same namespace on a different node
A pod can ping a pod in a different namespace on a different node
Every node can ping a given pod
A pod can ping the external network
4. Verify kube-proxy
5. The kubernetes and coredns services can be reached with telnet
6. Verify coredns
Resolve a service in a different namespace
Resolve a service in the same namespace
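Checks 5 and 6 above can be scripted. In this sketch, tcp_check replaces telnet with bash's built-in /dev/tcp (so no extra package is needed), and the busybox test pod for DNS resolution is my choice, not from the original post:

```shell
# Sketch of checks 5 and 6: service reachability and DNS resolution.
tcp_check() {  # usage: tcp_check <ip> <port>; returns 0 if the TCP port answers
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# On the cluster (with serviceSubnet 10.96.0.0/12, the kubernetes svc is
# 10.96.0.1 and the coredns svc is 10.96.0.10):
# tcp_check 10.96.0.1 443 && echo "apiserver svc reachable"
# tcp_check 10.96.0.10 53 && echo "coredns svc reachable"
#
# Resolve a svc in the default namespace and one in kube-system from a pod:
# kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
#   nslookup kubernetes.default.svc.cluster.local
# kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
#   nslookup kube-dns.kube-system.svc.cluster.local
```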
Original post: https://www.cnblogs.com/uncleyong/p/15499732.html