Operations on all nodes
For the K8S installation and deployment, this document can be used as a reference: http://m.bubuko.com/infodetail-3144195.html
The following steps must be executed on every machine.
- Disable the firewall on each node:
# systemctl stop firewalld
# systemctl disable firewalld
- Disable SELinux:
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
After the change, /etc/selinux/config should contain:
SELINUX=disabled
- Create the file /etc/sysctl.d/k8s.conf and add the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to make the changes take effect:
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf
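As an optional check (not part of the original steps), the new values can be read back to confirm they are active:
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward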
- Run the following script on all Kubernetes nodes:
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. The lsmod | grep -e ip_vs -e nf_conntrack_ipv4 command verifies that the required kernel modules have been loaded correctly.
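On systemd-based systems (an assumption about the distribution, not part of the original steps), an equivalent way to persist the module list is to drop it into /etc/modules-load.d/, which systemd-modules-load reads at boot:
# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF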
- Next, make sure the ipset package is installed on each node:
# yum -y install ipset
- To make it easier to inspect the ipvs proxy rules, it is also worth installing the management tool ipvsadm:
# yum -y install ipvsadm
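If kube-proxy ends up running in ipvs mode, the proxy rules can later be listed with ipvsadm (with no ipvs rules the table is simply empty), for example:
# ipvsadm -Ln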
Deploying the master node
- Install kubeadm and kubelet:
Configure the kubernetes.repo source. Since the official repository cannot be reached from inside China, the Alibaba Cloud yum mirror is used here:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Check whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, ××× is required (the Aliyun mirror is tested below instead):
# curl https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# yum -y makecache fast
# yum install -y kubelet kubeadm kubectl
Starting with Kubernetes 1.8, swap must be turned off on the system; otherwise, under the default configuration, kubelet will fail to start. Turn swap off as follows:
# swapoff -a
Edit /etc/fstab and comment out the swap auto-mount entry, for example:
# UUID=2d1e946c-f45d-4516-86cf-946bde9bdcd8 swap swap defaults 0 0
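A hypothetical one-liner that comments out any active swap entry automatically (back up /etc/fstab first; the pattern assumes the entry contains a whitespace-delimited "swap" field):
# sed -i '/^[^#].*\sswap\s/s/^/#/' /etc/fstab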
Use free -m to confirm that swap is now off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Apply the change:
# sysctl -p /etc/sysctl.d/k8s.conf
- Initialize the cluster with kubeadm init
Enable the kubelet service to start at boot:
systemctl enable kubelet.service
Configure the master node:
# mkdir working && cd working
Generate the configuration file:
# kubeadm config print init-defaults > kubeadm.yaml
Edit the configuration file:
# vim kubeadm.yaml
# change imageRepository (default: k8s.gcr.io)
imageRepository: registry.aliyuncs.com/google_containers
# set the Kubernetes version to v1.15.0
kubernetesVersion: v1.15.0
# set the master (API server advertise) IP
advertiseAddress: 192.168.1.21
# configure the cluster networks
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
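For reference, a minimal sketch of kubeadm.yaml with the edits above applied might look like the following (the v1beta2 apiVersion is an assumption that matches kubeadm v1.15; advertiseAddress lives in the InitConfiguration section of the generated file):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.21
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
Then initialize the cluster: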
kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
If initialization fails with "[kubelet-check] Initial timeout of 40s passed.", refer to:
https://blog.csdn.net/gs80140/article/details/92798027
Be sure to save the following command; it is needed when adding nodes to the cluster:
kubeadm join 192.168.169.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
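If this command is misplaced or the token expires (bootstrap tokens are valid for 24 hours by default), a fresh join command can be printed on the master at any time:
# kubeadm token create --print-join-command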
The following commands configure kubectl access to the cluster for a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status and confirm that each component is Healthy:
# kubectl get cs
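You can also look at the node object itself; until the pod network add-on is installed in the next step, it is normal for the master to report NotReady:
# kubectl get nodes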
If cluster initialization runs into problems, the following commands can be used to clean up:
# kubeadm reset
# ifconfig cni0 down
# ip link delete cni0
# ifconfig flannel.1 down
# ip link delete flannel.1
# rm -rf /var/lib/cni/
- Install the Pod network
Next, install the flannel network add-on:
# mkdir -p ~/k8s/
# cd ~/k8s
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
kubectl get pod -n kube-system
Test whether the cluster DNS is working:
kubectl run curl --image=radial/busyboxplus:curl -it
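Inside the shell of that pod, a lookup against the cluster DNS should resolve the API service (kubernetes.default is its in-cluster name):
nslookup kubernetes.default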
Deploying the worker nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y --setopt=obsoletes=0 docker-ce
- Install kubeadm and kubelet:
Configure the kubernetes.repo source. Since the official repository cannot be reached from inside China, the Alibaba Cloud yum mirror is used here:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Check whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, ××× is required (the Aliyun mirror is tested below instead):
# curl https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# yum -y makecache fast
# yum install -y kubelet kubeadm kubectl
systemctl start docker
systemctl enable docker
- Join the node to the cluster
kubeadm join 192.168.30.30:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:eac10da3dbc0414542f3a4c0f220264706b693467611e856844229d1b96b9f6d
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
(Without this change the node stays in the NotReady state.)
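After editing the file, restart kubelet so the extra argument is picked up (a reasonable follow-up step, not spelled out in the original):
systemctl restart kubelet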
Deploying k8s with kubeadm
Environment
master01: 192.168.1.110 (at least 2 CPU cores)
node01: 192.168.1.100
Planning
Service network: 10.96.0.0/12
Pod network: 10.244.0.0/16
1. Configure host name resolution in /etc/hosts on each host
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.110 master01
192.168.1.100 node01
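An optional quick check that the new entries resolve from each host:
ping -c 1 master01
ping -c 1 node01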
2. Synchronize time on all hosts
yum install -y ntpdate
ntpdate time.windows.com
14 Mar 16:51:32 ntpdate[46363]: adjust time server 13.65.88.161 offset -0.001108 sec
3. Turn off swap and disable SELinux
swapoff -a
vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
4. Install docker-ce
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
Fix for the "WARNING: bridge-nf-call-iptables is disabled" message that appears after Docker is installed:
vim /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
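Load the bridge netfilter module and apply the settings without rebooting (running modprobe explicitly is an assumption here; the bridge-nf keys only exist once br_netfilter is loaded):
modprobe br_netfilter
sysctl -p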
systemctl enable docker && systemctl start docker
5. Install Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
6. Initialize the cluster
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=10.244.0.0/16
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0
7. Manually deploy flannel
flannel project page: https://github.com/coreos/flannel
for Kubernetes v1.7+
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
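An optional check to watch the flannel pods come up before moving on:
kubectl get pods -n kube-system -o wide | grep flannel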
8. Node configuration
Install docker, kubelet, and kubeadm.
Install docker as in step 4; install kubelet and kubeadm as in step 5.
9. Join the node to the master
kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0
kubectl get nodes    # check node status
NAME STATUS ROLES AGE VERSION
localhost.localdomain NotReady <none> 130m v1.13.4
master01 Ready master 4h47m v1.13.4
node01 Ready <none> 94m v1.13.4
kubectl get cs    # check component status
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
kubectl get ns    # list namespaces
NAME STATUS AGE
default Active 4h41m
kube-public Active 4h41m
kube-system Active 4h41m
kubectl get pods -n kube-system    # check pod status
NAME READY STATUS RESTARTS AGE
coredns-78d4cf999f-bszbk 1/1 Running 0 4h44m
coredns-78d4cf999f-j68hb 1/1 Running 0 4h44m
etcd-master01 1/1 Running 0 4h43m
kube-apiserver-master01 1/1 Running 1 4h43m
kube-controller-manager-master01 1/1 Running 2 4h43m
kube-flannel-ds-amd64-27x59 1/1 Running 1 126m
kube-flannel-ds-amd64-5sxgk 1/1 Running 0 140m
kube-flannel-ds-amd64-xvrbw 1/1 Running 0 91m
kube-proxy-4pbdf 1/1 Running 0 91m
kube-proxy-9fmrl 1/1 Running 0 4h44m
kube-proxy-nwkl9 1/1 Running 0 126m
kube-scheduler-master01 1/1 Running 2 4h43m