Environment
Three hosts: one master and two nodes.
192.168.31.11 k8s-1 (master)
192.168.31.12 k8s-2 (node)
192.168.31.13 k8s-3 (node)
Every host runs CentOS Linux release 7.6.1810 (Core).
Software versions:
docker-ce-selinux-17.03.3.ce-1.el7.noarch
docker-ce-17.03.2.ce-1.el7.centos.x86_64
kubelet-1.11.1-0.x86_64
kubeadm-1.11.1-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.11.1-0.x86_64
1. Basic host setup
Set the hostnames
# 192.168.31.11
hostnamectl set-hostname k8s-1
# 192.168.31.12
hostnamectl set-hostname k8s-2
# 192.168.31.13
hostnamectl set-hostname k8s-3
Add the following entries to the hosts file on every host
192.168.31.11 k8s-1
192.168.31.12 k8s-2
192.168.31.13 k8s-3
Synchronize the system time on every host
# Set the time zone
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum -y install ntpdate
ntpdate cn.pool.ntp.org
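The ntpdate call above only syncs the clock once. A cron entry keeps it in sync afterwards; this is my own sketch, and the 30-minute interval is arbitrary:
# Append a periodic re-sync against cn.pool.ntp.org to root's crontab (interval is arbitrary)
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1') | crontab -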
Disable the firewall and SELinux on every host
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
# Edit the config file to disable SELinux permanently
vi /etc/sysconfig/selinux
SELINUX=disabled
Disable the swap partition (skip this if the hosts were created without a swap partition)
swapoff -a
# Comment out the swap entry so it is not mounted on the next boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
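To confirm swap is really off before continuing, an optional quick check is:
# The swap totals should read 0 and swapon should list nothing
free -m
swapon -s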
2. Pre-installation setup
All of the following steps must be performed on every host.
Configure the docker-ce repository on every host
# Install wget
yum -y install wget
# Download the repo file
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
# Point the repo at the Tsinghua mirror
sed -i 's@download.docker.com@mirrors.tuna.tsinghua.edu.cn/docker-ce@g' /etc/yum.repos.d/docker-ce.repo
Configure the Kubernetes repository on every host
# vi /etc/yum.repos.d/kubernetes.repo and add the following
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
Refresh the yum repositories
yum clean all
yum repolist
Install the docker-ce package on every host (several of the master's services run as Docker containers)
# Upgrade docker-ce-selinux first; the stock version is too old
yum -y install https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm
# Install docker-ce
yum -y install docker-ce-17.03.2.ce
Configure and start the Docker service
# Write the registry mirror to the config file
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# Start the service
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# Allow bridged traffic to be processed by iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
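Note that the two echo settings above do not survive a reboot. One way to make them persistent, assuming the bridge module is loaded, is to drop them into a sysctl file (the file name k8s.conf is my own choice):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system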
3. Install and start the services on the master
Install the required rpm packages
# Install kubelet-1.10.10 first:
# installing kubelet 1.11.1 directly pulls in a kubernetes-cni version that is too new,
# while kubelet 1.10.10 pulls in kubernetes-cni-0.6.0 as a dependency.
# No better workaround has been found so far.
yum -y install kubelet-1.10.10
# Install the required 1.11.1 versions
yum -y install kubeadm-1.11.1 kubelet-1.11.1 kubectl-1.11.1
Configure and enable the kubelet service
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs
# Enable the service at boot
systemctl enable kubelet
Pull the images in advance (kubeadm pulls them during initialization, but they are hosted on Google's registry and cannot be downloaded directly)
docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-scheduler:v1.11.1
docker tag xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker pull xiyangxixia/k8s-controller-manager:v1.11.1
docker tag xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker pull xiyangxixia/k8s-apiserver-amd64:v1.11.1
docker tag xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker pull xiyangxixia/k8s-etcd:3.2.18
docker tag xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull xiyangxixia/k8s-coredns:1.1.3
docker tag xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-s390x
docker tag xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x
docker pull xiyangxixia/k8s-flannel:v0.10.0-ppc64le
docker tag xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64le
docker pull xiyangxixia/k8s-flannel:v0.10.0-arm
docker tag xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
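The pull-and-tag pairs above can also be driven by a small loop instead of being typed out one by one. This is only a sketch that assumes the same xiyangxixia mirror tags; trim or extend the list as needed:
# Pull each mirror image and retag it under the name kubeadm expects
for pair in \
  "xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1" \
  "xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1" \
  "xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1" \
  "xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1" \
  "xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18" \
  "xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3" \
  "xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1" \
  "xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64"; do
  src=${pair%% *}; dst=${pair##* }
  docker pull "$src" && docker tag "$src" "$dst"
done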
Initialize the Kubernetes master node
kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
# --kubernetes-version=v1.11.1      specify the version explicitly, since the Google servers cannot be reached to detect it
# --pod-network-cidr=10.244.0.0/16  the network pods get their IPs from; the default is fine
# --service-cidr=10.96.0.0/12       the service network; it must not overlap with the hosts' physical network
# --ignore-preflight-errors=Swap    ignore the swap preflight error (or use "all" to ignore all of them)
After the initialization succeeds, run the commands shown in the output
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
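To confirm that kubectl can actually reach the API server with the copied config, an optional sanity check is:
# Both commands should answer without errors if the kubeconfig is in place
kubectl cluster-info
kubectl version --short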
After all the services (running in Docker) have started, check their status with the following commands
# Check component health
kubectl get cs
# Check node status (flannel is not deployed yet, so the node shows NotReady)
kubectl get nodes
# List namespaces
kubectl get ns
Deploy the flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Check the downloaded flannel images
docker images | grep flannel
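The flannel DaemonSet pods can take a minute or two to reach Running; one optional way to watch them come up is:
# Watch the kube-system pods; press Ctrl-C once the flannel pods show Running
kubectl get pods -n kube-system -w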
Check the node status again; it should now show Ready
kubectl get nodes
4. Install and start the services on the nodes
The steps are identical on both nodes.
Install the required rpm packages. Make sure the firewall and SELinux are stopped and permanently disabled, and that docker-ce is installed and running.
# Install kubelet-1.10.10 first:
# installing kubelet 1.11.1 directly pulls in a kubernetes-cni version that is too new,
# while kubelet 1.10.10 pulls in kubernetes-cni-0.6.0 as a dependency.
# No better workaround has been found so far.
yum -y install kubelet-1.10.10
# kubectl is not strictly needed on a node
yum -y install kubeadm-1.11.1 kubelet-1.11.1 kubectl-1.11.1
Configure and enable the kubelet service
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs
# Enable the service at boot
systemctl enable kubelet
On the master, create a token
# List all existing tokens
kubeadm token list
# Create a token and note it down
kubeadm token create
[root@k8s-1 .kube]# kubeadm token create
8d5cbr.n84orohakj3o5ppd
# If the --discovery-token-ca-cert-hash value is missing, it can be obtained by running the following command chain on the master
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
[root@k8s-1 .kube]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
> openssl dgst -sha256 -hex | sed 's/^.* //'
febac84e25f527f8ee8770a35165164ea8f930929ae0d648405240b3850f5c53
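As a shortcut, kubeadm can also print the complete join command (token and hash included) in one step; this should work with the kubeadm version used here, but treat it as an optional alternative to the manual steps above:
# Prints a ready-to-use "kubeadm join ..." line
kubeadm token create --print-join-command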
Pull the images in advance (kubeadm pulls them while joining, but they cannot be downloaded directly)
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
Join the node to the cluster using the token created above
kubeadm join 192.168.31.11:6443 --token 8d5cbr.n84orohakj3o5ppd --discovery-token-ca-cert-hash sha256:febac84e25f527f8ee8770a35165164ea8f930929ae0d648405240b3850f5c53 --ignore-preflight-errors=Swap
# 192.168.31.11:6443                     the master's address; the master must have the firewall and SELinux disabled
# --token                                the value printed by kubeadm token create
# --discovery-token-ca-cert-hash sha256: the sha256 value obtained with the openssl command above
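If the join hangs or fails, the kubelet log on the node is the first place to look; this is a generic troubleshooting step rather than part of the procedure:
# Check kubelet state and follow its log on the node
systemctl status kubelet
journalctl -u kubelet -f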
5. Verify that the cluster initialized successfully
Check that the images have been downloaded
[root@k8s-1 .kube]# docker images
REPOSITORY                                  TAG              IMAGE ID       CREATED         SIZE
quay.io/coreos/flannel                      v0.11.0-amd64    ff281650a721   2 months ago    52.5 MB
k8s.gcr.io/kube-proxy-amd64                 v1.11.1          d5c25579d0ff   8 months ago    97.8 MB
xiyangxixia/k8s-proxy-amd64                 v1.11.1          d5c25579d0ff   8 months ago    97.8 MB
k8s.gcr.io/kube-apiserver-amd64             v1.11.1          816332bd9d11   8 months ago    187 MB
xiyangxixia/k8s-apiserver-amd64             v1.11.1          816332bd9d11   8 months ago    187 MB
k8s.gcr.io/kube-controller-manager-amd64    v1.11.1          52096ee87d0e   8 months ago    155 MB
xiyangxixia/k8s-controller-manager          v1.11.1          52096ee87d0e   8 months ago    155 MB
k8s.gcr.io/kube-scheduler-amd64             v1.11.1          272b3a60cd68   8 months ago    56.8 MB
xiyangxixia/k8s-scheduler                   v1.11.1          272b3a60cd68   8 months ago    56.8 MB
xiyangxixia/k8s-coredns                     1.1.3            b3b94275d97c   10 months ago   45.6 MB
k8s.gcr.io/coredns                          1.1.3            b3b94275d97c   10 months ago   45.6 MB
k8s.gcr.io/etcd-amd64                       3.2.18           b8df3b177be2   12 months ago   219 MB
xiyangxixia/k8s-etcd                        3.2.18           b8df3b177be2   12 months ago   219 MB
quay.io/coreos/flannel                      v0.10.0-s390x    463654e4ed2d   14 months ago   47 MB
xiyangxixia/k8s-flannel                     v0.10.0-s390x    463654e4ed2d   14 months ago   47 MB
quay.io/coreos/flannel                      v0.10.0-ppc64l   e2f67d69dd84   14 months ago   53.5 MB
xiyangxixia/k8s-flannel                     v0.10.0-ppc64le  e2f67d69dd84   14 months ago   53.5 MB
xiyangxixia/k8s-flannel                     v0.10.0-arm      c663d02f7966   14 months ago   39.9 MB
quay.io/coreos/flannel                      v0.10.0-arm      c663d02f7966   14 months ago   39.9 MB
quay.io/coreos/flannel                      v0.10.0-amd64    f0fad859c909   14 months ago   44.6 MB
xiyangxixia/k8s-flannel                     v0.10.0-amd64    f0fad859c909   14 months ago   44.6 MB
k8s.gcr.io/pause                            3.1              da86e6ba6ca1   15 months ago   742 kB
xiyangxixia/k8s-pause                       3.1              da86e6ba6ca1   15 months ago   742 kB
Check that all nodes are Ready (run on the master)
[root@k8s-1 .kube]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-1     Ready     master    14m       v1.11.1
k8s-2     Ready     <none>    4m        v1.11.1
k8s-3     Ready     <none>    4m        v1.11.1
Check the pods in the kube-system namespace, including the ones running on the nodes
[root@k8s-1 .kube]# kubectl get pods -n kube-system -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-78fcdf6894-44qf5        1/1       Running   0          14m       10.244.0.2      k8s-1
coredns-78fcdf6894-bxb2m        1/1       Running   0          14m       10.244.0.3      k8s-1
etcd-k8s-1                      1/1       Running   0          13m       192.168.31.11   k8s-1
kube-apiserver-k8s-1            1/1       Running   0          13m       192.168.31.11   k8s-1
kube-controller-manager-k8s-1   1/1       Running   0          14m       192.168.31.11   k8s-1
kube-flannel-ds-amd64-cr8j8     1/1       Running   0          6m        192.168.31.11   k8s-1
kube-flannel-ds-amd64-kxk5w     1/1       Running   0          4m        192.168.31.12   k8s-2
kube-flannel-ds-amd64-pk4zl     1/1       Running   0          4m        192.168.31.13   k8s-3
kube-proxy-mxsrg                1/1       Running   0          4m        192.168.31.12   k8s-2
kube-proxy-tp95q                1/1       Running   0          4m        192.168.31.13   k8s-3
kube-proxy-twpvt                1/1       Running   0          14m       192.168.31.11   k8s-1
kube-scheduler-k8s-1            1/1       Running   0          14m       192.168.31.11   k8s-1
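As an optional final smoke test (my own addition, not part of the original verification), a throwaway deployment confirms that pods schedule onto the worker nodes and receive addresses from the 10.244.0.0/16 pod network:
# On kubectl 1.11, kubectl run creates a Deployment
kubectl run nginx-test --image=nginx --replicas=2 --port=80
# The pods should land on k8s-2/k8s-3 with 10.244.x.x addresses
kubectl get pods -o wide
# Clean up
kubectl delete deployment nginx-test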