Deploying a Kubernetes Cluster on CentOS


1. Preparing the system environment

# 1. Set up the base environment
yum install -y net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl 			#run on every machine to install the basic tools
systemctl stop firewalld && systemctl disable firewalld		#stop and disable the firewall
setenforce 0		#disable SELinux for the current session
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config		#disable SELinux permanently
swapoff -a			#turn off swap
sed -i 's/.*swap.*/#&/' /etc/fstab		#comment out the swap entry so it stays off after reboot

# 2. Set up passwordless SSH login
ssh-keygen -t rsa		#generate a key pair
ssh-copy-id <node IP>		#copy the public key to each node
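With several nodes, the copy step can be driven by a loop. A sketch: the IPs below are placeholders for your real node addresses, and the loop only prints the commands so they can be reviewed before running.

```shell
# Placeholder node IPs; replace with the addresses of your machines.
NODES="192.168.10.13 192.168.10.14 192.168.10.15"

# Print one ssh-copy-id command per node.
gen_copyid_cmds() {
  for ip in $1; do
    echo "ssh-copy-id root@${ip}"
  done
}

gen_copyid_cmds "$NODES"
```

Pipe the output to `sh` (`gen_copyid_cmds "$NODES" | sh`) to execute it; each node prompts for its root password unless the key is already installed.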

# 3. Switch to domestic (China) yum mirrors
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.$(date +%Y%m%d)
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
#Docker repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

#Configure a domestic Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache -y

#Some people write the Kubernetes repo like this instead (note that gpgcheck=0 skips package signature verification)
[root@localhost ~]#  cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# 4. Configure kernel parameters so bridged IPv4 traffic is passed to iptables chains
modprobe br_netfilter	#load the br_netfilter module

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf	#apply the new settings
sysctl --system						#reload all sysctl configuration files
ls /proc/sys/net/bridge				#verify the bridge parameters are present

# 5. Raise the file descriptor and process limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf

2. Installing Docker

Docker now comes in two editions: Docker CE (Community Edition, free) and Docker EE (Enterprise Edition, commercial). We use the CE edition. Run the following on every machine.

#1. Install the yum utility packages
yum install -y yum-utils device-mapper-persistent-data lvm2
#2. Add the official docker-ce yum repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#3. Disable the docker-ce-edge repo; edge is the development channel and is unstable, so install from the stable channel
yum-config-manager --disable docker-ce-edge
#4. Refresh the local yum cache
yum makecache fast
#5. Install the docker-ce package
yum -y install docker-ce
#6. Edit the ExecStart line in the unit file (/usr/lib/systemd/system/docker.service) as below. Note that exposing the daemon on tcp://0.0.0.0:2375 without TLS is insecure; only do this on a trusted network, or omit the tcp endpoint.
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
#7. Reload systemd, restart Docker, and enable it at boot
systemctl daemon-reload
systemctl restart docker && systemctl enable docker
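Editing the packaged unit file works, but it is overwritten whenever the docker-ce package is upgraded. A sketch of a more durable alternative, a systemd drop-in override (the directory path follows the standard systemd drop-in convention):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
# The empty ExecStart= clears the value inherited from the packaged unit file
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
```

After creating the file, apply it with `systemctl daemon-reload && systemctl restart docker` as above.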

Verify with hello-world:

[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

3. Installing the kubelet and kubeadm packages

Running kubeadm init to initialize the cluster downloads the images kubeadm depends on to every host and sets up etcd, kube-dns, and kube-proxy. Because of the GFW we cannot pull these images directly, so first obtain the images in the list below by other means, import them on every host, and then run kubeadm init to initialize the cluster.

1. Use the DaoCloud registry accelerator (this step can be skipped)

[root@localhost ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io
docker version >= 1.12
{"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}
Success.
You need to restart docker to take effect: sudo systemctl restart docker
[root@localhost ~]# systemctl restart docker

2. Download the images. You can build your own on Docker Hub from a Dockerfile, or clone images someone else has pushed.

Manually download the Kubernetes images from https://hub.docker.com/u/warrior. After downloading, retag each image with a name starting with k8s.gcr.io or gcr.io/google_containers (use the exact names the system prompts for).

#An approach borrowed from someone else (I did not use it; my own method is further below)
images=(kube-controller-manager-amd64 etcd-amd64 k8s-dns-sidecar-amd64 kube-proxy-amd64 kube-apiserver-amd64 kube-scheduler-amd64 pause-amd64 k8s-dns-dnsmasq-nanny-amd64 k8s-dns-kube-dns-amd64)
for imageName in ${images[@]} ; do
 docker pull champly/$imageName
 docker tag champly/$imageName gcr.io/google_containers/$imageName
 docker rmi champly/$imageName
done

# Then retag with version numbers matching your Kubernetes release
docker tag gcr.io/google_containers/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 && \
docker rmi gcr.io/google_containers/etcd-amd64

docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64

docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64

docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2 && \
docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64

docker tag gcr.io/google_containers/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-apiserver-amd64

docker tag gcr.io/google_containers/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-controller-manager-amd64

docker tag gcr.io/google_containers/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.0 && \
docker rmi gcr.io/google_containers/kube-proxy-amd64

docker tag gcr.io/google_containers/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-scheduler-amd64

docker tag gcr.io/google_containers/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 && \
docker rmi gcr.io/google_containers/pause-amd64

#The images above are someone else's and the tags are out of date. Alternatively, download images as shown below; the image versions must match your Kubernetes version. I found two widely downloaded mirror accounts and use their images as the reference (mirrorgooglecontainers, googlecontainer)

export image=pause:3.1
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=kube-apiserver:v1.14.3
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=kube-scheduler:v1.14.3
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=kube-controller-manager:v1.14.3
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=kube-proxy:v1.14.3
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=k8s-dns-kube-dns-amd64:1.15.3
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=k8s-dns-dnsmasq-nanny-amd64:1.15.3
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=k8s-dns-sidecar-amd64:1.15.3
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=etcd:3.3.10
docker pull mirrorgooglecontainers/${image}
docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}
docker rmi mirrorgooglecontainers/${image}
export image=coredns:1.3.1
docker pull coredns/${image}
docker tag coredns/${image} k8s.gcr.io/${image}
docker rmi coredns/${image}
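The repeated export/pull/tag/rmi stanzas above can be collapsed into a loop. A sketch that only prints the commands so they can be reviewed first; pipe the output to `sh` on a machine with Docker to execute them. Image names and versions are taken from the list above; coredns lives under its own Docker Hub account, so it is handled separately.

```shell
# Emit the pull/tag/rmi commands for each image hosted under the
# mirrorgooglecontainers account on Docker Hub.
gen_mirror_cmds() {
  for image in "$@"; do
    echo "docker pull mirrorgooglecontainers/${image}"
    echo "docker tag mirrorgooglecontainers/${image} k8s.gcr.io/${image}"
    echo "docker rmi mirrorgooglecontainers/${image}"
  done
}

gen_mirror_cmds pause:3.1 kube-apiserver:v1.14.3 kube-scheduler:v1.14.3 \
  kube-controller-manager:v1.14.3 kube-proxy:v1.14.3 etcd:3.3.10

# coredns is published under its own account rather than mirrorgooglecontainers.
echo "docker pull coredns/coredns:1.3.1"
echo "docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1"
echo "docker rmi coredns/coredns:1.3.1"
```

Redirecting the output to a file first (e.g. `> pull-images.sh`) makes it easy to review and rerun.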

3. Install kubectl, kubelet, kubeadm, and kubernetes-cni

yum list kubectl kubelet kubeadm kubernetes-cni		#list the installable packages
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.sohu.com
* updates: mirrors.sohu.com
#Available packages:
kubeadm.x86_64                                    1.14.3-0                                              kubernetes
kubectl.x86_64                                    1.14.3-0                                             kubernetes
kubelet.x86_64                                    1.14.3-0                                              kubernetes
kubernetes-cni.x86_64                             0.7.5-0                                              kubernetes
[root@localhost ~]#

#Then install kubectl, kubelet, kubeadm, and kubernetes-cni (to pin versions matching the images pulled above, you can instead install e.g. kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3)
yum install -y kubectl kubelet kubeadm kubernetes-cni

# kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its node.
# kubeadm is the automated Kubernetes deployment tool; it lowers the difficulty of deployment and improves efficiency.
# kubectl is the Kubernetes cluster management CLI.

systemctl enable kubelet && systemctl start kubelet		#start kubelet on every host (it will restart in a loop until kubeadm init/join provides its configuration; that is expected)

4. Initialize the master (run on the master node)

We define the Pod network CIDR as 10.244.0.0/16; the --apiserver-advertise-address parameter is the master's own IP address.

By default kubeadm pulls its images from the official k8s.gcr.io registry, which is not directly reachable from China. You can point it at the Aliyun mirror with --image-repository, but since we have already downloaded the images this parameter can be omitted.

kubeadm reset && kubeadm init --apiserver-advertise-address=192.168.10.13 --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16
#If the images are not prepared, point kubeadm at the Aliyun mirror repository with the command below.
kubeadm init --kubernetes-version=1.14.3 --apiserver-advertise-address=192.168.10.13 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
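Before running kubeadm init offline, it can help to confirm every required image is already local. A sketch; the list mirrors the versions pulled earlier (and `kubeadm config images list` prints the authoritative list for your kubeadm version):

```shell
# Images kubeadm init needs for v1.14.3 (versions assumed from the pulls above).
needed="k8s.gcr.io/kube-apiserver:v1.14.3
k8s.gcr.io/kube-controller-manager:v1.14.3
k8s.gcr.io/kube-scheduler:v1.14.3
k8s.gcr.io/kube-proxy:v1.14.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1"

# Report which of the required images are already present locally.
for img in $needed; do
  if docker image inspect "$img" >/dev/null 2>&1; then
    echo "OK   $img"
  else
    echo "MISS $img"
  fi
done
```

Any MISS line means the image still needs to be pulled and retagged before an offline init will succeed.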

#On success, cluster initialization returns output like the following:
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.14.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 34.002949 seconds
[token] Using token: 0696ed.7cd261f787453bd9
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:
#If there were no problems it returns the join command below (be sure to save it)
kubeadm join 192.168.10.13:6443 --token wdmykh.u84g6ijzu4n99qez \
    --discovery-token-ca-cert-hash sha256:868b5d27c078ddd3ce98bf67bbad4d8568d3cb134763b732e7ad4b47eda196b2 

#Troubleshooting common errors
1. [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
Fix: echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

2. [ERROR Swap]: running with swap on is not supported. Please disable swap
Fix: disable the swap partition with swapoff -a
vim /etc/fstab		#then comment out the line below
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0

3. [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
Fix: delete the directory: rm -rf /var/lib/etcd

4. Error output: error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10251]: Port 10251 is in use
        [ERROR Port-10252]: Port 10252 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Fix: follow the hint and add the parameter --ignore-preflight-errors=all (only safe when re-running init after a previous attempt)

5. This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
Fix: none found yet; checking the kubelet logs with `systemctl status kubelet` and `journalctl -u kubelet` is a reasonable starting point

Be sure to record the join command above; it is needed when adding nodes. If the token later expires (24 hours by default), a fresh join command can be generated on the master with `kubeadm token create --print-join-command`.

# Configure the kubectl client
mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config	#for any other user, copy it into that user's home directory and adjust ownership
kubectl get nodes
kubectl get cs

#Deploy the flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
#Or alternatively (note these manifests are for the much older flannel v0.8.0; prefer the command above on Kubernetes 1.14)
docker pull quay.io/coreos/flannel:v0.8.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

5. Adding worker nodes

#Run on each worker node
kubeadm join 192.168.10.13:6443 --token wdmykh.u84g6ijzu4n99qez --discovery-token-ca-cert-hash sha256:868b5d27c078ddd3ce98bf67bbad4d8568d3cb134763b732e7ad4b47eda196b2
#Output similar to the following:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.10.13:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.13:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.10.13:6443"
[discovery] Successfully established connection with API Server "192.168.10.13:6443"
[bootstrap] Detected server version: v1.14.3
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
 received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

6. Inspecting the cluster

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@master ~]# kubectl get nodes
NAME      STATUS     AGE       VERSION
master    Ready      24m       v1.7.5
node1     NotReady   45s       v1.7.5
node2     NotReady   7s        v1.7.5
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS              RESTARTS   AGE
kube-system   etcd-master                      1/1       Running             0          24m
kube-system   kube-apiserver-master            1/1       Running             0          24m
kube-system   kube-controller-manager-master   1/1       Running             0          24m
kube-system   kube-dns-2425271678-h48rw        0/3       ImagePullBackOff    0          25m
kube-system   kube-flannel-ds-28n3w            1/2       CrashLoopBackOff    13         24m
kube-system   kube-flannel-ds-ndspr            0/2       ContainerCreating   0          41s
kube-system   kube-flannel-ds-zvx9j            0/2       ContainerCreating   0          1m
kube-system   kube-proxy-qxxzr                 0/1       ImagePullBackOff    0          41s
kube-system   kube-proxy-shkmx                 0/1       ImagePullBackOff    0          25m
kube-system   kube-proxy-vtk52                 0/1       ContainerCreating   0          1m
kube-system   kube-scheduler-master            1/1       Running             0          24m
[root@master ~]#

If you see: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix: so that kubectl can reach the apiserver, append the following environment variable to ~/.bash_profile, then source the file to reinitialize kubectl:
export KUBECONFIG=/etc/kubernetes/admin.conf
source ~/.bash_profile

