[Original] Deploying Kubernetes with kubeadm (Part 1)


#######################    Notes  #####################

On the WeChat public account 木子李的菜田,

enter the keyword: k8s

to get the full series of installation documents.

This document is based on notes from a hands-on deployment on two machines. Kubernetes is under constant development,

so there is no guarantee that every step still matches the current release; if you run into problems during installation, please search (Google) for up-to-date guidance.

This write-up is split into two parts:

[Original] Deploying Kubernetes with kubeadm (Part 1)

[Original] Deploying the Kubernetes dashboard (Part 2)

What do you get by following the steps below?

1. Two hosts: one acting as the master (server), the other as a worker node

2. The dashboard add-on installed on the node and exposed as the Kubernetes dashboard

3. Solutions to the problems encountered along the way

#######################    Main text  #####################
###
OS: CentOS 7.5
Kubernetes: v1.14.1
network model: flannel
###
【Steps to perform on both machines】
 
0. Add both machines to each node's /etc/hosts (example below) and stop the firewall
    
systemctl stop firewalld && systemctl disable firewalld
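The hosts entries themselves are not shown in the original; a minimal sketch follows. The master IP 192.168.0.166 matches the kubeadm init command used later, while the node IP and hostnames here are only assumptions for illustration:
cat >> /etc/hosts <<EOF
192.168.0.166   k8s-master
192.168.0.167   k8s-node1
EOF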
1. Set up passwordless SSH login between the two machines
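A minimal sketch of key-based SSH from the master to the node (run on the master; the node IP is an assumption, and the same can be repeated from the node towards the master):
ssh-keygen -t rsa                  # accept the defaults
ssh-copy-id root@192.168.0.167     # copy the public key to the other machine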
2. Synchronize time between the machines (see https://www.cnblogs.com/horizonli/p/9539436.html)
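The linked article may use a different method; one common option on CentOS 7 (an assumption, not the author's exact steps) is chrony:
yum install -y chrony
systemctl enable chronyd && systemctl start chronyd
chronyc sources        # verify that time sources are reachable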
3. Disable SELinux
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
4. Enable packet forwarding and bridge traffic filtering
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

  

5. Load the br_netfilter kernel module
modprobe br_netfilter
lsmod | grep br_netfilter
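Note that modprobe does not persist across reboots; to make sure br_netfilter is loaded automatically after a restart (a suggested addition, not part of the original steps), you can add:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf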
6. Check that the bridge-utils package is installed (install it if missing)
rpm -qa |grep bridge-utils
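If the package is missing, it can be installed with:
yum install -y bridge-utils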
7. Disable swap
swapoff -a

 

vim /etc/fstab
Comment out the swap entry by adding a # at the start of the line:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
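A non-interactive alternative to the manual edit above (assuming the swap entry has spaces around the word "swap", as in the default CentOS fstab):
sed -i '/ swap / s/^/#/' /etc/fstab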

  

8. (Optional; if skipped, kube-proxy will simply use iptables mode.) Prerequisites for enabling IPVS mode in kube-proxy
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
【Note】The script above creates /etc/sysconfig/modules/ipvs.modules so that the required kernel modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that they are loaded correctly.
You also need the ipset package installed on every node (yum install ipset), and to inspect the IPVS proxy rules it is worth installing the management tool ipvsadm as well (yum install ipvsadm). If these prerequisites are not met,
kube-proxy will fall back to iptables mode even if its configuration enables IPVS.

  

 
9. Install Docker
Since 1.6, Kubernetes uses the CRI (Container Runtime Interface). The default container runtime is still Docker, through the dockershim CRI implementation built into the kubelet.
Add the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast

or use:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

【Note】

When you later run kubeadm init, it may report a warning like this:
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
The supported Docker versions are listed in the Kubernetes changelog,
so install a specific, validated Docker version:
  • kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version.
If an unsuitable Docker version is already installed, remove it first: yum remove docker-ce* -y
List the installable Docker versions:
yum list docker-ce.x86_64  --showduplicates |sort -r
docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable
 
yum install docker-ce-18.06.3.ce-3.el7 -y
 
systemctl start docker && systemctl enable docker  
 
Edit or create /etc/docker/daemon.json to configure a registry mirror:
[root@k8s-master flannel]# cat <<EOF  >/etc/docker/daemon.json
{
"registry-mirrors": ["https://72idtxd8.mirror.aliyuncs.com"]
}
EOF
 
systemctl reset-failed docker.service && systemctl restart docker.service
 
10. Install the Kubernetes packages
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
If you have unrestricted access to Google's package servers, use this repo instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
sudo yum makecache fast
yum install -y kubelet kubeadm kubectl
(If you used the Google repo above, it sets exclude=kube*, so add --disableexcludes=kubernetes to the install command.)
The official documentation describes the three tools as follows:
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.
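The kubeadm init output later warns that the kubelet service is not enabled ("please run 'systemctl enable kubelet.service'"), so it is worth enabling it right after installation:
systemctl enable kubelet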
11. 【Master only】Check the cgroup driver:
docker info | grep -i cgroup
Cgroup Driver: cgroupfs
If it is not cgroupfs, change it with:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet    ---> at this point the restart fails with an error like:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file
"/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
The restart only succeeds after the cluster has been initialized, so kubeadm init has to be run first.
%%%%%  The following is taken from the official documentation for reference


When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/kubeadm-flags.env file during runtime.
If you are using a different CRI, you have to modify the file /etc/default/kubelet with your cgroup-driver value, like so:

KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
This file will be used by kubeadm init and kubeadm join to source extra user defined arguments for the kubelet.
Please mind, that you only have to do that if the cgroup driver of your CRI is not cgroupfs, because that is the default value in the kubelet already.
Restarting the kubelet is required:


systemctl daemon-reload
systemctl restart kubelet

12. 【Perform this step on every node, including the master】

The previous steps installed the Kubernetes packages; now prepare the container images they need. 【Otherwise kubeadm init may fail with errors such as:】

  Failed to pull image "quay.io/coreos/flannel:v0.11.0-amd64", and so on.

To avoid this, handle the images in advance. Use kubeadm config images list to see which base images are required.

They are called base images because kubeadm init needs them; additional images are needed later when installing add-ons,

and those can be dealt with as the corresponding errors appear.

[root@k8s-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6  

Based on the image names listed above (updated here to the v1.14.1 tags used for this install), I wrote a script to pull and retag them:

Create the pull script: vim pull_image.sh
#!/bin/bash
#### base images ####
images=(
    kube-apiserver:v1.14.1
    kube-controller-manager:v1.14.1
    kube-scheduler:v1.14.1
    kube-proxy:v1.14.1
    pause:3.1
    etcd:3.3.10
    coredns:1.3.1
    kubernetes-dashboard-amd64:v1.10.1
)
for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

### add-on image: flannel (network) ###
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

### runtime pause image ###
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
chmod a+x  pull_image.sh
./pull_image.sh
docker images

13. 【Perform this step on the master】

 

 ~]# kubelet --version
Kubernetes v1.14.1  

Initialize the Kubernetes master:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1  > kube_init.log
10.244.0.0/16 is the pod network CIDR required when using flannel (the official docs say this exact value must be used).
【From the official documentation】
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.50.10.10 --kubernetes-version=v1.13.4
192.168.0.166 is the master's host IP address.
v1.14.1 is the Kubernetes version reported above.

Below is my session record, for reference only:

 

[root@rancher ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [rancher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.166]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.505486 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node rancher as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node rancher as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: dije5w.ipijm49d8c9isxie
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.166:6443 --token dije5w.ipijm49d8c9isxie \
    --discovery-token-ca-cert-hash sha256:c1aaaafc79d85141e60a73c43562e6b06cb8d9cdc24cc0a649c8d6e0f24c5f42

  

14. 【Perform the following on the master】

For the complete steps of this part, go to the public account 木子李的菜田 and enter: k8s

To be able to use kubectl normally on the nodes, copy the admin config file from the master to every node.

。。。。。  
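The detailed steps are kept on the public account; a minimal sketch of the usual approach (the node IP and paths here are assumptions, not necessarily the author's exact procedure):
# on the master: copy the admin kubeconfig to the node
scp /etc/kubernetes/admin.conf root@192.168.0.167:/etc/kubernetes/admin.conf
# on the node: point kubectl at the copied config
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes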

【Perform the following on the node】

。。。。。

  The next step is extremely important!!!!

。。。。。。

【Important】If the first kubeadm init failed and the cluster has to be re-initialized with kubeadm, do the following first:

 

。。。。。。
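The exact commands are also kept on the public account; the usual way to wipe a failed init before retrying (a common approach, not necessarily the author's exact steps) is:
kubeadm reset
rm -rf /etc/kubernetes/manifests /var/lib/etcd $HOME/.kube/config   # optional cleanup of leftover state
# then run kubeadm init again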

Check whether Kubernetes is installed correctly:

[root@master ~]# kubectl get pods --namespace=kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-srtht         0/1     Pending   0          3h
coredns-86c58d9df4-tl7ww         0/1     Pending   0          3h
etcd-master                      1/1     Running   0          179m
kube-apiserver-master            1/1     Running   0          179m
kube-controller-manager-master   1/1     Running   0          3h
kube-proxy-2sdmn                 1/1     Running   1          3h
kube-proxy-ln5tk                 1/1     Running   1          173m
kube-scheduler-master            1/1     Running   0          3h

 

Many pods are still not ready (coredns is Pending); don't worry yet, just continue with steps 15 and 16.

15. 【Master only】Configure CNI (this step is closely tied to networking)

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following content:
。。。。。。

  

16. Install flannel

#]kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

  

17. Finally, check on both the master and the node:

The deployment is only truly correct once everything is Running:

[root@master ~]# kubectl get pods --namespace=kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-srtht         1/1     Running   0          3h16m
coredns-86c58d9df4-tl7ww         1/1     Running   0          3h16m
etcd-master                      1/1     Running   1          3h15m
kube-apiserver-master            1/1     Running   1          3h15m
kube-controller-manager-master   1/1     Running   1          3h16m
kube-flannel-ds-amd64-7mntm      1/1     Running   0          12m
kube-flannel-ds-amd64-bxzdn      1/1     Running   0          12m
kube-proxy-2sdmn                 1/1     Running   1          3h16m
kube-proxy-ln5tk                 1/1     Running   1          3h9m
kube-scheduler-master            1/1     Running   1          3h16m

 End

Some of the pitfalls have already been handled by the steps above. If you run into other problems, please search for them,

or use kubectl describe to find the cause, paying particular attention to whether the problem is on the master or on a node.
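For example, using a pod name from the output above:
kubectl describe pod coredns-86c58d9df4-srtht --namespace=kube-system
kubectl describe node master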

 

 

  

  

