Deploying Kubernetes 1.15.0


Deployment environment: CentOS 7.4

master01: 192.168.85.110

node01: 192.168.85.120

node02: 192.168.85.130

 

Add the host entries to /etc/hosts on every node

[root@master01 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.85.110 master01

192.168.85.120 node01

192.168.85.130 node02
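The entries above can be appended idempotently on each node with a small loop (a sketch using this article's IPs and hostnames; run as root):

```shell
# Append each cluster entry to /etc/hosts only if it is not already present.
for entry in "192.168.85.110 master01" "192.168.85.120 node01" "192.168.85.130 node02"; do
    grep -qF "$entry" /etc/hosts || echo "$entry" >> /etc/hosts
done
```

Running the loop a second time makes no changes, so it is safe to include in a node bootstrap script.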

 

Run all of the following steps on every node:

Prepare the Docker yum repository

Prepare the Kubernetes yum repository

 

Configure the Docker yum repository

cd /etc/yum.repos.d/

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

 

Configure the Kubernetes yum repository

Create /etc/yum.repos.d/kubernetes.repo with the following content:

[kubernetes]

name=kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

gpgcheck=0

enabled=1
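The repo file above can also be written in a single non-interactive step with a heredoc (a sketch; the content is identical to what is shown):

```shell
# Create the Kubernetes yum repository file in one step.
mkdir -p /etc/yum.repos.d
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
EOF
```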

 

Install on all nodes

Install the basic utilities

yum install lrzsz wget vim -y

Install Docker with yum

yum -y install docker-ce

 

Edit Docker's systemd environment

If you are behind an HTTP proxy, add it here; otherwise skip this step

vim /usr/lib/systemd/system/docker.service

Environment="NO_PROXY=127.0.0.0/8"

 

Configure a Docker registry mirror (China mainland accelerator)

mkdir -p /etc/docker

vim /etc/docker/daemon.json

{

  "registry-mirrors": ["https://lvb4p7mn.mirror.aliyuncs.com"]

}
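The same file can be written non-interactively instead of with vim (a sketch; the mirror URL is the one used in this article):

```shell
# Write the Docker daemon config with the registry mirror.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://lvb4p7mn.mirror.aliyuncs.com"]
}
EOF
```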

 

Reload the systemd configuration

systemctl daemon-reload

 

Start Docker and enable it at boot

systemctl start docker

systemctl enable docker

 

Deploy with kubeadm

Install kubeadm, kubectl, and kubelet with yum

yum -y install  kubeadm-1.15.0-0.x86_64  kubectl-1.15.0-0.x86_64 kubelet-1.15.0-0.x86_64 kubernetes-cni-0.7.5-0.x86_64

 

If swap is still enabled, tell the kubelet to ignore it; if swap is already off, you can skip this flag

vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"

KUBE_PROXY_MODE=ipvs
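Both kubelet settings can be written in one non-interactive step (a sketch; this overwrites /etc/sysconfig/kubelet with exactly the two lines shown above):

```shell
# Write the kubelet extra arguments and proxy mode in one step.
mkdir -p /etc/sysconfig
cat > /etc/sysconfig/kubelet <<'EOF'
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs
EOF
```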

 

Enable kubelet at boot

systemctl enable kubelet

 

Load the images

Install the kubeadm images

Download the image archive k8s-1.15.0.tar.gz in advance

Link: https://pan.baidu.com/s/1AhDsQHUIMd0CQufGteFSXw  extraction code: vshs

 

Upload it to every node

Load the images on each node

docker load  -i k8s-1.15.0.tar.gz

 

Install the flannel image

Download flannel-v0.11.0.tar.gz in advance

Link: https://pan.baidu.com/s/1QEssOf2yX1taupQT4lTxQg  extraction code: x42r

 

Load the image on each node

docker load  -i flannel-v0.11.0.tar.gz

 

Enable kubectl command auto-completion

yum install bash-completion* -y

## load it into the current shell and make it persistent

source <(kubectl completion bash)

echo "source <(kubectl completion bash)" >> ~/.bashrc

 

Deploy Kubernetes

Master node deployment

Initialize with kubeadm

kubeadm init  --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all

 

After initialization completes,

save the node join command and token printed at the end:

kubeadm join 192.168.85.110:6443 --token fo0kd9.ocdrd0obki28g76i  --discovery-token-ca-cert-hash sha256:9a5b3ec15c16926e667281cda008b0b550ed5404628453929b0c2a551cbb0bfd --ignore-preflight-errors=all

 

Run the three steps from the init output:

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

 

Check the cluster health

[root@master01 ~]# kubectl get cs

NAME                 STATUS    MESSAGE             ERROR

controller-manager   Healthy   ok                 

scheduler            Healthy   ok                 

etcd-0               Healthy   {"health":"true"} 

 

Deploy the flannel network plugin on the master

[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

 

Node deployment

Join each node using the token

kubeadm join 192.168.85.110:6443 --token fo0kd9.ocdrd0obki28g76i  --discovery-token-ca-cert-hash sha256:9a5b3ec15c16926e667281cda008b0b550ed5404628453929b0c2a551cbb0bfd --ignore-preflight-errors=all

By default the token is valid for 24 hours; once it expires it can no longer be used, and nodes joining later need a new token

 

Generate a new token on the master

[root@master01 ~]# kubeadm token create

905hgq.1akgmga715dzooxo

[root@master01 ~]# kubeadm token list

TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS

905hgq.1akgmga715dzooxo   23h       2019-06-23T15:18:24+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

 

Get the SHA-256 hash of the CA certificate's public key

[root@master01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

2db0df25f40a3376e35dc847d575a2a7def59604b8196f031663efccbc8290c2
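The same pipeline can be sanity-checked without a cluster by running it against a throwaway self-signed certificate (a sketch with hypothetical paths under /tmp; on a real master use /etc/kubernetes/pki/ca.crt as shown above):

```shell
# Generate a throwaway cert, then hash its public key the same way as above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/test-ca.key \
    -out /tmp/test-ca.crt -days 1 -subj "/CN=test" 2>/dev/null
openssl x509 -pubkey -in /tmp/test-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //' > /tmp/test-ca.hash
cat /tmp/test-ca.hash   # a 64-character hex string
```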

 

Join the cluster with the new token

kubeadm join 192.168.85.110:6443 --token 905hgq.1akgmga715dzooxo \

   --discovery-token-ca-cert-hash sha256:2db0df25f40a3376e35dc847d575a2a7def59604b8196f031663efccbc8290c2 \

--ignore-preflight-errors=all

 

Finally, check that every node is Ready

[root@master01 ~]# kubectl get node

NAME       STATUS   ROLES    AGE    VERSION

master01   Ready    master   3m1s   v1.15.0

node01     Ready    <none>   72s    v1.15.0

node02     Ready    <none>   54s    v1.15.0

 

Enable IPVS

Load the IPVS kernel modules

On kernels 4.19 and later the conntrack module is nf_conntrack; before 4.19 it is nf_conntrack_ipv4; everything else is the same

[root@master01 ~]# uname -r

5.2.2-1.el7.elrepo.x86_64

[root@master01 ~]# cat /etc/sysconfig/modules/ipvs.modules

#!/bin/bash

module=(ip_vs

        ip_vs_rr

        ip_vs_wrr

        ip_vs_sh

        ip_vs_lc

        br_netfilter

        nf_conntrack)

for kernel_module in ${module[@]};do

    /sbin/modinfo -F filename $kernel_module |& grep -qv ERROR && echo $kernel_module >> /etc/modules-load.d/ipvs.conf || :

done

ipvs_modules_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"

for i in `ls $ipvs_modules_dir | sed  -r 's#(.*).ko#\1#'`; do

    /sbin/modinfo -F filename $i  &> /dev/null

    if [ $? -eq 0 ]; then

        /sbin/modprobe $i

    fi

done 

 

[root@master01 ~]# lsmod | grep ip_vs

ip_vs_wlc              16384  0

ip_vs_sed              16384  0

ip_vs_pe_sip           16384  0

nf_conntrack_sip       32768  1 ip_vs_pe_sip

ip_vs_ovf              16384  0

ip_vs_nq               16384  0

ip_vs_mh               16384  0

ip_vs_lblcr            16384  0

ip_vs_lblc             16384  0

ip_vs_ftp              16384  0

nf_nat                 40960  4 ip6table_nat,iptable_nat,xt_MASQUERADE,ip_vs_ftp

ip_vs_fo               16384  0

ip_vs_dh               16384  0

ip_vs_lc               16384  0

ip_vs_sh               16384  0

ip_vs_wrr              16384  0

ip_vs_rr               16384  4

ip_vs                 151552  35 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_ovf,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_pe_sip,ip_vs_wrr,ip_vs_lc,ip_vs_mh,ip_vs_sed,ip_vs_ftp

nf_conntrack          139264  6 xt_conntrack,nf_nat,nf_conntrack_sip,nf_conntrack_netlink,xt_MASQUERADE,ip_vs

nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs

libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

 

[root@master01 ~]# kubectl -n kube-system edit configmaps kube-proxy

kind: KubeProxyConfiguration

    metricsBindAddress: 127.0.0.1:10249

    mode: "ipvs"   ### just set mode to "ipvs"

Install ipvsadm

 

yum install ipvsadm ipset sysstat conntrack libseccomp conntrack-tools  socat  -y

 

Delete the existing kube-proxy pods so they are recreated with the new configuration

 

[root@master01 ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy

 [root@master01 ~]# ipvsadm -Ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  10.96.0.1:443 rr

  -> 192.168.48.101:6443          Masq    1      2          0        

TCP  10.96.0.10:53 rr

  -> 10.244.0.3:53                Masq    1      0          0        

  -> 10.244.1.3:53                Masq    1      0          0        

TCP  10.96.0.10:9153 rr

  -> 10.244.0.3:9153              Masq    1      0          0        

  -> 10.244.1.3:9153              Masq    1      0          0        

UDP  10.96.0.10:53 rr

  -> 10.244.0.3:53                Masq    1      0          0        

  -> 10.244.1.3:53                Masq    1      0          0        

Source: https://blog.csdn.net/tangwei0928/article/details/93377100

