Kubernetes Part 7: Deploying K8S with Ansible


一、Basic Cluster Environment Setup

Architecture diagram: (figure omitted)

Server inventory: (table omitted)

1、Install a minimal Ubuntu system

1、Modify the kernel boot parameters to rename the NIC from ens33 to eth0:

root@ubuntu:~# vim /etc/default/grub
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
root@ubuntu:~# update-grub

2、Change the system's IP address:

root@node4:~# vim /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.7.110/24]
      gateway4: 192.168.7.2
      nameservers:
        addresses: [192.168.7.2]

3、Apply the IP configuration and test:

root@node4:~# netplan  apply
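
A quick sanity check after applying, assuming the eth0 configuration above:

root@node4:~# ip addr show eth0    # 192.168.7.110/24 should be listed
root@node4:~# ping -c 2 192.168.7.2    # the gateway should answer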

4、Point the apt sources at a local mirror in /etc/apt/sources.list. For Ubuntu 18.04 (bionic), see the Aliyun mirror page: https://developer.aliyun.com/mirror/ubuntu?spm=a2c6h.13651102.0.0.53322f701347Pq

deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse

5、Install commonly used tools

# apt-get update   # refresh the package index
# apt-get purge ufw lxd lxd-client lxcfs lxc-common   # remove packages we don't need
# apt-get install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common \
lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server \
iotop unzip zip

6、Reboot the system.

# reboot

7、To allow root logins over SSH, first switch to root with sudo, then edit /etc/ssh/sshd_config:

root@node4:~# sudo su
root@node4:~# vim /etc/ssh/sshd_config
PermitRootLogin yes  # allow root to log in
UseDNS no    # skip reverse DNS lookups when clients such as Xshell connect

root@node4:~# systemctl restart sshd   # restart sshd; root logins now work

二、Installing and Configuring HAProxy and Keepalived

1、Configure the keepalived service

1、Install HAProxy and keepalived:

root@node5:~# apt-get install haproxy keepalived -y

2、Find a sample keepalived configuration and copy it into the /etc/keepalived directory:

root@node5:~# find / -name keepalived.*
/usr/share/doc/keepalived/samples/keepalived.conf.sample
root@node5:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.sample /etc/keepalived/keepalived.conf # copy the sample into /etc/keepalived/ as keepalived.conf

3、Edit the keepalived.conf configuration file:

root@node5:/etc/keepalived# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    interface eth0
    virtual_router_id 50
    nopreempt
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.7.248 dev eth0 label eth0:0  # add 192.168.7.248 as the VIP
    }
}

4、Restart keepalived and enable it at boot:

root@node5:/etc/keepalived# systemctl restart keepalived
root@node5:/etc/keepalived# systemctl enable keepalived

5、Check that the VIP has been created, as shown below.
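
A minimal check, assuming the VIP was added with the eth0:0 label as configured above (keepalived binds it as a /32 by default):

root@node5:~# ip addr show eth0 | grep 192.168.7.248
    inet 192.168.7.248/32 scope global eth0:0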


2、Configure the haproxy service

1、Edit the haproxy configuration to listen on the VIP and forward to the master's 6443 port:

# vim /etc/haproxy/haproxy.cfg

listen k8s-api-server-6443
        bind 192.168.7.248:6443
        mode tcp
        server 192.168.7.110 192.168.7.110:6443 check inter 2000 fall 3 rise 5

2、Restart haproxy and enable it at boot:

root@node5:~# systemctl restart haproxy
root@node5:~# systemctl  enable haproxy

3、Check the listening ports, as shown below.
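
One way to check (ss ships with the iproute2 package installed earlier):

root@node5:~# ss -tnl | grep 6443    # haproxy should be listening on 192.168.7.248:6443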


三、Configure the Docker Repository and Install docker-ce (required on the master, etcd, harbor, and node hosts)

1、Official Aliyun guide: https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.53322f70mkWLiO

# step 1: install prerequisite system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

# To install a specific Docker CE version:
# Step 1: list the available versions:
# apt-cache madison docker-ce
#   docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
#   docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# Step 2: install the chosen version (VERSION as shown above, e.g. 17.03.1~ce-0~ubuntu-xenial)
# sudo apt-get -y install docker-ce=[VERSION]

2、Install the docker-compose package:

root@node3:~# apt-get install docker-compose -y

3、Start the Docker service and enable it at boot:

# systemctl start docker
# systemctl enable docker

四、Configuring the Harbor Registry

1、Download Harbor from the official site, upload it to /usr/local/src on the server, extract it, and configure it:

root@node3:/usr/local/src# tar xvf harbor-offline-installer-v1.7.5.tgz
root@node3:/usr/local/src# cd harbor

root@node3:/usr/local/src/harbor# vim harbor.cfg
hostname = harbor.struggle.net  # local registry hostname
ui_url_protocol = https  # log in over HTTPS
harbor_admin_password = 123456  # admin login password
ssl_cert = /usr/local/src/harbor/certs/harbor-ca.crt  # path to the certificate
ssl_cert_key = /usr/local/src/harbor/certs/harbor-ca.key  # path to the private key

Edit the hosts file so the hostname resolves to the server's IP:

# vim /etc/hosts
192.168.7.112  harbor.struggle.net

Change the hostname to harbor.struggle.net:

# hostnamectl set-hostname harbor.struggle.net

2、Create the TLS key and certificate

root@node3:~# touch .rnd  # openssl needs this seed file in root's home directory before it can issue certificates
root@node3:/usr/local/src# cd harbor/
root@node3:/usr/local/src/harbor# mkdir certs

# openssl genrsa -out /usr/local/src/harbor/certs/harbor-ca.key 2048  # generate the private key
# cd /usr/local/src/harbor/certs
# openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=harbor.struggle.net" -days 7120 \
-out harbor-ca.crt  # issue a self-signed certificate
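
To sanity-check the generated certificate, a standard openssl inspection works; output will be along these lines:

# openssl x509 -in /usr/local/src/harbor/certs/harbor-ca.crt -noout -subject -dates
subject=CN = harbor.struggle.net
notBefore=...
notAfter=...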

3、Install the Harbor service:

# cd /usr/local/src/harbor/
# ./install.sh

4、After the Harbor installation completes, add a hosts entry on the local Windows machine so that harbor.struggle.net resolves in the browser (example below).
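
On a standard Windows install the file is C:\Windows\System32\drivers\etc\hosts; the entry mirrors the Linux one:

192.168.7.112  harbor.struggle.net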


五、Install docker-ce on Both master Hosts

1、Configure the docker-ce apt repository and install docker-ce; see the Aliyun guide above: https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.53322f70mkWLiO

2、Start the Docker service:

# systemctl start docker
# systemctl enable docker

3、Create a directory to hold the Harbor certificate; the certificate from the Harbor server will be copied into it:

root@node1:~# mkdir /etc/docker/certs.d/harbor.struggle.net -p

4、On the Harbor server, copy the certificate into the directory just created on the master:

root@harbor:/usr/local/src/harbor/certs# scp harbor-ca.crt 192.168.7.110:/etc/docker/certs.d/harbor.struggle.net

5、Restart the Docker service on the master:

root@node1:~# systemctl restart docker

6、Edit /etc/hosts on the master so the Harbor domain resolves:

192.168.7.112  harbor.struggle.net

7、Then log in to Harbor from the master; "Login Succeeded" means it works:

root@node1:/etc/docker/certs.d/harbor.struggle.net# docker login harbor.struggle.net
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

8、Create a project (baseimages) in Harbor, then pull a test image to check that uploads work.


Test uploading an image from the master to Harbor.

Pull a test image on the master:

root@node1:~# docker pull alpine

Tag the image to be uploaded:

root@node1:~# docker tag alpine:latest harbor.struggle.net/baseimages/alpine:latest # tag the image for the local registry

Push the tagged image from the master:

root@node1:~# docker push harbor.struggle.net/baseimages/alpine:latest 

The upload succeeds.
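
As a cross-check, any host that resolves the name and trusts the Harbor certificate should now be able to pull the image back:

# docker pull harbor.struggle.net/baseimages/alpine:latest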


Install dependency tools on every node

Install Python 2.7 on all of the hosts above and create a symlink:

Reference: https://github.com/easzlab/kubeasz/blob/master/docs/setup/00-planning_and_overall_intro.md

# all scripts in this document are run as root
apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y
# apt-get install python2.7 -y
# ln -s /usr/bin/python2.7 /usr/bin/python  # create the symlink

On CentOS 7, run the following instead:

# all scripts in this document are run as root
yum update
# install python
yum install python -y

Clone the kubeasz project on both master nodes (or download it elsewhere and upload it):

Reference: https://github.com/easzlab/kubeasz/tree/0.6.0


root@k8s-master2:~# apt-get install ansible -y  # install ansible on both master nodes
root@k8s-master2:~# apt-get install git -y # install git on both master nodes
root@k8s-master1:~# git clone -b 0.6.1 https://github.com/easzlab/kubeasz.git  # clone the pinned release of the project

六、Prepare the hosts File on the master

Move /etc/ansible/hosts out of the way, then put the project files under /etc/ansible and copy the multi-master example in as the new hosts file:

root@k8s-master1:/etc/ansible# mv /etc/ansible/hosts /opt/  # move ansible's stock hosts file out of the way
root@k8s-master1:/etc/ansible# mv kubeasz/* /etc/ansible/  # put the project files in this directory
root@k8s-master1:/etc/ansible# cd /etc/ansible/
root@k8s-master1:/etc/ansible# cp example/hosts.m-masters.example ./hosts  # copy the example into the ansible directory as "hosts"
root@k8s-master1:/etc/ansible# vim hosts  # edit the hosts file

Edit the copied example hosts file:

# Cluster deploy node: normally the node that runs the ansible playbooks
# The variable NTP_ENABLED (=yes/no) controls whether chrony time sync is installed
[deploy]
192.168.7.110 NTP_ENABLED=no  # IP of the master node

# For etcd, provide NODE_NAME values; the etcd cluster must have an odd number of nodes (1,3,5,7...)
[etcd]
192.168.7.113 NODE_NAME=etcd1  # only one etcd host for now

[new-etcd] # reserved group for adding etcd nodes later
#192.168.1.x NODE_NAME=etcdx

[kube-master]
192.168.7.110  # IP of the master node

[new-master] # reserved group for adding master nodes later
#192.168.1.5

[kube-node]
192.168.7.115  # IP of the node

[new-node] # reserved group for adding nodes later
#192.168.1.xx

# Parameter NEW_INSTALL: yes = install a new harbor, no = use an existing harbor server
# If no domain name is used, set HARBOR_DOMAIN=""
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no

# Load balancers running haproxy+keepalived (more than 2 nodes are supported; 2 is usually enough)
#[lb]
#192.168.1.1 LB_ROLE=backup
#192.168.1.2 LB_ROLE=master

# [optional] external load balancer, e.g. for forwarding services exposed via NodePort
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_VIP=192.168.1.250
#192.168.1.7 LB_ROLE=master EX_VIP=192.168.1.250

[all:vars]
# --------- main cluster parameters ---------------
# Deploy mode: allinone, single-master, multi-master
DEPLOY_MODE=multi-master

# Major cluster version; currently supported: v1.8, v1.9, v1.10, v1.11, v1.12, v1.13
K8S_VER="v1.13"  # the version to deploy

# Cluster MASTER IP, i.e. the VIP on the LB nodes; to distinguish it from the default
# apiserver port, the VIP service port is often set to 8443 (6443 is kept here)
# On public clouds, use the cloud load balancer's internal address and listening port
MASTER_IP="192.168.7.248"   # the VIP
KUBE_APISERVER="https://{{ MASTER_IP }}:6443"  # listen on port 6443

# Cluster network plugin; currently supported: calico, flannel, kube-router, cilium
CLUSTER_NETWORK="calico"   # use calico

# Service CIDR; must not overlap any existing internal network
SERVICE_CIDR="10.20.0.0/16"  # the service network

# Pod CIDR (Cluster CIDR); must not overlap any existing internal network
CLUSTER_CIDR="172.20.0.0/16"  # the pod network

# Service port range (NodePort range)
NODE_PORT_RANGE="20000-60000"

# kubernetes service IP (pre-allocated; usually the first IP of SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.20.0.1"   # must fall inside the 10.20.0.0/16 range above

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.20.254.254"   # must also fall inside the service CIDR above

# Cluster DNS domain
CLUSTER_DNS_DOMAIN="linux36.local."

# Username and password for cluster basic auth
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="123456"

# --------- additional parameters --------------------
# Default binary directory
bin_dir="/usr/bin"  # binary path

# Certificate directory
ca_dir="/etc/kubernetes/ssl"  # default is fine

# Deploy directory, i.e. the ansible working directory; best left unchanged
base_dir="/etc/ansible"   # default is fine

Set up passwordless SSH logins from the master and push the Harbor certificate to every host (the harbor and HAProxy machines don't need it):

root@k8s-master1:~# ssh-keygen  # first generate a key pair on the master
root@k8s-master1:~# apt-get install sshpass  # install the sshpass tool
root@k8s-master1:~# vim scp.sh # script that pushes the SSH key and the Harbor certificate to each host
#!/bin/bash
# target host list
IP="
192.168.7.110
192.168.7.111
192.168.7.113
192.168.7.114
192.168.7.115
"
for node in ${IP};do
	sshpass -p centos ssh-copy-id ${node} -o StrictHostKeyChecking=no  # "centos" is the root password used here
	if [ $? -eq 0 ];then
		ssh ${node} "mkdir -p /etc/docker/certs.d/harbor.struggle.net"
		scp /etc/docker/certs.d/harbor.struggle.net/harbor-ca.crt ${node}:/etc/docker/certs.d/harbor.struggle.net/harbor-ca.crt
		ssh ${node} "systemctl restart docker"
		echo "${node} key copy succeeded"
	else
		echo "${node} key copy failed"
	fi
done

Run the scp.sh script:

# bash scp.sh
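
With the keys in place, one quick way to confirm ansible can reach every host in the inventory (ansible's ping module does an SSH round-trip, not ICMP):

root@k8s-master1:/etc/ansible# ansible all -m ping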

七、Deploying the Environment with Ansible

1、Environment initialization

1、Upload the k8s.1-13-5 binaries archive to the system and extract it under /etc/ansible/bin:

root@k8s-master1:/etc/ansible/bin# cd /etc/ansible/bin
root@k8s-master1:/etc/ansible/bin# tar xvf k8s.1-13-5.tar.gz # extract the k8s binaries
root@k8s-master1:/etc/ansible/bin# mv bin/* .  # the archive contains a nested bin/, so move its contents up a level

2、Adjust the soft and hard limits template:

root@k8s-master1:/etc/ansible# vim roles/prepare/templates/30-k8s-ulimits.conf.j2 
* soft nofile 1000000
* hard nofile 1000000
* soft nproc 1000000
* hard nproc 1000000

3、Add a kernel parameter:

root@k8s-master1:/etc/ansible# vim roles/prepare/templates/95-k8s-sysctl.conf.j2 
net.ipv4.ip_nonlocal_bind = 1

4、Adjust the Debian ulimit (the default is 65536; small and mid-sized deployments may not need to change it):

root@k8s-master1:/etc/ansible# vim roles/prepare/tasks/debian.yml 
line: "ulimit -SHn 1000000"


5、Change into /etc/ansible and start orchestrating with ansible-playbook:

root@k8s-master1:/etc/ansible/bin# cd ..
root@k8s-master1:/etc/ansible# ansible-playbook 01.prepare.yml 

2、Deploy the etcd cluster

1、The bundled etcd and etcdctl are currently version 3.3.10, which we need to change:

root@k8s-master1:/etc/ansible/bin# ./etcd --version
etcd Version: 3.3.10
Git SHA: 27fc7e2
Go Version: go1.10.4
Go OS/Arch: linux/amd64
root@k8s-master1:/etc/ansible/bin# ./etcdctl  --version
etcdctl version: 3.3.10
API version: 2

Download the etcd 3.2.24 package from GitHub: https://github.com/etcd-io/etcd/releases?after=v3.3.12


2、Extract the downloaded etcd 3.2.24 package and copy the binaries into the target directory:

root@k8s-master1:/etc/ansible# mv bin/etc* /opt/  # move the old etcd and etcdctl to /opt
root@k8s-master1:/etc/ansible# cd bin
root@k8s-master1:/etc/ansible/bin# chmod +x etc*  # after copying the 3.2.24 etcd and etcdctl here, make them executable
root@k8s-master1:/etc/ansible# bin/etcd --version  # verify the version
etcd Version: 3.2.24
Git SHA: 420a45226
Go Version: go1.8.7
Go OS/Arch: linux/amd64
root@k8s-master1:/etc/ansible# bin/etcdctl --version # verify the version
etcdctl version: 3.2.24
API version: 2

3、Deploy etcd:

root@k8s-master1:/etc/ansible# ansible-playbook  02.etcd.yml 

4、After the etcd deployment completes, check on the etcd server that the process is running, for example as sketched below.
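
A sketch of that check, assuming kubeasz set etcd up as a systemd unit (prompt shown for the etcd host, whose inventory name is etcd1):

root@etcd1:~# systemctl status etcd    # should be active (running)
root@etcd1:~# ps -ef | grep etcd       # the etcd process should be listed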


5、Verify the etcd service on each etcd server:

NODE_IPS="192.168.7.113"   # space-separated etcd host IPs; just one in this inventory
for ip in ${NODE_IPS}; do
    ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 \
        --cacert=/etc/kubernetes/ssl/ca.pem \
        --cert=/etc/etcd/ssl/etcd.pem \
        --key=/etc/etcd/ssl/etcd-key.pem endpoint health
done
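
A healthy member answers along these lines (the timing will vary):

https://192.168.7.113:2379 is healthy: successfully committed proposal: took = 1.8ms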

3、Deploy the master

1、On the master, pull the image below (the Aliyun mirror of the upstream image) and push it to Harbor, so that every node can fetch it during the ansible-playbook run; pulling the foreign image directly may fail:

root@k8s-master1:/etc/ansible# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1  # pull the image from the Aliyun mirror instead of the foreign registry
root@k8s-master1:/etc/ansible# docker login harbor.struggle.net  # log in to the Harbor server
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@k8s-master1:/etc/ansible# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 harbor.struggle.net/baseimages/pause-amd64:3.1  # tag the image
root@k8s-master1:/etc/ansible# docker push harbor.struggle.net/baseimages/pause-amd64:3.1   # push the image to Harbor

2、Modify the configuration, pointing the sandbox image at the local registry:

root@k8s-master1:/etc/ansible# grep PROXY_MODE roles/ -R
roles/kube-node/defaults/main.yml:PROXY_MODE: "iptables"
roles/kube-node/templates/kube-proxy.service.j2:  --proxy-mode={{ PROXY_MODE }}
root@k8s-master1:/etc/ansible# vim roles/kube-node/defaults/main.yml

SANDBOX_IMAGE: "harbor.struggle.net/baseimages/pause-amd64:3.1"  # point the image at the local Harbor


3、Run the master playbook:

root@k8s-master1:/etc/ansible# ansible-playbook 04.kube-master.yml 

4、Deploy the node:

root@k8s-master1:/etc/ansible# ansible-playbook 05.kube-node.yml 
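
At this point both members should have registered; a sketch of the expected listing (ages and exact values will differ; kubeasz marks masters SchedulingDisabled by design):

root@k8s-master1:/etc/ansible# kubectl get node
NAME            STATUS                     ROLES    AGE   VERSION
192.168.7.110   Ready,SchedulingDisabled   master   ...   v1.13.5
192.168.7.115   Ready                      node     ...   v1.13.5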

5、Deploy the calico network service

The matching calico release can be downloaded from GitHub: https://github.com/projectcalico/calico/releases/download/v3.3.2/release-v3.3.2.tgz


1、Set the calico version in the role defaults to match the download:

root@k8s-master1:/etc/ansible# vim roles/calico/defaults/main.yml 
# supported calico versions: [v3.2.x] [v3.3.x] [v3.4.x]
calico_ver: "v3.3.2"  # set this to the version you downloaded

2、Upload the downloaded calico package to the system and extract it:

root@k8s-master1:/opt# rz
root@k8s-master1:/opt# tar xvf calico-release-v3.3.2.tgz
root@k8s-master1:/opt# cd /opt/release-v3.3.2/images/
root@k8s-master1:/opt/release-v3.3.2/images# ll
total 257720
drwxrwxr-x 2 liu liu      110 Dec  4  2018 ./
drwxrwxr-x 5 liu liu       66 Dec  4  2018 ../
-rw------- 1 liu liu 75645952 Dec  4  2018 calico-cni.tar
-rw------- 1 liu liu 56801280 Dec  4  2018 calico-kube-controllers.tar
-rw------- 1 liu liu 76076032 Dec  4  2018 calico-node.tar
-rw------- 1 liu liu 55373824 Dec  4  2018 calico-typha.tar

3、Push the images bundled in the release to the local Harbor:

root@k8s-master1:/opt/release-v3.3.2/images# docker login harbor.struggle.net  # log in to the Harbor registry
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
root@k8s-master1:/opt/release-v3.3.2/images# docker load -i calico-node.tar # load calico-node into the local Docker store
root@k8s-master1:/opt/release-v3.3.2/images# docker tag calico/node:v3.3.2 harbor.struggle.net/baseimages/calico-node:v3.3.2  # tag calico-node for Harbor
root@k8s-master1:/opt/release-v3.3.2/images# docker push harbor.struggle.net/baseimages/calico-node:v3.3.2  # push the tagged calico-node to Harbor
root@k8s-master1:/opt/release-v3.3.2/images# docker load -i calico-kube-controllers.tar # load calico-kube-controllers
root@k8s-master1:/opt/release-v3.3.2/images# docker tag calico/kube-controllers:v3.3.2 harbor.struggle.net/baseimages/calico-kube-controllers:v3.3.2  # tag calico-kube-controllers
root@k8s-master1:/opt/release-v3.3.2/images# docker push harbor.struggle.net/baseimages/calico-kube-controllers:v3.3.2 # push it to Harbor
root@k8s-master1:/opt/release-v3.3.2/images# docker load -i calico-cni.tar # load calico-cni
root@k8s-master1:/opt/release-v3.3.2/images# docker tag calico/cni:v3.3.2 harbor.struggle.net/baseimages/calico-cni:v3.3.2  # tag calico-cni
root@k8s-master1:/opt/release-v3.3.2/images# docker push harbor.struggle.net/baseimages/calico-cni:v3.3.2  # push it to Harbor

4、Edit the calico template on the master so that every image reference points at the local registry:

root@k8s-master1:/etc/ansible# vim roles/calico/templates/calico-v3.3.yaml.j2 

        - name: calico-node
          image: harbor.struggle.net/baseimages/calico-node:v3.3.2  # point the image at the local registry

        - name: install-cni
          image: harbor.struggle.net/baseimages/calico-cni:v3.3.2  # local registry

        - name: calico-kube-controllers
          image: harbor.struggle.net/baseimages/calico-kube-controllers:v3.3.2  # local registry


5、Update calicoctl to the matching version:

root@k8s-master1:/etc/ansible/bin# cd /opt/release-v3.3.2/bin/
root@k8s-master1:/opt/release-v3.3.2/bin# cp calicoctl /etc/ansible/bin  # copy the v3.3.2 binary into /etc/ansible/bin
root@k8s-master1:/opt/release-v3.3.2/bin# cd -
/etc/ansible/bin
root@k8s-master1:/etc/ansible/bin# ./calicoctl version  # the client version should match the calico version above
Client Version:    v3.3.2
Build date:        2018-12-03T15:10:51+0000
Git commit:        594fd84e
no etcd endpoints specified

(The trailing "no etcd endpoints specified" is expected at this stage: calicoctl has not yet been given etcd endpoints to query.)

6、Point /etc/hosts on every node at the Harbor hostname; without this the nodes cannot pull images from the Harbor registry:

# vim  /etc/hosts
192.168.7.112 harbor.struggle.net

7、Run the network playbook on the master:

root@k8s-master1:/etc/ansible/bin# cd ..
root@k8s-master1:/etc/ansible# ansible-playbook  06.network.yml

The basic K8S build is now complete. Verify the node status:

root@k8s-master1:/etc/ansible# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.7.115 | node-to-node mesh | up    | 13:19:33 | Established |
+---------------+-------------------+-------+----------+-------------+

Adding a new node from the master

1、Add the Harbor hosts entry on the new node:

root@node5:~# vim /etc/hosts
192.168.7.112 harbor.struggle.net

2、Edit the hosts inventory under /etc/ansible:

root@k8s-master1:/etc/ansible# vim hosts
[kube-node]
192.168.7.115
[new-node] # reserved group for adding nodes later
192.168.7.114   # the newly added node

root@k8s-master1:/etc/ansible# vim 20.addnode.yml
- docker  # if the node being added already has docker installed, delete this entry so it is not reinstalled

root@k8s-master1:/etc/ansible# ansible-playbook  20.addnode.yml 

3、On the master, confirm the newly added node is ready:

NAME            STATUS                     ROLES    AGE     VERSION
192.168.7.111   Ready                      node     2m3s    v1.13.5
192.168.7.114   Ready                      node     29m     v1.13.5
192.168.7.115   Ready                      node     4h25m   v1.13.5

Adding a new master from the master

1、Edit the new master's hosts file to resolve the Harbor domain (or point its DNS at a resolver that knows the name, e.g. the Windows host's DNS):

# vim  /etc/hosts

192.168.7.112  harbor.struggle.net

2、Edit the hosts inventory under the ansible directory, adding the new 192.168.7.111 host; since docker is already installed on it, remove the docker role from the playbook:

root@k8s-master1:/etc/ansible# vim hosts
[kube-master]
192.168.7.110
[new-master] # reserved group for adding master nodes later
192.168.7.111  # the new master

root@k8s-master1:/etc/ansible# vim 21.addmaster.yml
- docker  # docker is already installed on this master, so remove or comment out this entry

3、Run the playbook:

root@k8s-master1:/etc/ansible# ansible-playbook  21.addmaster.yml 

4、Check the newly added master's status; output like the following means it was added successfully:

root@k8s-master1:/etc/ansible# kubectl get node
NAME            STATUS                     ROLES    AGE     VERSION
192.168.7.110   Ready,SchedulingDisabled   master   5h4m    v1.13.5
192.168.7.111   Ready,SchedulingDisabled   master   22m     v1.13.5
192.168.7.114   Ready                      node     49m     v1.13.5
192.168.7.115   Ready                      node     4h45m   v1.13.5

Reconfiguring haproxy

Add the new master to the haproxy listener as a high-availability backend, so that if one master goes down the other keeps serving:

root@node5:~# vim /etc/haproxy/haproxy.cfg
listen k8s-api-server-6443
        bind 192.168.7.248:6443
        mode tcp
        server 192.168.7.110 192.168.7.110:6443 check inter 2000 fall 3 rise 5
        server 192.168.7.111 192.168.7.111:6443 check inter 2000 fall 3 rise 5

root@node5:~# systemctl restart haproxy
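
A rough availability check, assuming kubectl talks to the VIP as configured in KUBE_APISERVER above (https://192.168.7.248:6443): the API should keep answering even if the apiserver on one master is stopped.

root@node5:~# ss -tnl | grep 6443     # haproxy listening on the VIP with both backends
root@k8s-master1:~# kubectl get node  # still served through the surviving master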
