Deploying a Kubernetes Cluster


Part 1: Introduction to Kubernetes and Environment Preparation

1. What is K8S?

Kubernetes is a leading distributed-architecture solution built on container technology, and an open platform for development: it is not tied to any one language and does not mandate any programming interface. It is a complete platform for supporting distributed systems. Built on top of Docker, it provides mechanisms for deploying, maintaining, and scaling applications, making it easy to manage containerized applications running across machines.

2. Key Features

  • Packages and instantiates applications using Docker;
  • Runs and manages containers across machines as a cluster;
  • Solves the problem of communication between Docker containers on different machines;
  • Its self-healing mechanism keeps the container cluster running in the state the user expects;

3. Component Overview

   In Kubernetes, Node, Pod, Replication Controller, Service, and so on can all be treated as "resource objects". Almost every resource object can be created, deleted, updated, and queried through the kubectl tool (via API calls), with the results persisted in etcd.

  • Master: the cluster's control and management node; every command goes through the master.
  • Node: a workload node in the Kubernetes cluster. The Master assigns work to it; when a Node goes down, the Master automatically shifts its workload to other nodes.
  • Pod: the most important and most basic concept in Kubernetes. Every Pod contains a "root" (pause) container plus one or more closely related application containers.
  • etcd: in K8S, all data and operation records are stored in etcd; if it fails, the whole cluster can be paralyzed or data can be lost.
  • Label: a key=value pair whose key and value are chosen by the user. Labels can be attached to all kinds of resource objects, and a single resource object can carry any number of Labels.
  • RC: a Replication Controller declares that the number of replicas of a given Pod matches an expected value at all times. Its definition includes:

 

      (1) The expected number of Pod replicas (replicas);
      (2) A Label Selector used to pick out the target Pods;
      (3) A Pod template used to create new Pods when the replica count falls below the expected value;
      (4) Changing the replica count in the RC scales the Pods out or in;
      (5) Changing the image version in the RC's Pod template performs a rolling upgrade of the Pods;
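As an illustration of items (1)-(3), a minimal RC manifest might look like the following (the nginx-rc name and nginx image are hypothetical examples, not part of this deployment), written out in the same heredoc style used later in this guide:

```shell
# Write a minimal ReplicationController manifest (names/image are examples).
cat > nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3                 # (1) expected number of Pod replicas
  selector:
    app: nginx                # (2) Label Selector for the target Pods
  template:                   # (3) template used to create new Pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
# kubectl create -f nginx-rc.yaml          # create the RC
# kubectl scale rc nginx-rc --replicas=5   # (4) scale out by raising replicas
```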

 

3.1 The master node

  • API Server

  The cluster's control and management entry point: every command goes through it, and it is the sole gateway for creating, deleting, updating, and querying all resources.

  Provides the REST API for cluster management, including authentication and authorization, data validation, and cluster state changes;

  Only the API Server talks to etcd directly;

  All other modules query or modify data through the API Server;

  Serves as the hub for data exchange and communication between the other modules;

  • Scheduler

  Responsible for assigning Pods to node machines in the cluster;
  Watches kube-apiserver for Pods that have not yet been assigned a Node;
  Assigns nodes to those Pods according to the scheduling policy;

  • Controller Manager

  All other cluster-level functions are currently carried out by the Controller Manager, the automation control center for resource objects;

  It is made up of a series of controllers that monitor the cluster state through the apiserver and keep the cluster in the expected working state;

  • etcd

  All persistent state information is stored in etcd.

3.2 The node machines

   By default, nodes can be added to the Kubernetes cluster dynamically. Once a Node is under cluster management, kubelet periodically reports the node's own status and the Pods it has been running to the Master, so the Master knows each Node's resource usage and can schedule resources efficiently and evenly. If a Node fails to report in time, the Master considers it lost, marks its status as Not Ready, and then triggers the workload-transfer process.

  • Kubelet

  Manages Pods and their containers, images, volumes, etc., implementing node-level management for the cluster.

  • Kube-proxy

  Provides network proxying and load balancing, implementing communication with Services.

  • Docker Engine

  Responsible for managing the containers on the node.

4. Prepare the Lab Environment

4.1 Plan

This lab uses 1 master + 2 nodes, with an etcd member running on every node.

1. Operating system: CentOS-7.x-x86_64.
2. iptables and SELinux disabled.
3. Hostname and IP address of every node resolvable via /etc/hosts.

Hostname     IP address (NAT)    CPU     Memory  Description
k8s-master   eth0 : 10.0.0.25    1 vCPU  2G      Kubernetes Master node / etcd node
k8s-node-1   eth0 : 10.0.0.26    1 vCPU  2G      Kubernetes Node / etcd node
k8s-node-2   eth0 : 10.0.0.27    1 vCPU  2G      Kubernetes Node / etcd node

4.2 Network Settings

4.3 Configure a Static IP Address

#Remove the UUID, MAC address, and other leftover settings. The 3 nodes are identical apart from IP address and hostname.
[root@k8s-master ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.25
NETMASK=255.255.255.0
GATEWAY=10.0.0.254
DNS=223.5.5.5
#Restart the network service
[root@k8s-master ~]# systemctl restart network
#Set up DNS resolution
[root@k8s-master ~]# vi /etc/resolv.conf
nameserver 223.5.5.5

4.4 Disable SELinux and the Firewall

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl stop NetworkManager
systemctl disable NetworkManager

4.5 Set Up Hostname Resolution

Do this on all 3 nodes

#Identical on all 3 nodes
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.25   k8s-master
10.0.0.26   k8s-node-1
10.0.0.27   k8s-node-2
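The same entries can be appended idempotently with a short loop (shown here against a scratch copy of /etc/hosts; set HOSTS_FILE=/etc/hosts to apply it on a real node):

```shell
# Append the cluster's host entries only if they are not already present.
HOSTS_FILE=$(mktemp)          # demo file; use /etc/hosts on a real node
cp /etc/hosts "$HOSTS_FILE"
for entry in "10.0.0.25 k8s-master" "10.0.0.26 k8s-node-1" "10.0.0.27 k8s-node-2"; do
    host=${entry##* }                       # hostname is the last field
    grep -qw "$host" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Because the loop checks before appending, it is safe to re-run on a node that is already configured.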

4.6 Configure the EPEL Repository

Do this on all 3 nodes

rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
#Install commonly used tools
yum install -y net-tools vim lrzsz tree screen lsof tcpdump nc mtr nmap

#Make sure the node can reach the Internet
[root@k8s-master ~]# ping www.baidu.com -c3
PING www.a.shifen.com (61.135.169.121) 56(84) bytes of data.
64 bytes from 61.135.169.121: icmp_seq=1 ttl=128 time=5.41 ms
64 bytes from 61.135.169.121: icmp_seq=2 ttl=128 time=6.55 ms
64 bytes from 61.135.169.121: icmp_seq=3 ttl=128 time=8.97 ms

--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2023ms
rtt min/avg/max/mdev = 5.418/6.981/8.974/1.486 ms

4.7 Configure Passwordless SSH Login

Do this on the master node only

[root@k8s-master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b1:a0:5b:02:57:0e:8f:1e:25:bf:46:1f:d1:f3:24:c4 root@k8s-master
The key's randomart image is:
+--[ RSA 2048]----+
|    o o .+.      |
|     X   .E .    |
|  . + * o  =     |
|   + + + +  .    |
|    + + S        |
|     =           |
|    .            |
|                 |
|                 |
+-----------------+
[root@k8s-master ~]# ssh-copy-id k8s-master
The authenticity of host 'k8s-master (10.0.0.25)' can't be established.
ECDSA key fingerprint is 75:5c:83:a1:b4:cc:bf:28:71:a5:d5:d1:94:35:3c:9a.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master ~]# ssh-copy-id k8s-node-1
The authenticity of host 'k8s-node-1 (10.0.0.26)' can't be established.
ECDSA key fingerprint is 75:5c:83:a1:b4:cc:bf:28:71:a5:d5:d1:94:35:3c:9a.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node-1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node-1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master ~]# ssh-copy-id k8s-node-2
The authenticity of host 'k8s-node-2 (10.0.0.27)' can't be established.
ECDSA key fingerprint is 75:5c:83:a1:b4:cc:bf:28:71:a5:d5:d1:94:35:3c:9a.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node-2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node-2'"
and check to make sure that only the key(s) you wanted were added.


Part 2: Kubernetes Cluster Initialization

1. Install Docker

Do this on all 3 nodes

#Step 1: use a domestic (Aliyun) Docker repository
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Step 2: install Docker:
[root@k8s-master ~]# yum install -y docker-ce
#Step 3: start the daemon and enable it at boot:
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

2. Prepare the Deployment Directories

Do this on all 3 nodes

[root@k8s-master ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
[root@k8s-master ~]# tree -L 1 /opt/kubernetes/
/opt/kubernetes/
├── bin
├── cfg
├── log
└── ssl
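The brace expansion in the mkdir command above creates the four sibling directories in one step; the layout can be tried out anywhere (here under a scratch root so nothing touches /opt):

```shell
# Recreate the deployment directory layout under a scratch root (bash brace expansion).
K8S_ROOT=$(mktemp -d)/kubernetes
mkdir -p "$K8S_ROOT"/{cfg,bin,ssl,log}
ls "$K8S_ROOT"
```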

3. Prepare the Software Packages

Baidu Netdisk download link:

Link: https://pan.baidu.com/s/1kUNV7t8SSF_yuP_WDaXj5g

Password: wexh

4. Unpack the Software Packages

Do this on the master node

#Upload the downloaded package to /usr/local/src/
[root@k8s-master ~]# ll /usr/local/src/
total 579812
-rw-r--r-- 1 root root 593725046 Jun  2 13:27 k8s-v1.10.1-manual.zip
#Unpack
cd /usr/local/src/
unzip k8s-v1.10.1-manual.zip
cd /usr/local/src/k8s-v1.10.1-manual/k8s-v1.10.1
tar zxf kubernetes.tar.gz 
tar zxf kubernetes-server-linux-amd64.tar.gz 
tar zxf kubernetes-client-linux-amd64.tar.gz
tar zxf kubernetes-node-linux-amd64.tar.gz
mv ./* /usr/local/src/

5. Modify the Environment Variables

Do this on all 3 nodes

[root@k8s-master ~]# sed -i 's#PATH=$PATH:$HOME/bin#PATH=$PATH:$HOME/bin:/opt/kubernetes/bin#g' /root/.bash_profile 
[root@k8s-master ~]# source /root/.bash_profile

6. Manually Create the CA Certificates

6.1 Install CFSSL

Do this on the master node

[root@k8s-master ~]# cd /usr/local/src/
[root@k8s-master src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master src]# chmod +x cfssl*
[root@k8s-master src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
[root@k8s-master src]# mv cfssljson_linux-amd64  /opt/kubernetes/bin/cfssljson
[root@k8s-master src]# mv cfssl_linux-amd64  /opt/kubernetes/bin/cfssl
#Copy the cfssl binaries to the k8s-node1 and k8s-node2 nodes. In a real deployment with more nodes, copy them to every node.
[root@k8s-master src]# scp /opt/kubernetes/bin/cfssl* 10.0.0.26:/opt/kubernetes/bin
[root@k8s-master src]# scp /opt/kubernetes/bin/cfssl* 10.0.0.27:/opt/kubernetes/bin
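With more nodes, the same copy is easier to express as a loop over their IPs. A dry-run sketch that only prints the scp commands (remove the echo to actually execute them):

```shell
# Print the scp commands that would distribute the cfssl binaries (dry run).
NODES="10.0.0.26 10.0.0.27"
for ip in $NODES; do
    echo scp /opt/kubernetes/bin/cfssl* "${ip}:/opt/kubernetes/bin/"
done
```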

6.2 Initialize CFSSL

Do this on the master node

[root@k8s-master src]# mkdir ssl && cd ssl
[root@k8s-master ssl]# cfssl print-defaults config > config.json
[root@k8s-master ssl]# cfssl print-defaults csr > csr.json

6.3 Create the JSON Config File Used to Generate the CA File

Do this on the master node

[root@k8s-master ssl]# cd /usr/local/src/ssl
[root@k8s-master ssl]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
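Both the default and the kubernetes profile set expiry to 8760h, which is exactly one year; the arithmetic is easy to confirm:

```shell
# 8760 hours divided by 24 hours/day gives the certificate lifetime in days.
hours=8760
days=$((hours / 24))
echo "${hours}h = ${days} days"
```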

6.4 Create the JSON Config File for the CA Certificate Signing Request (CSR)

Do this on the master node

[root@k8s-master ssl]# cd /usr/local/src/ssl
[root@k8s-master ssl]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

6.5 Generate the CA Certificate (ca.pem) and Key (ca-key.pem)

Do this on the master node

[root@k8s-master ssl]# cd /usr/local/src/ssl
[root@k8s-master ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s-master ssl]# ls -l ca*
-rw-r--r-- 1 root root  290 Jun  2 15:58 ca-config.json
-rw-r--r-- 1 root root 1001 Jun  2 15:58 ca.csr
-rw-r--r-- 1 root root  208 Jun  2 15:58 ca-csr.json
-rw------- 1 root root 1675 Jun  2 15:58 ca-key.pem
-rw-r--r-- 1 root root 1359 Jun  2 15:58 ca.pem

6.6 Distribute the Certificates

Do this on the master node

[root@k8s-master ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
[root@k8s-master ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 10.0.0.26:/opt/kubernetes/ssl 
[root@k8s-master ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 10.0.0.27:/opt/kubernetes/ssl

7. Deploy the etcd Cluster

7.1 Prepare the etcd Software Package

Do this on the master node

[root@k8s-master ~]# cd /usr/local/src/
[root@k8s-master src]# wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
[root@k8s-master src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz
[root@k8s-master src]# cd etcd-v3.2.18-linux-amd64
[root@k8s-master etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/ 
[root@k8s-master etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 10.0.0.26:/opt/kubernetes/bin/
[root@k8s-master etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 10.0.0.27:/opt/kubernetes/bin/

7.2 Create the etcd Certificate Signing Request

Do this on the master node

[root@k8s-master ~]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
"10.0.0.25",
"10.0.0.26",
"10.0.0.27"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

7.3 Generate the etcd Certificate and Private Key

Do this on the master node

[root@k8s-master ssl]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
#The following certificate files are generated
[root@k8s-master ssl]#  ls -l etcd*
-rw-r--r-- 1 root root 1062 Jun  3 10:48 etcd.csr
-rw-r--r-- 1 root root  275 Jun  3 10:45 etcd-csr.json
-rw------- 1 root root 1675 Jun  3 10:48 etcd-key.pem
-rw-r--r-- 1 root root 1436 Jun  3 10:48 etcd.pem
[root@k8s-master ssl]# 

7.4 Move the Certificates to /opt/kubernetes/ssl

Do this on the master node

[root@k8s-master ssl]# cp etcd*.pem /opt/kubernetes/ssl
[root@k8s-master ssl]# scp etcd*.pem 10.0.0.26:/opt/kubernetes/ssl 
[root@k8s-master ssl]# scp etcd*.pem 10.0.0.27:/opt/kubernetes/ssl

7.5 Write the etcd Configuration Files

Master node configuration file

[root@k8s-master ~]#  vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://10.0.0.25:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.25:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.25:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-master=https://10.0.0.25:2380,etcd-node1=https://10.0.0.26:2380,etcd-node2=https://10.0.0.27:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.25:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

node1 configuration file

[root@k8s-node-1 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://10.0.0.26:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.26:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.26:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-master=https://10.0.0.25:2380,etcd-node1=https://10.0.0.26:2380,etcd-node2=https://10.0.0.27:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.26:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

node2 configuration file

[root@k8s-node-2 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://10.0.0.27:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.27:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.27:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-master=https://10.0.0.25:2380,etcd-node1=https://10.0.0.26:2380,etcd-node2=https://10.0.0.27:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.27:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
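Since the three etcd.conf files differ only in ETCD_NAME and the node IP, they can also be generated from one template. A sketch covering the [member]/[cluster] settings (the [security] block is identical on every node and can simply be appended); gen_etcd_conf is a helper name invented here:

```shell
# Generate per-node etcd.conf fragments from one template (sketch).
gen_etcd_conf() {
    name=$1; ip=$2
    cat <<EOF
ETCD_NAME="${name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_INITIAL_CLUSTER="etcd-master=https://10.0.0.25:2380,etcd-node1=https://10.0.0.26:2380,etcd-node2=https://10.0.0.27:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
EOF
}
conf_dir=$(mktemp -d)
gen_etcd_conf etcd-master 10.0.0.25 > "$conf_dir/etcd-master.conf"
gen_etcd_conf etcd-node1  10.0.0.26 > "$conf_dir/etcd-node1.conf"
gen_etcd_conf etcd-node2  10.0.0.27 > "$conf_dir/etcd-node2.conf"
```

Each fragment can then be copied to the matching node as /opt/kubernetes/cfg/etcd.conf.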

7.6 Create the etcd systemd Service

Do this on the master node

[root@k8s-master ~]# vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
Type=notify

[Install]
WantedBy=multi-user.target
[root@k8s-master ~]# scp /etc/systemd/system/etcd.service 10.0.0.26:/etc/systemd/system/
[root@k8s-master ~]# scp /etc/systemd/system/etcd.service 10.0.0.27:/etc/systemd/system/

7.7 Reload the systemd Units

Do this on all 3 nodes

systemctl daemon-reload
mkdir /var/lib/etcd
systemctl enable etcd
systemctl start etcd
systemctl status etcd

7.8 Verify the Cluster

Do this on the master node

[root@k8s-master ~]# etcdctl --endpoints=https://10.0.0.25:2379 \
  --ca-file=/opt/kubernetes/ssl/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
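The health report can also be checked mechanically, for instance by counting healthy members in captured output. The sample text below is simulated (member IDs made up), following the "member ... is healthy" wording of etcdctl's v2-API cluster-health output:

```shell
# Count healthy members in saved `etcdctl cluster-health` output (simulated sample).
health_report=$(cat <<'EOF'
member 1a2b3c is healthy: got healthy result from https://10.0.0.25:2379
member 4d5e6f is healthy: got healthy result from https://10.0.0.26:2379
member 7a8b9c is healthy: got healthy result from https://10.0.0.27:2379
cluster is healthy
EOF
)
healthy=$(echo "$health_report" | grep -c '^member .* is healthy')
echo "healthy members: $healthy"
```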

If every member is reported as healthy, the etcd cluster has been deployed successfully.

Part 3: Deploying the Kubernetes Master Node

1. Deploy the API Server Service

1.1 Prepare the Software Packages

Do this on the master node

K8S ships as prebuilt binary files, so nothing needs to be compiled; the binaries just have to be copied into place.

[root@k8s-master ~]# cd /usr/local/src/kubernetes
[root@k8s-master kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/
[root@k8s-master kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/
[root@k8s-master kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/

1.2 Create the JSON Config File for Generating the CSR

Do this on the master node

[root@k8s-master src]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.0.25",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

1.3 Generate the kubernetes Certificate and Private Key

Do this on the master node

[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@k8s-master ssl]# cp kubernetes*.pem /opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kubernetes*.pem 10.0.0.26:/opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kubernetes*.pem 10.0.0.27:/opt/kubernetes/ssl/

1.4 Create the Client Token File Used by kube-apiserver

Do this on the master node

[root@k8s-master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
841276913ac026513f75d267f0fc5212
[root@k8s-master ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
841276913ac026513f75d267f0fc5212,kubelet-bootstrap,10001,"system:kubelet-bootstrap" #fill in the token you generated above
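The token command produces 16 random bytes rendered as 32 hex characters; generating the token and writing the CSV line can be combined into one step (written to the current directory here, rather than /opt/kubernetes/ssl, and the token value is different on every run):

```shell
# Generate a random bootstrap token and write the bootstrap-token.csv line.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > bootstrap-token.csv
cat bootstrap-token.csv
```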

1.5 Create the Basic Username/Password Authentication Config

Do this on the master node

[root@k8s-master ~]#  vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2

1.6 Deploy the Kubernetes API Server

Do this on the master node

[root@k8s-master ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=10.0.0.25 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.0.0.25:2379,https://10.0.0.26:2379,https://10.0.0.27:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

1.7 Start the API Server

Do this on the master node

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-apiserver
[root@k8s-master ~]# systemctl start kube-apiserver
[root@k8s-master ~]# systemctl status kube-apiserver

2. Deploy the Controller Manager Service

Do this on the master node

[root@k8s-master ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

###Start the Controller Manager
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-controller-manager
[root@k8s-master ~]# systemctl start kube-controller-manager
[root@k8s-master ~]# systemctl status kube-controller-manager

3. Deploy the Kubernetes Scheduler Service

Do this on the master node

[root@k8s-master ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

###Start the scheduler service
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kube-scheduler
[root@k8s-master ~]# systemctl start kube-scheduler
[root@k8s-master ~]# systemctl status kube-scheduler

4. Deploy the kubectl Command-Line Tool

Do this on the master node

###Prepare the binary package
[root@k8s-master ~]# cd /usr/local/src/kubernetes/client/bin
[root@k8s-master bin]# cp kubectl /opt/kubernetes/bin/

###Create the admin certificate signing request
[root@k8s-master ~]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

###Generate the admin certificate and private key
[root@k8s-master ssl]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@k8s-master ssl]# mv admin*.pem /opt/kubernetes/ssl/

###Set the cluster parameters
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.25:6443
###Set the client authentication parameters
[root@k8s-master ssl]# kubectl config set-credentials admin \
   --client-certificate=/opt/kubernetes/ssl/admin.pem \
   --embed-certs=true \
   --client-key=/opt/kubernetes/ssl/admin-key.pem

###Set the context parameters
[root@k8s-master ssl]# kubectl config set-context kubernetes \
   --cluster=kubernetes \
   --user=admin

###Use the kubectl tool
[root@k8s-master ~]# ll /root/.kube/config 
[root@k8s-master ~]# kubectl get cs

If kubectl get cs reports every component as Healthy, the master node has been deployed successfully.

Part 4: Deploying the Kubernetes Node Machines

1. Deploy kubelet

Do this on the master node

###Prepare the binary packages
[root@k8s-master ~]# cd /usr/local/src/kubernetes/server/bin/
[root@k8s-master bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 10.0.0.26:/opt/kubernetes/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 10.0.0.27:/opt/kubernetes/bin/

###Create the role binding
[root@k8s-master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

  ###Set the cluster parameters
  [root@k8s-master ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://10.0.0.25:6443 \
    --kubeconfig=bootstrap.kubeconfig

  ###Set the client authentication parameters (use the token from bootstrap-token.csv)
  [root@k8s-master ~]# kubectl config set-credentials kubelet-bootstrap \
     --token=841276913ac026513f75d267f0fc5212 \
     --kubeconfig=bootstrap.kubeconfig

  ###Set the context parameters
  [root@k8s-master ~]# kubectl config set-context default \
     --cluster=kubernetes \
     --user=kubelet-bootstrap \
     --kubeconfig=bootstrap.kubeconfig

###Select the default context
[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
[root@k8s-master ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
[root@k8s-master ssl]# scp bootstrap.kubeconfig 10.0.0.26:/opt/kubernetes/cfg  
[root@k8s-master ssl]# scp bootstrap.kubeconfig 10.0.0.27:/opt/kubernetes/cfg

###Set up CNI support #create this on all 3 nodes
mkdir -p /etc/cni/net.d
vim /etc/cni/net.d/10-default.conf
{
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}
[root@k8s-master ~]# scp /etc/cni/net.d/10-default.conf 10.0.0.26:/etc/cni/net.d/10-default.conf 
[root@k8s-master ~]# scp /etc/cni/net.d/10-default.conf 10.0.0.27:/etc/cni/net.d/10-default.conf
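A malformed CNI config silently breaks pod networking, so it is worth validating the JSON before distributing it. A quick check using Python's json module (shown against a scratch copy; point CNI_CONF at /etc/cni/net.d/10-default.conf on a real node):

```shell
# Validate the CNI config JSON before copying it to the nodes.
CNI_CONF=$(mktemp)            # demo file; use /etc/cni/net.d/10-default.conf on a node
cat > "$CNI_CONF" <<'EOF'
{
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}
EOF
python3 -m json.tool < "$CNI_CONF" > /dev/null && echo "CNI config OK"
```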

###Create the kubelet directory #on all 3 nodes
mkdir /var/lib/kubelet

###Master node configuration file
[root@k8s-master ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=10.0.0.25 \
  --hostname-override=10.0.0.25 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

###node1 configuration file
[root@k8s-node-1 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=10.0.0.26 \
  --hostname-override=10.0.0.26 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

###node2 configuration file
[root@k8s-node-2 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=10.0.0.27 \
  --hostname-override=10.0.0.27 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

###Start kubelet #do this on all 3 nodes
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

###Check the CSR requests
kubectl get csr


###Approve the kubelet TLS certificate requests
kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve

###Check node status
kubectl get node
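The grep/awk pipeline above just extracts the names of Pending CSRs before handing them to kubectl certificate approve. Its behavior can be seen on simulated kubectl get csr output (the node-csr-* names are made up):

```shell
# Simulated `kubectl get csr` output; extract the names of Pending requests.
csr_output='NAME        AGE  REQUESTOR          CONDITION
node-csr-A  1m   kubelet-bootstrap  Pending
node-csr-B  2m   kubelet-bootstrap  Approved,Issued
node-csr-C  3m   kubelet-bootstrap  Pending'
pending=$(echo "$csr_output" | grep 'Pending' | awk '{print $1}')
echo "$pending"
```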

2. Deploy kube-proxy

###Configure kube-proxy to use LVS
[root@k8s-master ~]#  yum install -y ipvsadm ipset conntrack
[root@k8s-node-1 ~]#  yum install -y ipvsadm ipset conntrack
[root@k8s-node-2 ~]#  yum install -y ipvsadm ipset conntrack

###Create the kube-proxy certificate request
[root@k8s-master ~]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

###Generate the certificate
[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

###Distribute the certificates to all nodes
[root@k8s-master ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kube-proxy*.pem 10.0.0.26:/opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kube-proxy*.pem 10.0.0.27:/opt/kubernetes/ssl/

###Create the kube-proxy kubeconfig #on the master node
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.25:6443 \
   --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master ssl]# kubectl config set-credentials kube-proxy \
   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
   --embed-certs=true \
   --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master ssl]# kubectl config set-context default \
   --cluster=kubernetes \
   --user=kube-proxy \
   --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
[root@k8s-master ssl]# scp kube-proxy.kubeconfig 10.0.0.26:/opt/kubernetes/cfg/
[root@k8s-master ssl]# scp kube-proxy.kubeconfig 10.0.0.27:/opt/kubernetes/cfg/

###Create the kube-proxy working directories #on all 3 nodes
[root@k8s-master ssl]# mkdir /var/lib/kube-proxy
[root@k8s-node-1 ~]# mkdir /var/lib/kube-proxy
[root@k8s-node-2 ~]# mkdir /var/lib/kube-proxy
##master node
[root@k8s-master ssl]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=10.0.0.25 \
  --hostname-override=10.0.0.25 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
##node1
[root@k8s-node-1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=10.0.0.26 \
  --hostname-override=10.0.0.26 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
## Node 2
[root@k8s-node-2 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=10.0.0.27 \
  --hostname-override=10.0.0.27 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

### Start kube-proxy (on all 3 nodes)
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

Check the LVS state:
ipvsadm -L -n
kubectl get node
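At this point `kubectl get node` should list the nodes as Ready, and `ipvsadm -L -n` shows the virtual servers kube-proxy has programmed into IPVS. As a sketch of how to pull just the VIP:port entries and their schedulers out of that output (the sample text below is illustrative, copied from a comparable cluster rather than generated live):

```shell
# Sample `ipvsadm -Ln` output (illustrative). Virtual-server lines begin
# with the protocol (TCP/UDP); backend real servers are indented with "->".
ipvs_sample='IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 10.0.0.25:6443               Masq    1      0          0'

# Extract each virtual server address and its scheduler.
vips=$(echo "$ipvs_sample" | awk '/^(TCP|UDP)/ {print $2, $3}')
echo "$vips"
# prints: 10.1.0.1:443 rr
```

On a live node you would pipe `ipvsadm -Ln` directly into the same awk filter.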

 

Part 5: Flannel Network Deployment

1. Deploy Flannel

### Generate a certificate for Flannel
[root@k8s-master ~]# cd /usr/local/src/ssl/
[root@k8s-master ~]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

### Generate the certificate
[root@k8s-master ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

### Distribute the certificates
[root@k8s-master ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
[root@k8s-master ssl]# scp flanneld*.pem 10.0.0.26:/opt/kubernetes/ssl/
[root@k8s-master ssl]# scp flanneld*.pem 10.0.0.27:/opt/kubernetes/ssl/

### Download the Flannel package
[root@k8s-master ~]# cd /usr/local/src
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@k8s-master src]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz
[root@k8s-master src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
[root@k8s-master src]# scp flanneld mk-docker-opts.sh 10.0.0.26:/opt/kubernetes/bin/
[root@k8s-master src]# scp flanneld mk-docker-opts.sh 10.0.0.27:/opt/kubernetes/bin/

### Copy the helper script to /opt/kubernetes/bin
[root@k8s-master  ~]# cd /usr/local/src/kubernetes/cluster/centos/node/bin/
[root@k8s-master bin]# cp remove-docker0.sh /opt/kubernetes/bin/
[root@k8s-master bin]# scp remove-docker0.sh 10.0.0.26:/opt/kubernetes/bin/
[root@k8s-master bin]# scp remove-docker0.sh 10.0.0.27:/opt/kubernetes/bin/

### Configure Flannel
[root@k8s-master  ~]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://10.0.0.25:2379,https://10.0.0.26:2379,https://10.0.0.27:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"

## Copy the config to the other nodes
[root@k8s-master  ~]# scp /opt/kubernetes/cfg/flannel 10.0.0.26:/opt/kubernetes/cfg/
[root@k8s-master  ~]# scp /opt/kubernetes/cfg/flannel 10.0.0.27:/opt/kubernetes/cfg/

### Create the Flannel systemd service
[root@k8s-master  ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker

Type=notify

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
## Copy the unit file to the other nodes
[root@k8s-master  ~]# scp /usr/lib/systemd/system/flannel.service 10.0.0.26:/usr/lib/systemd/system/
[root@k8s-master  ~]# scp /usr/lib/systemd/system/flannel.service 10.0.0.27:/usr/lib/systemd/system/

### Download the Flannel CNI plugins
[root@k8s-master  ~]# mkdir /opt/kubernetes/bin/cni   # create this directory on all 3 nodes
[root@k8s-master ~]# cd /usr/local/src/
[root@k8s-master src]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
[root@k8s-master src]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
[root@k8s-master src]# scp -r /opt/kubernetes/bin/cni/* 10.0.0.26:/opt/kubernetes/bin/cni/
[root@k8s-master src]# scp -r /opt/kubernetes/bin/cni/* 10.0.0.27:/opt/kubernetes/bin/cni/

### Create the network key in etcd
[root@k8s-master ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
   --cert-file /opt/kubernetes/ssl/flanneld.pem \
   --key-file /opt/kubernetes/ssl/flanneld-key.pem \
   --no-sync -C https://10.0.0.25:2379,https://10.0.0.26:2379,https://10.0.0.27:2379 \
   mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1

### Start flannel (on all 3 nodes)
chmod +x /opt/kubernetes/bin/*
systemctl daemon-reload
systemctl enable flannel
systemctl start flannel
systemctl status flannel

### Configure Docker to use Flannel
[root@k8s-master ~]# vim /usr/lib/systemd/system/docker.service
# In the [Unit] section, modify After and add Requires:
[Unit]
After=network-online.target firewalld.service flannel.service
Requires=flannel.service

# In the [Service] section, add EnvironmentFile=-/run/flannel/docker and $DOCKER_OPTS:
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS

### Copy the config to the other two nodes
[root@k8s-master ~]# scp /usr/lib/systemd/system/docker.service 10.0.0.26:/usr/lib/systemd/system/
[root@k8s-master ~]# scp /usr/lib/systemd/system/docker.service 10.0.0.27:/usr/lib/systemd/system/

### Restart Docker (on all 3 nodes)
systemctl daemon-reload
systemctl restart docker
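The `EnvironmentFile=-/run/flannel/docker` line works because flanneld writes the node's allocated subnet into `/run/flannel/subnet.env`, and `mk-docker-opts.sh` converts that into the `$DOCKER_OPTS` consumed by dockerd. A minimal sketch of that conversion, with illustrative values (the real file is generated per node, and the script emits more options than shown here):

```shell
# Illustrative subnet.env as flanneld writes it (values vary per node).
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.2.0.0/16
FLANNEL_SUBNET=10.2.58.1/24
FLANNEL_MTU=1450
EOF

# Roughly what `mk-docker-opts.sh -d /run/flannel/docker` derives from it:
# a DOCKER_OPTS line pointing docker's bridge at the flannel-allocated subnet.
. /tmp/subnet.env
printf 'DOCKER_OPTS="--bip=%s --mtu=%s"\n' "$FLANNEL_SUBNET" "$FLANNEL_MTU" > /tmp/docker
cat /tmp/docker
# prints: DOCKER_OPTS="--bip=10.2.58.1/24 --mtu=1450"
```

This is why each node's docker0 bridge ends up on a distinct /24 carved out of the 10.2.0.0/16 flannel network.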

2. Create a K8s Application

### Create a test deployment
[root@k8s-master ~]# kubectl run net-test --image=alpine --replicas=2 sleep 360000

### Check the assigned IPs
[root@k8s-master ~]# kubectl get pod -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP          NODE
net-test-5767cb94df-7kqhz   1/1       Running   0          30m       10.2.93.2   10.0.0.27
net-test-5767cb94df-xw6r9   1/1       Running   0          30m       10.2.58.2   10.0.0.25

### Test connectivity
[root@k8s-master ~]# ping 10.2.58.2 -c1

A successful ping reply means the overlay network is working.

### Create the nginx deployment.yaml
[root@k8s-master ~]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10.3
        ports:
        - containerPort: 80

### Create the deployment
[root@k8s-master ~]# kubectl create -f nginx-deployment.yaml
deployment.apps "nginx-deployment" created

### Check the deployment
[root@k8s-master ~]# kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
net-test           2         2         2            2           43m
nginx-deployment   3         3         3            0           53s
[root@k8s-master ~]# kubectl describe deployment nginx-deployment

### Check the pods   # ContainerCreating means the pod is still being created
[root@k8s-master ~]# kubectl get pod
NAME                                READY     STATUS              RESTARTS   AGE
net-test-5767cb94df-7kqhz           1/1       Running             0          45m
net-test-5767cb94df-xw6r9           1/1       Running             0          45m
nginx-deployment-75d56bb955-8tzh6   0/1       ContainerCreating   0          2m
nginx-deployment-75d56bb955-bjmg7   0/1       ContainerCreating   0          2m
nginx-deployment-75d56bb955-w4dks   0/1       ContainerCreating   0          2m
[root@k8s-master ~]# kubectl describe pod nginx-deployment-75d56bb955-8tzh6 | tail -2
  Normal  SuccessfulMountVolume  5m  kubelet, 10.0.0.27  MountVolume.SetUp succeeded for volume "default-token-rpdp6"
  Normal  Pulling                5m  kubelet, 10.0.0.27  pulling image "nginx:1.10.3"   # pulling the nginx image

[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY     STATUS              RESTARTS   AGE       IP          NODE
net-test-5767cb94df-7kqhz           1/1       Running             0          49m       10.2.93.2   10.0.0.27
net-test-5767cb94df-xw6r9           1/1       Running             0          49m       10.2.58.2   10.0.0.25
nginx-deployment-75d56bb955-8tzh6   0/1       ContainerCreating   0          6m        <none>      10.0.0.27
nginx-deployment-75d56bb955-bjmg7   0/1       ContainerCreating   0          6m        <none>      10.0.0.26
nginx-deployment-75d56bb955-w4dks   0/1       ContainerCreating   0          6m        <none>      10.0.0.25

[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
net-test-5767cb94df-7kqhz           1/1       Running   0          1h        10.2.93.2   10.0.0.27
net-test-5767cb94df-xw6r9           1/1       Running   0          1h        10.2.58.2   10.0.0.25
nginx-deployment-75d56bb955-8tzh6   1/1       Running   0          21m       10.2.93.3   10.0.0.27
nginx-deployment-75d56bb955-bjmg7   1/1       Running   0          21m       10.2.83.2   10.0.0.26
nginx-deployment-75d56bb955-w4dks   1/1       Running   0          21m       10.2.58.3   10.0.0.25

### Test connectivity
[root@k8s-master ~]# curl -I 10.2.93.3 -s | awk 'NR==1{print $2}'
200
[root@k8s-master ~]# curl -I 10.2.83.2 -s | awk 'NR==1{print $2}'
200
[root@k8s-master ~]# curl -I 10.2.58.3 -s | awk 'NR==1{print $2}'
200

### Update the deployment   # --record logs the command, which makes rollbacks easier
[root@k8s-master ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.12.2 --record
deployment.apps "nginx-deployment" image updated

### Check the updated deployment
[root@k8s-master ~]# kubectl get deployment -o wide
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
net-test           2         2         2            2           2h    net-test     alpine         run=net-test
nginx-deployment   3         3         3            3           1h    nginx        nginx:1.12.2   app=nginx

### Check the rollout history of a specific revision
[root@k8s-master ~]# kubectl rollout history deployment/nginx-deployment --revision=1
deployments "nginx-deployment" with revision #1
Pod Template:
  Labels:       app=nginx
                pod-template-hash=3181266511
  Containers:
   nginx:
    Image:      nginx:1.10.3
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

### Quickly roll back to the previous revision
[root@k8s-master ~]# kubectl rollout undo deployment/nginx-deployment
deployment.apps "nginx-deployment"

### Create the nginx service.yaml
[root@k8s-master ~]# vim nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

### Create the service
[root@k8s-master ~]# kubectl create -f nginx-service.yaml
service "nginx-service" created
### Check the service
[root@k8s-master ~]# kubectl get service
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.1.0.1      <none>        443/TCP   18h
nginx-service   ClusterIP   10.1.172.14   <none>        80/TCP    12s

### Test connectivity
[root@k8s-master ~]# curl --head http://10.1.172.14
HTTP/1.1 200 OK
Server: nginx/1.10.3
Date: Mon, 04 Jun 2018 04:58:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 31 Jan 2017 15:01:11 GMT
Connection: keep-alive
ETag: "5890a6b7-264"
Accept-Ranges: bytes
### Check the virtual IP
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 10.0.0.25:6443               Masq    1      0          0
TCP  10.1.172.14:80 rr
  -> 10.2.58.5:80                 Masq    1      0          0
  -> 10.2.83.4:80                 Masq    1      0          0
  -> 10.2.93.5:80                 Masq    1      0          1

### Scale the deployment to 5 replicas
[root@k8s-master ~]# kubectl scale deployment nginx-deployment --replicas 5
deployment.extensions "nginx-deployment" scaled
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
net-test-5767cb94df-7kqhz           1/1       Running   0          2h        10.2.93.2   10.0.0.27
net-test-5767cb94df-xw6r9           1/1       Running   0          2h        10.2.58.2   10.0.0.25
nginx-deployment-75d56bb955-6dfxr   1/1       Running   0          17s       10.2.83.5   10.0.0.26
nginx-deployment-75d56bb955-9jmch   1/1       Running   0          17s       10.2.93.6   10.0.0.27
nginx-deployment-75d56bb955-gssxl   1/1       Running   0          7m        10.2.58.5   10.0.0.25
nginx-deployment-75d56bb955-hkqdc   1/1       Running   0          7m        10.2.93.5   10.0.0.27
nginx-deployment-75d56bb955-s66c4   1/1       Running   0          7m        10.2.83.4   10.0.0.26
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 10.0.0.25:6443               Masq    1      0          0
TCP  10.1.172.14:80 rr
  -> 10.2.58.5:80                 Masq    1      0          0
  -> 10.2.83.4:80                 Masq    1      0          0
  -> 10.2.83.5:80                 Masq    1      0          0
  -> 10.2.93.5:80                 Masq    1      0          0
  -> 10.2.93.6:80                 Masq    1      0          0
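With `--ipvs-scheduler=rr`, the VIP hands consecutive connections to its backends in cyclic order, so after scaling to 5 replicas the load spreads evenly across the pods. A toy illustration of that rotation (the IPs are the backend pods from the ipvsadm listing above; this only models the ordering, not real IPVS behavior):

```shell
# Simulate round-robin over the five backend pod IPs.
backends="10.2.58.5 10.2.83.4 10.2.83.5 10.2.93.5 10.2.93.6"
set -- $backends
count=$#
i=0
for req in 1 2 3 4 5 6 7; do
  idx=$(( i % count + 1 ))          # cycle 1..count
  eval "target=\${$idx}"            # pick the idx-th backend
  echo "request $req -> $target"
  i=$(( i + 1 ))
done
# request 6 wraps back around to 10.2.58.5
```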

