k8s Cluster Installation (Single Node)


1. Environment Overview

1.1 Host List

Role    IP             Hostname  Software installed                                          Notes
Master  10.199.142.31  k8sm1     etcd kube-apiserver kube-controller-manager kube-scheduler  kubectl is installed here
Master  10.199.142.32  k8sm2     etcd                                                        for cluster installs; unused in single-node
Master  10.199.142.33  k8sm3     etcd                                                        for cluster installs; unused in single-node
Node    10.199.142.34  k8sn1     docker kubelet kube-proxy
Node    10.199.142.35  k8sn2     docker kubelet kube-proxy

 

1.2 Working Directories

Path                          Purpose                          Notes
/root/k8s/                    installation files and scripts
/opt/etcd/{cfg,bin,ssl,logs}  etcd install location            cfg: config, bin: binaries, ssl: certificates, logs: logs
/opt/k8s/{cfg,bin,ssl,logs}   kubernetes install location      same subdirectory layout

1.3 Downloads

  • cfssl: https://pkg.cfssl.org/
# Download links used in this install (version 1.2):
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
  • etcd: https://github.com/etcd-io/etcd/releases
# Download link used in this install (version 3.4.15):
https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz
  • k8s: https://github.com/kubernetes/kubernetes/releases     (tip: open the CHANGELOG for the corresponding version)
# Download link used in this install (version 1.20.4):
https://storage.googleapis.com/kubernetes-release/release/v1.20.4/kubernetes-server-linux-amd64.tar.gz

 

Summary of downloaded files; place them in /root/k8s/ on the deployment server:

cfssl_linux-amd64
cfssl-certinfo_linux-amd64
cfssljson_linux-amd64
etcd-v3.4.15-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz

 

 

2. Host Preparation

(Run on ALL nodes unless otherwise noted!)

2.1 Set hostnames

(Run the matching command on the corresponding host)

hostnamectl set-hostname k8sm1
hostnamectl set-hostname k8sm2
hostnamectl set-hostname k8sm3
hostnamectl set-hostname k8sn1
hostnamectl set-hostname k8sn2

 

2.2 Update /etc/hosts

cat >> /etc/hosts <<EOF
10.199.142.31 k8sm1
10.199.142.32 k8sm2
10.199.142.33 k8sm3
10.199.142.34 k8sn1
10.199.142.35 k8sn2
EOF

 

2.3 Set the time zone and sync time

timedatectl set-timezone Asia/Shanghai
# Install ntpdate first if missing: yum install ntpdate -y
ntpdate pool.ntp.org

 

2.4 Disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

 

2.5 Disable the firewall

systemctl disable firewalld
systemctl stop firewalld

 

2.6 Disable swap

swapoff -a
vim /etc/fstab   # comment out or delete the swap line to disable it permanently

 

2.7 SSH trust

(Enable passwordless ssh from k8sm1 to every machine!)

# Run on the master node only; used for distributing files and running remote commands
NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33" "10.199.142.34" "10.199.142.35")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${node_ip}
done
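The loop above assumes an SSH keypair already exists at ~/.ssh/id_rsa. If it does not, generate one first (a sketch; -N '' creates a key with no passphrase):

```shell
# Create an RSA keypair for passwordless SSH if one is not already present
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
ls ~/.ssh/id_rsa.pub
```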

 

2.8 Install docker

(Install on the node machines)

# Download the yum repo file
wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install docker-ce, or pin a specific docker-ce version (old docker releases cannot support new kubernetes versions)
yum -y install docker-ce
# Or install a pinned version (pick one of the two)
yum -y install docker-ce-18.09.1-3.el7

# Configure a Docker registry mirror
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <<EOF
{
 "exec-opts":["native.cgroupdriver=systemd"],
 "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
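A malformed daemon.json will keep docker from starting, so it is worth validating the file before restarting the daemon; `python3 -m json.tool` is an easy check (assumes python3 is present on the host):

```shell
# Prints the parsed JSON; fails with a parse error and non-zero exit if invalid
python3 -m json.tool /etc/docker/daemon.json
```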

# Start docker and enable it at boot
systemctl start docker && systemctl enable docker

 

2.9 Pass bridged IPv4 traffic to iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_nonlocal_bind = 1    
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_port_range = 10000 65000
fs.file-max = 2000000
vm.swappiness = 0
EOF

# Load the br_netfilter module
modprobe br_netfilter

# Confirm it is loaded
lsmod | grep br_netfilter

# Apply the settings
sysctl --system
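After `sysctl --system`, the key toggles can be confirmed by reading /proc directly (no root needed); the bridge entries only exist once br_netfilter is loaded:

```shell
# 1 means enabled
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null || echo "br_netfilter not loaded"
```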

 

2.10 Create the installation directories

(Run on k8sm1)

# Create the k8s install directories on all nodes
NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33" "10.199.142.34" "10.199.142.35")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /root/k8s/ssl && mkdir -p /opt/k8s/{cfg,bin,ssl,logs}"
done

# Create the etcd install directory on the master nodes
NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/etcd/{cfg,bin,ssl}"
done

 

 

3. Create the CA Certificate and Key

3.1 Install the certificate tooling

Install script (/root/k8s/install_cfssl.sh):

# install_cfssl.sh
# cfssl: generates certificates
# cfssljson: produces certificate files from cfssl's JSON output
# cfssl-certinfo: displays certificate information

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

echo ">>> install result"
ls -trl /usr/local/bin/cfssl*

 

Install cfssl:

[root@k8sm1 k8s]# sh install_cfssl.sh

 

3.2 Create the root certificate

3.2.1 Create the CA config file

Create the default file:

[root@k8sm1 ssl]# cfssl print-defaults config > ca-config.json

默認創建出的文件內容(需要修改):

View Code

修改后內容:

# Final version: /root/k8s/ssl/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
  • ca-config.json: can define multiple profiles with different expiry times and usage scenarios; a specific profile is chosen later when signing certificates;
  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
  • server auth: a client may use this CA to verify certificates presented by servers;
  • client auth: a server may use this CA to verify certificates presented by clients;
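For reference, the 87600h expiry used above works out to 10 years:

```shell
# 87600 hours / 24 hours per day / 365 days per year = 10 years
echo $(( 87600 / 24 / 365 ))   # prints 10
```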

 

3.2.2 Create the CA certificate signing request file

Create the default file:

[root@k8sm1 k8s]# cfssl print-defaults csr > ca-csr.json

Modify the defaults; the final version follows:

# Final version: /root/k8s/ssl/ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
    ]
}
  • CN: Common Name; kube-apiserver extracts this field from the certificate as the request's user name (User Name); browsers use it to check whether a site is legitimate;
  • O: Organization; kube-apiserver extracts this field as the group (Group) the requesting user belongs to;

 

3.2.3 Generate the certificate (ca.pem) and key (ca-key.pem)

[root@k8sm1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Result:

The generated files are stored in /root/k8s/ssl on k8sm1:

-rw-r--r-- 1 root root  292 Mar 17 20:40 ca-config.json
-rw-r--r-- 1 root root  254 Mar 17 20:40 ca-csr.json
# ca.csr is the certificate signing request
-rw-r--r-- 1 root root 1001 Mar 17 20:42 ca.csr
# ca-key.pem is the CA private key
-rw------- 1 root root 1679 Mar 17 20:42 ca-key.pem
# ca.pem is the CA certificate
-rw-r--r-- 1 root root 1359 Mar 17 20:42 ca.pem
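To double-check what was generated, the certificate's subject and validity window can be printed with openssl (cfssl-certinfo works too). The demo below self-signs a throwaway certificate so the commands are runnable anywhere; on the real host, point `-in` at /root/k8s/ssl/ca.pem instead:

```shell
# Create a throwaway self-signed cert with the same CN/O as our CA request
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=kubernetes/O=k8s" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-ca.pem
# Print subject and validity; run the same command against ca.pem to verify it
openssl x509 -in /tmp/demo-ca.pem -noout -subject -dates
```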

 

 

4. Cluster Environment Variables

4.1 Preset network ranges and variables

Environment variable script (/root/k8s/env.sh):

export PATH=/opt/k8s/bin:/opt/etcd/bin:$PATH

# Token used for TLS bootstrapping;
# generate one with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="555898ed9c5d0ba16cf76fec2c8f94ef"

# Service CIDR: in-cluster IP:Port range
SERVICE_CIDR="10.200.0.0/16"
# Pod CIDR (Cluster CIDR)
CLUSTER_CIDR="10.100.0.0/16"

# NodePort range
NODE_PORT_RANGE="30000-32766"

# IPs of all etcd cluster machines
ECTD_IPS="10.199.142.31,10.199.142.32,10.199.142.33"

# etcd client endpoint list
ETCD_ENDPOINTS="https://10.199.142.31:2379,https://10.199.142.32:2379,https://10.199.142.33:2379"

# IPs and ports for etcd peer communication
ECTD_NODES="etcd01=https://10.199.142.31:2380,etcd02=https://10.199.142.32:2380,etcd03=https://10.199.142.33:2380"

# flanneld network config prefix
FLANNEL_ETCD_PREFIX="/kubernetes/network"

# kubernetes service IP (pre-allocated; generally the first IP in SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.200.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.200.0.2"

# Cluster DNS domain
CLUSTER_DNS_DOMAIN="cluster.local"

# MASTER API Server address
#MASTER_URL="k8s-api.virtual.local"
MASTER_URL="10.199.142.31"
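Two sanity checks on the values above: the bootstrap token is 16 random bytes rendered as 32 hex characters, and CLUSTER_KUBERNETES_SVC_IP is simply the first usable address of SERVICE_CIDR (the awk trick below is a sketch that assumes the network address ends in .0):

```shell
# Regenerate a token the same way the comment in env.sh suggests
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token: $TOKEN"

# First usable address of the service CIDR: network address + 1
SERVICE_CIDR="10.200.0.0/16"
FIRST_IP=$(echo "${SERVICE_CIDR%/*}" | awk -F. '{printf "%d.%d.%d.%d", $1, $2, $3, $4+1}')
echo "first service IP: $FIRST_IP"   # prints 10.200.0.1
```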

 

4.2 Distribute the environment variable file

(Run on k8sm1)

NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33" "10.199.142.34" "10.199.142.35")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/k8s/env.sh root@${node_ip}:/root/k8s/
done

 

 

5. Deploy etcd

5.1 Unpack the etcd binaries

[root@k8sm1 k8s]# tar -xzvf etcd-v3.4.15-linux-amd64.tar.gz
[root@k8sm1 k8s]# cd etcd-v3.4.15-linux-amd64
[root@k8sm1 etcd-v3.4.15-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/

 

5.2 Create the etcd certificate

  • Create the etcd certificate signing request file

cat > /root/k8s/ssl/etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.199.142.31",
    "10.199.142.32",
    "10.199.142.33"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Tip: the hosts field lists the etcd cluster node IPs authorized to use this certificate.

 

  • Generate the certificate

[root@k8sm1 ssl]# cd /root/k8s/ssl
[root@k8sm1 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd 
[root@k8sm1 ssl]# cp ca*.pem etcd*.pem /opt/etcd/ssl/

 

5.3 Write the etcd install script

etcd install script (/root/k8s/install_etcd.sh):

#!/usr/bin/bash
# example: ./install_etcd.sh etcd01 10.199.142.31

# Arguments: this member's etcd name and its IP
ETCD_NAME=$1
ETCD_IP=$2
# etcd install path
WORK_DIR=/opt/etcd
# data path
DATA_DIR=/var/lib/etcd/default.etcd

# Load preset variables
source /root/k8s/env.sh

# Write this node's config file
cat >$WORK_DIR/cfg/etcd.conf <<EOF
# [Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="${DATA_DIR}"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

# [Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ECTD_NODES}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Write the systemd unit
cat >/usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd.conf
ExecStart=${WORK_DIR}/bin/etcd \
--cert-file=${WORK_DIR}/ssl/etcd.pem \
--key-file=${WORK_DIR}/ssl/etcd-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/etcd.pem \
--peer-key-file=${WORK_DIR}/ssl/etcd-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd, enable at boot, and restart
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
  • etcd's working/data directory is /var/lib/etcd; create it before starting the service;
  • For secure communication, specify etcd's certificate and key (cert-file, key-file), the peer certificate, key, and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the client CA (trusted-ca-file);
  • When --initial-cluster-state is new, the --name value must appear in the --initial-cluster list;

 

5.4 Distribute and start the etcd service

  • Run the script (the service will not come up yet and will report errors: it is a cluster and cannot see the other members)
[root@hz-yf-xtax-it-199-142-31 k8s]# sh install_etcd.sh etcd01 10.199.142.31
  • Distribute to the other etcd nodes
NODE_IPS=("10.199.142.31" "10.199.142.32" "10.199.142.33")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp -r /opt/etcd/ root@${node_ip}:/opt/
    scp /usr/lib/systemd/system/etcd.service root@${node_ip}:/usr/lib/systemd/system/
done
  • Edit each node's config (/opt/etcd/cfg/etcd.conf)

Replace the IPs with this node's own IP:

# [Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.199.142.32:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.199.142.32:2379"

# [Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.199.142.32:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.199.142.32:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.199.142.31:2380,etcd02=https://10.199.142.32:2380,etcd03=https://10.199.142.33:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Node three is configured the same way.

  • Start etcd

On each node, run: systemctl start etcd

 

5.5 Verify etcd:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://10.199.142.31:2379,https://10.199.142.32:2379,https://10.199.142.33:2379" endpoint health --write-out=table

Result:

+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://10.199.142.32:2379 |   true | 11.890704ms |       |
| https://10.199.142.31:2379 |   true | 11.739482ms |       |
| https://10.199.142.33:2379 |   true | 12.842723ms |       |
+----------------------------+--------+-------------+-------+

 

 

6. Install kubectl

kubectl reads the kube-apiserver address, certificates, and user name from ~/.kube/config by default; that file must be configured correctly for kubectl to work.
Copy the downloaded kubectl binary and the generated ~/.kube/config to every machine that needs to run kubectl.

Many readers ask which node this runs on. kubectl is just a command-line client that talks to kube-apiserver, so it can be installed on any node, master or worker. For example, install it on the master first so kubectl works there; to use it on a worker node (and the installation process will certainly need it), copy the kubectl binary and ~/.kube/config from the master to that node.

 

6.1 Install the kubectl binary

[root@k8sm1 k8s]# tar -xvf kubernetes-server-linux-amd64.tar.gz
[root@k8sm1 k8s]# cd kubernetes/server/bin/
[root@k8sm1 bin]# cp kubectl /usr/local/bin/

 

6.2 Create the admin certificate

6.2.1 Create the admin certificate signing request file

kubectl talks to kube-apiserver's secure port, which requires a TLS certificate and key. Create the admin certificate signing request:

cat >/root/k8s/ssl/admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
  • kube-apiserver later uses RBAC to authorize requests from clients (kubelet, kube-proxy, Pods);
  • kube-apiserver predefines some RBAC RoleBindings; for example, cluster-admin binds Group system:masters to Role cluster-admin, which grants permission to call all kube-apiserver APIs;
  • O sets the certificate's Group to system:masters; because the certificate is CA-signed, authentication succeeds, and because the group is the pre-authorized system:masters, access to all APIs is granted;
  • hosts is left as an empty list;

 

6.2.2 Generate the certificate

[root@k8sm ssl]# cfssl gencert -ca=/root/k8s/ssl/ca.pem -ca-key=/root/k8s/ssl/ca-key.pem -config=/root/k8s/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin 

 

6.2.3 Copy the certificate

[root@k8sm ssl]# cp admin*.pem /opt/k8s/ssl/

 

6.3 Create the kubectl setup script

kubectl setup script (/root/k8s/install_kubectl.sh):

# Load preset variables
source /root/k8s/env.sh

# Default apiserver secure port: 6443
KUBE_APISERVER="https://${MASTER_URL}:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

# Set client credentials
kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/opt/k8s/ssl/admin-key.pem \
  --token=${BOOTSTRAP_TOKEN}

# Set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

# Use the context by default
kubectl config use-context kubernetes
  • The admin.pem certificate's O field is system:masters; kube-apiserver's predefined RoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants permission to call the kube-apiserver APIs
  • The generated kubeconfig is saved to ~/.kube/config

 

6.4 Generate the config file

[root@k8sm k8s]# sh install_kubectl.sh

 

Tip: to use kubectl from other machines:

  • copy the kubectl binary;
  • copy ~/.kube/config into ~/.kube/ on the machine that will run kubectl;

 

Verify with: kubectl get cs

 

7. Install the Master Node

The kubernetes master consists of these components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

For now the three components are deployed on the same machine (an HA master comes later):

  • kube-scheduler, kube-controller-manager, and kube-apiserver are tightly coupled;
  • only one kube-scheduler and one kube-controller-manager process can be active at a time; running several requires leader election;

The master communicates with Pods on the nodes over the Pod network, so the Flannel network must also be deployed on the master.

 

7.1 Copy the binaries

[root@k8sm1 ~]# cd /root/k8s/kubernetes/server/bin
[root@k8sm1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler /opt/k8s/bin/

 

7.2 Create the kubernetes certificate

  • Create the kube-apiserver certificate signing request file

cat > /root/k8s/ssl/kube-apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.200.0.1",
    "10.199.142.31",
    "10.199.142.32",
    "10.199.142.33",
    "10.199.142.34",
    "10.199.142.35",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • If hosts is not empty, it must list the IPs or domain names authorized to use the certificate, so it includes the master node IPs and the apiserver's internal service domain names
  • It must also include the Service Cluster IP that kube-apiserver registers for the built-in kubernetes service, generally the first IP of the network given by --service-cluster-ip-range (10.200.0.1 in this setup)

 

7.2.2 Generate the certificate

cfssl gencert -ca=/root/k8s/ssl/ca.pem   -ca-key=/root/k8s/ssl/ca-key.pem   -config=/root/k8s/ssl/ca-config.json   -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

 

7.2.3 Distribute the certificate

cp kube-apiserver*.pem /opt/k8s/ssl/

 

7.3 Configure and start kube-apiserver

7.3.1 Create the token file

Create the client token file used by kube-apiserver. When kubelet first starts, it sends a TLS bootstrapping request to kube-apiserver, which checks whether the token in the request matches the one in its token.csv; if it matches, kube-apiserver automatically issues a certificate and key for that kubelet.

source /root/k8s/env.sh
# Fields: token, user name, uid, group
cat >/opt/k8s/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

 

7.3.2 Create the kube-apiserver install script

kube-apiserver install script (/root/k8s/install_kube-apiserver.sh):

#!/usr/bin/bash
# example: ./install_kube-apiserver.sh 10.199.142.31
NODE_IP=$1

K8S_DIR=/opt/k8s
ETCD_DIR=/opt/etcd

source /root/k8s/env.sh

cat >${K8S_DIR}/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=${K8S_DIR}/logs \
--etcd-servers=${ETCD_ENDPOINTS} \
--bind-address=${NODE_IP} \
--secure-port=6443 \
--advertise-address=${NODE_IP} \
--allow-privileged=true \
--service-cluster-ip-range=${SERVICE_CIDR} \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=${K8S_DIR}/cfg/token.csv \
--service-node-port-range=${NODE_PORT_RANGE} \
--kubelet-client-certificate=${K8S_DIR}/ssl/kube-apiserver.pem \
--kubelet-client-key=${K8S_DIR}/ssl/kube-apiserver-key.pem \
--tls-cert-file=${K8S_DIR}/ssl/kube-apiserver.pem  \
--tls-private-key-file=${K8S_DIR}/ssl/kube-apiserver-key.pem \
--client-ca-file=${K8S_DIR}/ssl/ca.pem \
--service-account-key-file=${K8S_DIR}/ssl/ca-key.pem \
--service-account-issuer=api \
--service-account-signing-key-file=${K8S_DIR}/ssl/kube-apiserver-key.pem \
--etcd-cafile=${ETCD_DIR}/ssl/ca.pem \
--etcd-certfile=${ETCD_DIR}/ssl/etcd.pem \
--etcd-keyfile=${ETCD_DIR}/ssl/etcd-key.pem \
--requestheader-client-ca-file=${K8S_DIR}/ssl/ca.pem \
--proxy-client-cert-file=${K8S_DIR}/ssl/kube-apiserver.pem \
--proxy-client-key-file=${K8S_DIR}/ssl/kube-apiserver-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=${K8S_DIR}/logs/k8s-audit.log"
EOF

# Write the systemd unit
cat >/usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=${K8S_DIR}/cfg/kube-apiserver.conf
ExecStart=${K8S_DIR}/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd, enable at boot, and restart
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

 

7.3.3 Start the apiserver

[root@k8sm k8s]# sh install_kube-apiserver.sh 10.199.142.31

 

7.3.4 Verify the service

systemctl status kube-apiserver

 

7.4 Configure and start kube-controller-manager

Generate the certificate:

cat > /root/k8s/ssl/kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cd /root/k8s/ssl

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

cp kube-controller-manager*.pem /opt/k8s/ssl/

 

Generate the kubeconfig:

K8S_DIR=/opt/k8s

source /root/k8s/env.sh

KUBE_CONFIG="${K8S_DIR}/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://${MASTER_URL}:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=${K8S_DIR}/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=${K8S_DIR}/ssl/kube-controller-manager.pem \
  --client-key=${K8S_DIR}/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

 

 

7.4.1 Create the kube-controller-manager install script

kube-controller-manager install script (/root/k8s/install_kube-controller-manager.sh):

#!/usr/bin/bash
# example: ./install_kube-controller-manager.sh
K8S_DIR=/opt/k8s

source /root/k8s/env.sh

cat > ${K8S_DIR}/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=${K8S_DIR}/logs \\
--leader-elect=true \\
--kubeconfig=${K8S_DIR}/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=${CLUSTER_CIDR} \\
--service-cluster-ip-range=${SERVICE_CIDR} \\
--cluster-signing-cert-file=${K8S_DIR}/ssl/ca.pem \\
--cluster-signing-key-file=${K8S_DIR}/ssl/ca-key.pem  \\
--root-ca-file=${K8S_DIR}/ssl/ca.pem \\
--service-account-private-key-file=${K8S_DIR}/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=${K8S_DIR}/cfg/kube-controller-manager.conf
ExecStart=${K8S_DIR}/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager on the same machine
  • --master=http://${MASTER_URL}:8080: talks to kube-apiserver over http (the insecure port); the :8080 can only be dropped after the haproxy setup below succeeds
  • --cluster-cidr is the Pod CIDR of the cluster; it must be routable between nodes (guaranteed by flanneld)
  • --service-cluster-ip-range is the Service CIDR of the cluster; it must NOT be routable between nodes and must match the kube-apiserver setting
  • --cluster-signing-* name the certificate and key used to sign the certificates created for TLS bootstrap
  • --root-ca-file is used to verify the kube-apiserver certificate; when set, the CA certificate is placed into Pods' ServiceAccount
  • --leader-elect=true: in a multi-master deployment, elects the one active kube-controller-manager process

 

7.4.2 Start kube-controller-manager

[root@k8sm k8s]# sh install_kube-controller-manager.sh

 

7.4.3 Verify the service

systemctl status kube-controller-manager

 

7.5 Configure and start kube-scheduler

cat > /root/k8s/ssl/kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cd /root/k8s/ssl

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

cp kube-scheduler*.pem /opt/k8s/ssl/

 

Generate the kubeconfig:

K8S_DIR=/opt/k8s
source /root/k8s/env.sh

KUBE_CONFIG="${K8S_DIR}/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://${MASTER_URL}:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=${K8S_DIR}/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=${K8S_DIR}/ssl/kube-scheduler.pem \
  --client-key=${K8S_DIR}/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

 

 

7.5.1 Create the kube-scheduler install script

kube-scheduler install script (/root/k8s/install_kube-scheduler.sh):

#!/usr/bin/bash
# example: ./install_kube-scheduler.sh
K8S_DIR=/opt/k8s
source /root/k8s/env.sh

cat > ${K8S_DIR}/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=${K8S_DIR}/logs \
--leader-elect \
--kubeconfig=${K8S_DIR}/cfg/kube-scheduler.kubeconfig \
--bind-address=127.0.0.1"
EOF

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=${K8S_DIR}/cfg/kube-scheduler.conf
ExecStart=${K8S_DIR}/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
  • --address must be 127.0.0.1, because kube-apiserver expects scheduler and controller-manager on the same machine
  • --master=http://${MASTER_URL}:8080: talks to kube-apiserver over http (the insecure port); the :8080 can only be dropped after haproxy below is up
  • --leader-elect=true: in a multi-master deployment, elects the one active kube-scheduler process

 

7.5.2 Start kube-scheduler

sh install_kube-scheduler.sh 

 

7.5.3 Verify the kube-scheduler service

systemctl status kube-scheduler

 

7.6 Verify the master node

[root@k8sm k8s]# /opt/k8s/bin/kubectl get componentstatuses

Result: all components should report a Healthy status.

 

7.7 Create the bootstrap role

Create the bootstrap role binding, granting permission to request certificate signing from the apiserver:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

 

 

 

8. Deploy the Worker Nodes

8.1 Distribute the binaries

NODE_IPS=("10.199.142.34" "10.199.142.35")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/k8s/kubernetes/server/bin/kubelet root@${node_ip}:/opt/k8s/bin/
    scp /root/k8s/kubernetes/server/bin/kube-proxy root@${node_ip}:/opt/k8s/bin/
done

 

8.2 Configure docker

8.2.1 Modify the systemd unit file (skip this for now)

vim /usr/lib/systemd/system/docker.service

# Original:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

# Modified:
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=info $DOCKER_NETWORK_OPTIONS

 

8.2.2 Restart docker

systemctl daemon-reload
systemctl restart docker
 
        

8.3 Install kubelet

8.3.1 Configure kubelet

K8S_DIR=/opt/k8s
source /root/k8s/env.sh

KUBE_CONFIG="/root/k8s/bootstrap.kubeconfig"
KUBE_APISERVER="https://${MASTER_URL}:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=${K8S_DIR}/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}


scp /root/k8s/bootstrap.kubeconfig root@10.199.142.34:/opt/k8s/cfg/

scp /root/k8s/ssl/ca*.pem root@10.199.142.34:/opt/k8s/ssl/

 

The script below (/root/k8s/gen_bootstrap-kubeconfig.sh) is the older version, superseded by the commands above:

#!/usr/bin/bash

source /root/k8s/env.sh
KUBE_APISERVER="https://${MASTER_URL}:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the context by default
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

NODE_IPS=("10.199.142.34" "10.199.142.35")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/k8s/bootstrap.kubeconfig root@${node_ip}:/opt/k8s/cfg/
done

 

8.3.2 Generate the config file

[root@k8sm k8s]# sh gen_bootstrap-kubeconfig.sh

 

 

8.3.3 Create the kubelet install script

kubelet script (/root/k8s/install_kubelet.sh):

#!/usr/bin/bash

NODE_NAME=$1

K8S_DIR=/opt/k8s
source /root/k8s/env.sh

cat >${K8S_DIR}/cfg/kubelet.conf <<EOF
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=${K8S_DIR}/logs \
--hostname-override=${NODE_NAME} \
--network-plugin=cni \
--kubeconfig=${K8S_DIR}/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=${K8S_DIR}/cfg/bootstrap.kubeconfig \
--config=${K8S_DIR}/cfg/kubelet-config.yml \
--cert-dir=${K8S_DIR}/ssl \
--pod-infra-container-image=k8s.gcr.io/pause:3.2"
EOF

cat >${K8S_DIR}/cfg/kubelet-config.yml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
  - ${CLUSTER_DNS_SVC_IP}
clusterDomain: ${CLUSTER_DNS_DOMAIN}
failSwapOn: false

# Authentication
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: ${K8S_DIR}/ssl/ca.pem

# Authorization
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s

# Node resource reservation (eviction thresholds)
evictionHard:
  imagefs.available: 15%
  memory.available: 1G
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s

# Image garbage-collection policy
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s

# Certificate rotation
rotateCertificates: true # rotate the kubelet client certificate
featureGates:
  RotateKubeletServerCertificate: true
  RotateKubeletClientCertificate: true

maxOpenFiles: 1000000
maxPods: 110
EOF

cat >/usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-${K8S_DIR}/cfg/kubelet.conf
ExecStart=${K8S_DIR}/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet 
systemctl restart kubelet

 

Approve the node certificate requests:

kubectl get csr

NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-7h7nd 49s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
csr-xfk22 18s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

 

[root@hz-yf-xtax-it-199-142-31 cfg]# kubectl certificate approve csr-xfk22
certificatesigningrequest.certificates.k8s.io/csr-xfk22 approved

 

kubectl get node

NAME STATUS ROLES AGE VERSION
k8sn1 NotReady <none> 11s v1.20.4

 

8.4 Install kube-proxy

8.4.1 Create the kube-proxy certificate

cat > /root/k8s/ssl/kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

 

8.4.2 Generate the certificate

cfssl gencert -ca=/root/k8s/ssl/ca.pem -ca-key=/root/k8s/ssl/ca-key.pem -config=/root/k8s/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy 

 

8.4.3 Distribute the certificate

scp /root/k8s/ssl/kube-proxy*.pem root@10.199.142.34:/opt/k8s/ssl/

 

8.4.4 Configure kube-proxy

/root/k8s/gen_kube-proxyconfig.sh:

Current version:

K8S_DIR=/opt/k8s
source /root/k8s/env.sh

KUBE_CONFIG="/root/k8s/kube-proxy.kubeconfig"
KUBE_APISERVER="https://${MASTER_URL}:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=${K8S_DIR}/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=${K8S_DIR}/ssl/kube-proxy.pem \
  --client-key=${K8S_DIR}/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

scp /root/k8s/kube-proxy.kubeconfig root@10.199.142.34:/opt/k8s/cfg/

 

The older version of this script, superseded by the commands above:

#!/usr/bin/bash

source /root/k8s/env.sh
KUBE_APISERVER="https://${MASTER_URL}:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/root/k8s/ssl/kube-proxy.pem \
  --client-key=/root/k8s/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Use the context by default
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

NODE_IPS=("10.199.142.34" "10.199.142.35")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/k8s/kube-proxy.kubeconfig root@${node_ip}:/opt/k8s/cfg/
done

 

8.4.5 kube-proxy install script

/root/k8s/install_kube-proxy.sh:

#!/usr/bin/bash

K8S_DIR=/opt/k8s
source /root/k8s/env.sh

cat > ${K8S_DIR}/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=${K8S_DIR}/logs \\
--config=${K8S_DIR}/cfg/kube-proxy-config.yml"
EOF

cat > ${K8S_DIR}/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: ${K8S_DIR}/cfg/kube-proxy.kubeconfig
hostnameOverride: k8sn1   # set this to the node's own hostname
clusterCIDR: ${CLUSTER_CIDR}
EOF

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=${K8S_DIR}/cfg/kube-proxy.conf
ExecStart=${K8S_DIR}/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

 

 

 

9. Install calico

 curl https://docs.projectcalico.org/manifests/calico.yaml -O

 

Changes relative to the default manifest:

diff calico.yaml calico.yaml.default
3660,3662c3660
<               value: "Never"
<             - name: IP_AUTODETECTION_METHOD
<               value: "interface=en.*"
---
>               value: "Always"
3687,3688c3685,3686
<             - name: CALICO_IPV4POOL_CIDR
<               value: "10.100.0.0/16"
---
>             # - name: CALICO_IPV4POOL_CIDR
>             #   value: "192.168.0.0/16"
3768c3766
<             path: /opt/k8s/bin
---
>             path: /opt/cni/bin

 
10. Deploy the kubedns add-on

11. Deploy the dashboard add-on

 

A problem encountered:
[root@k8sm-1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/k8s/ssl/ca.pem --cert-file=/opt/k8s/ssl/flanneld.pem --key-file=/opt/k8s/ssl/flanneld-key.pem --endpoints=${ETCD_ENDPOINTS} get ${FLANNEL_ETCD_PREFIX}/config
Error: client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint
Solution:
https://blog.csdn.net/sonsunny/article/details/105226586

 

 

end


