k8s cluster deployment
Environment:
CentOS 7.x, k8s 1.12, Docker 18.xx-ce, etcd 3.x, flannel 0.10 (gives all containers cross-host network access; network routes are stored in etcd)
Option 1, minikube: suited to day-to-day development
Option 2, kubeadm
Problems:
(1) the certificates it generates are only valid for one year
(2) the tool was still in a testing phase at the time
Option 3, binary deployment: recommended
Latest stable version: v1.12.3
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#v1123
Cluster topology options:
Single-master cluster:
if the master goes down, the whole cluster goes down
Multi-master cluster: a load balancer (LB) sits in front; every node connects to the LB, which forwards requests to an apiserver, and the apiserver then performs the corresponding operations
Here we deploy the single-master layout first:
Bring up three virtual machines
Start from a single master, then expand to multiple masters later
1. One master, two nodes
2. Install etcd on all three machines to form a cluster (three members tolerate the loss of one; the official recommendation is five members, tolerating the loss of two)
3. Install cfssl
4. Self-sign certificates using openssl or cfssl; we use cfssl here (simpler)
cfssl: generates certificates
cfssljson: takes cfssl's JSON output and writes the certificate files
cfssl-certinfo: shows information about a generated certificate
The hosts field in the script lists the IPs of the machines running etcd
5. Issue SSL certificates for etcd
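The fault-tolerance numbers in step 2 follow from etcd's majority (quorum) rule: a cluster of n members stays available as long as a majority survives, so it tolerates (n-1)/2 failures. A quick sketch:

```shell
# Majority quorum: an n-member etcd cluster needs n/2+1 live members,
# so it tolerates (n-1)/2 failures (integer division).
tolerance() { echo $(( ($1 - 1) / 2 )); }
tolerance 3   # 3 members tolerate 1 failure
tolerance 5   # 5 members tolerate 2 failures
```

This is also why even-sized clusters buy nothing: 4 members tolerate the same single failure as 3.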
The three hosts are:
192.168.20.11  master  kube-apiserver kube-controller-manager kube-scheduler etcd
192.168.20.12  node    kubelet kube-proxy docker flannel etcd
192.168.20.13  node    kubelet kube-proxy docker flannel etcd
Before installing etcd, we first create a self-signed CA certificate.
install_cfssl.sh
#######
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
etcd_cert.sh: writes ca-config.json, ca-csr.json and server-csr.json, then runs cfssl gencert to sign the CA and the etcd server certificate; its full contents appear in the step-by-step section below.
sh install_cfssl.sh
sh etcd_cert.sh   # running it produces the files listed below
[root@hu-001 etcd-cert]# ll
total 44
-rw-r--r-- 1 root root  287 Dec  4 04:01 ca-config.json
-rw-r--r-- 1 root root  956 Dec  4 04:01 ca.csr
-rw-r--r-- 1 root root  209 Dec  4 04:01 ca-csr.json
-rw------- 1 root root 1679 Dec  4 04:01 ca-key.pem
-rw-r--r-- 1 root root 1265 Dec  4 04:01 ca.pem
-rw-r--r-- 1 root root 1088 Aug 27 09:51 etcd-cert.sh
-rw-r--r-- 1 root root 1013 Dec  4 04:01 server.csr
-rw-r--r-- 1 root root  293 Dec  4 04:01 server-csr.json
-rw------- 1 root root 1679 Dec  4 04:01 server-key.pem
-rw-r--r-- 1 root root 1338 Dec  4 04:01 server.pem
Then upload the etcd release package we downloaded to the servers.
What etcd's startup parameters mean
What do ports 2379 and 2380 stand for?
2379: client (data) port
2380: peer (cluster) port
Once the etcd cluster is deployed, use etcdctl to check whether each member is healthy
Install Docker on the node machines
Deploying the Flannel container network
Overlay Network
VXLAN
Flannel
Calico (a network architecture used by large companies: https://www.cnblogs.com/netonline/p/9720279.html)
How the Flannel network works
First, write the subnet range allocated to flannel into etcd, for flannel to use
The configured subnet must not overlap the hosts' subnet; write both the subnet and the backend network type into etcd
After starting Flannel, restart Docker so that containers and flannel end up on the same subnet
Then start one Docker container on each of two hosts and ping one container from the other: although they are on different subnets, the ping succeeds (that is Flannel at work)
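The coordination works through a file flanneld writes and Docker then reads. For reference, /run/flannel/subnet.env typically looks like the fragment below (the concrete values are illustrative, not taken from this deployment):

```shell
# /run/flannel/subnet.env, as written by flanneld (values illustrative)
FLANNEL_NETWORK=172.17.0.0/16    # the overall flannel network
FLANNEL_SUBNET=172.17.54.1/24    # this host's per-node subnet
FLANNEL_MTU=1450                 # VXLAN encapsulation lowers the MTU below 1500
FLANNEL_IPMASQ=false
```

mk-docker-opts.sh converts these values into dockerd options, so containers get their addresses from this host's flannel subnet.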
Next, deploy the k8s components
master:
the apiserver must be deployed first; the other two components can follow in any order
First, self-sign a certificate for kube-apiserver
k8s_cert.sh: signs the cluster CA, the apiserver server certificate, the admin client certificate and the kube-proxy client certificate; its full contents appear in the step-by-step section below. Notes on server-csr.json: do not modify 10.0.0.1 or 127.0.0.1 in hosts (Kubernetes uses them internally); 192.168.20.11 stands for the master IP, and LB/VIP addresses go here too; do not modify "O": "k8s" or "OU": "System" in names.
Deploying the apiserver component
apiserver.sh (full script in the step-by-step section below) renders the kube-apiserver options file and its systemd unit. The main flags:
--v=4: log level
--etcd-servers: etcd endpoints
--bind-address: the IP to bind, this master's IP
--secure-port=6443: default secure listen port
--advertise-address: the address advertised to the cluster
--enable-admission-plugins: enables admission control
--enable-bootstrap-token-auth: enables token authentication
--token-auth-file: the token file used for authentication
Note: these annotations must not be left inside the generated options string itself, or the config becomes invalid.
The concrete steps are as follows:
Deploying a k8s single-master cluster
Notes link:
https://www.jianshu.com/p/33b5f47ababc
1. Install the cfssl tools (using the script below)
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@hu-001 tools]# ls -lh /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 9.9M Dec 4 03:57 /usr/local/bin/cfssl
-rwxr-xr-x 1 root root 6.3M Dec 4 03:58 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x 1 root root 2.2M Dec 4 03:58 /usr/local/bin/cfssljson
2. Create the CA certificate and the etcd TLS certificates with cfssl
2.1 Create the CA certificate
mkdir /data/k8s/etcd-cert/    # a dedicated directory for generating the CA certificate
cd /data/k8s/etcd-cert
Create the CA config file:
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
Field notes:
ca-config.json: can define multiple profiles, each with its own expiry time, usage scenarios and so on; a specific profile is selected later when signing a certificate
signing: the certificate can be used to sign other certificates; the generated ca.pem carries CA=TRUE
server auth: a client may use this CA to verify certificates presented by servers
client auth: a server may use this CA to verify certificates presented by clients
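Since profiles is a map, several profiles can coexist and be selected with -profile at signing time. As an illustration only (this deployment uses just the www profile, and the "client" profile name below is hypothetical), a client-only profile would drop server auth:

```shell
# Illustrative ca-config.json with an extra client-only profile
# ("client" is a hypothetical profile name, not used in this deployment)
cat > /tmp/ca-config-demo.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www":    { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"] },
      "client": { "expiry": "87600h", "usages": ["signing", "key encipherment", "client auth"] }
    }
  }
}
EOF
# sanity-check that the file is well-formed JSON
python3 -m json.tool </tmp/ca-config-demo.json >/dev/null && echo "valid JSON"
```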
Create the CA certificate signing request:
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
Field notes:
CN: Common Name; etcd extracts this field from the certificate as the requesting user name. Browsers use this field to check whether a site is legitimate.
O: Organization; etcd extracts this field as the group the requesting user belongs to.
These two fields become important later when Kubernetes runs in RBAC mode, because role permissions for kubelet, admin and so on depend on the certificates being configured correctly; this is covered in the Kubernetes deployment section.
For etcd itself these two fields have no particular significance; just follow the configuration.
Next, generate the CA certificate and private key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This command produces three files: ca.csr, ca-key.pem and ca.pem
3. Create the etcd TLS certificates:
Create the etcd certificate signing request:
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"192.168.20.11",
"192.168.20.12",
"192.168.20.13"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Field notes:
hosts: the IPs of the etcd cluster members (think of it as a trust list); it specifies which IPs are authorized to use this certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
The command above produces three files: server.csr, server-key.pem and server.pem
Copy the TLS files to your usual certificate directory (or leave them in place):
cp *pem /data/etcd/ssl/
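To double-check which IPs a generated certificate actually authorizes, inspect its Subject Alternative Name list (a sketch; cfssl-certinfo -cert server.pem shows the same information):

```shell
# Print the IP SANs embedded in the etcd server certificate; an etcd member
# whose IP is missing from this list will fail TLS verification.
if [ -f server.pem ]; then
  openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
fi
```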
The steps above can also be collected into a single script and run in one go.
A personal suggestion: since this is a cluster (on personal VMs), it is best to disable the firewall and configure time synchronization first.
4. Next, install the etcd service
Upload the downloaded release package to the servers (it can be fetched from https://github.com/etcd-io/etcd/releases)
etcd-v3.3.10-linux-amd64.tar.gz
mkdir -p /data/etcd/{cfg,bin,ssl}
cp /data/k8s/etcd-cert/{server-csr.json,server-key.pem,server.pem} /data/etcd/ssl/
tar -xf etcd-v3.3.10-linux-amd64.tar.gz
cp etcd-v3.3.10-linux-amd64/etcd /data/etcd/bin
cp etcd-v3.3.10-linux-amd64/etcdctl /data/etcd/bin
[root@hu-001 etcd-cert]# cat etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.20.11 etcd02=https://192.168.20.12:2380,etcd03=https://192.168.20.13:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/data/etcd
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
At this point check etcd's status with systemctl status etcd.service; the node is up (when started alone it may stay in an activating state until the other cluster members join).
Parameter notes:
1. etcd's working and data directory is /var/lib/etcd; create it before starting the service.
In the unit file this corresponds to:
WorkingDirectory=/var/lib/etcd/
2. To secure communication, specify etcd's own key pair (cert-file and key-file), the peer key pair plus CA certificate (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the client CA certificate (trusted-ca-file):
--cert-file=/data/etcd/ssl/server.pem \
--key-file=/data/etcd/ssl/server-key.pem \
--peer-cert-file=/data/etcd/ssl/server.pem \
--peer-key-file=/data/etcd/ssl/server-key.pem \
--trusted-ca-file=/data/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/data/etcd/ssl/ca.pem
3. Configure the etcd cluster membership, e.g.:
--initial-cluster infra1=https://172.16.5.81:2380 \
4. Configure etcd's listen and advertise URLs:
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
5. Mark the cluster as newly created by setting the cluster state to new:
--initial-cluster-state new
6. Set this etcd member's name, which is read from the config file:
--name ${ETCD_NAME} \
where the config file is loaded via EnvironmentFile=/data/etcd/cfg/etcd
Now copy the relevant files over to the other two nodes:
ssh-keygen
ssh-copy-id root@192.168.20.12
ssh-copy-id root@192.168.20.13
scp -r /data/etcd root@192.168.20.12:/data/
scp -r /data/etcd root@192.168.20.13:/data/
scp /usr/lib/systemd/system/etcd.service root@192.168.20.12:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.20.13:/usr/lib/systemd/system/
Then modify the config file on each of the other two nodes (the example below is for 192.168.20.13):
vim /data/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.13:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.11:2380,etcd02=https://192.168.20.12:2380,etcd03=https://192.168.20.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
With that, the three-node etcd cluster is deployed.
Check the health of the etcd cluster services:
[root@hu-001 etcd-v3.3.10-linux-amd64]# /data/etcd/bin/etcdctl \
> --ca-file=/data/etcd/ssl/ca.pem \
> --cert-file=/data/etcd/ssl/server.pem \
> --key-file=/data/etcd/ssl/server-key.pem cluster-health
member 98aa99c4dcd6c4 is healthy: got healthy result from https://192.168.20.11:2379
member 12446003b2a53d43 is healthy: got healthy result from https://192.168.20.12:2379
member 667c9c7ba890c3f7 is healthy: got healthy result from https://192.168.20.13:2379
cluster is healthy
Install Docker on the Node machines
Install the dependency packages Docker requires:
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker package repository:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Installing from the repository above may fail; in that case, use the Aliyun mirror instead:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker CE:
yum -y install docker-ce
Start Docker and enable it at boot:
systemctl start docker
systemctl enable docker
Flannel cluster network deployment
Next, deploy the Flannel container network
https://www.cnblogs.com/kevingrace/p/6859114.html
First, write the subnet allocated to flannel (it must not share the hosts' subnet) into etcd, for flannel to use:
/data/etcd/bin/etcdctl --ca-file=/data/etcd/ssl/ca.pem --cert-file=/data/etcd/ssl/server.pem --key-file=/data/etcd/ssl/server-key.pem --endpoints="https://192.168.20.11:2379,https://192.168.20.12:2379,https://192.168.20.13:2379" set /coreos.com/network/config '{"Network":"192.168.10.0/16","Backend":{"Type":"vxlan"}}'
Our hosts here sit on the 192.168.20.0/24 network, and the subnet assigned to flannel is written as 192.168.10.0/16. Note that /16 actually denotes 192.168.0.0/16, which overlaps the host network; a range such as 172.17.0.0/16 or 10.244.0.0/16 would avoid the overlap.
The backend network type is declared as vxlan.
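A malformed value under /coreos.com/network/config will keep flanneld from starting, so it can be worth validating the JSON payload locally before writing it into etcd (a sketch; it assumes python3 is available on the host):

```shell
# Validate the flannel network config before handing it to etcdctl set
FLANNEL_CONFIG='{"Network":"192.168.10.0/16","Backend":{"Type":"vxlan"}}'
echo "$FLANNEL_CONFIG" | python3 -m json.tool >/dev/null && echo "config is valid JSON"
```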
Upload the flannel package to each node host:
flannel-v0.10.0-linux-amd64.tar.gz
mkdir /data/kubernetes/{bin,cfg,ssl}
tar -xf flannel-v0.10.0-linux-amd64.tar.gz -C /data/kubernetes/bin/
Then run the script below.
When running it, you can drop the Docker part first: the script only adds two lines to the stock Docker unit file, so those lines can also be added by hand (or the script refined later).
#!/bin/bash
# flannel.sh: renders the flanneld config, the flanneld systemd unit,
# and a Docker unit that picks up flannel's subnet settings.
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
# ${1:-"http://127.0.0.1:2379"}: if the first argument is not passed, default to http://127.0.0.1:2379

cat <<EOF >/data/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/data/etcd/ssl/ca.pem \
-etcd-certfile=/data/etcd/ssl/server.pem \
-etcd-keyfile=/data/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/data/kubernetes/cfg/flanneld
ExecStart=/data/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/data/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Subnet info is written to /run/flannel/subnet.env; Docker reads it at startup.
# The Docker unit only gains the two lines below; after flannel starts
# successfully, remember to restart the docker service:
#   EnvironmentFile=/run/flannel/subnet.env
#   ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker
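The `${1:-...}` expansion at the top of the script supplies a default when no argument is passed; in isolation it behaves like this:

```shell
# ${1:-default}: use the first argument if set and non-empty, otherwise the default
endpoints() { echo "${1:-http://127.0.0.1:2379}"; }
endpoints                                # prints the default endpoint
endpoints https://192.168.20.11:2379     # prints the given endpoint
```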
At this point the network on the two Node machines looks as follows:
The two hosts are on different subnets, yet they can ping each other.
Now start one container on each of the two Node machines and check whether the two containers can reach each other.
Deploying the Master components
The apiserver must be deployed first; the other two components need not follow any particular order.
First, self-sign the certificates the apiserver uses, via the script below:
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

# Keep 10.0.0.1 and 127.0.0.1 in hosts (Kubernetes uses them internally);
# then add the master IP, LB IP and VIP as needed.
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.20.11",
    "192.168.20.12",
    "192.168.20.13",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
mkdir /data/kubernetes/{bin,ssl,cfg} -p
mv ca.pem server.pem ca-key.pem server-key.pem /data/kubernetes/ssl/
[root@hu-001 tools]# tar -xf kubernetes-server-linux-amd64.tar.gz
After unpacking, copy the executables we need:
[root@hu-001 bin]# cp kubectl kube-apiserver kube-controller-manager kube-scheduler /data/kubernetes/bin/
The token file is produced with the command below:
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@hu-001 master_sh]# cat /data/kubernetes/cfg/token.csv
f23bd9cb6289ab11ddb622ec9de9ed6f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
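The commented-out command is what produced the random token above. A minimal sketch of generating a fresh token.csv (written to /tmp here so the real config is untouched):

```shell
# 16 random bytes become 32 hex characters; the csv columns are
# token,user,uid,"group"
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/token.csv
cat /tmp/token.csv
```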
The apiserver.sh script reads as follows:
#!/bin/bash
MASTER_ADDRESS=$1
ETCD_SERVERS=$2

mkdir -p /data/kubernetes/{cfg,bin,ssl}

cat <<EOF >/data/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/data/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/data/kubernetes/ssl/server.pem \\
--tls-private-key-file=/data/kubernetes/ssl/server-key.pem \\
--client-ca-file=/data/kubernetes/ssl/ca.pem \\
--service-account-key-file=/data/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/data/etcd/ssl/ca.pem \\
--etcd-certfile=/data/etcd/ssl/server.pem \\
--etcd-keyfile=/data/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/data/kubernetes/cfg/kube-apiserver
ExecStart=/data/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
[root@hu-001 master_sh]# sh apiserver.sh 192.168.20.11 https://192.168.20.11:2379,https://192.168.20.12:2379,https://192.168.20.13:2379
With that, kube-apiserver has started successfully.