Kubernetes binary deployment


1. Basic environment setup ------ apply on all nodes
Images and binary files are available here: https://pan.baidu.com/s/1ypgC8MeYc0SUfZr-IdnHeg  password: bb82

  1. Software environment:
    • CentOS Linux release 7.4.1708 (Core)
    • kubernetes1.8.6
    • etcd3.2.12
    • flanneld0.9.1
    • docker17.12.0-ce
  2. For easier installation, set up password-less SSH access from the master to the other two machines
    ssh-keygen
    ssh-copy-id -i k8s-master
    ssh-copy-id -i k8s-node1
    ssh-copy-id -i k8s-node2
  3. Configure /etc/hosts - all nodes
    vim /etc/hosts
    192.168.102.130 k8s-master etcd01
    192.168.102.131 k8s-node1 etcd02
    192.168.102.132 k8s-node2 etcd03
  4. Disable the firewall - all nodes
    systemctl stop firewalld && systemctl disable firewalld
  5. Configure kernel parameters - all nodes
    vim /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    vm.swappiness=0
    Load the module
    modprobe br_netfilter
    echo 'modprobe br_netfilter' >> /etc/rc.local
    chmod a+x /etc/rc.d/rc.local
    sysctl -p
    sysctl -p /etc/sysctl.d/k8s.conf
  6. Disable SELinux
    setenforce 0
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  7. Disable system swap
    sed -i 's/.*swap.*/#&/' /etc/fstab
    swapoff -a
  8. Set the iptables FORWARD policy to ACCEPT
    /sbin/iptables -P FORWARD ACCEPT
    echo 'sleep 60 && /sbin/iptables -P FORWARD ACCEPT' >> /etc/rc.local
  9. Install dependency packages
    yum install -y epel-release
    yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget
    All software and related configuration files: https://pan.baidu.com/share/init?surl=AZCx_k1w4g3BBWatlomkyQ  password: ff7y

2. Install CA certificates and keys
The kubernetes components use TLS certificates to encrypt their communication. This document uses CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) certificate and key; the CA certificate is self-signed and is used to sign all other TLS certificates created later.
The following steps are performed on one master. The certificates only need to be created once; when adding new nodes to the cluster later, simply copy the certificates under /etc/kubernetes/ to the new node.
2.1 Install CFSSL
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo 

2.2 Create the CA certificate and key
1. Create the CA config file
mkdir /root/ssl
cd /root/ssl
vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
• ca-config.json: multiple profiles can be defined, each with its own expiry time, usage scenarios and other parameters; a specific profile is referenced later when signing certificates;
• signing: the certificate can be used to sign other certificates; the generated ca.pem will have CA=TRUE;
• server auth: a client may use this CA to verify certificates presented by servers;
• client auth: a server may use this CA to verify certificates presented by clients;
2. Create the CA certificate signing request
vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
• "CN": Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use this field to check whether a site is legitimate;
• "O": Organization; kube-apiserver extracts this field as the Group the requesting user belongs to;
3. Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
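This writes ca.pem, ca-key.pem and ca.csr into the current directory; a quick check of the generated files:
ls ca*
# ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem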
2.3 Create the kubernetes certificate and key
1. Create the kubernetes certificate signing request file
vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.102.130",
    "192.168.102.131",
    "192.168.102.132",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
• The hosts field may be left empty; even with the configuration above, adding new nodes to the cluster later does not require regenerating the certificate.
• If the hosts field is not empty, it must list the IPs or domain names authorized to use the certificate. Since this certificate is later used by both the etcd cluster and the kubernetes master, the list above includes the etcd/master host IPs and the kubernetes service IP.
2. Generate the kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
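To confirm the hosts list was taken into account, the generated certificate can be inspected with cfssl-certinfo (installed in 2.1); the sans field in the output should list the three node IPs and the cluster service IP 10.254.0.1:
cfssl-certinfo -cert kubernetes.pem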
2.4 Create the admin certificate and key
1. Create the admin certificate signing request
vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
• kube-apiserver uses RBAC to authorize client requests (e.g. from kubelet, kube-proxy, Pods);
• kube-apiserver predefines some RoleBindings used by RBAC, e.g. cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs;
• The O field sets this certificate's Group to system:masters; when this certificate is used to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, the request is granted access to all APIs.
2. Generate the admin certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
2.5 Create the kube-proxy certificate and key
1. Create the kube-proxy certificate signing request
vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
• CN sets this certificate's User to system:kube-proxy;
• the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs;
2. Generate the kube-proxy client certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
2.6 Distribute the certificates
mkdir -p /etc/kubernetes/ssl    ---- all nodes
scp /root/ssl/*.pem 192.168.102.130:/etc/kubernetes/ssl
scp /root/ssl/*.pem 192.168.102.131:/etc/kubernetes/ssl
scp /root/ssl/*.pem 192.168.102.132:/etc/kubernetes/ssl
3. Deploy etcd
Install etcd on all three nodes; the steps below must be performed on each of the three nodes.
3.1 Download the etcd package
wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar -xvf etcd-v3.2.12-linux-amd64.tar.gz
scp etcd-v3.2.12-linux-amd64/etcd* 192.168.102.130:/usr/local/bin
scp etcd-v3.2.12-linux-amd64/etcd* 192.168.102.131:/usr/local/bin
scp etcd-v3.2.12-linux-amd64/etcd* 192.168.102.132:/usr/local/bin
3.2 Configure the etcd service
1. Create the working directory
mkdir -p /var/lib/etcd    ---- create on all nodes
2. Create the systemd unit files
【k8s-master】
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

		[Service]
		Type=notify
		WorkingDirectory=/var/lib/etcd/
		ExecStart=/usr/local/bin/etcd \
		--name k8s-master \
		--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
		--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
		--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
		--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
		--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
		--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
		--initial-advertise-peer-urls https://192.168.102.130:2380 \
		--listen-peer-urls https://192.168.102.130:2380 \
		--listen-client-urls https://192.168.102.130:2379,http://127.0.0.1:2379 \
		--advertise-client-urls https://192.168.102.130:2379 \
		--initial-cluster-token etcd-cluster-0 \
		--initial-cluster k8s-master=https://192.168.102.130:2380,k8s-node1=https://192.168.102.131:2380,k8s-node2=https://192.168.102.132:2380 \
		--initial-cluster-state new \
		--data-dir=/var/lib/etcd
		Restart=on-failure
		RestartSec=5
		LimitNOFILE=65536
		
		[Install]
		WantedBy=multi-user.target
		【k8s-node1】
		[Unit]
		Description=Etcd Server
		After=network.target
		After=network-online.target
		Wants=network-online.target
		Documentation=https://github.com/coreos
		
		[Service]
		Type=notify
		WorkingDirectory=/var/lib/etcd/
		ExecStart=/usr/local/bin/etcd \
		--name k8s-node1 \
		--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
		--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
		--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
		--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
		--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
		--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
		--initial-advertise-peer-urls https://192.168.102.131:2380 \
		--listen-peer-urls https://192.168.102.131:2380 \
		--listen-client-urls https://192.168.102.131:2379,http://127.0.0.1:2379 \
		--advertise-client-urls https://192.168.102.131:2379 \
		--initial-cluster-token etcd-cluster-0 \
		--initial-cluster k8s-master=https://192.168.102.130:2380,k8s-node1=https://192.168.102.131:2380,k8s-node2=https://192.168.102.132:2380 \
		--initial-cluster-state new \
		--data-dir=/var/lib/etcd
		Restart=on-failure
		RestartSec=5
		LimitNOFILE=65536
		
		[Install]
		WantedBy=multi-user.target
		【k8s-node2】
		[Unit]
		Description=Etcd Server
		After=network.target
		After=network-online.target
		Wants=network-online.target
		Documentation=https://github.com/coreos
		
		[Service]
		Type=notify
		WorkingDirectory=/var/lib/etcd/
		ExecStart=/usr/local/bin/etcd \
		--name k8s-node2 \
		--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
		--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
		--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
		--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
		--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
		--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
		--initial-advertise-peer-urls https://192.168.102.132:2380 \
		--listen-peer-urls https://192.168.102.132:2380 \
		--listen-client-urls https://192.168.102.132:2379,http://127.0.0.1:2379 \
		--advertise-client-urls https://192.168.102.132:2379 \
		--initial-cluster-token etcd-cluster-0 \
		--initial-cluster k8s-master=https://192.168.102.130:2380,k8s-node1=https://192.168.102.131:2380,k8s-node2=https://192.168.102.132:2380 \
		--initial-cluster-state new \
		--data-dir=/var/lib/etcd
		Restart=on-failure
		RestartSec=5
		LimitNOFILE=65536
		
		[Install]
		WantedBy=multi-user.target

• etcd's working directory and data directory are both /var/lib/etcd; the directory must be created before starting the service, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory";
• to secure communication, specify etcd's certificate and key (cert-file and key-file), the peer-communication certificate, key and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the client CA certificate (trusted-ca-file);
• the kubernetes-csr.json used to create kubernetes.pem must include all etcd node IPs in its hosts field, otherwise certificate validation fails;
• when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
3.3 Start and verify the etcd service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
The etcd process that starts first will appear to hang for a while, waiting for the etcd processes on the other nodes to join the cluster; this is normal.
Verify the etcd service; run the following on any etcd node:
etcdctl \
--endpoints=https://192.168.102.130:2379,https://192.168.102.131:2379,https://192.168.102.132:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
cluster-health
## Output
member be2bba99f48e54c is healthy: got healthy result from https://192.168.102.130:2379
member bec754b230c8075e is healthy: got healthy result from https://192.168.102.131:2379
member dfc0880cac2a50c8 is healthy: got healthy result from https://192.168.102.132:2379
cluster is healthy
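As a further check, the member list can be queried with the same TLS flags (member IDs will differ per cluster):
etcdctl \
--endpoints=https://192.168.102.130:2379,https://192.168.102.131:2379,https://192.168.102.132:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
member list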
4. Deploy flannel
Install flannel on all three nodes; the steps below must be performed on each of the three nodes.
4.1 Download and install flannel
wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
mkdir flannel
tar -xzvf flannel-v0.9.1-linux-amd64.tar.gz -C flannel
scp flannel/{flanneld,mk-docker-opts.sh} root@192.168.102.130:/usr/local/bin
scp flannel/{flanneld,mk-docker-opts.sh} root@192.168.102.131:/usr/local/bin
scp flannel/{flanneld,mk-docker-opts.sh} root@192.168.102.132:/usr/local/bin
4.2 Configure the network
Create the flannel configuration directory in etcd (run once, from any node):
etcdctl --endpoints="https://192.168.102.130:2379,https://192.168.102.131:2379,https://192.168.102.132:2379" \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mkdir /kubernetes/network

	etcdctl --endpoints="https://192.168.102.130:2379,https://192.168.102.131:2379,https://192.168.102.132:2379" \
	--ca-file=/etc/kubernetes/ssl/ca.pem \
	--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
	--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
	mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
4.3 Create the systemd unit file
	vim /usr/lib/systemd/system/flanneld.service		
		[Unit]
		Description=Flanneld overlay address etcd agent
		After=network.target
		After=network-online.target
		Wants=network-online.target
		After=etcd.service
		Before=docker.service
		
		[Service]
		Type=notify
		ExecStart=/usr/local/bin/flanneld \
		-etcd-cafile=/etc/kubernetes/ssl/ca.pem \
		-etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
		-etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
		-etcd-endpoints=https://192.168.102.130:2379,https://192.168.102.131:2379,https://192.168.102.132:2379 \
		-etcd-prefix=/kubernetes/network
		ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
		Restart=on-failure
		
		[Install]
		WantedBy=multi-user.target
		RequiredBy=docker.service

• the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/docker; docker later uses the parameters in this file to configure the docker0 bridge;
• flanneld uses the interface of the system's default route to communicate with other nodes; on machines with multiple interfaces (e.g. internal and public networks), use -iface=enpxx to pick the communication interface;
4.4 Start and verify flannel
scp /usr/lib/systemd/system/flanneld.service root@192.168.102.130:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/flanneld.service root@192.168.102.131:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/flanneld.service root@192.168.102.132:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
Check the flannel service status
/usr/local/bin/etcdctl \
--endpoints=https://192.168.102.130:2379,https://192.168.102.131:2379,https://192.168.102.132:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kubernetes/network/subnets
Output
/kubernetes/network/subnets/172.30.37.0-24
/kubernetes/network/subnets/172.30.48.0-24
/kubernetes/network/subnets/172.30.69.0-24
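Each node should also have received its own subnet lease locally; a quick sanity check (flanneld writes /run/flannel/subnet.env by default, and the vxlan backend creates the flannel.1 interface):
cat /run/flannel/subnet.env    # e.g. FLANNEL_SUBNET=172.30.37.1/24
ip addr show flannel.1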
5. Deploy the kubectl tool and create kubeconfig files
kubectl is the Kubernetes cluster management tool; with it, any node can manage the whole k8s cluster.
This document deploys it on the master node. After a successful deployment the file /root/.kube/config is generated; kubectl reads the kube-apiserver address, certificates, user name and other information from this file, so keep it safe.
5.1 Download the package
wget https://dl.k8s.io/v1.10.3/kubernetes-client-linux-amd64.tar.gz

	tar -xzvf kubernetes-client-linux-amd64.tar.gz
	chmod a+x kubernetes/client/bin/kube*
	scp kubernetes/client/bin/kube* root@192.168.102.130:/usr/local/bin/
	scp kubernetes/client/bin/kube* root@192.168.102.131:/usr/local/bin/
	scp kubernetes/client/bin/kube* root@192.168.102.132:/usr/local/bin/
5.2 Create /root/.kube/config
# Set cluster parameters; --server is the master node IP
	kubectl config set-cluster kubernetes \
	--certificate-authority=/etc/kubernetes/ssl/ca.pem \
	--embed-certs=true \
	--server=https://192.168.102.130:6443

# Set client authentication parameters
	kubectl config set-credentials admin \
	--client-certificate=/etc/kubernetes/ssl/admin.pem \
	--embed-certs=true \
	--client-key=/etc/kubernetes/ssl/admin-key.pem

# Set context parameters
	kubectl config set-context kubernetes \
	--cluster=kubernetes \
	--user=admin

# Set the default context
	kubectl config use-context kubernetes

• the admin.pem certificate's O field is system:masters; the predefined RoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call the kube-apiserver APIs
5.3 Create bootstrap.kubeconfig
# Generate a token value
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
vim token.csv
c971eb8d0edf3fba117695124b9eebcf,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

	mv token.csv /etc/kubernetes/
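	A minimal alternative sketch (plain bash) that captures the generated token and writes token.csv in one step, so the value does not have to be pasted by hand; BOOTSTRAP_TOKEN is just a local variable name used here:
	BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
	echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/token.csv
	echo ${BOOTSTRAP_TOKEN}    # reuse this value for --token in the set-credentials step below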
	
# Set cluster parameters; --server is the master node IP
	kubectl config set-cluster kubernetes \
	--certificate-authority=/etc/kubernetes/ssl/ca.pem \
	--embed-certs=true \
	--server=https://192.168.102.130:6443 \
	--kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
	kubectl config set-credentials kubelet-bootstrap \
	--token=c971eb8d0edf3fba117695124b9eebcf \
	--kubeconfig=bootstrap.kubeconfig

# Set context parameters
	kubectl config set-context default \
	--cluster=kubernetes \
	--user=kubelet-bootstrap \
	--kubeconfig=bootstrap.kubeconfig

# Set the default context
	kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
5.4 Create kube-proxy.kubeconfig
# Set cluster parameters; --server is the master node IP
	kubectl config set-cluster kubernetes \
	--certificate-authority=/etc/kubernetes/ssl/ca.pem \
	--embed-certs=true \
	--server=https://192.168.102.130:6443 \
	--kubeconfig=kube-proxy.kubeconfig
	
# Set client authentication parameters
	kubectl config set-credentials kube-proxy \
	--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
	--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
	--embed-certs=true \
	--kubeconfig=kube-proxy.kubeconfig
	
# Set context parameters
	kubectl config set-context default \
	--cluster=kubernetes \
	--user=kube-proxy \
	--kubeconfig=kube-proxy.kubeconfig
	
# Set the default context
	kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

• when setting the cluster and client-authentication parameters, --embed-certs is true; this embeds the contents of the files referenced by certificate-authority, client-certificate and client-key into the generated kube-proxy.kubeconfig;
• the CN of kube-proxy.pem is system:kube-proxy; the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs;
Distribute the files
scp /etc/kubernetes/{kube-proxy.kubeconfig,bootstrap.kubeconfig} root@192.168.102.130:/etc/kubernetes/
scp /etc/kubernetes/{kube-proxy.kubeconfig,bootstrap.kubeconfig} root@192.168.102.131:/etc/kubernetes/
scp /etc/kubernetes/{kube-proxy.kubeconfig,bootstrap.kubeconfig} root@192.168.102.132:/etc/kubernetes/
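As an optional check, the generated files can be inspected with kubectl config view; the embedded certificate data shows up as REDACTED / DATA+OMITTED:
kubectl config view --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig
kubectl config view --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig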

  6. Deploy the master node
    Everything above was preparation; the actual kubernetes deployment starts here.
    Deploy on the master node.
    6.1 Download the installation files
    wget https://dl.k8s.io/v1.10.3/kubernetes-server-linux-amd64.tar.gz

     tar -zxvf kubernetes-server-linux-amd64.tar.gz
     scp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} root@192.168.102.130:/usr/local/bin/
     scp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} root@192.168.102.131:/usr/local/bin/
     scp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} root@192.168.102.132:/usr/local/bin/
    

    6.2 Configure and start kube-apiserver
    vim /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    After=etcd.service

     	[Service]
     	ExecStart=/usr/local/bin/kube-apiserver \
     	--logtostderr=true \
     	--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
     	--advertise-address=192.168.102.130 \
     	--bind-address=192.168.102.130 \
     	--insecure-bind-address=192.168.102.130 \
     	--authorization-mode=Node,RBAC \
     	--runtime-config=rbac.authorization.k8s.io/v1alpha1 \
     	--kubelet-https=true \
     	--enable-bootstrap-token-auth \
     	--token-auth-file=/etc/kubernetes/token.csv \
     	--service-cluster-ip-range=10.254.0.0/16 \
     	--service-node-port-range=8400-10000 \
     	--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
     	--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
     	--client-ca-file=/etc/kubernetes/ssl/ca.pem \
     	--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
     	--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
     	--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
     	--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
     	--etcd-servers=https://192.168.102.130:2379,https://192.168.102.131:2379,https://192.168.102.132:2379 \
     	--enable-swagger-ui=true \
     	--allow-privileged=true \
     	--apiserver-count=3 \
     	--audit-log-maxage=30 \
     	--audit-log-maxbackup=3 \
     	--audit-log-maxsize=100 \
     	--audit-log-path=/var/lib/audit.log \
     	--event-ttl=1h \
     	--v=2
     	Restart=on-failure
     	RestartSec=5
     	Type=notify
     	LimitNOFILE=65536
     	
     	[Install]
     	WantedBy=multi-user.target
    

• --authorization-mode=Node,RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
• kube-scheduler and kube-controller-manager are usually deployed on the same machine as kube-apiserver and talk to it over the insecure port;
• kubelet, kube-proxy and kubectl run on the other Node machines; if they access kube-apiserver over the secure port, they must first pass TLS certificate authentication and then RBAC authorization;
• kube-proxy and kubectl obtain RBAC authorization by specifying the relevant User and Group in the certificates they use;
• if the kubelet TLS Bootstrap mechanism is used, do not set --kubelet-certificate-authority, --kubelet-client-certificate or --kubelet-client-key, otherwise kube-apiserver later fails to validate the kubelet certificate with "x509: certificate signed by unknown authority";
• the --admission-control value must include ServiceAccount, otherwise deploying cluster add-ons fails;
• --bind-address must not be 127.0.0.1;
• --runtime-config specifies the RBAC apiVersion enabled at runtime (rbac.authorization.k8s.io/v1alpha1 in the unit above);
• --service-cluster-ip-range specifies the Service cluster IP range; this range must not be routable;
• --service-node-port-range specifies the NodePort port range;
• by default kubernetes objects are stored under the /registry path in etcd; this can be changed with --etcd-prefix;

Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
systemctl status kube-apiserver
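A quick check that the secure port is serving, using the admin client certificate created in 2.4 (expected response: ok):
curl --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/admin.pem --key /etc/kubernetes/ssl/admin-key.pem https://192.168.102.130:6443/healthz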

6.3 Configure and start kube-controller-manager
vim /usr/lib/systemd/system/kube-controller-manager.service
	[Unit]
	Description=Kubernetes Controller Manager
	Documentation=https://github.com/GoogleCloudPlatform/kubernetes
	
	[Service]
	ExecStart=/usr/local/bin/kube-controller-manager \
	--logtostderr=true  \
	--address=127.0.0.1 \
	--master=http://192.168.102.130:8080 \
	--allocate-node-cidrs=true \
	--service-cluster-ip-range=10.254.0.0/16 \
	--cluster-cidr=172.30.0.0/16 \
	--cluster-name=kubernetes \
	--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
	--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
	--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
	--root-ca-file=/etc/kubernetes/ssl/ca.pem \
	--leader-elect=true \
	--v=2
	Restart=on-failure
	LimitNOFILE=65536
	RestartSec=5
	
	[Install]
	WantedBy=multi-user.target

• --address must be 127.0.0.1, because kube-apiserver currently expects scheduler and controller-manager to run on the same machine;
• --master=http://{MASTER_IP}:8080: talk to kube-apiserver over the insecure port 8080;
• --cluster-cidr specifies the Pod CIDR of the cluster; this range must be routable between the Nodes (flanneld guarantees this);
• --service-cluster-ip-range specifies the Service CIDR of the cluster; this range must not be routable between the Nodes and must match the kube-apiserver parameter;
• the certificate and key given by --cluster-signing-* are used to sign the certificates and keys created for TLS BootStrap;
• --root-ca-file is used to verify the kube-apiserver certificate; only when it is set is the CA certificate placed into the ServiceAccount of Pod containers;
• --leader-elect=true elects a single active kube-controller-manager process when a multi-machine master cluster is deployed;

Start the service
systemctl daemon-reload  
systemctl enable kube-controller-manager  
systemctl restart kube-controller-manager  
systemctl status kube-controller-manager

6.4 Configure and start kube-scheduler
vim /usr/lib/systemd/system/kube-scheduler.service
	[Unit]
	Description=Kubernetes Scheduler
	Documentation=https://github.com/GoogleCloudPlatform/kubernetes
	
	[Service]
	ExecStart=/usr/local/bin/kube-scheduler \
	--logtostderr=true \
	--address=127.0.0.1 \
	--master=http://192.168.102.130:8080 \
	--leader-elect=true \
	--v=2
	Restart=on-failure
	LimitNOFILE=65536
	RestartSec=5
	
	[Install]
	WantedBy=multi-user.target

• --address must be 127.0.0.1, because kube-apiserver currently expects scheduler and controller-manager to run on the same machine;
• --master=http://{MASTER_IP}:8080: talk to kube-apiserver over the insecure port 8080;
• --leader-elect=true elects a single active kube-scheduler process when a multi-machine master cluster is deployed;
Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler

6.5 Verify master node functionality
	kubectl get componentstatuses  
## Output
NAME                 STATUS    MESSAGE              ERROR  
etcd-1               Healthy   {"health": "true"}     
etcd-2               Healthy   {"health": "true"}     
etcd-0               Healthy   {"health": "true"}     
controller-manager   Healthy   ok                     
scheduler            Healthy   ok    
  7. Deploy Node nodes
    The master node also serves as a node, so the installation below must be performed on all three nodes.
    7.1 Download the files
    wget https://download.docker.com/linux/static/stable/x86_64/docker-17.12.0-ce.tgz
    tar -xvf docker-17.12.0-ce.tgz
    scp docker/docker* root@192.168.102.130:/usr/local/bin
    scp docker/docker* root@192.168.102.131:/usr/local/bin
    scp docker/docker* root@192.168.102.132:/usr/local/bin
    7.2 Configure and start docker
    vim /usr/lib/systemd/system/docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=http://docs.docker.io

     [Service]
     Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
     EnvironmentFile=-/run/flannel/subnet.env
     EnvironmentFile=-/run/flannel/docker
     ExecStart=/usr/local/bin/dockerd \
     --exec-opt native.cgroupdriver=cgroupfs \
     --log-level=error \
     --log-driver=json-file \
     --storage-driver=overlay \
     $DOCKER_NETWORK_OPTIONS
     ExecReload=/bin/kill -s HUP $MAINPID
     Restart=on-failure
     RestartSec=5
     LimitNOFILE=infinity
     LimitNPROC=infinity
     LimitCORE=infinity
     Delegate=yes
     KillMode=process
     
     [Install]
     WantedBy=multi-user.target
    

• $DOCKER_NETWORK_OPTIONS and $MAINPID do not need to be replaced; they are resolved at runtime;
• when flanneld starts, it writes the network configuration into the DOCKER_NETWORK_OPTIONS variable in /run/flannel/docker; dockerd picks up this variable on its command line to configure the docker0 bridge;
• if several EnvironmentFile options are specified, /run/flannel/docker must come last (to make sure docker0 uses the bip parameter generated by flanneld);
• do not disable the --iptables and --ip-masq options, which are on by default;
• with a reasonably recent kernel, the overlay storage driver is recommended;
• --exec-opt native.cgroupdriver can be set to either "cgroupfs" or "systemd";

Sync the configuration to the other nodes
	scp docker.service root@192.168.102.130:/usr/lib/systemd/system/
	scp docker.service root@192.168.102.131:/usr/lib/systemd/system/
	scp docker.service root@192.168.102.132:/usr/lib/systemd/system/
Start the service
	systemctl daemon-reload
	systemctl enable docker
	systemctl start docker
	systemctl status docker
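	Once docker is up, the docker0 bridge should sit inside the flannel subnet assigned to this node; a quick check (the bip value comes from the /run/flannel/docker file written by mk-docker-opts.sh):
	cat /run/flannel/docker
	ip addr show docker0 | grep inet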

7.3 Install and configure kubelet
When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests.
The following command only needs to be run once, on the master:
	cd /etc/kubernetes
	kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Since the master already has the two files below, they only need to be copied (scp) to the other two nodes.
7.3.1 Download and install kubelet and kube-proxy
	wget https://dl.k8s.io/v1.10.3/kubernetes-server-linux-amd64.tar.gz
	tar -xzvf kubernetes-server-linux-amd64.tar.gz
	cp -r kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/bin/
	scp /usr/local/bin/{kube-proxy,kubelet} root@192.168.102.131:/usr/local/bin/
	scp /usr/local/bin/{kube-proxy,kubelet} root@192.168.102.132:/usr/local/bin/
Create the kubelet working directory --- all nodes
	mkdir /var/lib/kubelet
	【k8s-master】
	vim /usr/lib/systemd/system/kubelet.service
		[Unit]
		Description=Kubernetes Kubelet
		Documentation=https://github.com/GoogleCloudPlatform/kubernetes
		After=docker.service
		Requires=docker.service
		
		[Service]
		WorkingDirectory=/var/lib/kubelet
		ExecStart=/usr/local/bin/kubelet \
		--address=192.168.102.130 \
		--hostname-override=192.168.102.130 \
		--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
		--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
		--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
		--require-kubeconfig \
		--cert-dir=/etc/kubernetes/ssl \
		--container-runtime=docker \
		--cluster-dns=10.254.0.2 \
		--cluster-domain=cluster.local \
		--hairpin-mode promiscuous-bridge \
		--allow-privileged=true \
		--serialize-image-pulls=false \
		--register-node=true \
		--logtostderr=true \
		--cgroup-driver=cgroupfs  \
		--v=2
		
		Restart=on-failure
		KillMode=process
		LimitNOFILE=65536
		RestartSec=5
		
		[Install]
		WantedBy=multi-user.target
	【k8s-node1】
	vim /usr/lib/systemd/system/kubelet.service
		[Unit]
		Description=Kubernetes Kubelet
		Documentation=https://github.com/GoogleCloudPlatform/kubernetes
		After=docker.service
		Requires=docker.service
		
		[Service]
		WorkingDirectory=/var/lib/kubelet
		ExecStart=/usr/local/bin/kubelet \
		--address=192.168.102.131 \
		--hostname-override=192.168.102.131 \
		--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
		--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
		--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
		--require-kubeconfig \
		--cert-dir=/etc/kubernetes/ssl \
		--container-runtime=docker \
		--cluster-dns=10.254.0.2 \
		--cluster-domain=cluster.local \
		--hairpin-mode promiscuous-bridge \
		--allow-privileged=true \
		--serialize-image-pulls=false \
		--register-node=true \
		--logtostderr=true \
		--cgroup-driver=cgroupfs  \
		--v=2
		
		Restart=on-failure
		KillMode=process
		LimitNOFILE=65536
		RestartSec=5
		
		[Install]
		WantedBy=multi-user.target

	【k8s-node2】
	vim /usr/lib/systemd/system/kubelet.service	
		[Unit]
		Description=Kubernetes Kubelet
		Documentation=https://github.com/GoogleCloudPlatform/kubernetes
		After=docker.service
		Requires=docker.service
		
		[Service]
		WorkingDirectory=/var/lib/kubelet
		ExecStart=/usr/local/bin/kubelet \
		--address=192.168.102.132 \
		--hostname-override=192.168.102.132 \
		--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
		--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
		--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
		--require-kubeconfig \
		--cert-dir=/etc/kubernetes/ssl \
		--container-runtime=docker \
		--cluster-dns=10.254.0.2 \
		--cluster-domain=cluster.local \
		--hairpin-mode promiscuous-bridge \
		--allow-privileged=true \
		--serialize-image-pulls=false \
		--register-node=true \
		--logtostderr=true \
		--cgroup-driver=cgroupfs  \
		--v=2
		
		Restart=on-failure
		KillMode=process
		LimitNOFILE=65536
		RestartSec=5
		
		[Install]
		WantedBy=multi-user.target

• --address is the local IP; it must not be 127.0.0.1, otherwise Pods calling the kubelet API would fail, because from inside a Pod 127.0.0.1 points to the Pod itself rather than to the kubelet;
• --hostname-override is also the local IP;
• --cgroup-driver is set to cgroupfs here; it just has to match the cgroup driver configured for docker;
• --experimental-bootstrap-kubeconfig points to the bootstrap kubeconfig file; kubelet uses the user name and token in that file to send the TLS Bootstrapping request to kube-apiserver;
• after the administrator approves the CSR, kubelet automatically creates the certificate and private key (kubelet-client.crt and kubelet-client.key) under --cert-dir and then writes the file specified by --kubeconfig (the file is created automatically);
• it is recommended to put the kube-apiserver address in the --kubeconfig file; if --api-servers is not set, --require-kubeconfig must be set so the kube-apiserver address is read from the config file, otherwise kubelet starts up without finding kube-apiserver (the log reports that no API Server was found) and kubectl get nodes returns no Node information;
• --cluster-dns specifies the kubedns Service IP (it can be allocated in advance and used later when creating the kubedns service); --cluster-domain specifies the domain suffix; both parameters must be set together to take effect;
• --cluster-domain sets the search domain in the Pod's /etc/resolv.conf; it was originally set to cluster.local., which resolved Service DNS names correctly but failed when resolving the FQDN pod names of headless services; changing it to cluster.local (dropping the trailing dot) fixes the problem;
• the kubelet.kubeconfig file given by --kubeconfig=/etc/kubernetes/kubelet.kubeconfig does not exist before kubelet is started for the first time; as described below, it is generated automatically once the CSR request is approved. If ~/.kube/config already exists on a node, it can be copied to this path and renamed kubelet.kubeconfig; all nodes can share one kubelet.kubeconfig, so newly added nodes join the cluster without creating a new CSR. Likewise, on any host that can reach the cluster, running kubectl --kubeconfig with the ~/.kube/config file passes authentication, because the file already contains credentials identifying you as the admin user with full permissions on the cluster.
Sync the configuration to each node

Start kubelet
	systemctl daemon-reload
	systemctl enable kubelet
	systemctl start kubelet
	systemctl status kubelet
Submit and approve the TLS certificate requests
When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the Node only joins the cluster after the request has been approved.

After kubelet is deployed on all three nodes, approve the requests on the master
# List the pending CSRs
	kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION  
node-csr--eUQ3Gbj-kKAFGnvsNcDXaaKSkgOP6qg4yFUzuJcoIo   29s       kubelet-bootstrap   Pending  
node-csr-3Sb1MgoFpVDeI28qKujAkCHTvZELPMKh1QoQtLB1Vv0   29s       kubelet-bootstrap   Pending  
node-csr-4D7r9R2I7XWu1oK0d2HkFb1XggOIJ9XeXaiN_lwb0nQ   28s       kubelet-bootstrap   Pending  

# Approve the requests

	kubectl certificate approve node-csr--eUQ3Gbj-kKAFGnvsNcDXaaKSkgOP6qg4yFUzuJcoIo
	kubectl certificate approve node-csr-3Sb1MgoFpVDeI28qKujAkCHTvZELPMKh1QoQtLB1Vv0
	kubectl certificate approve node-csr-4D7r9R2I7XWu1oK0d2HkFb1XggOIJ9XeXaiN_lwb0nQ

	kubectl get csr
# Output
NAME                                                   AGE       REQUESTOR           CONDITION  
node-csr--eUQ3Gbj-kKAFGnvsNcDXaaKSkgOP6qg4yFUzuJcoIo   2m        kubelet-bootstrap   Approved,Issued  
node-csr-3Sb1MgoFpVDeI28qKujAkCHTvZELPMKh1QoQtLB1Vv0   2m        kubelet-bootstrap   Approved,Issued  
node-csr-4D7r9R2I7XWu1oK0d2HkFb1XggOIJ9XeXaiN_lwb0nQ   1m        kubelet-bootstrap   Approved,Issued 

	kubectl get nodes  
# Returns
No resources found

Bind the roles on the master
	kubectl get nodes
	kubectl describe clusterrolebindings system:node
	kubectl create clusterrolebinding kubelet-node-clusterbinding --clusterrole=system:node --user=system:node:192.168.102.130
	kubectl describe clusterrolebindings kubelet-node-clusterbinding 
# Alternatively, grant the system:node ClusterRole to the group "system:nodes" for the whole cluster:
	kubectl create clusterrolebinding kubelet-node-clusterbinding --clusterrole=system:node --group=system:nodes
	kubectl get nodes 
	NAME              STATUS    ROLES     AGE       VERSION
	192.168.102.130   Ready     <none>    1d        v1.8.6
	192.168.102.131   Ready     <none>    1d        v1.8.6
	192.168.102.132   Ready     <none>    1d        v1.8.6
7.4 Install and configure kube-proxy
	7.4.1 Create the kube-proxy working directory -- run on all nodes
		mkdir -p /var/lib/kube-proxy
	7.4.2 Configure and start kube-proxy
	【k8s-master】
		vim /usr/lib/systemd/system/kube-proxy.service
			[Unit]
			Description=Kubernetes Kube-Proxy Server
			Documentation=https://github.com/GoogleCloudPlatform/kubernetes
			After=network.target
			
			[Service]
			WorkingDirectory=/var/lib/kube-proxy
			ExecStart=/usr/local/bin/kube-proxy \
			--bind-address=192.168.102.130 \
			--hostname-override=192.168.102.130 \
			--cluster-cidr=10.254.0.0/16 \
			--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
			--logtostderr=true \
			--v=2
			Restart=on-failure
			RestartSec=5
			LimitNOFILE=65536
			
			[Install]
			WantedBy=multi-user.target
	【k8s-node1】
		vim /usr/lib/systemd/system/kube-proxy.service
			[Unit]
			Description=Kubernetes Kube-Proxy Server
			Documentation=https://github.com/GoogleCloudPlatform/kubernetes
			After=network.target
			
			[Service]
			WorkingDirectory=/var/lib/kube-proxy
			ExecStart=/usr/local/bin/kube-proxy \
			--bind-address=192.168.102.131 \
			--hostname-override=192.168.102.131 \
			--cluster-cidr=10.254.0.0/16 \
			--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
			--logtostderr=true \
			--v=2
			Restart=on-failure
			RestartSec=5
			LimitNOFILE=65536
			
			[Install]
			WantedBy=multi-user.target

	【k8s-node2】
		vim /usr/lib/systemd/system/kube-proxy.service				
			[Unit]
			Description=Kubernetes Kube-Proxy Server
			Documentation=https://github.com/GoogleCloudPlatform/kubernetes
			After=network.target
			
			[Service]
			WorkingDirectory=/var/lib/kube-proxy
			ExecStart=/usr/local/bin/kube-proxy \
			--bind-address=192.168.102.132 \
			--hostname-override=192.168.102.132 \
			--cluster-cidr=10.254.0.0/16 \
			--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
			--logtostderr=true \
			--v=2
			Restart=on-failure
			RestartSec=5
			LimitNOFILE=65536
			
			[Install]
			WantedBy=multi-user.target

• --bind-address is the local IP
• --hostname-override is the local IP and must match the kubelet's value, otherwise kube-proxy will not find the Node after starting and will not create any iptables rules;
• --cluster-cidr must match kube-apiserver's --service-cluster-ip-range; kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster, and only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is specified;
• the config file given by --kubeconfig embeds the kube-apiserver address, user name, certificate, key and other request and authentication information;
• the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs
Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
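Once kube-proxy is running it starts programming iptables; a quick sanity check on the node (KUBE-SERVICES is the standard chain kube-proxy creates in the nat table):
iptables -t nat -L KUBE-SERVICES -n | head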
Perform the deployment steps above on the other two nodes as well, taking care to replace the IPs.

  8. Install the DNS add-on
    Perform the installation on the master node.
    8.1 Download the installation files
    Link: https://pan.baidu.com/s/1PU-iBCFP6o1mH7N9DFgI5g  extraction code: y3dg
    Create the DNS add-on
    kubectl create -f kube-dns.yaml
    Check the result (a quick resolution test is sketched below)
    kubectl get pod,svc -n kube-system
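    To confirm DNS resolution works, a throwaway pod can be used (the busybox image, tag and pod name here are only examples):
    kubectl run dns-test --image=busybox:1.28 --restart=Never --command -- sleep 3600
    kubectl exec dns-test -- nslookup kubernetes.default
    kubectl delete pod dns-test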

  9. Deploy the dashboard add-on
    Deploy on the master node.
    9.1 Pull the image locally
    docker pull registry.cn-beijing.aliyuncs.com/kubernetesdevops/kubernetes-dashboard-amd64:v1.10.0
    docker tag registry.cn-beijing.aliyuncs.com/kubernetesdevops/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    (Note: the manifest below references k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1, while the image tagged above is v1.10.0; make sure the local tag and the manifest's image field refer to the same version.)
    9.2 Download the deployment manifest
    wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    Note: the apiVersion: apps/v1 in the downloaded file does not work on this cluster version; change it to apiVersion: apps/v1beta2. The full file is shown below:

     vim kubernetes-dashboard.yaml
    
    
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  # NodePort is enabled here so the dashboard can be reached on a node port
  # (see "Get the external service port" below); nodePort can be pinned if desired.
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # nodePort: 9999
  selector:
    k8s-app: kubernetes-dashboard

Create the admin user and token
vim kubernetes-dashboard-admin.yaml

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

Create the resources
	kubectl -n kube-system create -f .
Delete the resources
	kubectl -n kube-system delete -f .
Get the admin user token
	kubectl get secret -n kube-system | grep "admin"
	kubectl describe secret admin-token-gb7db  -n kube-system
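	The secret name (admin-token-xxxxx) differs per cluster; the following one-liner, a plain-shell convenience rather than part of the original steps, looks it up and prints only the token:
	kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^admin-token-/{print $1}') | awk '/^token:/{print $2}'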
Get the external service port
	kubectl get service --namespace=kube-system
Log in to the web UI
https://192.168.102.130:<NodePort shown above>
  10. Deploy the heapster add-on
    Heapster is a monitoring and performance-analysis tool for container clusters.
    Run the following commands:
    kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/influxdb.yaml
    kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/grafana.yaml
    kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/heapster.yaml
    kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/rbac/heapster-rbac.yaml

    Once all the pods are in the Running state, the add-on is ready to use.

    Log in to the dashboard web UI to view the metrics.
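    Before opening the dashboard, it can be worth checking that the monitoring pods are Running (the grep patterns assume the standard names from the upstream heapster/influxdb/grafana manifests):
    kubectl get pods -n kube-system | grep -E 'heapster|monitoring-influxdb|monitoring-grafana'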

