Kubernetes, as a management platform for containerized applications, achieves high availability at the application layer by monitoring the running state of pods and rescheduling new pods onto other nodes when a host or container fails.
For the kubernetes cluster itself, high availability involves two further concerns:
- high availability of the etcd store
- high availability of the master nodes
Before we start, the architecture at a glance (the original architecture diagram is omitted): three masters behind an lvs + keepalived vip, with a three-member etcd cluster underneath.
etcd is the central database of kubernetes and must not be a single point of failure. An etcd cluster is simple to deploy, though, so I won't go into it here; I have posted a one-click deployment script before, and interested readers can scroll back for it.
Before k8s was fully containerized and grew its assortment of authentication mechanisms, an HA master deployment was fairly simple. Now that k8s has a rather elaborate security model, operating it has become noticeably harder.
In kubernetes, the master plays the role of the control center. It runs three main services, apiserver, controller-manager, and scheduler, which keep the whole cluster healthy by constantly communicating with the kubelet and kube-proxy on each node; if the master's services cannot reach a node, that node is marked unavailable and no more pods are scheduled onto it.
All three master components start as containers. The tool underneath them is the kubelet: it launches them as static pods and monitors and automatically restarts them, while the kubelet itself is kept alive by systemd.
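You can see this arrangement on any machine initialized by kubeadm (as in step 3 below): the static pod manifests live in the kubelet's manifest directory, and the kubelet itself is a systemd unit:
# ls /etc/kubernetes/manifests/
kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
# systemctl status kubelet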
As the core of the cluster, the apiserver handles communication between all of the cluster's functional modules. Each module persists its information in etcd through the apiserver, and retrieves or manipulates that data through the REST interface the apiserver provides; this is how the modules exchange information with one another.
The apiserver's most important REST endpoints are the create/read/update/delete operations on resource objects. Beyond those, it exposes a special class of REST endpoints, the Kubernetes Proxy API, whose job is to proxy REST requests: the apiserver forwards an incoming request to the REST port of the kubelet on some node, and that kubelet responds. The proxy API can be used to reach a pod's HTTP service from outside the kubernetes cluster, which is mostly useful for administrative purposes.
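As a quick illustration (not part of this deployment; the pod name nginx-demo is hypothetical), the proxy API lets you reach a pod's HTTP port through the apiserver:
# kubectl proxy --port=8001 &
# curl http://127.0.0.1:8001/api/v1/namespaces/default/pods/nginx-demo/proxy/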
The kubelet on every node periodically calls an apiserver REST endpoint to report its own status, and the apiserver writes that node status into etcd. The kubelet also watches pod information through the apiserver's watch interface: when a new pod replica is scheduled and bound to its node, it runs the container creation and startup logic for that pod; when a pod object is deleted, it removes the corresponding pod's containers on its node; and when a pod is modified, it adjusts its node's pod containers accordingly.
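The watch mechanism is easy to observe from the command line; the following simply streams pod changes as the controllers and kubelets act on them:
# kubectl get pods --all-namespaces --watch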
As the management and control center inside the cluster, the controller-manager is responsible for nodes, pod replicas, endpoints, namespaces, service accounts, resource quotas, and so on. When a node dies unexpectedly, the controller-manager promptly detects the failure and runs an automated repair flow, keeping the cluster in its desired working state. The controller-manager is composed of multiple controllers, each owning one specific control flow.
The NodeController module inside the controller-manager watches node information in real time through the apiserver's watch interface and reacts accordingly. When the scheduler learns of a newly created pod replica through that same watch interface, it gathers the list of all nodes that satisfy the pod's requirements, runs its scheduling logic, and on success binds the pod to the target node.
Generally speaking, intelligent and automated systems continuously correct their own working state through what is known as a control loop. In a kubernetes cluster, each controller is exactly such a control loop: through the interfaces the apiserver provides, it watches the current state of every resource object in the cluster in real time, and when failures push the system's state away from where it should be, it tries to move the system from its "current state" back to the "desired state".
The scheduler's job is to take pods awaiting placement, both those newly created through the apiserver and those created by an rc to top up replicas, run them through a fairly involved scheduling process that computes the best target node, and bind them to that node.
We treat the master's three components as one deployment unit, install the master on at least three nodes, and make sure that at any given moment at least one complete master is working.
Three master nodes and one worker node:
master01,etcd0 uy05-13 192.168.5.42
master02,etcd1 uy08-07 192.168.5.104
master03,etcd2 uy08-08 192.168.5.105
node01 uy02-07 192.168.5.40
Two LVS nodes:
lvs01 uy-s-91 192.168.2.56
lvs02 uy-s-92 192.168.2.57
vip=192.168.6.15
kubernetes version: 1.8.3
docker version: 17.06.2-ce
etcd version: 3.2.9
OS version: debian stretch
We use lvs + keepalived to load-balance the apiserver and make it highly available.
Because controller-manager and scheduler modify the cluster's state, only one instance of each may read and write that state at a time, to avoid synchronization and consistency problems. These two components therefore need leader election enabled, electing a single leader; k8s uses a lease lock for this. Additionally, the apiserver expects these two components to run on the same node as itself, so both should listen on 127.0.0.1.
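In the manifests kubeadm generates, both components already run with --leader-elect=true, and the lease holder is recorded in an annotation on the component's endpoints object. A quick way to check who the current leader is (a sketch, assuming the default lease-lock configuration of this k8s version):
# kubectl -n kube-system get endpoints kube-controller-manager \
      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'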
1. Install kubeadm, kubectl, and kubelet on all three nodes.
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt/ kubernetes-xenial main
EOF
# aptitude update
# aptitude install -y kubelet kubeadm kubectl
2. Prepare the images; you will have to download them yourself, by whatever means gets you past the firewall...
k8s-dns-dnsmasq-nanny-amd64.tar
k8s-dns-kube-dns-amd64.tar
k8s-dns-sidecar-amd64.tar
kube-apiserver-amd64.tar
kube-controller-manager-amd64.tar
kube-proxy-amd64.tar
kube-scheduler-amd64.tar
pause-amd64.tar
kubernetes-dashboard-amd64.tar
kubernetes-dashboard-init-amd64.tar
# for i in `ls`; do docker load -i $i; done
3. Deploy the first master node.
a. I used kubeadm directly to initialize the first node. There are some tricks to using kubeadm; here I drive it with a configuration file:
# cat kubeadm-config.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "192.168.5.42"
etcd:
  endpoints:
  - "http://192.168.5.42:2379"
  - "http://192.168.5.104:2379"
  - "http://192.168.5.105:2379"
kubernetesVersion: "v1.8.3"
apiServerCertSANs:
- uy05-13
- uy08-07
- uy08-08
- 192.168.6.15
- 127.0.0.1
- 192.168.5.42
- 192.168.5.104
- 192.168.5.105
- 192.168.122.1
- 10.244.0.1
- 10.96.0.1
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.cluster
- kubernetes.default.svc.cluster.local
tokenTTL: 0s
networking:
  podSubnet: 10.244.0.0/16
b. Run the initialization:
# kubeadm init --config=kubeadm-config.yml
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.2-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [uy05-13 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local uy05-13 uy08-07 uy08-08 uy-s-91 uy-s-92 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.42 192.168.6.15 127.0.0.1 192.168.5.42 192.168.5.104 192.168.5.105 192.168.122.1 10.244.0.1 10.96.0.1 192.168.2.56 192.168.2.57]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 26.002009 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node uy05-13 as master by adding a label and a taint
[markmaster] Master uy05-13 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5a87e1.b760be788520eee5
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 5a87e1.b760be788520eee5 192.168.6.15:6443 --discovery-token-ca-cert-hash sha256:7f2642ce5b6dd3cb4938d1aa067a3b43b906cdf7815eae095a77e41435bd8369
kubeadm generated a complete set of certificates, wrote the config files, brought the three components up as static pods via the kubelet, and deployed kube-dns and kube-proxy.
c. Allow the master node to be scheduled:
# kubectl taint nodes --all node-role.kubernetes.io/master-
d. Install a network plugin. I use calico here; download the manifest from the official site and change CALICO_IPV4POOL_CIDR to the pod subnet chosen at initialization.
# vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# kubectl apply -f calico.yaml
At this point every component should be up and running.
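If you want to double-check:
# kubectl get po -n kube-system -o wide
Everything, including the calico pods, should be in the Running state.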
e. Install the dashboard addon; download the manifest from the official site.
Change the service type to NodePort:
# vim kubernetes-dashboard.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
# kubectl apply -f kubernetes-dashboard.yaml
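Before opening a browser, you can confirm the NodePort was allocated:
# kubectl -n kube-system get svc kubernetes-dashboard
The dashboard should then answer on port 30001 of any node.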
This runs into a permissions problem; grant the permissions manually:
# cat rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-head
  labels:
    k8s-app: kubernetes-dashboard-head
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
# kubectl apply -f rbac.yaml
f. Install heapster; again, download the manifests from the official site.
# kubectl apply -f heapster.yaml
Another permissions problem here; permissions are everywhere...
# kubectl create clusterrolebinding heapster-binding --clusterrole=cluster-admin --serviceaccount=kube-system:heapster
Or:
# vim heapster-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heapster-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
Now the dashboard should be showing graphs.
4. Deploy the second master node.
Because current versions enable node authorization, the certificates have to be dealt with first.
a. Copy all of the configuration files and certificates from the first node:
# scp -r /etc/kubernetes/* 192.168.5.104:`pwd`
b. Use the CA to issue new certificates for the new node and replace the copied ones. The apiserver uses a multi-SAN certificate, and the relevant names and IPs were already signed in when I initialized the first node, so it does not need to be re-issued. The files that do NOT need replacing are: ca.crt, ca.key, front-proxy-ca.crt, front-proxy-ca.key, front-proxy-client.crt, front-proxy-client.key, sa.key, sa.pub, apiserver.crt, and apiserver.key; everything else has to be re-issued and replaced.
#apiserver-kubelet-client
openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
openssl x509 -req -set_serial $(date +%s%N) -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -out apiserver-kubelet-client.crt -days 365 -extensions v3_req -extfile apiserver-kubelet-client-openssl.cnf
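# contents of apiserver-kubelet-client-openssl.cnf (the -extfile referenced above):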
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
#controller-manager
openssl genrsa -out controller-manager.key 2048
openssl req -new -key controller-manager.key -out controller-manager.csr -subj "/CN=system:kube-controller-manager"
openssl x509 -req -set_serial $(date +%s%N) -in controller-manager.csr -CA ca.crt -CAkey ca.key -out controller-manager.crt -days 365 -extensions v3_req -extfile controller-manager-openssl.cnf
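# contents of controller-manager-openssl.cnf (the -extfile referenced above):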
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
#scheduler
openssl genrsa -out scheduler.key 2048
openssl req -new -key scheduler.key -out scheduler.csr -subj "/CN=system:kube-scheduler"
openssl x509 -req -set_serial $(date +%s%N) -in scheduler.csr -CA ca.crt -CAkey ca.key -out scheduler.crt -days 365 -extensions v3_req -extfile scheduler-openssl.cnf
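# contents of scheduler-openssl.cnf (the -extfile referenced above):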
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
#admin
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/O=system:masters/CN=kubernetes-admin"
openssl x509 -req -set_serial $(date +%s%N) -in admin.csr -CA ca.crt -CAkey ca.key -out admin.crt -days 365 -extensions v3_req -extfile admin-openssl.cnf
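# contents of admin-openssl.cnf (the -extfile referenced above):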
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
#node
openssl genrsa -out $(hostname).key 2048
openssl req -new -key $(hostname).key -out $(hostname).csr -subj "/O=system:nodes/CN=system:node:$(hostname)" -config kubelet-openssl.cnf
openssl x509 -req -set_serial $(date +%s%N) -in $(hostname).csr -CA ca.crt -CAkey ca.key -out $(hostname).crt -days 365 -extensions v3_req -extfile kubelet-openssl.cnf
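# contents of kubelet-openssl.cnf (referenced by -config and -extfile above):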
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
All of these certificates are for client authentication, so in fact they could share a single extensions config. I used a separate file name for each one mainly because a few other certificates are for server authentication, and the apiserver certificate additionally needs SANs configured (subjectAltName = @alt_names); when you have to generate those server certificates by hand, it pays to keep the configs separate.
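For reference, a server-side extensions config with SANs might look like the following; this is only a sketch, with alt_names reusing this cluster's service names and vip from the kubeadm config:
[ v3_req ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.6.15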
Some of these certificates are referenced in the config files by path, while others are embedded directly as key: value data.
The only certificate files that actually need replacing on disk are apiserver-kubelet-client.key and apiserver-kubelet-client.crt; copy these two into the /etc/kubernetes/pki/ directory to replace the originals.
The other certificates' contents have to be read and substituted into the corresponding config files. The /etc/kubernetes directory holds four conf files: the admin certificate goes into admin.conf, the controller-manager certificate into controller-manager.conf, the scheduler certificate into scheduler.conf, and the node certificate into kubelet.conf.
Moreover, the certificate contents cannot be used raw; they have to be base64-encoded first (encoded, not encrypted, strictly speaking), like so:
# cat admin.crt | base64 -w 0
Replace the corresponding fields in the config files with the encoded content. When you are done it should look like this:
# kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64 of ca.crt>
    server: https://192.168.5.42:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:uy08-07
  name: system:node:uy08-07@kubernetes
current-context: system:node:uy08-07@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:uy08-07
  user:
    client-certificate-data: <base64 of the node certificate>
    client-key-data: <base64 of the node key>
Here the kubelet acts as a client, so the node name in the config must be changed; node authorization validates the name. The names were already signed into the apiserver's certificate, so once the node starts, validation completes automatically. Afterwards it looks like this:
# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-kwlj5 2d system:node:uy05-13 Approved,Issued
csr-l9qkz 3d system:node:uy08-07 Approved,Issued
csr-z9nmd 3d system:node:uy08-08 Approved,Issued
The other three config files follow the same pattern as kubelet.conf; just substitute the corresponding certificate content.
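If you would rather not paste base64 blobs by hand, kubectl can embed the files for you. A sketch for admin.conf, assuming the freshly signed admin.crt and admin.key are in the current directory and the user name is the kubeadm default kubernetes-admin:
# kubectl config set-credentials kubernetes-admin \
      --client-certificate=admin.crt --client-key=admin.key \
      --embed-certs=true --kubeconfig=/etc/kubernetes/admin.conf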
c. Change advertise-address to the local machine's address.
# vim manifests/kube-apiserver.yaml
--advertise-address=192.168.5.104
d. With the config files fixed up, the kubelet can now be started. One reminder: until the load balancer is deployed, the apiserver address used here is still the first node's.
5. Deploy the third master node by repeating the steps for the second one.
At this point all three nodes should be up:
# kubectl get no
NAME STATUS ROLES AGE VERSION
uy05-13 Ready master 3d v1.8.3
uy08-07 Ready <none> 3d v1.8.3
uy08-08 Ready <none> 3d v1.8.3
6. Scale dns and heapster to three replicas so that all three nodes run a copy of each.
# kubectl scale --replicas=3 deployment kube-dns -n kube-system
# kubectl scale --replicas=3 deployment heapster -n kube-system
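To confirm the replicas really spread across the three masters:
# kubectl get po -n kube-system -o wide | grep -E 'kube-dns|heapster'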
7. Deploy the load balancer and failover.
a. Install lvs and keepalived.
# aptitude install -y ipvsadm keepalived
b. Edit the configuration files.
Master node:
# vim keepalived.conf
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "/usr/bin/curl -k https://127.0.0.1:6443/api"
    interval 3
    weight -10
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    virtual_router_id 66
    advert_int 1
    state MASTER
    priority 100
    interface eno2
    mcast_src_ip 192.168.2.56
    authentication {
        auth_type PASS
        auth_pass 4743
    }
    unicast_peer {
        192.168.2.56
        192.168.2.57
    }
    virtual_ipaddress {
        192.168.6.15
    }
    track_script {
        CheckK8sMaster
    }
}

virtual_server 192.168.6.15 6443 {
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    delay_loop 20
    protocol TCP

    real_server 192.168.5.42 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.104 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.105 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
Backup node:
# vim keepalived.conf
global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "/usr/bin/curl -k https://127.0.0.1:6443/api"
    interval 3
    weight -10
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    virtual_router_id 66
    advert_int 1
    state BACKUP
    priority 95
    interface eno2
    mcast_src_ip 192.168.2.57
    authentication {
        auth_type PASS
        auth_pass 4743
    }
    unicast_peer {
        192.168.2.56
        192.168.2.57
    }
    virtual_ipaddress {
        192.168.6.15
    }
    track_script {
        CheckK8sMaster
    }
}

virtual_server 192.168.6.15 6443 {
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    delay_loop 20
    protocol TCP

    real_server 192.168.5.42 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.104 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.105 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
c. Configure the vip on each real server (that is, on the three master nodes).
# vim /etc/network/interfaces
auto lo:15
iface lo:15 inet static
    address 192.168.6.15
    netmask 255.255.255.255
# ifconfig lo:15 192.168.6.15 netmask 255.255.255.255 up
d. Adjust the ARP-related kernel parameters, which LVS DR mode requires so that the real servers can hold the vip on lo without answering ARP queries for it:
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.nf_conntrack_max = 2048000
net.netfilter.nf_conntrack_max = 2048000
# sysctl -p
e. Start the service.
# systemctl start keepalived
# systemctl enable keepalived
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=1048576)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.6.15:6443 rr
-> 192.168.5.42:6443 Route 10 0 0
-> 192.168.5.104:6443 Route 10 0 0
-> 192.168.5.105:6443 Route 10 0 0
8. Point everything in the kubernetes cluster that needs to reach the apiserver at the vip.
The things to change: the four config files (admin.conf, controller-manager.conf, scheduler.conf, kubelet.conf), plus the kube-proxy and cluster-info configmaps.
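For the four flat files, a one-liner per node does the job; a sketch, assuming they all still point at the first master's address:
# sed -i 's#https://192.168.5.42:6443#https://192.168.6.15:6443#g' /etc/kubernetes/*.conf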
The flat files are trivial: open each one and replace the server address (or use the sed one-liner above). Here is how to edit the configmaps:
# kubectl edit cm kube-proxy -n kube-system
apiVersion: v1
data:
  kubeconfig.conf: |
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.6.15:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: 2017-11-22T10:47:19Z
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "9703"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-proxy
  uid: 836ffdfe-cf72-11e7-9b82-34e6d7899e5d
# kubectl edit cm cluster-info -n kube-public
apiVersion: v1
data:
jws-kubeconfig-2a8d9c: eyJhbGciOiJIUzI1NiIsImtpZCI6IjJhOGQ5YyJ9..nBOva6m8fBYwn8qbe0CUA3pVF-WPXRe1Ynr3sAwPmKI
kubeconfig: |
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01URXlNakV3TkRZME5Wb1hEVEkzTVRFeU1ERXdORFkwTlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3Z4CmFJSkR4UTRjTFo3Y0xIbm1CWXFJY3ZVTENMSXY2ZCtlVGg0SzBnL2NEMXAzNVBaa2JKUE1YSXpOVjJDOVZodXMKMXVpTlQvQ3dOL245WXhtYk9WaHBZbXNySytuMzJ3dTB0TlhUdWhTQ1dFSU1SWGpkeno2TG0xaTNLWEorSXF4KwpTbTVVMXhaY01iTy9UT1ZXWG81TDBKai9PN0ZublB1cFd2SUtpZVRpT1lnckZuMHZsZlY4bVVCK2E5UFNSMnRSCkJDWFBwWFRTOG96ZFQ3alFoRE92N01KRTJKU0pjRHp1enBISVBuejF0RUNYS25SU0xpVm5rVE51L0RNek9LYWEKNFJiaUwvbDY2MDkra1BYL2JNVXNsdEVhTmVyS2tEME13SjBOakdvS0pEOWUvUldoa0ZTZWFKWVFFN0NXZk5nLwo3U01wblF0SGVhbVBCbDVFOTIwQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEUlh2N0V3clVyQ0tyODVGU2pGcCtYd2JTQmsKRlFzcFR3ZEZEeFUvemxERitNVlJLL0QyMzdFQmdNbGg3ZndDV2llUjZnTFYrQmdlVGowU3BQWVl6ZVZJZEZYVQp0Z3lzYmQvVHNVcWNzQUEyeExiSnY4cm1nL2FTL3dScEQ0YmdlMS9Jb1EwTXFUV0FoZno2VklMajVkU0xWbVNOCmQzcXlFb0RDUGJnMGVadzBsdE5LbW9BN0p4VUhLOFhnTWRVNUZnelYvMi9XdUt2NkZodUdlUEt0cjYybUUvNkcKSy9BTTZqUHhKeXYrSm1VVVFCbllUQ2pCbU5nNjR2M0ZPSDhHMVBCdlhlUHNvZW5DQng5M3J6SFM1WWhnNHZ0dAoyelNnUGpHeUw0RkluZlF4MFdwNHJGYUZZMGFkQnV0VkRnbC9VTWI1eFdnSDN2Z0RBOEEvNGpka251dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.6.15:6443
name: ""
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
kind: ConfigMap
metadata:
creationTimestamp: 2017-11-22T10:47:19Z
name: cluster-info
namespace: kube-public
resourceVersion: "580570"
selfLink: /api/v1/namespaces/kube-public/configmaps/cluster-info
uid: 834a18c5-cf72-11e7-9b82-34e6d7899e5d
Of course, after changing the config files you must restart the kubelet for them to take effect.
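On each node that is simply:
# systemctl restart kubelet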
9. Verify: join a worker node to the cluster by pointing kubeadm at the apiserver through the vip.
# kubeadm join --token 2a8d9c.9b5a1c7c05269fb3 192.168.6.15:6443 --discovery-token-ca-cert-hash sha256:ce9e1296876ab076f7afb868f79020aa6b51542291d80b69f2f10cdabf72ca66
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.2-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.6.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.6.15:6443"
[discovery] Requesting info from "https://192.168.6.15:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.6.15:6443"
[discovery] Successfully established connection with API Server "192.168.6.15:6443"
[bootstrap] Detected server version: v1.8.3
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
10. With that, high availability for the whole kubernetes cluster is in place.
# kubectl get no
NAME STATUS ROLES AGE VERSION
uy02-07 Ready <none> 22m v1.8.3
uy05-13 Ready master 5d v1.8.3
uy08-07 Ready <none> 5d v1.8.3
uy08-08 Ready <none> 5d v1.8.3
# kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-etcd-cnwlt 1/1 Running 2 5d
kube-system calico-kube-controllers-55449f8d88-dffp5 1/1 Running 2 5d
kube-system calico-node-d6v5n 2/2 Running 4 5d
kube-system calico-node-fqxl2 2/2 Running 0 5d
kube-system calico-node-hbzd4 2/2 Running 6 5d
kube-system calico-node-tcltp 2/2 Running 0 2h
kube-system heapster-59ff54b574-ct5td 1/1 Running 2 5d
kube-system heapster-59ff54b574-d7hwv 1/1 Running 0 5d
kube-system heapster-59ff54b574-vxxbv 1/1 Running 1 5d
kube-system kube-apiserver-uy05-13 1/1 Running 2 5d
kube-system kube-apiserver-uy08-07 1/1 Running 0 5d
kube-system kube-apiserver-uy08-08 1/1 Running 1 4d
kube-system kube-controller-manager-uy05-13 1/1 Running 2 5d
kube-system kube-controller-manager-uy08-07 1/1 Running 0 5d
kube-system kube-controller-manager-uy08-08 1/1 Running 1 5d
kube-system kube-dns-545bc4bfd4-4xf99 3/3 Running 0 5d
kube-system kube-dns-545bc4bfd4-8fv7p 3/3 Running 3 5d
kube-system kube-dns-545bc4bfd4-jbj9t 3/3 Running 6 5d
kube-system kube-proxy-8c59t 1/1 Running 1 5d
kube-system kube-proxy-bdx5p 1/1 Running 2 5d
kube-system kube-proxy-dmzm4 1/1 Running 0 2h
kube-system kube-proxy-gnfcx 1/1 Running 0 5d
kube-system kube-scheduler-uy05-13 1/1 Running 2 5d
kube-system kube-scheduler-uy08-07 1/1 Running 0 5d
kube-system kube-scheduler-uy08-08 1/1 Running 1 5d
kube-system kubernetes-dashboard-69c5c78645-4r8zw 1/1 Running 2 5d
# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
# kubectl cluster-info
Kubernetes master is running at https://192.168.6.15:6443
Heapster is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy