1 Environment planning
Rough topology:
In my setup, etcd and the master components run on the same machines (a stacked-etcd layout).
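For reference, the layout implied by the addresses used later in this guide is roughly the following (hostnames and IPs are from my environment; adjust them to yours):
- k8s-master01, 192.168.1.210 — master + etcd
- k8s-node1, 192.168.1.200 — master + etcd (hostname never renamed)
- k8s-node2, 192.168.1.211 — master + etcd (hostname never renamed)
- node3 — worker node
- 192.168.1.222 — virtual IP (VIP) managed by keepalived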
2 System initialization
See https://www.cnblogs.com/huningfei/p/12697310.html
3 Install k8s and docker
See https://www.cnblogs.com/huningfei/p/12697310.html
4 Install keepalived
Install it on all three master nodes:
yum -y install keepalived
Configuration files
master1
[root@k8s-master01 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER              # this node starts as the VRRP primary
    interface ens33           # NIC name on this host
    virtual_router_id 50
    priority 100              # highest priority, so it holds the VIP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.222         # the VIP
    }
}
master2
! Configuration File for keepalived
global_defs {
   router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.222
    }
}
master3
! Configuration File for keepalived
global_defs {
   router_id master03
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.222
    }
}
Start keepalived and enable it at boot:
service keepalived start
systemctl enable keepalived
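To confirm which node currently holds the VIP, a quick check like the following helps (adjust the NIC name per node, ens33 here as on master1):
```bash
# the VIP should appear on the NIC of whichever node is currently the VRRP primary
ip addr show ens33 | grep 192.168.1.222
# keepalived also logs "Entering MASTER STATE" on the node that takes over the VIP
grep -i 'MASTER STATE' /var/log/messages
```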
5 Initialize the master node
Run this on just one master (any of them):
kubeadm init --config=kubeadm-config.yaml
The init configuration file is as follows:
```bash
[root@k8s-master01 load-k8s]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
apiServer:
  certSANs:    # hostnames, IPs and the VIP of every kube-apiserver node (apparently listing only the VIP also works)
  - k8s-master01
  - k8s-node1
  - k8s-node2
  - 192.168.1.210
  - 192.168.1.200
  - 192.168.1.211
  - 192.168.1.222
controlPlaneEndpoint: "192.168.1.222:6443"   # the VIP
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```
When kubeadm reports that the control plane initialized successfully, the init is done:
Then run the commands from the kubeadm output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
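At this point it's worth a quick sanity check that the API server answers on the VIP and that kube-proxy really came up in ipvs mode; a rough check could look like this:
```bash
# the cluster endpoint should be the keepalived VIP, not a single node IP
kubectl cluster-info
# kube-proxy logs a line mentioning the ipvs proxier when ipvs mode took effect
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
```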
6 Install the flannel network plugin
kubectl apply -f kube-flannel.yml
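If you don't already have kube-flannel.yml locally, it is commonly fetched from the flannel repository (https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml). Once applied, one flannel pod per node should reach Running; a quick check, assuming the manifest of that era which deploys into kube-system:
```bash
# flannel runs as a DaemonSet, one pod per node
kubectl -n kube-system get pods -o wide | grep flannel
```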
7 Copy the certificates (key step)
Copy them from master01 to the other two master nodes; I use a script for that here:
[root@k8s-master01 load-k8s]# cat cert-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="192.168.1.200 192.168.1.211"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Skip the next line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
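The script assumes root can scp to the other two masters non-interactively; ssh-copy-id is one way to set that up before running it (same IPs as in CONTROL_PLANE_IPS):
```bash
# allow password-less root SSH from master01 to the other masters, then run the copy script
ssh-copy-id root@192.168.1.200
ssh-copy-id root@192.168.1.211
sh cert-master.sh
```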
Then, on the other two master nodes, move the certificates into /etc/kubernetes/pki; I use a script for this as well:
```bash
[root@k8s-node1 load-k8s]# cat mv-cert.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Skip the next line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
```
8 The remaining two master nodes join the cluster
kubeadm join 192.168.1.222:6443 --token zi3lku.0jmskzstc49429cu \
--discovery-token-ca-cert-hash sha256:75c2e15f51e23490a0b042d72d6ac84fc18ba63c230f27882728f8832711710b \
--control-plane
Note that the IP here is the virtual IP created by keepalived.
kubeadm reports that the node has joined and that a new control-plane instance was created when the join succeeds.
Once they have joined, you can check from any of the three masters that all nodes are up:
kubectl get nodes
Note: to save effort I did not rename the hostnames to master*, but all three of these nodes are in fact master nodes.
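Before kubectl works on the two newly joined masters, the admin kubeconfig has to be set up there as well; kubeadm's join output prints the same commands as during init:
```bash
# run on each newly joined master so kubectl can be used there too
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```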
9 Worker nodes join the cluster
kubeadm join 192.168.1.222:6443 --token zi3lku.0jmskzstc49429cu \
--discovery-token-ca-cert-hash sha256:75c2e15f51e23490a0b042d72d6ac84fc18ba63c230f27882728f8832711710b
kubeadm reports that the node has joined the cluster when the join succeeds.
Check the node status: node3 is my worker node, the rest are all master nodes.
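The bootstrap token in these join commands expires after 24 hours by default; if a node added later fails to join with an authentication error, a fresh join command can be printed from any existing master:
```bash
# prints a ready-to-use "kubeadm join ..." line with a new token and the current CA hash
kubeadm token create --print-join-command
```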
10 Cluster high-availability test
1 Shut down master01: the VIP floats over to master02 and everything keeps working.
2 Shut down master02 as well: the VIP floats over to master03 and the existing pods keep running, but kubectl commands no longer work.
Conclusion: the cluster keeps working when any one master fails. With two of the three masters down, the stacked etcd cluster loses quorum (only 1 of 3 members is left), so the API server can no longer serve requests even though the VIP is still reachable.
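To keep an eye on the etcd members themselves during such a test, the etcd static pods can be listed from any working master; a minimal check, assuming the component=etcd label that kubeadm puts on its static pods:
```bash
# one etcd pod per master should be Running while quorum is intact
kubectl -n kube-system get pods -l component=etcd -o wide
```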