One-click deployment of a highly available Kubernetes 1.10.3 cluster with Ansible


Please keep your environment consistent with the one described here.

The installation needs to download system packages, so make sure every node has Internet access.

Node information for this installation


Test environment: VMware virtual machines

IP address        Hostname   CPU       RAM
192.168.77.133    k8s-m1     6 cores   6 GB
192.168.77.134    k8s-m2     6 cores   6 GB
192.168.77.135    k8s-m3     6 cores   6 GB
192.168.77.136    k8s-n1     6 cores   6 GB
192.168.77.137    k8s-n2     6 cores   6 GB
192.168.77.138    k8s-n3     6 cores   6 GB

In addition, the master nodes together provide a VIP: 192.168.77.140.
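How the masters share this VIP is internal to the kubernetes role, but the usual mechanism for a floating address like this is VRRP via keepalived. A minimal illustrative config (hypothetical interface name and priority, not the role's actual file):

# cat /etc/keepalived/keepalived.conf    (illustrative sketch only)
vrrp_instance VI_1 {
    state BACKUP              # all masters start as BACKUP; VRRP elects the VIP holder
    interface ens33           # hypothetical NIC name, adjust to your environment
    virtual_router_id 51
    priority 100              # use a different priority on each master
    advert_int 1
    virtual_ipaddress {
        192.168.77.140        # the apiserver VIP used throughout this guide
    }
}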

Cluster topology for this installation

[image: cluster topology diagram]

Roles used in this installation

For how to use an Ansible role, see the article below.

Cluster installation method

The Kubernetes HA cluster is installed via static pods.
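With static pods, the kubelet on each master runs the control-plane components directly from manifest files on disk, so no external system is needed to bootstrap them. Assuming the conventional manifest directory (the role may place them elsewhere), you can inspect the manifests after installation:

# ls /etc/kubernetes/manifests/
# expect files such as kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml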

Operations on the Ansible control node


OS: CentOS Linux release 7.4.1708 (Core)
ansible: 2.5.3

Install Ansible
# yum -y install ansible
# ansible --version
ansible 2.5.3
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
Configure Ansible
# sed -i 's|#host_key_checking|host_key_checking|g' /etc/ansible/ansible.cfg 
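Since the inventory below authenticates with ansible_ssh_pass instead of SSH keys, Ansible's default OpenSSH connection also needs sshpass on the control node:

# yum -y install sshpass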
Download the roles
# yum -y install git
# git clone https://github.com/kuailemy123/Ansible-roles.git /etc/ansible/roles
Cloning into '/etc/ansible/roles'...
remote: Counting objects: 1767, done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 1767 (delta 5), reused 24 (delta 4), pack-reused 1738
Receiving objects: 100% (1767/1767), 427.96 KiB | 277.00 KiB/s, done.
Resolving deltas: 100% (639/639), done.
Download the kubernetes-files.zip file

The required Google docker images have been exported into this archive to work around local restrictions on pulling them, for everyone's convenience.

Download link: https://pan.baidu.com/s/1BNMJLEVzCE8pvegtT7xjyQ  password: qm4k

# yum -y install unzip
# unzip kubernetes-files.zip -d /etc/ansible/roles/kubernetes/files/
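To confirm the archive extracted where the role expects it:

# ls /etc/ansible/roles/kubernetes/files/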
Configure the host inventory
# cat /etc/ansible/hosts
[k8s-master]
192.168.77.133
192.168.77.134
192.168.77.135

[k8s-node]
192.168.77.136
192.168.77.137
192.168.77.138

[k8s-cluster:children]
k8s-master
k8s-node

[k8s-cluster:vars]
ansible_ssh_pass=123456

The k8s-master group holds all master hosts and the k8s-node group holds all node hosts; k8s-cluster contains every host in both the k8s-master and k8s-node groups.

Note: use lowercase letters in hostnames; uppercase letters lead to hosts not being found.
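Before running anything heavier, it is worth confirming that every host in the inventory is reachable with the configured password:

# ansible k8s-cluster -m ping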

Configure the playbook
# cat /etc/ansible/k8s.yml
---
# Initialize the cluster
- hosts: k8s-cluster
  serial: "100%"
  any_errors_fatal: true
  vars:
    - ipnames:
        '192.168.77.133': 'k8s-m1'
        '192.168.77.134': 'k8s-m2'
        '192.168.77.135': 'k8s-m3'
        '192.168.77.136': 'k8s-n1'
        '192.168.77.137': 'k8s-n2'
        '192.168.77.138': 'k8s-n3'
  roles:
    - hostnames
    - repo-epel
    - docker

# Install the master nodes
- hosts: k8s-master
  any_errors_fatal: true
  vars:
    - kubernetes_master: true
    - kubernetes_apiserver_vip: 192.168.77.140
  roles:
    - kubernetes

# Install the worker nodes
- hosts: k8s-node
  any_errors_fatal: true
  vars:
    - kubernetes_node: true
    - kubernetes_apiserver_vip: 192.168.77.140
  roles:
    - kubernetes

# Install the addons
- hosts: k8s-master
  any_errors_fatal: true
  vars:
    - kubernetes_addons: true
    - kubernetes_ingress_controller: nginx
    - kubernetes_apiserver_vip: 192.168.77.140
  roles:
    - kubernetes

kubernetes_ingress_controller can also be set to traefik.
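A quick sanity check of the playbook before the real run:

# ansible-playbook /etc/ansible/k8s.yml --syntax-check
# ansible-playbook /etc/ansible/k8s.yml --list-hosts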

Run the playbook
# time ansible-playbook /etc/ansible/k8s.yml
......
real    26m44.153s
user    1m53.698s
sys     0m55.509s
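If a host fails partway through, Ansible 2.5 writes a .retry file next to the playbook, and the failed hosts can be re-run on their own:

# ansible-playbook /etc/ansible/k8s.yml --limit @/etc/ansible/k8s.retry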

 

 
[asciicast: recording of the playbook run]

 

Verify the cluster version
# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Verify the cluster status
kubectl -n kube-system get po -o wide -l k8s-app=kube-proxy
kubectl -n kube-system get po -l k8s-app=kube-dns
kubectl -n kube-system get po -l k8s-app=calico-node -o wide
calicoctl node status
kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
kubectl -n kube-system get po,svc | grep -E 'monitoring|heapster|influxdb'
kubectl -n ingress-nginx get pods
kubectl -n kube-system get po -l app=helm
kubectl -n kube-system logs -f kube-scheduler-k8s-m2
helm version

The outputs are omitted here.
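Two generic checks that are useful regardless of which addons were installed (standard kubectl commands, not specific to this role):

# kubectl get nodes -o wide
# kubectl get cs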

View the addons' access information

On the first master server:

# kubectl cluster-info
Kubernetes master is running at https://192.168.77.140:6443
Elasticsearch is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
heapster is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/heapster/proxy
Kibana is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
kube-dns is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.77.140:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy
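The proxy URLs above require API server credentials. From the master, the usual shortcut is kubectl proxy, which serves the same paths on a local port without extra authentication:

# kubectl proxy
# then browse http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy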
# cat ~/k8s_addons_access 
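The contents of ~/k8s_addons_access are not reproduced in the post. If you need a dashboard login token, one common approach is to read it from an admin service account's secret (the admin-token name here is an assumption about how the role names it):

# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/admin-token/{print $1}')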

After the deployment completes, rebooting all cluster nodes is recommended.
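The reboot can also be driven from the control node; a minimal sketch using a delayed shutdown so Ansible's SSH sessions can disconnect cleanly:

# ansible k8s-cluster -m command -a 'shutdown -r +1'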



Author: lework
Link: https://www.jianshu.com/p/265cfb0811b2
Source: Jianshu

