kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster. With it, a cluster can be brought up with essentially two commands: `kubeadm init` on the master and `kubeadm join` on each node.
1. Installation requirements
Before starting, the machines used for the Kubernetes cluster must meet the following conditions:
One or more machines running CentOS 7.x x86_64
Hardware: 2 GB RAM or more, 2 or more CPUs, 30 GB of disk or more
Full network connectivity between all machines in the cluster, and outbound Internet access to pull images
swap disabled
2. Learning objectives
1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes Master
3. Deploy a container network plugin
4. Deploy Kubernetes Nodes and join them to the cluster
5. Deploy the Dashboard web UI to view Kubernetes resources visually
3. Prepare the environment
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
iptables -F
Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent (takes effect after reboot)
$ setenforce 0                                         # temporary
Disable swap:
$ swapoff -a        # temporary
$ vim /etc/fstab    # permanent: comment out the swap line
Add hostname-to-IP mappings (remember to set the hostname on each machine):
$ cat /etc/hosts
192.168.30.21 k8s-master
192.168.30.22 k8s-node1
192.168.30.23 k8s-node2
Pass bridged IPv4 traffic to iptables chains (write the two settings below into /etc/sysctl.d/k8s.conf, then reload):
[root@k8s-node1 ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@k8s-node1 ~]# sysctl --system
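On a fresh CentOS 7 install the two bridge-nf keys may not exist until the br_netfilter kernel module is loaded. A sketch of loading it and verifying the setting (run as root; this assumes the standard module name):

```shell
# Load the bridge netfilter module, then re-apply all sysctl config files
modprobe br_netfilter
sysctl --system
# Verify the key took effect; expect "net.bridge.bridge-nf-call-iptables = 1"
sysctl net.bridge.bridge-nf-call-iptables
```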
4. Install Docker/kubeadm/kubelet on all nodes
4.1 Install Docker
Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.
[root@k8s-node1 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@k8s-node1 ~]# yum -y install docker-ce
[root@k8s-node1 ~]# systemctl enable docker && systemctl start docker
[root@k8s-node1 ~]# docker --version
4.2 Add the Alibaba Cloud YUM repository
[root@k8s-node2 ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
4.3 Install kubeadm, kubelet and kubectl
[root@k8s-node1 ~]# yum -y install kubelet kubeadm kubectl
[root@k8s-node1 ~]# systemctl enable kubelet.service
The `kubeadm init` command used in the next section must match your environment: fill in your own master IP for --apiserver-advertise-address, and the Kubernetes version installed here (v1.15.0 in this walkthrough) for --kubernetes-version.
5. Deploy the Kubernetes Master
Run on the master host:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.30.21 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
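For reference, the flags of the command above, annotated. This is a sketch only: the IP, version, and CIDR ranges are this walkthrough's values and must be adapted to your own environment:

```shell
# Run on the master; adjust every value to your environment
kubeadm init \
  --apiserver-advertise-address=192.168.30.21 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
# --apiserver-advertise-address : this master's own IP
# --image-repository            : domestic mirror for the control-plane images
# --kubernetes-version          : must match the installed kubeadm/kubelet version
# --service-cidr                : virtual IP range handed out to Services
# --pod-network-cidr            : Pod IP range; 10.244.0.0/16 is flannel's default
```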
When initialization completes, output like the following is printed. It is important: the token, hash, and IP are specific to your cluster, so copy the values from your own output rather than this example.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.30.21:6443 --token x8gdiq.sbcj8g4fmoocd5tl \
--discovery-token-ca-cert-hash sha256:0b48e70fa8a268f8b88cd69b02cf87d8a2bf2efe519bb88dfa558de20d4a9993
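The --discovery-token-ca-cert-hash above is the SHA-256 digest of the master CA's public key; if the init output is lost, it can be recomputed from /etc/kubernetes/pki/ca.crt. The sketch below generates a throwaway CA so the pipeline can be tried on any machine with openssl; on a real master, point -in at ca.crt instead (the hash value will of course differ):

```shell
# Recreate the join hash. On a real master the input is
# /etc/kubernetes/pki/ca.crt; a throwaway CA is generated here instead.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null

# kubeadm hashes the DER-encoded public key of the CA certificate
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$tmpdir"
```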
Use the kubectl tool:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
The worker nodes have not joined yet, so only the master is listed, still NotReady:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 10m v1.15.0
6. Install a Pod network plugin (CNI)
Run on the master:
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Confirm node readiness: the master node is now up:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 32m v1.15.0
Confirm the pods in the kube-system namespace have started:
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-5j5fd 1/1 Running 0 33m
coredns-bccdc95cf-6plrt 1/1 Running 0 33m
etcd-k8s-master 1/1 Running 0 32m
kube-apiserver-k8s-master 1/1 Running 0 33m
kube-controller-manager-k8s-master 1/1 Running 0 32m
kube-flannel-ds-amd64-l8dg8 1/1 Running 0 8m1s
kube-proxy-lxn4w 1/1 Running 0 33m
kube-scheduler-k8s-master 1/1 Running 0 33m
7. Join Kubernetes Nodes
To add a new node to the cluster, run on it the kubeadm join command printed by kubeadm init:
[root@k8s-node2 ~]# kubeadm join 192.168.30.21:6443 --token x8gdiq.sbcj8g4fmoocd5tl \
> --discovery-token-ca-cert-hash sha256:0b48e70fa8a268f8b88cd69b02cf87d8a2bf2efe519bb88dfa558de20d4a9993
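Note that the bootstrap token printed by kubeadm init is only valid for 24 hours. If it has expired by the time a node joins, a fresh token together with a ready-made join command can be generated on the master:

```shell
# Prints a complete "kubeadm join <ip>:6443 --token ... --discovery-token-ca-cert-hash ..." line
kubeadm token create --print-join-command
```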
View the nodes:
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 81m v1.15.0
k8s-node1 Ready <none> 23m v1.15.0
k8s-node2 Ready <none> 26m v1.15.0
8. Test the Kubernetes cluster
Create a pod in the cluster and verify that it runs correctly:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
View the pod's details.
(In this walkthrough only one worker node was started; the other had a swap problem and was left off, but this does not affect the deployment.)
[root@k8s-master ~]# kubectl get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-554b9c67f9-sfxh2 1/1 Running 0 5m7s 10.244.1.2 k8s-node2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 96m <none>
service/nginx NodePort 10.1.66.144 <none> 80:30900/TCP 3m44s app=nginx
Access the application on a node:
http://<node-ip>:30900 (the NodePort shown above)
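Rather than reading the port off the table, the assigned NodePort can also be queried with a jsonpath expression; a small sketch (the node IP below is one of this walkthrough's nodes, any node's IP works):

```shell
# Extract the NodePort assigned to the nginx Service
PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
# kube-proxy forwards NodePort traffic on every node, so any node IP works
curl -I http://192.168.30.22:"$PORT"
```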

9. Deploy the Dashboard
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]# vim kubernetes-dashboard.yaml
Around line 112, point the image at a domestic mirror (the default pulls from Google's registry):
    image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
Around line 158, add the Service type so the dashboard is reachable from outside the cluster:
    type: NodePort
    ports:
    - port: 443
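After the two edits, the Service section of the manifest should look roughly like this (reconstructed from the v1.10.1 manifest; targetPort 8443 is the dashboard container's HTTPS port, verify against your downloaded copy):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # added: expose the dashboard on a node port
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
```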
Apply the dashboard:
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
Check the pod status:
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-5j5fd 1/1 Running 0 121m
coredns-bccdc95cf-6plrt 1/1 Running 0 121m
etcd-k8s-master 1/1 Running 0 120m
kube-apiserver-k8s-master 1/1 Running 0 121m
kube-controller-manager-k8s-master 1/1 Running 0 120m
kube-flannel-ds-amd64-6m7ct 1/1 Running 0 64m
kube-flannel-ds-amd64-l8dg8 1/1 Running 0 95m
kube-proxy-lxn4w 1/1 Running 0 121m
kube-proxy-xdcgv 1/1 Running 0 64m
kube-scheduler-k8s-master 1/1 Running 0 121m
kubernetes-dashboard-79ddd5-t7q57 1/1 Running 0 82s
Find the NodePort (31510 here) and access the dashboard over HTTPS, e.g. https://192.168.30.23:31510:
[root@k8s-master ~]# kubectl get pods,svc -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-bccdc95cf-5j5fd 1/1 Running 0 127m
pod/coredns-bccdc95cf-6plrt 1/1 Running 0 127m
pod/etcd-k8s-master 1/1 Running 0 126m
pod/kube-apiserver-k8s-master 1/1 Running 0 126m
pod/kube-controller-manager-k8s-master 1/1 Running 0 126m
pod/kube-flannel-ds-amd64-6m7ct 1/1 Running 0 69m
pod/kube-flannel-ds-amd64-l8dg8 1/1 Running 0 101m
pod/kube-proxy-lxn4w 1/1 Running 0 127m
pod/kube-proxy-xdcgv 1/1 Running 0 69m
pod/kube-scheduler-k8s-master 1/1 Running 0 126m
pod/kubernetes-dashboard-79ddd5-t7q57 1/1 Running 0 6m53s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.1.0.10 <none> 53/UDP,53/TCP,9153/TCP 127m
service/kubernetes-dashboard NodePort 10.1.45.160 <none> 443:31510/TCP 6m53s

The login page offers a choice of kubeconfig or token; we use token login.
First create a service account and bind it to the built-in cluster-admin cluster role:
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
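Since the secret's name carries a random suffix, the lookup in the next two steps can also be collapsed into a single command; a sketch:

```shell
# Find the dashboard-admin token secret and print its details (incl. the token)
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get secret | awk '/dashboard-admin-token/{print $1}')"
```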
Find the dashboard-admin secret:
[root@k8s-master ~]# kubectl get secret -n kube-system
Note the secret name, dashboard-admin-token-sx5gl here (the suffix is random per cluster).

View its details and paste the token into the token field on the dashboard login page:
[root@k8s-master ~]# kubectl describe secret dashboard-admin-token-sx5gl -n kube-system
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc3g1Z2wiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTBmNTk4YWUtMWFlNS00YzNjLTgzZjUtOWRmNDg3MzJhNDVjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.QYPP7SDQnPp062yb_4XrdOE8xkmTergaTTTPulDADvXzyG2udAEU5AKfHNDH1ZXu5pw9RN9OgX5xUwE_OzQXpPoE5Qt0x2M3VOdpscW2pOw_KOUfnYtf-Aq6Z8c9KgsNdAtUkBHwFbMucL3tDH-Uxb9AdBMX6q5W9jbGlfMa0M6tp2o4zIcoqpli1qAMI_FjvNfkmWX0x4akIzsVeoocewdjzB8Ca-VyqEFZXCMULQv5L8z1RszCXZ4VgOnkHQB6AiVUGmJ9B8iwtCZu-SW2iwWaT-4iQeQvtM3HQTl5aZycaI26qUlsuUtBj5eqyJqugSGlidXJs5TPdn_xmF-FZg

