Setting up a Kubernetes (1.10.2) cluster with kubeadm (from within mainland China)


Contents

  1. Goals
  2. Prerequisites
  3. Steps
  4. Tearing down the cluster

Goals

  • Set up a secure Kubernetes cluster on your machines.
  • Install a network plugin in the cluster so applications can talk to each other.
  • Run a simple microservice application on the cluster.

Prerequisites

Hosts

  • One or more hosts running Ubuntu 16.04+.
  • Preferably dual-core machines with at least 2 GB of RAM each.
  • Full network connectivity between all machines in the cluster; a public or private network is fine.

Software

Install Docker

sudo apt-get update
sudo apt-get install -y docker.io

Kubernetes recommends the older docker.io package. If you need the latest docker-ce instead, see the previous post: Docker初體驗

Disable swap

Next you must disable swap; Kubernetes makes this mandatory. It is easy to do: edit /etc/fstab and comment out the lines that reference swap so the change persists across reboots, then run sudo swapoff -a to disable swap immediately.
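The fstab edit itself is a one-line sed. The sketch below wraps it in a helper (comment_swap is a name invented here, not a standard tool) so you can try it against a copy of /etc/fstab before touching the real file.

```shell
# comment_swap FILE: comment out every uncommented line in an fstab-style
# file that contains a "swap" field, so swap stays disabled across reboots.
comment_swap() {
  sed -E -i.bak 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$1"
}

# On a real host, as root:
#   comment_swap /etc/fstab
#   swapoff -a    # disable swap immediately, without a reboot
```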

If disabling swap seems puzzling, the reasoning is discussed in this GitHub issue: Kubelet/Kubernetes should work with Swap Enabled

步驟

(1/4) Install kubeadm, kubelet, and kubectl

  • kubeadm: the command-line tool that bootstraps the cluster.
  • kubelet: the component that runs on every machine in the cluster and performs operations such as starting pods and containers.
  • kubectl: the command-line tool for operating a running cluster.

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s http://packages.faasx.com/google/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

The apt-key is downloaded from a domestic mirror; the official address is https://packages.cloud.google.com/apt/doc/apt-key.gpg
The apt package repository uses the USTC mirror; the official address is http://apt.kubernetes.io/

(2/4) Initialize the master node

Because of network restrictions, we need to pull the images kubeadm uses during initialization in advance, and tag them with the corresponding k8s.gcr.io names:

## Pull the images
docker pull reg.qiniu.com/k8s/kube-apiserver-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-controller-manager-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-scheduler-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-proxy-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/etcd-amd64:3.1.12
docker pull reg.qiniu.com/k8s/pause-amd64:3.1

## Add the k8s.gcr.io tags
docker tag reg.qiniu.com/k8s/kube-apiserver-amd64:v1.10.2 k8s.gcr.io/kube-apiserver-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-scheduler-amd64:v1.10.2 k8s.gcr.io/kube-scheduler-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-controller-manager-amd64:v1.10.2 k8s.gcr.io/kube-controller-manager-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-proxy-amd64:v1.10.2 k8s.gcr.io/kube-proxy-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag reg.qiniu.com/k8s/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

## Kubernetes 1.10 added CoreDNS. If you use CoreDNS (disabled by default), the three images below are not needed.
docker pull reg.qiniu.com/k8s/k8s-dns-sidecar-amd64:1.14.10
docker pull reg.qiniu.com/k8s/k8s-dns-kube-dns-amd64:1.14.10
docker pull reg.qiniu.com/k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.10

docker tag reg.qiniu.com/k8s/k8s-dns-sidecar-amd64:1.14.10 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10
docker tag reg.qiniu.com/k8s/k8s-dns-kube-dns-amd64:1.14.10 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10
docker tag reg.qiniu.com/k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.10 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10

kubeadm reportedly supports a custom image registry, but I could not get that to work.
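The repeated pull/tag pairs above can also be generated with a short loop. This is only a convenience sketch: gen_pull_tag is a name made up here, and it merely prints the docker commands so you can review them before piping the output to sh.

```shell
# gen_pull_tag: print a `docker pull` and `docker tag` command for each image,
# mapping the reg.qiniu.com mirror names onto the k8s.gcr.io names kubeadm expects.
gen_pull_tag() {
  mirror=reg.qiniu.com/k8s
  target=k8s.gcr.io
  for img in \
      kube-apiserver-amd64:v1.10.2 \
      kube-controller-manager-amd64:v1.10.2 \
      kube-scheduler-amd64:v1.10.2 \
      kube-proxy-amd64:v1.10.2 \
      etcd-amd64:3.1.12 \
      pause-amd64:3.1; do
    echo "docker pull $mirror/$img"
    echo "docker tag $mirror/$img $target/$img"
  done
}

gen_pull_tag          # review the commands...
# gen_pull_tag | sh   # ...then execute them
```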

The master is the machine that runs the control-plane components, including etcd (the cluster database) and the API server (which the kubectl CLI talks to).
To initialize the master, simply pick a machine with kubeadm installed and run:

sudo kubeadm init --kubernetes-version=v1.10.2 --feature-gates=CoreDNS=true --pod-network-cidr=192.168.0.0/16

Commonly used parameters of init:

  • --kubernetes-version: the Kubernetes version to install. If omitted, kubeadm fetches the latest version information from Google's site.

  • --pod-network-cidr: the IP address range for the pod network. The value depends on which network plugin you choose in the next step; for the Calico network used in this post it must be 192.168.0.0/16.

  • --apiserver-advertise-address: the IP address the master's API server advertises. If omitted, the network interface is auto-detected, which usually yields the private IP.

  • --feature-gates=CoreDNS: whether to use CoreDNS (true/false). The CoreDNS add-on was promoted to Beta in 1.10 and will eventually become the Kubernetes default.

For a more detailed introduction to kubeadm, see the official kubeadm documentation.

The final output looks like this:

raining@raining-ubuntu:~$ sudo kubeadm init --kubernetes-version=v1.10.2 --feature-gates=CoreDNS=true --pod-network-cidr=192.168.0.0/16
[sudo] password for raining: 
[init] Using Kubernetes version: v1.10.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [raining-ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.8]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [raining-ubuntu] and IPs [192.168.0.8]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.501722 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node raining-ubuntu as master by adding a label and a taint
[markmaster] Master raining-ubuntu tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: vtyk9m.g4afak37myq3rsdi
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.8:6443 --token vtyk9m.g4afak37myq3rsdi --discovery-token-ca-cert-hash sha256:19246ce11ba3fc633fe0b21f2f8aaaebd7df9103ae47138dc0dd615f61a32d99

To use kubectl as a non-root user, run the following commands (also part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The token printed by kubeadm init is used for mutual authentication between the master and joining nodes. It is a secret and must be kept safe: anyone who holds it can add nodes to your cluster at will. You can list, create, and delete tokens with kubeadm; see the official reference documentation for details.

To verify that the deployment succeeded, open https://<master-ip>:6443 in a browser; the response looks like this:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}

(3/4) Install a network plugin

Installing a network plugin is mandatory, because your pods need to communicate with each other.

The network must be deployed before any applications; add-ons such as kube-dns (this post uses coredns) will not work until the network is up. kubeadm only supports Container Network Interface (CNI) based networks (kubenet is not supported).

Common network add-ons include Calico, Canal, Flannel, Kube-router, Romana, and Weave Net. For the full list, see the add-ons page.

To install a network plugin, run:

kubectl apply -f <add-on.yaml>

This post uses the Calico network, installed as follows:

# Use a mirror in mainland China
kubectl apply -f http://mirror.faasx.com/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

For Calico to work correctly, kubeadm init must have been run with --pod-network-cidr=192.168.0.0/16

For more detail, see the official Calico documentation: kubeadm quickstart

Once the network plugin is installed, you can check whether it is working by watching the status of the coredns pod:

kubectl get pods --all-namespaces

# Output
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-zxmvh                         1/1       Running   0          4m
kube-system   calico-kube-controllers-f9d6c4cb6-42w9j   1/1       Running   0          4m
kube-system   calico-node-jq5qb                         2/2       Running   0          4m
kube-system   coredns-7997f8864c-kfswc                  1/1       Running   0          1h
kube-system   coredns-7997f8864c-ttvj2                  1/1       Running   0          1h
kube-system   etcd-raining-ubuntu                       1/1       Running   0          1h
kube-system   kube-apiserver-raining-ubuntu             1/1       Running   0          1h
kube-system   kube-controller-manager-raining-ubuntu    1/1       Running   0          1h
kube-system   kube-proxy-vrjlq                          1/1       Running   0          1h
kube-system   kube-scheduler-raining-ubuntu             1/1       Running   0          1h

Once the coredns pods reach the Running state, you can proceed to add worker nodes.
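Rather than re-running kubectl get pods by hand, you can poll until the coredns pods report Running. The helper below is a generic sketch (wait_for is a name invented here): it retries a command until its output matches a pattern or the attempt limit is hit.

```shell
# wait_for CMD PATTERN MAX: run CMD repeatedly (about once per second) until
# its output matches PATTERN; give up after MAX attempts. Returns 0 on success.
wait_for() {
  i=0
  while [ "$i" -lt "$3" ]; do
    if eval "$1" | grep -q "$2"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# On the master, for example:
#   wait_for 'kubectl get pods --all-namespaces' 'coredns.*Running' 120
```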

Master isolation

By default, for security reasons, the master does not run pods. If you do want pods to run on the master, e.g. for a single-machine Kubernetes cluster, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

The output looks something like this:

node "test-01" untainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.

This removes the node-role.kubernetes.io/master taint from every node that has it, including the master, so the scheduler can place pods on any node.

(4/4) Join other nodes

Nodes are where your workloads (containers, pods, and so on) run. To add nodes to the cluster, do the following on each machine:

  • SSH into the machine.
  • Become root (e.g. sudo su -).
  • Run the command printed by kubeadm init: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

The output looks like this:

raining@ubuntu1:~$ sudo kubeadm join 192.168.0.8:6443 --token vtyk9m.g4afak37myq3rsdi --discovery-token-ca-cert-hash sha256:19246ce11ba3fc633fe0b21f2f8aaaebd7df9103ae47138dc0dd615f61a32d99
[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.8:6443"
[discovery] Requesting info from "https://192.168.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.8:6443"
[discovery] Successfully established connection with API Server "192.168.0.8:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

A few seconds later, running kubectl get nodes on the master shows the new machine:

NAME             STATUS    ROLES     AGE       VERSION
raining-ubuntu   Ready     master    1h        v1.10.2
ubuntu1          Ready     <none>    2m        v1.10.2

(Optional) Managing the cluster from machines other than the master

To use kubectl on another computer to manage your cluster, copy the administrator kubeconfig file from the master to that computer:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

(Optional) Proxying the API server to localhost

If you want to connect to the API server from outside the cluster, you can use the kubectl proxy tool:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy

You can then reach the API server locally at http://localhost:8001/api/v1.

(Optional) Deploy a microservice application

Now you can try out your new cluster. Sock Shop is a sample microservices application that shows how to run and connect a set of services in Kubernetes. To learn more about it, see the GitHub README.

kubectl create namespace sock-shop
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"

Check whether the front-end service exposes a port:

kubectl -n sock-shop get svc front-end

The output is similar to:

NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
front-end   NodePort   10.107.207.35   <none>        80:30001/TCP   31s

It may take a few minutes to download and start all the containers; use kubectl get pods -n sock-shop to check the status of the services.

The output:

raining@raining-ubuntu:~$ kubectl get pods -n sock-shop
NAME                            READY     STATUS    RESTARTS   AGE
carts-6cd457d86c-wdbsg          1/1       Running   0          1m
carts-db-784446fdd6-9gsrs       1/1       Running   0          1m
catalogue-779cd58f9b-nf6n4      1/1       Running   0          1m
catalogue-db-6794f65f5d-kwc2x   1/1       Running   0          1m
front-end-679d7bcb77-4hbjq      1/1       Running   0          1m
orders-755bd9f786-gbspz         1/1       Running   0          1m
orders-db-84bb8f48d6-98wsm      1/1       Running   0          1m
payment-674658f686-xc7gk        1/1       Running   0          1m
queue-master-5f98bbd67-xgqr6    1/1       Running   0          1m
rabbitmq-86d44dd846-nf2g6       1/1       Running   0          1m
shipping-79786fb956-bs7jn       1/1       Running   0          1m
user-6995984547-nvqw4           1/1       Running   0          1m
user-db-fc7b47fb9-zcf5r         1/1       Running   0          1m
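If you would rather script this check than eyeball the table above, a small awk filter can count the pods that are not yet running. This is just a sketch (not_running is a made-up name); it reads `kubectl get pods` output on stdin.

```shell
# not_running: read `kubectl get pods` output on stdin and print how many
# pods are not yet in the Running state (the header line is skipped).
not_running() {
  awk 'NR > 1 && $3 != "Running" { n++ } END { print n + 0 }'
}

# e.g. on the master:
#   kubectl get pods -n sock-shop | not_running
```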

Then, in your browser, visit the IP of a cluster node together with the exposed NodePort, e.g. http://<node-ip>:<port>. In this example the port is 30001, but yours may differ. If there is a firewall, make sure the port is open before you try to access it.

[Screenshot: Sock Shop home page]

Note that in a multi-node deployment you must use a worker node's IP for access, not the master's IP.

Finally, to uninstall Sock Shop, just run this on the master:

kubectl delete namespace sock-shop

Tearing down the cluster

To undo what kubeadm has done, you should first drain the node, making sure it is emptied of workloads before shutting it down.

On the master, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
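With several workers, this drain/delete pair becomes repetitive. The sketch below (gen_teardown is a name invented here) prints the master-side commands for a list of node names so you can review them and then pipe them to sh.

```shell
# gen_teardown NODE...: print the drain and delete commands for each node.
gen_teardown() {
  for node in "$@"; do
    echo "kubectl drain $node --delete-local-data --force --ignore-daemonsets"
    echo "kubectl delete node $node"
  done
}

gen_teardown ubuntu1           # review the commands...
# gen_teardown ubuntu1 | sh    # ...then execute them on the master
```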

Then, on the node being removed, reset everything kubeadm installed:

kubeadm reset

If you then want to set the cluster up again, just run kubeadm init or kubeadm join with the appropriate parameters.
