Learning Kubernetes: Setting Up a Kubernetes Cluster


  The best way to learn Kubernetes is to set up a cluster yourself and actually work with it. Following the official tutorial it is not complicated, but because of network restrictions many packages and images cannot be downloaded directly, which makes the installation somewhat troublesome.

  You do not need a multi-machine environment to learn k8s; a personal computer is enough to run a single-node cluster. The steps are outlined below; the simple parts are skipped and the focus is on the things that need attention.

1. Install the virtual machine and Linux

  For the hypervisor you can use Hyper-V, VirtualBox, or VMware. I used VirtualBox 6.1.0, available at https://www.virtualbox.org/wiki/Downloads

  The OS is CentOS-7-x86_64-Minimal-1908. For learning, the Minimal image is recommended because it downloads and installs quickly. Get it from http://isoredirect.centos.org/centos/7/isos/x86_64/ and pick a fast mirror.

  There are plenty of installation tutorials online, so they are not repeated here. A few suggestions: 1. choose Chinese as the installation language; 2. choose the minimal install in the software selection, disable KDUMP, and configure the network connection.

 

 

  Notes: 1. In the VM settings, set the number of CPUs to 2 or more.

     2. Set the memory to 2 GB or more.

     3. The firewall causes problems for the k8s cluster; since this is only for learning, you can simply turn it off: systemctl stop firewalld && systemctl disable firewalld

     4. Turn off swap. Running swapoff -a disables it temporarily; editing /etc/fstab and commenting out the line containing swap makes it permanent after a reboot.

     5. Disable the CentOS graphical login: systemctl set-default multi-user.target

     6. Add an extra Host-Only network adapter so the host can reach the VM. (The commands for items 3-5 are collected in the sketch after this list.)
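
  A minimal sketch collecting the preparation commands from items 3-5 above; the sed expression used to comment out the swap entry is my own assumption, so check /etc/fstab afterwards.

# Disable the firewall (acceptable for a learning environment only)
systemctl stop firewalld && systemctl disable firewalld

# Turn off swap now, and comment out the swap line in /etc/fstab so the
# change survives a reboot (verify afterwards with: cat /etc/fstab)
swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Boot into the text console instead of the graphical login
systemctl set-default multi-user.target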

2. Install Docker

  First, refer to the official documentation:

  https://docs.docker.com/install/linux/docker-ce/centos/#prerequisites

  https://kubernetes.io/docs/setup/production-environment/container-runtimes/

 

# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
## Note: replace with the Aliyun mirror URL
yum-config-manager --add-repo \
  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

## Install Docker CE.
yum update && yum install \
  containerd.io-1.2.10 \
  docker-ce-19.03.4 \
  docker-ce-cli-19.03.4

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker

 

  Enable Docker to start on boot:

systemctl start docker && systemctl enable docker
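
  You can also check that Docker picked up the systemd cgroup driver configured in daemon.json above (the expected output is shown as a comment):

docker info | grep -i 'cgroup driver'
# Cgroup Driver: systemd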

 

  Verify that the installation works:

docker run hello-world

  The output should look like this:

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

 

3. Install Kubernetes

  Official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

  Note: the repository needs to be switched to a mirror reachable from inside China:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet
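
  To confirm the tools were installed, you can check their versions, for example:

kubeadm version -o short
kubectl version --client --short
kubelet --version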

 

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
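
  On a minimal CentOS 7 install the br_netfilter kernel module may not be loaded yet, in which case the two bridge settings above cannot take effect; load the module first:

modprobe br_netfilter
lsmod | grep br_netfilter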

 

  Enable kubelet to start on boot:

systemctl enable kubelet && systemctl start kubelet

 

4. Configure a single-node K8S cluster

  Here Calico is used as the pod network for the single-node cluster.

  Official documentation: https://docs.projectcalico.org/v3.11/getting-started/kubernetes/

  Initialize the cluster, which also downloads the k8s images:

kubeadm init --pod-network-cidr=192.168.0.0/16

  Note: adjust the CIDR to your own environment.

  This step pulls the k8s Docker images from k8s.gcr.io, which is basically unreachable from China without a proxy, so it will fail with image pull errors similar to the following:

W1229 11:23:22.589295    1688 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1229 11:23:22.590166    1688 version.go:102] falling back to the local client version: v1.17.0
W1229 11:23:22.590472    1688 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1229 11:23:22.590492    1688 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

  

  Run this to resolve the firewalld warning:

  

firewall-cmd --permanent --add-port=6443/tcp && firewall-cmd --permanent --add-port=10250/tcp && firewall-cmd --reload

 

  The error output lists the image names that failed to pull. Someone on GitHub has already mirrored these images to addresses reachable from China, so you can pull them from there.

  GitHub repository: https://github.com/anjia0532/gcr.io_mirror

  Image address mapping rules:

  

gcr.io/namespace/image_name:image_tag
# is equivalent to
gcr.azk8s.cn/namespace/image_name:image_tag

# special case for k8s.gcr.io
k8s.gcr.io/{image}:{tag} <==> gcr.io/google-containers/{image}:{tag} <==> gcr.azk8s.cn/google-containers/{image}:{tag}

  For example, one of the images that failed to download during the initialization above is k8s.gcr.io/kube-apiserver:v1.17.0.

k8s.gcr.io/kube-apiserver:v1.17.0
# maps to
gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.0

  Pull the images:

docker pull gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.0
docker pull gcr.azk8s.cn/google-containers/kube-controller-manager:v1.17.0
docker pull gcr.azk8s.cn/google-containers/kube-scheduler:v1.17.0
docker pull gcr.azk8s.cn/google-containers/kube-proxy:v1.17.0
docker pull gcr.azk8s.cn/google-containers/pause:3.1
docker pull gcr.azk8s.cn/google-containers/etcd:3.4.3-0
docker pull gcr.azk8s.cn/google-containers/coredns:1.6.5

 

  Because kubeadm init looks for the images under the official k8s.gcr.io names, we also need to tag the mirrored images with those names:

docker tag gcr.azk8s.cn/google-containers/kube-apiserver:v1.17.0 k8s.gcr.io/kube-apiserver:v1.17.0
docker tag gcr.azk8s.cn/google-containers/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0
docker tag gcr.azk8s.cn/google-containers/kube-scheduler:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0
docker tag gcr.azk8s.cn/google-containers/kube-proxy:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0
docker tag gcr.azk8s.cn/google-containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag gcr.azk8s.cn/google-containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag gcr.azk8s.cn/google-containers/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5

  Download all of the images in the same way.

  If you have multiple servers, run the same steps on each of them.
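
  If you prefer, the pull-and-retag steps can be scripted so they are easy to repeat on each machine; a minimal sketch (the image list is copied from the kubeadm error output above):

#!/bin/bash
# Pull each image from the Azure China mirror and re-tag it with the
# official k8s.gcr.io name that kubeadm expects.
images=(
  kube-apiserver:v1.17.0
  kube-controller-manager:v1.17.0
  kube-scheduler:v1.17.0
  kube-proxy:v1.17.0
  pause:3.1
  etcd:3.4.3-0
  coredns:1.6.5
)

for img in "${images[@]}"; do
  docker pull "gcr.azk8s.cn/google-containers/${img}"
  docker tag  "gcr.azk8s.cn/google-containers/${img}" "k8s.gcr.io/${img}"
done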

 

  Set the hostname:

  Edit /etc/hostname and change the hostname to k8s-node1.
  Edit /etc/hosts and append a line: IP k8s-node1
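
  The same can be done from the command line; a minimal sketch assuming the VM's host-only address is 192.168.56.104 (the address used in the multi-node example below), so substitute your own IP:

hostnamectl set-hostname k8s-node1
echo "192.168.56.104 k8s-node1" >> /etc/hosts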

 

  Then run the initialization again on the master node. If it fails, run kubeadm reset first and try again.

kubeadm init --pod-network-cidr=192.168.0.0/16

## If you need multiple nodes
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.104

  Then continue with:

  

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
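
  At this point kubectl can already talk to the cluster, but the node usually reports NotReady until a pod network add-on is installed; the output below is only illustrative:

kubectl get nodes
# NAME        STATUS     ROLES    AGE   VERSION
# k8s-node1   NotReady   master   1m    v1.17.0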

  Install Calico:

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

  This step also needs to pull the Calico images, which may fail as well. You can find the required image names in the YAML file above and then look up a mirror to pull them from; that is not covered here.
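
  A quick way to list the images the manifest needs (assuming curl is available):

curl -s https://docs.projectcalico.org/v3.11/manifests/calico.yaml | grep 'image:'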

  Verify that everything is running:

  

watch kubectl get pods --all-namespaces

  Output like the following means the cluster is up:

  

NAMESPACE    NAME                                       READY  STATUS   RESTARTS  AGE
kube-system  calico-kube-controllers-6ff88bf6d4-tgtzb   1/1    Running  0         2m45s
kube-system  calico-node-24h85                          1/1    Running  0         2m43s
kube-system  coredns-846jhw23g9-9af73                   1/1    Running  0         4m5s
kube-system  coredns-846jhw23g9-hmswk                   1/1    Running  0         4m5s
kube-system  etcd-jbaker-1                              1/1    Running  0         6m22s
kube-system  kube-apiserver-jbaker-1                    1/1    Running  0         6m12s
kube-system  kube-controller-manager-jbaker-1           1/1    Running  0         6m16s
kube-system  kube-proxy-8fzp2                           1/1    Running  0         5m16s
kube-system  kube-scheduler-jbaker-1                    1/1    Running  0         5m41s

  If calico-node shows a status such as ErrImagePull, that image failed to download and you need to pull it manually from a domestic mirror. The image names and versions are listed in https://docs.projectcalico.org/v3.11/manifests/calico.yaml

  Allow the master node to also act as a worker node (remove the master taint):

kubectl taint nodes --all node-role.kubernetes.io/master-

  The result:

node/<your-hostname> untainted

  Finally, run:

  

kubectl get nodes -o wide

  Output similar to the following means the setup succeeded (the sample below is generic example output, so your version and OS details will differ):

NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
<your-hostname>   Ready    master   52m   v1.12.2   10.128.0.28   <none>        Ubuntu 18.04.1 LTS   4.15.0-1023-gcp   docker://18.6.1

 

  

