Installing Kubernetes 1.15.1 with kubeadm


I. Environment preparation

  1. Server / VM preparation
    IP               CPU    Memory   hostname
    192.168.198.200  >=2C   >=2G     master
    192.168.198.201  >=2C   >=2G     node1
    192.168.198.202  >=2C   >=2G     node2

For this walkthrough the VMs were created with VMware Workstation. My machine has limited resources, so the master got 2C/2G and each node got 2C/1G; size yours according to your own resources, using the recommended minimums above.
Note: hostnames must not contain uppercase letters (for example "Master" is not allowed).

  2. Software versions
    OS: CentOS 7.5.1804
    Kubernetes: 1.15.1
    Docker: 18.06.1-ce

  3. Environment initialization
    3.1 Set the hostname (run the matching command on each machine)
    hostnamectl set-hostname master
    hostnamectl set-hostname node1
    hostnamectl set-hostname node2
    3.2 Configure /etc/hosts (on every node)
    echo "192.168.198.200 master" >> /etc/hosts
    echo "192.168.198.201 node1" >> /etc/hosts
    echo "192.168.198.202 node2" >> /etc/hosts
    3.3 Disable the firewall, SELinux, and swap

    //Stop and disable the firewall
    systemctl stop firewalld
    systemctl disable firewalld
    
    //Disable SELinux
    setenforce 0
    sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
    sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
    sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
    sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
    
    //Disable swap
    swapoff -a
    Comment out the swap line in /etc/fstab:
    # /dev/mapper/centos-swap swap                    swap    defaults        0 0
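    If you prefer not to edit /etc/fstab by hand, a one-line sed that comments out the swap entry works too (a small sketch; double-check the file afterwards):
    sed -i 's/^[^#].*swap.*/#&/' /etc/fstab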
    
    //Load the br_netfilter kernel module
    modprobe br_netfilter
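    Since the nodes are rebooted at the end of these preparation steps, it may be worth making the module load persist across reboots; a minimal sketch using the standard systemd modules-load.d mechanism:
    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf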
    

     
    3.4 Configure kernel parameters

    //Set sysctl kernel parameters
    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    //Apply the settings
    sysctl -p /etc/sysctl.d/k8s.conf
    //Raise resource limits: increase the maximum number of open files (ulimit) and the limits for services managed by systemd
    echo "* soft nofile 655360" >> /etc/security/limits.conf 
    echo "* hard nofile 655360" >> /etc/security/limits.conf
    echo "* soft nproc 655360" >> /etc/security/limits.conf
    echo "* hard nproc 655360" >> /etc/security/limits.conf
    echo "* soft memlock unlimited" >> /etc/security/limits.conf
    echo "* hard memlock unlimited" >> /etc/security/limits.conf
    echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
    echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf
    
  4. Configure CentOS YUM repositories

    //Switch to domestic (Tencent) mirrors for the base and EPEL repos, then add a Kubernetes repo
    mkdir -p /etc/yum.repos.d/repo.bak
    mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
    yum clean all && yum makecache
    
    //Add the Kubernetes repo (Aliyun mirror)
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  5. Install dependency packages
    yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

  6. Time synchronization

    yum install chrony -y
    systemctl enable chronyd.service && systemctl start chronyd.service && systemctl status chronyd.service 
    chronyc sources
    

    Run the date command to check the system time; it will sync up after a short while.

  7. Configure passwordless SSH between nodes
    With SSH trust configured, the nodes can access each other without passwords, which makes later operations easier.

    ssh-keygen         //run on every machine; just press Enter through the prompts
    ssh-copy-id node1  //on the master, copy the public key to each node; answer yes and enter the password when prompted
    ssh-copy-id node2
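
    A quick check that passwordless login works from the master (the hostnames assume the /etc/hosts entries added earlier):
    ssh node1 hostname
    ssh node2 hostname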
    
  8. After completing all of the steps above, reboot every node.

 
 

II. Install Docker

  1. Set up the Docker YUM repository
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  2. List available Docker versions
    yum list docker-ce --showduplicates | sort -r

  3. Install Docker, pinning version 18.06.1
    yum install -y docker-ce-18.06.1.ce-3.el7
    systemctl start docker

  4. Configure a registry mirror and the Docker data directory

    tee /etc/docker/daemon.json << EOF
    {
        "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
        "graph": "/tol/docker-data"
    }
    EOF
    
  5. Start Docker
    systemctl daemon-reload && systemctl restart docker && systemctl enable docker
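
    It is also worth confirming that Docker is running and checking which cgroup driver it uses; kubelet must agree with Docker on this setting, and kubeadm normally picks it up from Docker automatically, so this is only a sanity check:
    systemctl is-active docker
    docker info | grep -i "cgroup driver"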

 
 

III. Install kubeadm, kubelet, and kubectl (all nodes)

  1. Tool overview
  • kubeadm: the command used to bootstrap the cluster
  • kubelet: the agent that runs on every machine in the cluster and manages the lifecycle of pods and containers
  • kubectl: the cluster management CLI
  2. Install with yum
    //Install the tools
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    //Enable and start kubelet
    systemctl enable kubelet && systemctl start kubelet
    
    Note: the kubelet service will keep failing to start at this point because it has nothing to do until kubeadm init/join runs; ignore it for now.
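
    The yum command above installs the newest packages available in the repo. To pin exactly the 1.15.1 versions used in this guide (package names as published in the Aliyun Kubernetes repo configured earlier; an optional alternative, not required):
    yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1 --disableexcludes=kubernetes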

 
 

IV. Prepare the required images

  1. Determine which images need to be downloaded
    Building Kubernetes with kubeadm requires the base images it runs on, such as kube-proxy, kube-apiserver, kube-controller-manager, and so on. How do you find out which images to download? Since kubeadm v1.11 there has been a command for dumping kubeadm's default configuration to a file (in 1.15 it is kubeadm config print init-defaults, used below); that file contains the base configuration for the corresponding Kubernetes version. You can also run kubeadm config images list to print the list of required images.

    [root@master ]# kubeadm config images list
    W0806 17:29:06.709181  130077 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    W0806 17:29:06.709254  130077 version.go:99] falling back to the local client version: v1.15.1
    k8s.gcr.io/kube-apiserver:v1.15.1
    k8s.gcr.io/kube-controller-manager:v1.15.1
    k8s.gcr.io/kube-scheduler:v1.15.1
    k8s.gcr.io/kube-proxy:v1.15.1
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    
  2. Generate the default kubeadm.conf file

    //This command writes the default configuration to kubeadm.conf
    kubeadm config print init-defaults > kubeadm.conf
    
  3. Pulling the images through a domestic mirror
    By default this configuration pulls images from Google's registry, k8s.gcr.io, which is unreachable from here. Change the repository to a domestic mirror instead, for example Aliyun's:

    vim kubeadm.conf
    ...
    imageRepository: registry.aliyuncs.com/google_containers   //image repository
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.1   //version
    ...
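
    If you would rather not edit the file interactively, the same two changes can be made with sed (a sketch; verify the result with the grep afterwards):
    sed -i 's#imageRepository: .*#imageRepository: registry.aliyuncs.com/google_containers#' kubeadm.conf
    sed -i 's#kubernetesVersion: .*#kubernetesVersion: v1.15.1#' kubeadm.conf
    grep -E 'imageRepository|kubernetesVersion' kubeadm.conf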
    
  4. Pull the required images
    With kubeadm.conf updated, the following command pulls everything automatically from the domestic mirror:
    kubeadm config images pull --config kubeadm.conf

  5. Re-tag the images with docker tag
    Once the images are downloaded they must be re-tagged so that they carry the k8s.gcr.io prefix. kubeadm only recognises Google's own naming scheme, so the install will fail later without this step. After tagging, remove the images that still carry the registry.aliyuncs.com prefix. The operations are scripted below.

    #!/bin/bash
    
    # Re-tag the images
    docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
    docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
    docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
    docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
    docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
    docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    
    # Remove the images with the registry.aliyuncs.com prefix
    docker rmi registry.aliyuncs.com/google_containers/coredns:1.3.1
    docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
    docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.1
    docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1
    docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.1
    docker rmi registry.aliyuncs.com/google_containers/etcd:3.3.10
    docker rmi registry.aliyuncs.com/google_containers/pause:3.1
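
    Equivalently, the tagging and cleanup can be written as a short loop over the image list (same effect as the script above):
    #!/bin/bash
    for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 kube-scheduler:v1.15.1 \
               kube-proxy:v1.15.1 pause:3.1 etcd:3.3.10 coredns:1.3.1; do
        docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
        docker rmi registry.aliyuncs.com/google_containers/$img
    done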
    
  6. Check the downloaded images

    [root@master ~]# docker image ls
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-controller-manager   v1.15.1             d75082f1d121        2 weeks ago         159MB
    k8s.gcr.io/kube-apiserver            v1.15.1             68c3eb07bfc3        2 weeks ago         207MB
    k8s.gcr.io/kube-proxy                v1.15.1             89a062da739d        2 weeks ago         82.4MB
    k8s.gcr.io/kube-scheduler            v1.15.1             b0b3c4c404da        2 weeks ago         81.1MB
    k8s.gcr.io/coredns                   1.3.1               eb516548c180        6 months ago        40.3MB
    k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        8 months ago        258MB
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        19 months ago       742kB
    

 
 

V. Deploy the master node

  1. Initialize the master with kubeadm init
    kubeadm init --kubernetes-version=v1.15.1 --pod-network-cidr=172.22.0.0/16 --apiserver-advertise-address=192.168.198.200
    Here the pod network CIDR is defined as 172.22.0.0/16, and the API server advertise address is the master's own IP.

  2. On success, the initialization output ends with the following

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.198.200:6443 --token 81i5bj.qwo2gfiqafmr6g6s \
        --discovery-token-ca-cert-hash sha256:aef745fb87e366993ad20c0586c2828eca9590c29738ef.... 
    

    Record the kubeadm join command above; it will be needed later when adding the worker nodes.
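
    If the token expires (bootstrap tokens are valid for 24 hours by default) or the command is lost, a new join command can be printed on the master at any time:
    kubeadm token create --print-join-command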

    At the same time, the following files are generated under /etc/kubernetes/:

    [root@master ~]# ll /etc/kubernetes/
    total 36
    -rw------- 1 root root 5451 Aug  5 15:12 admin.conf
    -rw------- 1 root root 5491 Aug  5 15:12 controller-manager.conf
    -rw------- 1 root root 5459 Aug  5 15:12 kubelet.conf
    drwxr-xr-x 2 root root  113 Aug  5 15:12 manifests
    drwxr-xr-x 3 root root 4096 Aug  5 15:12 pki
    -rw------- 1 root root 5435 Aug  5 15:12 scheduler.conf
    
  3. Verification
    Configure kubectl:
    mkdir -p /root/.kube
    cp /etc/kubernetes/admin.conf /root/.kube/config
    List the pods and check their status:

    [root@master ~]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
    kube-system   coredns-5c98db65d4-rrpgm                0/1     Pending   0          5d18h
    kube-system   coredns-5c98db65d4-xg5cc                0/1     Pending   0          5d18h
    kube-system   etcd-master                             1/1     Running   0          5d18h
    kube-system   kube-apiserver-master                   1/1     Running   0          5d18h
    kube-system   kube-controller-manager-master          1/1     Running   0          5d18h
    kube-system   kube-proxy-8vf84                        1/1     Running   0          5d18h
    kube-system   kube-scheduler-master                   1/1     Running   0          5d18h
    

    The coredns pods are Pending; this is expected until a pod network is deployed, so leave them for now.
    You can also run kubectl get cs to check the health of the control-plane components:

    [root@master ~]# kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}   
    

     
     

VI. Deploy the Calico network (run on the master)

About Calico: Calico is a pure layer-3 networking solution. It integrates with a variety of cloud-native platforms (Docker, Mesos, OpenStack, etc.), and on each Kubernetes node it uses the L3 forwarding capability built into the Linux kernel to provide vRouter functionality.

  1. Pull the official Calico images
    Three images are needed: calico/node:v3.1.4, calico/cni:v3.1.4, and calico/typha:v3.1.4.
    Pull them directly with docker pull:

    docker pull calico/node:v3.1.4
    docker pull calico/cni:v3.1.4
    docker pull calico/typha:v3.1.4
    
  2. Re-tag the three Calico images

    docker tag calico/node:v3.1.4 quay.io/calico/node:v3.1.4
    docker tag calico/cni:v3.1.4 quay.io/calico/cni:v3.1.4
    docker tag calico/typha:v3.1.4 quay.io/calico/typha:v3.1.4
    
  3. Remove the original images

    docker rmi calico/node:v3.1.4
    docker rmi calico/cni:v3.1.4
    docker rmi calico/typha:v3.1.4
    
  4. Deploy Calico
    4.1 Download and apply the rbac-kdd.yaml file
    curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml -O
    kubectl apply -f rbac-kdd.yaml

    4.2 Download and configure the calico.yaml file
    curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/policy-only/1.7/calico.yaml -O

    In the ConfigMap, change the typha_service_name value from none to calico-typha:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: calico-config
      namespace: kube-system
    data:
      # To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas
      # below.  We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is
      # essential.
      #typha_service_name: "none"   #before
      typha_service_name: "calico-typha"   #after
    

    In the Deployment section, set replicas under spec to 1:

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: calico-typha
      namespace: kube-system
      labels:
        k8s-app: calico-typha
    spec:
      # Number of Typha replicas.  To enable Typha, set this to a non-zero value *and* set the
      # typha_service_name variable in the calico-config ConfigMap above.
      #
      # We recommend using Typha if you have more than 50 nodes.  Above 100 nodes it is essential
      # (when using the Kubernetes datastore).  Use one replica for every 100-200 nodes.  In
      # production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade.
      #replicas: 0  #before
      replicas: 1   #after
      revisionHistoryLimit: 2
      template:
        metadata:
    

    4.3 Define the pod CIDR
    Find CALICO_IPV4POOL_CIDR and change its value to the pod network defined earlier, in this case 172.22.0.0/16:

    - name: CALICO_IPV4POOL_CIDR
      value: "172.22.0.0/16"
    

    4.4 Enable BIRD mode
    Set CALICO_NETWORKING_BACKEND to bird, which selects the BGP networking backend:

    - name: CALICO_NETWORKING_BACKEND
      #value: "none"
      value: "bird"
    

    4.5 Apply the calico.yaml file
    With the parameters above tuned, deploy Calico:
    kubectl apply -f calico.yaml
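
    After the manifest is applied, the calico pods should start and the previously Pending coredns pods should go Running (pod names will differ in your environment):
    kubectl get pods -n kube-system -o wide
    kubectl get nodes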
     
     

VII. Deploy the worker nodes

  1. Pull the required images (run on each node)
    The nodes also need a few images: kube-proxy:v1.15.1 (matching the cluster version), pause:3.1, calico/node:v3.1.4, calico/cni:v3.1.4, and calico/typha:v3.1.4.
    1.1 Pull the images, re-tag them, then remove the originals
    docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
    docker pull registry.aliyuncs.com/google_containers/pause:3.1
    docker pull calico/node:v3.1.4
    docker pull calico/cni:v3.1.4
    docker pull calico/typha:v3.1.4
    
    docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
    docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag calico/node:v3.1.4 quay.io/calico/node:v3.1.4
    docker tag calico/cni:v3.1.4 quay.io/calico/cni:v3.1.4
    docker tag calico/typha:v3.1.4 quay.io/calico/typha:v3.1.4
    
    docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1
    docker rmi registry.aliyuncs.com/google_containers/pause:3.1
    docker rmi calico/node:v3.1.4
    docker rmi calico/cni:v3.1.4
    docker rmi calico/typha:v3.1.4
    
    1.2 Join the nodes to the cluster
    On each node, run the kubeadm join command recorded earlier:
    kubeadm join 192.168.198.200:6443 --token 81i5bj.qwo2gfiqafmr6g6s --discovery-token-ca-cert-hash sha256:aef745fb87e366993ad20c0586c2828eca9590c29738ef....
    After it completes, run kubectl get nodes on the master to check that the nodes are healthy:
    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES    AGE     VERSION
    master   Ready    master   5d19h   v1.15.1
    node1    Ready    <none>   5d18h   v1.15.1
    node2    Ready    <none>   5d18h   v1.15.1
    

Summary

At this point, the Kubernetes cluster deployment is complete.
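
An optional smoke test (a minimal sketch; the deployment name and service type are arbitrary choices) is to run an nginx deployment and confirm that pods get scheduled onto the worker nodes:

    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --port=80 --type=NodePort
    kubectl get pods -o wide
    kubectl get svc nginx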

