1. Node overview
role       hostname   ip
master     k8s-55     10.2.49.55
cluster-1  k8s-54     10.2.49.54
cluster-2  k8s-53     10.2.49.53
cluster-3  k8s-52     10.2.49.52
2. Configure the network and /etc/hosts (details skipped here)
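The hosts file itself is simple; a minimal sketch using the hostnames and IPs from the table above (append the same entries on every node):

# /etc/hosts additions on all four machines
10.2.49.55 k8s-55
10.2.49.54 k8s-54
10.2.49.53 k8s-53
10.2.49.52 k8s-52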
3. Install Kubernetes
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
Check which versions the repository offers:
sudo apt-cache madison kubelet
As a rule we install neither the newest nor the oldest release; here we start with 1.11.2.
sudo apt install kubelet=1.11.2-00 kubeadm=1.11.2-00 kubectl=1.11.2-00
At this point the Kubernetes binaries have been installed successfully.
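As an optional sanity check, you can confirm the installed versions and pin the packages so that apt upgrades do not move them:

kubeadm version -o short
kubectl version --client --short
kubelet --version
# Optional: keep the versions from being bumped by unattended upgrades
sudo apt-mark hold kubelet kubeadm kubectl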
4. Install Docker
Because Kubernetes will be managing Docker, the Docker version must be compatible with Kubernetes. Check the compatibility notes at https://github.com/kubernetes/kubernetes: look at the changelog for the Kubernetes version you installed (1.11.2 in this article).
The changelog shows that the highest validated Docker version is 17.03.x, so install the newest Docker release within that range.
Install Docker:
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get -y update
Install the compatible Docker version and enable the service:
sudo apt install docker-ce=17.03.3~ce-0~ubuntu-xenial
sudo systemctl enable docker
If you need a registry mirror (accelerator), edit /etc/systemd/system/multi-user.target.wants/docker.service.
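For example, dockerd accepts a --registry-mirror flag; a sketch of the edit follows, with a placeholder mirror URL (your ExecStart line and accelerator address will differ):

# In docker.service, append the mirror to the existing ExecStart line, e.g.:
# ExecStart=/usr/bin/dockerd -H fd:// --registry-mirror=https://<your-id>.mirror.aliyuncs.com
# Then reload systemd and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker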
5. Pull the Kubernetes bootstrap images
Check which images kubeadm requires:
kubeadm config images list
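For the v1.11.3 control plane used later in this article, the list looks roughly like the following (it matches the version variables in the pull script below):

k8s.gcr.io/kube-apiserver-amd64:v1.11.3
k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
k8s.gcr.io/kube-scheduler-amd64:v1.11.3
k8s.gcr.io/kube-proxy-amd64:v1.11.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3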
The Google registries cannot be reached directly from mainland China, so you can either use images someone else has mirrored or build your own via Alibaba Cloud and the like. This article uses anjia0532's mirror; the pull script is given below.
#!/bin/bash
# Pull the kubeadm images from anjia0532's mirror and re-tag them as k8s.gcr.io images.
KUBE_VERSION=v1.11.3
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.2.18
DNS_VERSION=1.1.3
username=anjia0532

images="google-containers.kube-proxy-amd64:${KUBE_VERSION}
google-containers.kube-scheduler-amd64:${KUBE_VERSION}
google-containers.kube-controller-manager-amd64:${KUBE_VERSION}
google-containers.kube-apiserver-amd64:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd-amd64:${ETCD_VERSION}
coredns:${DNS_VERSION}
"

for image in $images
do
    docker pull ${username}/${image}
    docker tag ${username}/${image} k8s.gcr.io/${image}
    #docker tag ${username}/${image} gcr.io/google_containers/${image}
    docker rmi ${username}/${image}
done

unset images username

# Rename the "google-containers." prefixed tags to the names kubeadm expects.
docker tag k8s.gcr.io/google-containers.kube-apiserver-amd64:${KUBE_VERSION} k8s.gcr.io/kube-apiserver-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-apiserver-amd64:${KUBE_VERSION}
docker tag k8s.gcr.io/google-containers.kube-controller-manager-amd64:${KUBE_VERSION} k8s.gcr.io/kube-controller-manager-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-controller-manager-amd64:${KUBE_VERSION}
docker tag k8s.gcr.io/google-containers.kube-scheduler-amd64:${KUBE_VERSION} k8s.gcr.io/kube-scheduler-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-scheduler-amd64:${KUBE_VERSION}
docker tag k8s.gcr.io/google-containers.kube-proxy-amd64:${KUBE_VERSION} k8s.gcr.io/kube-proxy-amd64:${KUBE_VERSION}
docker rmi k8s.gcr.io/google-containers.kube-proxy-amd64:${KUBE_VERSION}
Save the script as pull.sh and run sh pull.sh; the required images are pulled and re-tagged automatically.
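To verify the script did its job, compare the locally tagged images against what kubeadm expects:

docker images | grep '^k8s.gcr.io'
kubeadm config images list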
6. Initialize Kubernetes
sudo kubeadm init --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 10.2.49.55
--kubernetes-version specifies the control-plane version to deploy.
--pod-network-cidr reserves the pod subnet for the flannel network add-on installed later.
--apiserver-advertise-address can be omitted if the machine has only a single network interface.
Output of a successful initialization:
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
As instructed in the output, configure kubectl for your regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
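As a quick check that kubectl can actually reach the new control plane:

kubectl cluster-info
kubectl get nodes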
7. Install the flannel network add-on
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
If you open kube-flannel.yml you will see that the image it uses is quay.io/coreos/flannel:v0.10.0-amd64.
If the pull keeps stalling, you can download the image yourself in advance, as sketched below.
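For example, pull it manually on each node (the image name is taken straight from kube-flannel.yml):

docker pull quay.io/coreos/flannel:v0.10.0-amd64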
After the add-on is installed, check that everything came up; normally it does. If there are errors, some image most likely failed to download.
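A typical way to check, assuming kubectl is configured as in the previous section:

kubectl get pods -n kube-system -o wide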
8. Join the worker nodes
Load the IPVS kernel modules:
sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
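If you want the modules to survive a reboot, one option is a systemd modules-load drop-in (a sketch; the file name is arbitrary):

cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF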
Join the Kubernetes cluster by running the kubeadm join command (with its token) that kubeadm init printed at the end:
sudo kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
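If the token has expired (by default tokens expire after 24 hours) or the join command was lost, a fresh one can be printed on the master:

sudo kubeadm token create --print-join-command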
On the master node, run kubectl get node.
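For reference, the output looks roughly like the following (illustrative only; node names from section 1, ages and patch versions will differ on your cluster):

NAME     STATUS    ROLES     AGE       VERSION
k8s-55   Ready     master    15m       v1.11.2
k8s-54   Ready     <none>    3m        v1.11.2
k8s-53   Ready     <none>    3m        v1.11.2
k8s-52   Ready     <none>    3m        v1.11.2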
A Ready status means the node is healthy. If a node is not Ready, usually some image failed to download; the pause image is the most common culprit and can be fetched with the pull.sh approach described earlier.
On the master node, run kubectl get pod --all-namespaces -o wide.
If every pod is Running, the cluster has been created successfully and is working normally.
Creating CA certificates will be covered in the next post.