Service Planning
I. Kubernetes Pre-installation Preparation
1. Disable swap
```shell
# 1. Disable swap now and make the change permanent
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# 2. Verify that swap is off
free -m
```
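The `free -m` check can also be scripted, which is handy when preparing several nodes at once. This is a sketch: the sample string below is illustrative output, and on a live host you would pipe `free -m` into the helper instead.

```shell
# Succeed only when the Swap line of `free -m` reports 0 MB total.
swap_is_off() {
  awk '/^Swap:/ { exit ($2 == 0 ? 0 : 1) }'
}

# Illustrative sample (on a real host: free -m | swap_is_off)
sample="              total        used        free
Mem:           3789        1200        2589
Swap:             0           0           0"

echo "$sample" | swap_is_off && echo "swap is off"
```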
2. Disable the firewall
```shell
## Stop the firewall now and disable it at boot (CentOS 7 and later)
systemctl stop firewalld && systemctl disable firewalld
```
3. Disable SELinux
```shell
## Disable SELinux now and make the change permanent
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```
4. Configure DNS for internet access
```shell
[root@localhost ~]# vi /etc/resolv.conf
nameserver 8.8.8.8
nameserver 114.114.114.114
```
5. Configure name resolution
```shell
cat >> /etc/hosts << EOF
192.168.101.26 k8s-master
192.168.101.27 k8s-node01
192.168.101.28 k8s-node02
EOF
```
6. Set the hostnames
```shell
# Run the matching command on each host
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
```
7. Upgrade the kernel
```shell
# 1. Check the current kernel version (3.10.0 here)
[root@k8s-master ~]# uname -r
3.10.0-693.el7.x86_64

# 2. Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Don't use swap; only fall back to it when the system is out of memory
vm.swappiness=0
vm.overcommit_memory=1
# Don't panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
net.ipv6.conf.all.disable_ipv6=1
EOF

# 3. Reload the settings
sysctl -p /etc/sysctl.d/kubernetes.conf

# 4. Install the ELRepo key and repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# 5. List the available kernels
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available --showduplicates

# 6. Install the mainline kernel
yum -y install --enablerepo="elrepo-kernel" kernel-ml
# or install a specific kernel RPM:
# yum -y install kernel-ml-5.9.6-1.el7.elrepo.x86_64.rpm

# 7. Boot from the new kernel
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg

# 8. Reboot and confirm the kernel version
reboot
[root@k8s-master01 ~]# uname -r
5.9.6-1.el7.elrepo.x86_64
```
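After the reload, the values can be spot-checked by reading `/proc/sys` directly, which is what `sysctl <key>` does under the hood. A small sketch, assuming a Linux `/proc` filesystem; keys absent from the running kernel are reported rather than failing:

```shell
# Spot-check that the sysctl settings took effect by reading /proc/sys
# (equivalent to `sysctl <key>`).
for key in net.ipv4.ip_forward vm.swappiness vm.panic_on_oom; do
  path="/proc/sys/$(echo "$key" | tr . /)"
  if [ -r "$path" ]; then
    echo "$key = $(cat "$path")"
  else
    echo "$key: not present on this kernel"
  fi
done
```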
8. Enable the IPVS prerequisites
```shell
# 1. Load the bridge netfilter module
modprobe br_netfilter

# 2. Install the tools and create the module-loading script
yum -y install ipvsadm ipset
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# 3. Apply and verify
chmod 755 /etc/sysconfig/modules/ipvs.modules \
  && bash /etc/sysconfig/modules/ipvs.modules \
  && sleep 5 && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
II. Install Docker
See https://www.cnblogs.com/wangzy-tongq/p/13993493.html
III. Install the Required Kubernetes Tools
```shell
# 1. Configure the YUM repository
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# 2. Install the tools on every node. Without a version they default to the latest
#    release; version 1.14.0 is pinned here
#    (--disableexcludes=kubernetes uses the custom repository defined above)
yum -y install kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0 --disableexcludes=kubernetes

# 3. Enable the service (no need to start it yet)
systemctl enable kubelet

# 4. Check the versions
kubeadm version     # kubeadm version
kubelet --version   # kubelet version
```
IV. Create the Configuration File
```shell
# 1. Create the cluster directory and enter it
mkdir -p /usr/local/kubernetes/cluster
cd /usr/local/kubernetes/cluster

# 2. Export the default configuration
kubeadm config print init-defaults --kubeconfig ClusterConfiguration > kubeadm.yml

# 3. Back up the exported file, then make the four edits marked below
[root@k8s-master cluster]# cp kubeadm.yml kubeadm.yml-bak
[root@k8s-master cluster]# cat kubeadm.yml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.101.26   ####### Edit 1: set to the master's real IP (default is 1.2.3.4)
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   ####### Edit 2: use the Aliyun mirror instead of the default k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0   ####### Edit 3: must match the installed version (check with kubeadm version), otherwise the services won't start
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"   ####### Edit 4: a Pod subnet that doesn't clash with our network (Flannel's default); the default is ""
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```
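Before running `kubeadm init`, the four edits can be sanity-checked with a single grep. The sketch below runs against an inline sample of the relevant lines; on the real file you would run `grep -nE 'advertiseAddress|imageRepository|kubernetesVersion|podSubnet' kubeadm.yml` instead.

```shell
# Count how many of the four edited keys are present; expect 4.
sample='advertiseAddress: 192.168.101.26
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.14.0
podSubnet: "10.244.0.0/16"'

echo "$sample" | grep -cE 'advertiseAddress|imageRepository|kubernetesVersion|podSubnet'
# prints 4 when all four keys are found
```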
Initialize the Master Node
```shell
[root@k8s-master cluster]# kubeadm init --config=kubeadm.yml --experimental-upload-certs | tee kubeadm-init.log
########### The output below is also saved to kubeadm-init.log for later reference
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "k8s-master01" could not be reached
	[WARNING Hostname]: hostname "k8s-master01": lookup k8s-master01 on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.101.26]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.101.26 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.101.26 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.001525 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 718cc006cd91e1cca3e3f28e5c98eddadd1cf8650d6c7333cf7d2dcd247dd166
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!   ########### Initialization succeeded

To start using your cluster, you need to run the following as a regular user:
########### Required before using the cluster. As root, the first two commands are usually enough; the third is for non-root users

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.   ########## Configure the Pod network
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.101.26:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a83c85d21b249f854f8ab24990c52b9ca705cae72c7a3b63764611032a757cdd
######### Needed when worker nodes join the cluster; note that the token expires
[root@k8s-master01 cluster]#
```
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
```
Check the Node Status
```shell
[root@k8s-master cluster]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   6m10s   v1.14.0
# NotReady because no network plugin is installed yet; one such as Flannel is required
```
Add the Flannel Network Plugin
```shell
# 1. Download the resource manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# 2. Apply the manifest
kubectl apply -f kube-flannel.yml

# 3. Check the status again (STATUS changes to Ready)
kubectl get nodes
# Output:
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   13m   v1.14.0

# 4. Check the network interfaces; an extra flannel.1 interface has appeared
ifconfig
```
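Since the node can stay NotReady for a short while after the manifest is applied, the status check above can be wrapped in a small polling helper. A sketch, verified here against an inline sample of `kubectl get nodes --no-headers` output:

```shell
# Succeed only when every node line reports Ready in the STATUS column.
all_ready() {
  awk '$2 != "Ready" { bad = 1 } END { exit bad }'
}

# On a live cluster:
#   until kubectl get nodes --no-headers | all_ready; do sleep 5; done

sample="k8s-master   Ready    master   13m   v1.14.0
k8s-node01   Ready    <none>   5m    v1.14.0"

echo "$sample" | all_ready && echo "all nodes Ready"
```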
Join the Worker Nodes to the Master
```shell
# Take the join credentials from the master's init log (kubeadm-init.log)
kubeadm join 192.168.101.26:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a83c85d21b249f854f8ab24990c52b9ca705cae72c7a3b63764611032a757cdd

## Output:
[root@k8s-node02 ~]# kubeadm join 192.168.101.26:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:a83c85d21b249f854f8ab24990c52b9ca705cae72c7a3b63764611032a757cdd
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

###### Check the node list to confirm the join succeeded
[root@k8s-node01 cluster]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   master   44m   v1.14.0
k8s-node01     NotReady   <none>   28m   v1.14.0
k8s-node02     NotReady   <none>   27m   v1.14.0
```
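Because the bootstrap token expires (its `ttl` is 24h in kubeadm.yml above), a fresh join command can be printed on the master at any time with `kubeadm token create --print-join-command`. The sketch below extracts the token and CA cert hash from a saved join command such as the one in kubeadm-init.log; the `JOIN_CMD` value is just the example from this walkthrough.

```shell
# On the master, when the token has expired:
#   kubeadm token create --print-join-command

# Extract the token and CA cert hash from a saved join command (sketch).
JOIN_CMD='kubeadm join 192.168.101.26:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:a83c85d21b249f854f8ab24990c52b9ca705cae72c7a3b63764611032a757cdd'

TOKEN=$(echo "$JOIN_CMD" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
CA_HASH=$(echo "$JOIN_CMD" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')

echo "token:   $TOKEN"
echo "ca-hash: $CA_HASH"
```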
Done!