1. Basic concepts
I. etcd:
1. Stores pod information
2. Stores the state, metadata, and configuration of every object in the cluster
3. Stores network configuration: flannel reads the other nodes' network information from etcd and builds routing table entries for each node, enabling cross-host pod communication
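To illustrate point 3: when flannel runs in its etcd-backed mode, it reads the cluster network layout from a well-known etcd key (conventionally `/coreos.com/network/config`). A typical value is sketched below; the exact CIDR and backend type are assumptions here, not taken from this cluster:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Each flannel daemon then registers its own subnet lease under `/coreos.com/network/subnets/`, which is how it learns the routes to the other nodes.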
II. kube-proxy:
1. Transparent proxy and load balancer for services: forwards requests addressed to a service on to one of its backend pods
2. Proxying of a service's cluster IP and NodePort is implemented with iptables NAT rules
III. kubelet:
1. Monitors the pods assigned to its node
2. Mounts the volumes the pods need
3. Downloads the pods' secrets
4. Runs the pods' containers
1. Cluster environment and pre-installation preparation
Preparation (on every host in the cluster):
1. Disable the firewall and SELinux (enable or disable as your production policy requires)
2. Synchronize server time against a public or self-hosted ntpd server
3. Disable the swap partition
4. Make sure all cluster nodes can resolve each other's hostnames
5. Set up passwordless SSH from the master to the nodes
6. Configure kernel parameters so traffic crossing the bridge also passes through iptables/netfilter (if this errors with "no such file", load the module first: modprobe br_netfilter):
echo -e 'net.bridge.bridge-nf-call-iptables = 1 \nnet.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf && sysctl -p
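The steps above can be sketched as a script (assumptions: CentOS 7, `ntp.aliyun.com` as an example NTP server; steps 4 and 5 — host resolution and SSH trust — are environment-specific and omitted). The script is written to a file for review rather than executed directly:

```shell
# Sketch of the preparation steps above (CentOS 7 assumed).
cat > /tmp/k8s-prep.sh <<'EOF'
#!/bin/bash
systemctl stop firewalld && systemctl disable firewalld   # 1. firewall off
setenforce 0                                              # 1. SELinux permissive for this boot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
yum install -y ntpdate && ntpdate ntp.aliyun.com          # 2. one-shot time sync (example server)
swapoff -a                                                # 3. swap off now
sed -ri '/ swap /s/^/#/' /etc/fstab                       # 3. and across reboots
modprobe br_netfilter                                     # 6. load the bridge module if missing
echo -e 'net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
sysctl -p
EOF
bash -n /tmp/k8s-prep.sh && echo "syntax OK"
```

Review it, then run it on every node.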
2. Configure the yum repositories and Kubernetes packages
Note: run the following steps identically on every node.
1. Configure the base, epel, Kubernetes, and docker repositories
[root@k8s-master ~]# cd /etc/yum.repos.d
[root@k8s-master yum.repos.d]# yum install wget -y
[root@k8s-master yum.repos.d]# rm -f CentOS-*
[root@k8s-master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master yum.repos.d]# wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
[root@k8s-master yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8s-master yum.repos.d]# rpm --import rpm-package-key.gpg   # import the GPG key
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master yum.repos.d]# yum clean all && yum makecache fast
2. Install kubeadm and related tools
[root@k8s-master ~]# cd
[root@k8s-master ~]# yum install docker kubectl-1.12.3 kubeadm-1.12.3 kubelet-1.12.3 kubernetes-cni-0.6.0 -y
# pin the versions explicitly; yum would otherwise install the latest, whose images (for reasons we won't go into) cannot be downloaded at the moment
3. Start the services and enable them at boot
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable docker && systemctl restart docker
[root@k8s-master ~]# systemctl enable kubelet && systemctl restart kubelet
4. Download the Kubernetes images
[root@k8s-node2 yum.repos.d]# cat down.sh
#!/bin/bash
down_image_url=registry.cn-hangzhou.aliyuncs.com/kuberimages/
images=(kube-proxy:v1.12.3
        kube-apiserver:v1.12.3
        kube-controller-manager:v1.12.3
        kube-scheduler:v1.12.3
        etcd:3.2.24
        coredns:1.2.2
        flannel:v0.10.0-amd64
        pause:3.1
        kubernetes-dashboard-amd64:v1.10.0)
for imageName in ${images[@]}; do
    docker pull ${down_image_url}$imageName
    docker tag ${down_image_url}$imageName k8s.gcr.io/$imageName
    docker rmi ${down_image_url}$imageName
done
5. Check that the images were downloaded
[root@k8s-master ~]# docker images
REPOSITORY                                   TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy-amd64                  v1.10.0         bfc21aadc7d3   7 months ago    97 MB
k8s.gcr.io/kube-apiserver-amd64              v1.10.0         af20925d51a3   7 months ago    225 MB
k8s.gcr.io/kube-scheduler-amd64              v1.10.0         704ba848e69a   7 months ago    50.4 MB
k8s.gcr.io/kube-controller-manager-amd64     v1.10.0         ad86dbed1555   7 months ago    148 MB
k8s.gcr.io/etcd-amd64                        3.1.12          52920ad46f5b   7 months ago    193 MB
k8s.gcr.io/kubernetes-dashboard-amd64        v1.8.3          0c60bcf89900   8 months ago    102 MB
quay.io/coreos/flannel                       v0.10.0-amd64   f0fad859c909   9 months ago    44.6 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64       1.14.8          c2ce1ffb51ed   9 months ago    41 MB
k8s.gcr.io/k8s-dns-sidecar-amd64             1.14.8          6f7f2dc7fab5   9 months ago    42.2 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64            1.14.8          80cc5ea4b547   9 months ago    50.5 MB
k8s.gcr.io/pause-amd64                       3.1             da86e6ba6ca1   10 months ago   742 kB
(Sample output from an earlier v1.10.0 run; with the script above your tags should read v1.12.3, etcd 3.2.24, coredns 1.2.2, and so on.)
3. Initialize the cluster (master node only)
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16
# --kubernetes-version must match the image versions in docker; --pod-network-cidr specifies the pod network segment
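Equivalently, the flags can go into a kubeadm config file. A sketch for the v1.12 series (which, to my understanding, uses the `v1alpha3` config API — verify against `kubeadm config print-default` on your version before relying on it):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.3
networking:
  podSubnet: 10.244.0.0/16   # same value as --pod-network-cidr
```

Then initialize with: kubeadm init --config kubeadm.yaml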
Set up the kubectl configuration:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Check the cluster status:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   10m   v1.10.0
STATUS shows NotReady because no pod network is available yet; the next step is to configure networking.
4. Configure the network
docker pull quay.io/coreos/flannel:v0.9.1-amd64
mkdir -p /etc/cni/net.d/
cat <<EOF > /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir -p /usr/share/oci-umount/oci-umount.d
mkdir -p /run/flannel/
# FLANNEL_NETWORK must match the --pod-network-cidr passed to kubeadm init
cat <<EOF > /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
Then apply the flannel manifest:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
After it is installed, wait a moment and check the cluster status again; the node is now Ready (on the master):
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   1h    v1.10.0
5. Join the nodes to the cluster (on every node)
[root@k8s-node1 ~]# kubeadm join 192.168.199.116:6443 --token fauq6n.e1iv2mbxotq5a1zp --discovery-token-ca-cert-hash sha81dbbeb6
If the token has expired, generate a new one before joining:
1. List the tokens: kubeadm token list
2. If there is no token, create one: kubeadm token create
3. Get the sha256 hash of the CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
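The same pipeline wrapped as a small helper (`ca_hash` is a name introduced here, not part of kubeadm):

```shell
# Hypothetical helper around the openssl pipeline above.
# Prints the sha256 hex digest of the DER-encoded public key in a certificate.
ca_hash() {
    openssl x509 -pubkey -in "$1" \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex \
        | sed 's/^.* //'
}
# On the master the argument would be the cluster CA, e.g.:
#   kubeadm join <apiserver:6443> --token <token> \
#       --discovery-token-ca-cert-hash sha256:$(ca_hash /etc/kubernetes/pki/ca.crt)
```

Recent kubeadm versions can also print a ready-made join command with `kubeadm token create --print-join-command`, which avoids the hash step entirely.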
4. Join the node:
kubeadm join 10.167.11.153:6443 --token o4avtg.65ji6b778nyacw68 --discovery-token-ca-cert-hash sha256:2cc3029123db737f234186636330e87b5510c173c669f513a9c0e0da395515b0
5. On the master, check that the node joined successfully (kubectl get nodes).
6. kubectl shell completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc