1. Overview
Setting up a Kubernetes cluster by hand is tedious, so a number of tools have appeared to simplify the installation and configuration, such as kubeadm, Kubespray and RKE. I ultimately chose the official kubeadm, mainly because every Kubernetes version has its differences and kubeadm tracks and supports them best. Kubeadm is the official tool for quickly installing and initializing a Kubernetes cluster. It is still in incubation, and it is updated in step with every new Kubernetes release. It is strongly recommended to read the official documentation first to understand what each component and object does:
https://kubernetes.io/docs/concepts/
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
System environment
OS | Kernel | Docker | IP | Hostname | Spec |
---|---|---|---|---|---|
CentOS 7.6 | 3.10.0-957.el7.x86_64 | 19.03.5 | 192.168.31.150 | k8s-master | 2 cores / 4 GB |
CentOS 7.6 | 3.10.0-957.el7.x86_64 | 19.03.5 | 192.168.31.183 | k8s-node01 | 2 cores / 4 GB |
Note: make sure every machine has at least 2 CPU cores and 2 GB of RAM.
2. Preparation
Disable the firewall
If the firewall is enabled on the hosts, the ports required by the Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, the firewall is disabled on every node here:
systemctl stop firewalld
systemctl disable firewalld
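If you would rather keep firewalld running, the required ports can be opened instead. A sketch for the master node, based on the "Check required ports" list mentioned above (worker nodes additionally need 10250/tcp and the NodePort range 30000-32767/tcp):
# API server, etcd, kubelet, kube-scheduler, kube-controller-manager
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --reload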
Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
vim /etc/selinux/config    # or edit /etc/sysconfig/selinux
SELINUX=disabled
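A non-interactive alternative to editing the file by hand, assuming the stock SELINUX=enforcing line is present:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config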
Create the k8s.conf sysctl file
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Disable swap
# Disable temporarily
swapoff -a
Edit /etc/fstab and comment out the swap mount entry (this disables swap permanently; it takes effect after a reboot):
# Comment out the following entry
#/dev/mapper/cl-swap swap swap defaults 0 0
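If you prefer a one-liner, a sed sketch that comments out every uncommented swap entry in /etc/fstab:
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab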
安裝docker
這里就不再敘述了,請參考鏈接:
https://www.cnblogs.com/xiao987334176/p/11771657.html
Set the hostname
hostnamectl set-hostname k8s-master
Note: the hostname must not contain underscores; only hyphens are allowed.
Otherwise kubeadm fails with an error like:
could not convert cfg to an internal cfg: nodeRegistration.name: Invalid value: "k8s_master": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
3. Install kubeadm, kubelet and kubectl
Install kubeadm, kubelet and kubectl on every node.
Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the packages
The latest version at the time of writing is 1.16.3.
yum install -y kubelet-1.16.3-0 kubeadm-1.16.3-0 kubectl-1.16.3-0
systemctl enable kubelet && systemctl start kubelet
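Optionally, verify that the expected versions were installed:
kubeadm version -o short
kubectl version --client --short
kubelet --version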
Everything above has to be done on both the master and the worker nodes.
4. Initialize the master node
Run the init command
kubeadm init --kubernetes-version=1.16.3 \
  --apiserver-advertise-address=192.168.31.150 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Parameter explanation:
--kubernetes-version: the Kubernetes version to install.
--apiserver-advertise-address: the IP address the kube-apiserver advertises, i.e. the master's own IP.
--pod-network-cidr: the Pod network range, here 10.244.0.0/16.
--service-cidr: the Service (SVC) network range.
--image-repository: the Aliyun mirror registry to pull images from.
This step is critical: by default kubeadm pulls its images from k8s.gcr.io, which is not reachable from mainland China, so --image-repository is used to point it at the Aliyun mirror.
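The same settings can also be written as a kubeadm configuration file and passed with kubeadm init --config <file>. A minimal sketch using the kubeadm.k8s.io/v1beta2 config API that ships with kubeadm 1.16 (the file name kubeadm-config.yaml is only an example):
# kubeadm-config.yaml -- equivalent to the command-line flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.31.150      # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.3              # --kubernetes-version
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16            # --service-cidr
  podSubnet: 10.244.0.0/16              # --pod-network-cidr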
After initialization succeeds, the cluster returns information like the following.
Record the last part of the output; it is the command the other nodes run to join the Kubernetes cluster.
The output is:
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.150:6443 --token ute1qr.ylhan3tn3eohip20 \
    --discovery-token-ca-cert-hash sha256:f7b37ecd602deb59e0ddc2a0cfa842f8c3950690f43a5d552a7cefef37d1fa31
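The bootstrap token in the join command expires after 24 hours by default. If you need to add nodes later, a fresh join command can be generated on the master:
kubeadm token create --print-join-command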
Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install Calico
mkdir k8s
cd k8s
wget https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# Change the default pool from 192.168.0.0/16 to 10.244.0.0/16
sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml
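To check that the substitution landed in the right place, look at the CALICO_IPV4POOL_CIDR value in the manifest:
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml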
Apply Calico
kubectl apply -f calico.yaml
Check the Pod status
Wait a few minutes and make sure all Pods are in the Running state.
[root@k8s_master k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-6b64bcd855-tdv2h   1/1     Running   0          2m37s   192.168.235.195   k8s-master   <none>           <none>
kube-system   calico-node-4xgk8                          1/1     Running   0          2m38s   192.168.31.150    k8s-master   <none>           <none>
kube-system   coredns-58cc8c89f4-8672x                   1/1     Running   0          45m     192.168.235.194   k8s-master   <none>           <none>
kube-system   coredns-58cc8c89f4-8h8tq                   1/1     Running   0          45m     192.168.235.193   k8s-master   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   0          44m     192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          44m     192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          44m     192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-proxy-6f42j                           1/1     Running   0          45m     192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          44m     192.168.31.150    k8s-master   <none>           <none>
Note: the calico-kube-controllers Pod's IP is not in the 10.244.0.0/16 range.
Delete Calico and re-apply it
kubectl delete -f calico.yaml
kubectl apply -f calico.yaml
Check the IPs again
[root@k8s-master k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-6b64bcd855-qn6bs   0/1     Running   0          18s     10.244.235.193    k8s-master   <none>           <none>
kube-system   calico-node-cdnvz                          1/1     Running   0          18s     192.168.31.150    k8s-master   <none>           <none>
kube-system   coredns-58cc8c89f4-8672x                   1/1     Running   1          5h22m   192.168.235.197   k8s-master   <none>           <none>
kube-system   coredns-58cc8c89f4-8h8tq                   1/1     Running   1          5h22m   192.168.235.196   k8s-master   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   1          5h22m   192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   1          5h21m   192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   1          5h22m   192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-proxy-6f42j                           1/1     Running   1          5h22m   192.168.31.150    k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   1          5h21m   192.168.31.150    k8s-master   <none>           <none>
The newly created calico-kube-controllers Pod now has an IP in the 10.244.0.0/16 range.
Enable start on boot
systemctl enable kubelet
5. Join nodes to the cluster
Preparation
Go through the preparation steps above and make sure every one of them has been performed on the node!
For the hostname step, set it to k8s-node01 instead:
hostnamectl set-hostname k8s-node01
Join the node
Log in to the node, make sure Docker, kubeadm, kubelet and kubectl are installed, then run the join command:
kubeadm join 192.168.31.150:6443 --token ute1qr.ylhan3tn3eohip20 \
    --discovery-token-ca-cert-hash sha256:f7b37ecd602deb59e0ddc2a0cfa842f8c3950690f43a5d552a7cefef37d1fa31
Enable start on boot
systemctl enable kubelet
Check the nodes
Log in to the master and run:
[root@k8s_master k8s]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    master   87m     v1.16.3   192.168.31.150   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64    docker://19.3.5
k8s-node01   Ready    <none>   5m14s   v1.16.3   192.168.31.183   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64    docker://19.3.5
6. Create a Pod
Create an nginx deployment
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
Check the pod and svc
[root@k8s-master k8s]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod/nginx-86c57db685-z2kdd   1/1     Running   0          18m   10.244.85.194   k8s-node01   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        111m   <none>
service/nginx        NodePort    10.1.111.179   <none>        80:30876/TCP   24m    app=nginx
Allow external access to the NodePort (Docker sets the default policy of the iptables FORWARD chain to DROP, which can block NodePort traffic coming from other hosts):
iptables -P FORWARD ACCEPT
Test access
Access the service with the master IP plus the NodePort:
http://192.168.31.150:30876/
The nginx welcome page should be displayed.
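The same check can be done from the command line (30876 is the NodePort assigned in this run; yours may differ):
curl -I http://192.168.31.150:30876/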
Command completion
(master only)
yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
Log out and log back in once for completion to take effect.
7. Deploy an application with a YAML manifest
Using flaskapp as an example.
flaskapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-1
spec:
  selector:
    matchLabels:
      run: flaskapp-1
  replicas: 1
  template:
    metadata:
      labels:
        run: flaskapp-1
    spec:
      containers:
      - name: flaskapp-1
        image: jcdemo/flaskapp
        ports:
        - containerPort: 5000
flaskapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-1
  labels:
    run: flaskapp-1
spec:
  type: NodePort
  ports:
  - port: 5000
    name: flaskapp-port
    targetPort: 5000
    protocol: TCP
    nodePort: 30005
  selector:
    run: flaskapp-1
Apply the YAML files
kubectl apply -f flaskapp-service.yaml
kubectl apply -f flaskapp-deployment.yaml
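Then confirm the Pod and Service are up, using the run=flaskapp-1 label defined in the manifests:
kubectl get pod,svc -l run=flaskapp-1 -o wide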
Access the page
Access the service with a node IP plus the NodePort (30005, fixed in the Service manifest):
http://192.168.31.183:30005/
The flaskapp page should be displayed.
Note: the master IP plus the NodePort also works, since the NodePort is open on every node.
References:
https://yq.aliyun.com/articles/626118
https://blog.csdn.net/fenglailea/article/details/88745642