DevOps: Building a K8s Cluster



DevOps

DevOps is an umbrella term for a set of processes, methods, and systems. By automating deployment with tooling, it makes deployments repeatable and reduces the chance of deployment errors. With the rise of microservices and middle-platform architectures, DevOps has become increasingly important. This article uses K8s as the foundation and deploys Gitlab + Jenkins on top of it to build out complete DevOps CI/CD capabilities.

Deploying K8s

The cluster is deployed with kubeadm.
This article uses a 1-master, 2-node layout; all three machines need a recent Docker-CE installed.

1. Before you begin

Host    OS version                            Docker version
master  CentOS Linux release 7.6.1810 (Core)  20.10.5
node1   CentOS Linux release 7.6.1810 (Core)  20.10.5
node2   CentOS Linux release 7.6.1810 (Core)  20.10.5
  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as for distributions without a package manager.
  • 2 GB or more of RAM per machine (less than this leaves little room for your applications).
  • 2 or more CPU cores.
  • Full network connectivity among all machines in the cluster (a public or private network both work).
  • Unique hostname, MAC address, and product_uuid on every node.
  • Swap disabled, so that the kubelet works properly. (A quick check of these requirements is sketched after this list.)
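
The following commands are a minimal sketch for checking these requirements on each machine (the paths are the standard ones on CentOS 7):

nproc                                # expect 2 or more cores
free -h                              # expect 2 GB or more of RAM
ip link show                         # MAC addresses must be unique across nodes
cat /sys/class/dmi/id/product_uuid   # product_uuid must be unique across nodes
swapon --show                        # no output means swap is already off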

Check required ports

Control-plane node

Protocol  Direction  Port range  Purpose                  Used by
TCP       Inbound    6443        Kubernetes API server    All
TCP       Inbound    2379~2380   etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250       Kubelet API              kubelet itself, control plane components
TCP       Inbound    10251       kube-scheduler           kube-scheduler itself
TCP       Inbound    10252       kube-controller-manager  kube-controller-manager itself

Worker nodes

Protocol  Direction  Port range   Purpose            Used by
TCP       Inbound    10250        Kubelet API        kubelet itself, control plane components
TCP       Inbound    30000~32767  NodePort services  All
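
Before installing, it is worth confirming that nothing else is already listening on these ports; a minimal check with ss (available by default on CentOS 7):

## No output means the port is free; run on every node
ss -tlnp | grep -E ':(6443|2379|2380|10250|10251|10252)\b'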

Change the hostname
Changing the hostname takes three steps (a combined sketch follows the list):

  1. Run hostnamectl set-hostname ${name}
  2. Edit /etc/sysconfig/network and set HOSTNAME=${name}.domainname
  3. Edit the local resolver file /etc/hosts so that local applications can resolve the new hostname:
    xxx.xxx.xxx.xxx ${name}.domainname ${name}
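
A minimal sketch of the three steps together, using the master node as an example; the domain example.com is a placeholder you must replace with your own:

hostnamectl set-hostname master
echo "HOSTNAME=master.example.com" >> /etc/sysconfig/network
echo "10.12.1.100 master.example.com master" >> /etc/hosts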

2. Install the control-plane and worker nodes with kubeadm (1.18.20)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0 
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
## The master node needs kubectl; worker nodes do not
yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20 --disableexcludes=kubernetes
systemctl enable --now kubelet
## Worker node command
yum install -y kubelet-1.18.20 kubeadm-1.18.20 --disableexcludes=kubernetes
systemctl enable --now kubelet
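
A quick way to confirm the pinned versions were installed:

kubeadm version -o short   # expect v1.18.20
kubelet --version          # expect Kubernetes v1.18.20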

3. Configure iptables bridge settings

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
## The settings only take effect after loading br_netfilter and reloading sysctl
modprobe br_netfilter
sysctl --system

4. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

5. Set the Docker cgroup driver to systemd

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
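
To confirm Docker picked up the new cgroup driver (an optional check):

docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd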

6. Disable swap

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
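
swapoff -a turns swap off for the current boot, and the sed line comments out the swap entry in /etc/fstab so it stays off after a reboot. To confirm:

swapon --show   # no output means swap is disabled
free -h         # the Swap line should show 0B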

7. Configure name resolution for the cluster

vim /etc/hosts
## Add an entry for every node in the cluster
xxx.xxx.xxx.xxx master
xxx.xxx.xxx.xxx node1
xxx.xxx.xxx.xxx node2

8. Initialize the master

[root@m-100 ~]# kubeadm init \
        --cert-dir /etc/kubernetes/pki \
        --image-repository registry.aliyuncs.com/google_containers \
        --pod-network-cidr 10.11.0.0/16 \
        --service-cidr 10.20.0.0/16
W0217 10:01:19.304156   10476 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0217 10:01:19.304321   10476 version.go:102] falling back to the local client version: v1.17.3
W0217 10:01:19.304755   10476 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0217 10:01:19.304772   10476 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [m-100 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.12.1.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [m-100 localhost] and IPs [10.12.1.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [m-100 localhost] and IPs [10.12.1.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0217 10:01:24.604930   10476 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0217 10:01:24.606403   10476 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.003335 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node m-100 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m-100 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0aizeu.ji5y0dooy8g658n1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 10.12.1.100:6443 --token 0aizeu.ji5y0dooy8g658n1 \
    --discovery-token-ca-cert-hash sha256:1bc30ca4b0c09582bf0537ca2f516ae2c510becd5bdefe4ec866f9201f3519a5

9. Run kubeadm join on each node

kubeadm join 10.12.1.100:6443 --token 0aizeu.ji5y0dooy8g658n1 \
    --discovery-token-ca-cert-hash sha256:1bc30ca4b0c09582bf0537ca2f516ae2c510becd5bdefe4ec866f9201f3519a5
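
The bootstrap token above is valid for 24 hours by default. If it has expired by the time a node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command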

10. Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

11. Verify the cluster from the master

kubectl get po -A

The coredns pods are Pending because no network plugin has been installed yet.
Install the Calico plugin into the cluster:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Note the CALICO_IPV4POOL_CIDR parameter in calico.yaml: it must not conflict with your corporate network, and it should match the --pod-network-cidr passed to kubeadm init (10.11.0.0/16 here); a sketch of adjusting it follows.
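
A minimal sketch of adjusting the pool before applying, assuming the v3.8 manifest still uses its default pool of 192.168.0.0/16 (verify the default in the file you download):

curl -LO https://docs.projectcalico.org/v3.8/manifests/calico.yaml
## Match the --pod-network-cidr passed to kubeadm init
sed -i 's#192.168.0.0/16#10.11.0.0/16#g' calico.yaml
kubectl apply -f calico.yaml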

After the installation completes, the coredns pods all come up normally.
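
To confirm the cluster is healthy:

kubectl get pods -n kube-system   # the coredns pods should now be Running
kubectl get nodes                 # all three nodes should report Ready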

