Deploying a Kubernetes Cluster with kubeadm


I. Deployment Steps

1. Environment preparation
2. Install and configure Docker
3. Install the kubeadm, kubectl, and kubelet commands
4. Initialize the Master node
5. Join Master and Worker nodes to the cluster
6. Deploy the container network
7. Deploy the Dashboard

 

II. Environment Preparation

1. Set the hostname

 

[root@localhost ~]# hostnamectl set-hostname master

Log out and log back in for the new hostname to take effect.

 

2. Disable the firewall

[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld

3. Disable SELinux

[root@master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@master ~]# setenforce 0

4. Disable swap

[root@master ~]# swapoff -a
[root@master ~]# vi /etc/fstab

Comment out the swap line in /etc/fstab so that swap stays off after a reboot.
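A minimal sketch that comments out the swap entry non-interactively instead of editing the file by hand (it assumes the entry contains the word "swap"):

[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab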

 

5. Add local hosts entries

The master node needs hostname entries for every node, each worker node needs an entry for its own hostname, and all nodes need an entry for the Kubernetes API domain name.

[root@master ~]# cat >> /etc/hosts <<EOF
172.16.11.139 master
172.16.11.139 k8s-api
EOF

Add entries for the local hostname and for k8s-api; k8s-api is the domain name used for the apiserver.
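For reference, on a multi-node cluster the master's /etc/hosts would end up looking roughly like this (the node1/node2 addresses are hypothetical):

172.16.11.139 master
172.16.11.140 node1
172.16.11.141 node2
172.16.11.139 k8s-api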

 

6. Adjust kernel parameters

 

[root@master ~]# cat >>/etc/sysctl.d/k8s.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~]# sysctl --system
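These bridge sysctls only exist once the br_netfilter kernel module is loaded, so it is worth loading it now and on boot, then verifying (a sketch):

[root@master ~]# modprobe br_netfilter
[root@master ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@master ~]# sysctl net.bridge.bridge-nf-call-iptables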


7. Time synchronization

[root@master ~]# yum -y install ntpdate
[root@master ~]# ntpdate time.windows.com
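ntpdate only performs a one-shot sync; to keep clocks aligned afterwards, a periodic cron entry can be added (a sketch using /etc/crontab, whose format includes a user field):

[root@master ~]# echo "*/30 * * * * root /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1" >> /etc/crontab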


III. Install Docker and Configure a Domestic Registry Mirror

1. Add the YUM repository

[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Install Docker

[root@master ~]# yum -y install docker-ce

3. Enable Docker at boot and start it

[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker

4. Configure the registry mirror

[root@master ~]# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
[root@master ~]# systemctl restart docker

5. Verify the configuration

[root@master ~]# docker info
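The mirror can be confirmed in the Registry Mirrors section of the output, for example:

[root@master ~]# docker info | grep -A1 "Registry Mirrors"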

IV. Install the kubeadm, kubectl, and kubelet Commands

1. Add the YUM repository

[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

 

2. Install the packages

[root@master ~]# yum install -y kubeadm-1.19.0 kubelet-1.19.0 kubectl-1.19.0
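kubelet is usually also enabled at this point so that it starts on boot; it will restart in a loop until kubeadm init or join provides its configuration, which is expected:

[root@master ~]# systemctl enable kubelet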

V. Initialize the Master Node

1. Prepare the initialization configuration

[root@master ~]# cat > init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # advertiseAddress: set to this node's internal IP address
  advertiseAddress: 172.16.11.139
  # Changing the apiserver port is recommended in production
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # Change to match this node's hostname
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# Add a hosts entry for k8s-api on the Master node
# While initializing the Master, point k8s-api at the Master's internal IP
# After initialization, change it to the load balancer's external address
controlPlaneEndpoint: "k8s-api:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/k8s/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Change to match the installed kubeadm version
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # Pod subnet; must match the network plugin configuration
  podSubnet: 10.244.0.0/16
scheduler: {}
EOF

 

2. Initialize the Master

Pick one Master node to initialize; the remaining Master nodes join the cluster afterwards with the kubeadm join command.

 

[root@master ~]# kubeadm init --config init.yaml --ignore-preflight-errors=all --upload-certs

3. Copy the cluster configuration file

 

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Copy the cluster configuration file to all nodes so that the cluster can be managed from any node.
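A quick check at this point (the master is expected to report NotReady until the container network is deployed in section VII):

[root@master ~]# kubectl get nodes
[root@master ~]# kubectl get pods -n kube-system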

4. Enable kubectl command completion

[root@master ~]# yum -y install bash-completion && echo "source <(kubectl completion bash)" >> ~/.bashrc

Log out and log back in for completion to take effect.

 

VI. Join Master and Worker Nodes to the Cluster

Master nodes (run on each additional Master node that is joining):

 

[root@master ~]# kubeadm join k8s-api:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:966c9c2447d11f1d7a2e8c2a73890895abbc24d1799a5110bbbea9998084e7d0 \
    --control-plane --certificate-key 86c6fa16bdead1f1b2b6f0c1112ba63b26f521a0a63d25208ccd7adde66ac33b


Worker nodes (run on each Worker node that is joining):

 

[root@master ~]# kubeadm join k8s-api:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:966c9c2447d11f1d7a2e8c2a73890895abbc24d1799a5110bbbea9998084e7d0

When adding nodes later, the join command can be regenerated:

 

[root@master ~]# kubeadm token create --print-join-command

 

 

The join commands should be taken from the output printed when the Master node was initialized.
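Note that the --certificate-key used for control-plane joins is only valid for about two hours; if it has expired, a new one can be generated and combined with the output of the token command above:

[root@master ~]# kubeadm init phase upload-certs --upload-certs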

VII. Deploy the Container Network

1. Download the yaml file

[root@master ~]# wget https://docs.projectcalico.org/v3.20/manifests/calico.yaml --no-check-certificate

 

2. Set the Pod network CIDR

 

[root@master ~]# vi calico.yaml

Uncomment the Pod CIDR setting and change it to the Pod subnet that was configured when initializing the Master node (10.244.0.0/16), roughly as sketched below.
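In calico.yaml this is the CALICO_IPV4POOL_CIDR environment variable of the calico-node container; after uncommenting, the block should look roughly like this (indentation matches the surrounding env list):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"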

3. Deploy

[root@master ~]# kubectl apply -f calico.yaml
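The rollout can be verified once the calico pods are Running; the nodes should then report Ready:

[root@master ~]# kubectl get pods -n kube-system
[root@master ~]# kubectl get nodes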

 

 

VIII. Deploy the Dashboard

1. Download the yaml file

[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

If the download fails, or to use the exact manifest shown here, the file can be created directly with the following content:

[root@master ~]# cat > recommended.yaml <<EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
EOF

 

 

2. Change the Service type to NodePort

[root@master ~]# vi recommended.yaml
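The change goes in the kubernetes-dashboard Service defined near the top of the file: add type: NodePort under spec, roughly:

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard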


3. Deploy

[root@master ~]# kubectl apply -f recommended.yaml
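Both Dashboard pods should reach Running before continuing:

[root@master ~]# kubectl get pods -n kubernetes-dashboard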

 

4. Create an admin account and retrieve the login token

 

[root@master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')


5. Check the Service port

[root@master ~]# kubectl get svc -n kubernetes-dashboard


Port 443 is the in-cluster port and 31245 is the external (NodePort) port; the external port can be fixed by setting the nodePort field in the yaml.

Edit the yaml and add a nodePort, roughly as sketched after the command below:

 

[root@master ~]# vi recommended.yaml
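A sketch of the same Service with the external port pinned to 31245 (nodePort must fall in the 30000-32767 range):

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31245
  selector:
    k8s-app: kubernetes-dashboard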


Update the Service:

[root@master ~]# kubectl apply -f recommended.yaml

 

Check the port again:

 

[root@master ~]# kubectl get svc -n kubernetes-dashboard


6. Log in to the Dashboard

URL: https://NodeIP:31245 (sign in with the token obtained in step 4)
