Deploying a Kubernetes Cluster with kubeadm


I. Deployment Steps

1. Prepare the environment

2. Install and configure Docker

3. Install the kubeadm, kubectl, and kubelet commands

4. Initialize the Master node

5. Join Master and Worker nodes to the cluster

6. Deploy the container network

7. Deploy the Dashboard

 

II. Environment Preparation

1. Set the hostname

 

[root@localhost ~]# hostnamectl set-hostname master

Log out and log back in for the new hostname to take effect.
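The other nodes should be given hostnames in the same way; the names below are only examples for a hypothetical two-worker setup:

[root@localhost ~]# hostnamectl set-hostname node1    # run on the first worker
[root@localhost ~]# hostnamectl set-hostname node2    # run on the second worker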

 

2. Disable the firewall

[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld

3. Disable SELinux

[root@master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@master ~]# setenforce 0

4. Disable swap

[root@master ~]# swapoff -a
[root@master ~]# vi /etc/fstab

Comment out the swap line in /etc/fstab so that swap stays off after a reboot.
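Editing the file by hand works; alternatively, a one-liner can comment out every line in /etc/fstab that mentions swap (a sketch — check the file afterwards):

[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab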

 

5. Add local hosts resolution records

The master node needs resolution records for every node's hostname, each node needs a record for its own hostname, and all nodes need a record for the k8s API domain.

[root@master ~]# cat >> /etc/hosts <<EOF
172.16.11.139 master
172.16.11.139 k8s-api
EOF

Add resolution records for the local hostname and for k8s-api; k8s-api is the address used for the apiserver.
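In a multi-node cluster the master's /etc/hosts would also list the other nodes; the addresses and names below are examples only:

[root@master ~]# cat >> /etc/hosts <<EOF
172.16.11.140 node1
172.16.11.141 node2
EOF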

 

6. Adjust kernel parameters

 

[root@master ~]# cat >>/etc/sysctl.d/k8s.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~]# sysctl --system
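If sysctl reports that these keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it first (and persisting it across reboots) is a common fix:

[root@master ~]# modprobe br_netfilter
[root@master ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@master ~]# sysctl --system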

 

 

 

7. Synchronize time

[root@master ~]# yum -y install ntpdate
[root@master ~]# ntpdate time.windows.com

 

 

III. Install Docker and Configure a Domestic Registry Mirror

1. Add the YUM repository

[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Install Docker

[root@master ~]# yum -y install docker-ce

3. Enable Docker at boot and start it

[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker

4. Configure the registry mirror

[root@master ~]# cat >/etc/docker/daemon.json<<EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
[root@master ~]# systemctl restart docker

5. Verify that the configuration took effect

[root@master ~]# docker info
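The mirror configured above should appear under the Registry Mirrors section of the output; it can be filtered out directly:

[root@master ~]# docker info | grep -A1 "Registry Mirrors"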

 

 

 

 

IV. Install the kubeadm, kubectl, and kubelet Commands

1. Add the YUM repository

[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

 

2. Install

[root@master ~]# yum install -y kubeadm-1.19.0 kubelet-1.19.0 kubectl-1.19.0
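The kubelet can be enabled at boot here as well; until kubeadm initializes the node it will restart in a loop, which is expected:

[root@master ~]# systemctl enable kubelet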

 

 

V. Initialize the Master Node

1. Prepare the initialization configuration

[root@master ~]# cat > init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # advertiseAddress is this machine's internal address
  advertiseAddress: 172.16.11.139
  # in production, consider changing the apiserver port
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # change to this machine's hostname
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# add a hosts record for k8s-api on the master node;
# while initializing, point it at the master's internal address,
# and after initialization change it to the load balancer's external address
controlPlaneEndpoint: "k8s-api:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/k8s/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# match the installed kubeadm version
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # Pod subnet; must match the network plugin configuration
  podSubnet: 10.244.0.0/16
scheduler: {}
EOF

 

2. Initialize the Master

Pick one Master node to initialize; the remaining Master nodes join the cluster afterwards with the join command.

 

[root@master ~]# kubeadm init --config init.yaml --ignore-preflight-errors=all --upload-certs

3. Copy the cluster configuration file

 

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Copy the cluster configuration file to every node so the cluster can be managed from any of them.
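For example, assuming a node named node1 that already has kubectl installed and SSH access from the master (the hostname is a placeholder):

[root@master ~]# ssh node1 "mkdir -p ~/.kube"
[root@master ~]# scp /etc/kubernetes/admin.conf node1:~/.kube/config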

4. Enable kubectl command completion

[root@master ~]# yum -y install bash-completion && cd ~;echo "source <(kubectl completion bash)" >> .bashrc

Log out and log back in for completion to take effect.

 

VI. Join Master and Worker Nodes to the Cluster

Master nodes:

 

[root@master ~]# kubeadm join k8s-api:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:966c9c2447d11f1d7a2e8c2a73890895abbc24d1799a5110bbbea9998084e7d0 \
> --control-plane --certificate-key 86c6fa16bdead1f1b2b6f0c1112ba63b26f521a0a63d25208ccd7adde66ac33b

 

 

 

Worker nodes:

 

[root@master ~]# kubeadm join k8s-api:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:966c9c2447d11f1d7a2e8c2a73890895abbc24d1799a5110bbbea9998084e7d0

When adding nodes later, regenerate the join command:

 

[root@master ~]# kubeadm token create --print-join-command
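Note that --print-join-command only prints the worker form of the join command. Joining another control-plane node also requires a fresh certificate key; one way to generate it (the printed key is then appended to the join command as --control-plane --certificate-key <key>):

[root@master ~]# kubeadm init phase upload-certs --upload-certs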

 

 

The join commands printed when the Master node finished initializing are authoritative.

VII. Deploy the Container Network

1. Download the YAML manifest

[root@master ~]# wget https://docs.projectcalico.org/v3.20/manifests/calico.yaml --no-check-certificate

 

2. Modify the Pod network address

 

[root@master ~]# vi calico.yaml

 

 

 

 

In calico.yaml, uncomment the CALICO_IPV4POOL_CIDR setting and change it to the Pod subnet configured when the Master node was initialized (10.244.0.0/16 in this example).
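After the edit, the relevant environment variable in the calico-node container spec looks roughly like this (surrounding lines omitted):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"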

3. Deploy

[root@master ~]# kubectl apply -f calico.yaml
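Once the Calico pods are running, the nodes should report Ready; a quick check:

[root@master ~]# kubectl get pods -n kube-system
[root@master ~]# kubectl get nodes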

 

 

VIII. Deploy the Dashboard

 

1. Download the YAML manifest

[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

 

 

 

The full content of recommended.yaml used here is reproduced below; if the download fails, the file can be created directly:

[root@master ~]# cat > recommended.yaml <<EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
EOF

 

 

2. Change the Service type to NodePort

[root@master ~]# vi recommended.yaml
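Add type: NodePort under spec of the kubernetes-dashboard Service (the one exposing port 443). The edited block then looks roughly like this:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard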

 

 

 

3. Deploy

[root@master ~]# kubectl apply -f recommended.yaml

 

4. Create an admin account and obtain the login token

 

[root@master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

 

 

 

5. Check the ports

[root@master ~]# kubectl get svc -n kubernetes-dashboard

 

 

 

 

443 is the in-cluster port and 31245 is the port reachable from outside the cluster; the external port can be pinned by setting the nodePort parameter in the YAML.

Modify the YAML to add nodePort

 

[root@master ~]# vi recommended.yaml
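Add a nodePort value (it must fall within the cluster's NodePort range, 30000-32767 by default) to the Service's ports entry; a sketch of the edited section:

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31245
  selector:
    k8s-app: kubernetes-dashboard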

 

 

 

 

 

Update the Service

[root@master ~]# kubectl apply -f recommended.yaml

 

Check the ports again

 

[root@master ~]# kubectl get svc -n kubernetes-dashboard

 

 

6. Log in to the Dashboard

URL: https://NodeIP:31245. Choose the Token login option and paste the token obtained in step 4.
