Quickly Deploy a Kubernetes 1.14 Cluster with kubeadm


1. System Requirements

1.1 Installation requirements
Operating system:
Ubuntu 16.04+
Debian 9
CentOS 7
RHEL 7
Fedora 25/26 (best-effort)
Hardware:
2 GB RAM or more, 2 CPU cores or more

1.2 Three servers are required; configure their IP addresses:
192.168.56.51   (k8s-master)
192.168.56.52   (k8s-node1)
192.168.56.53   (k8s-node2)

[root@k8s-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=3e8c39d8-c82e-46b4-ac8d-7d331e1360c6
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.56.52
PREFIX=24
GATEWAY=192.168.56.2
DNS1=192.168.56.2
DNS2=8.8.8.8
IPV6_PRIVACY=no

1.3 Configure the hostnames
#k8s-master
hostname k8s-master                   # set for the current session
hostnamectl set-hostname k8s-master   # set persistently
hostname                              # verify
k8s-master

#k8s-node1
hostname k8s-node1
hostnamectl set-hostname k8s-node1

#k8s-node2
hostname k8s-node2
hostnamectl set-hostname k8s-node2

1.4 Configure name resolution (run on all servers)
echo "192.168.56.51 k8s-master" >>/etc/hosts
echo "192.168.56.52 k8s-node1" >>/etc/hosts
echo "192.168.56.53 k8s-node2" >>/etc/hosts
[root@k8s-master ~]# tail -3 /etc/hosts
192.168.56.51 k8s-master
192.168.56.52 k8s-node1
192.168.56.53 k8s-node2
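
If passwordless SSH between the machines is set up, you can simply copy the master's /etc/hosts to both nodes instead of repeating the echo commands (a sketch; it overwrites each node's /etc/hosts, so only do this on freshly installed systems):

for ip in 192.168.56.52 192.168.56.53; do
  scp /etc/hosts root@$ip:/etc/hosts
done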

1.5 Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
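
To confirm SELinux is off, check the current mode (setenforce 0 leaves the running system in Permissive mode; it shows Disabled after a reboot with the edited config):

getenforce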

1.6 Disable swap
swapoff -a   # disable temporarily
free         # verify the Swap row shows all zeros

vi /etc/fstab   # disable permanently
#/dev/mapper/centos_template-swap swap swap defaults 0 0   # comment out this line

free         # verify the Swap row shows all zeros
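
Instead of editing /etc/fstab by hand, the swap line can be commented out with a sed one-liner (a sketch that comments every line mentioning swap; review the file afterwards):

sed -ri 's/.*swap.*/#&/' /etc/fstab
grep swap /etc/fstab   # the swap entry should now start with #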

1.7 Install ntp and synchronize the time
[root@k8s-node2 ~]# yum install ntp wget -y

[root@k8s-master ~]# ntpdate ntp.api.bz
28 May 19:15:43 ntpdate[10356]: step time server 114.118.7.161 offset 0.908151 sec
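
ntpdate performs a one-shot sync, so the clock will drift again over time. One simple way to keep it synced is a cron job (a sketch using the same ntp.api.bz server; adjust the interval to taste):

(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp.api.bz >/dev/null 2>&1") | crontab -
crontab -l   # verify the entry was added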

1.8 Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply
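
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; load it and re-apply (run on all servers):

modprobe br_netfilter
lsmod | grep br_netfilter
sysctl --system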

1.9 Add domestic (China mirror) yum and repo sources

1) Configure the Aliyun base repo

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo  
cat /etc/yum.repos.d/CentOS-Base.repo
2) Configure the Aliyun EPEL repo

mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup  
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup  
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat /etc/yum.repos.d/epel.repo

3) Rebuild the yum cache to test
yum clean all
yum makecache fast

# log out and log back in

exit

2. Install the Kubernetes Cluster

2.1 Install Docker

# install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# configure the Aliyun repo
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# install Docker
yum -y install docker-ce-18.06.1.ce-3.el7   # pin this specific version
systemctl enable docker && systemctl start docker

# check the version
[root@k8s-master ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
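
Optionally, you can also point Docker at a domestic registry mirror to speed up later pulls (a sketch; registry.docker-cn.com is just an example accelerator, substitute your own):

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker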

2.2 Add the Aliyun Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
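
To confirm the new repo works and to see which versions it offers (the next step pins 1.14.0), you can list the available kubelet packages:

yum list kubelet --showduplicates | sort -r | head -5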

2.3 Install kubeadm, kubelet, and kubectl (note: run on all three servers)

kubeadm: the command that bootstraps the cluster
kubelet: the agent that runs pods on every node in the cluster
kubectl: the command-line management tool

Because releases change frequently, pin the version number here:

yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0

# Do not start kubelet at this point; just enable it at boot
systemctl enable kubelet

2.4 Initialize the Kubernetes master (note: run only on the master node; open a second terminal, as the token will be needed shortly)

kubeadm init \
--apiserver-advertise-address=192.168.56.51 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.14.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

Note: change --apiserver-advertise-address to the IP of your own master host.

Initialization prints a join token on completion; copy it, as it is needed later:
kubeadm join 192.168.56.51:6443 --token fwcgkb.1g65pag18m86w71e \
--discovery-token-ca-cert-hash sha256:980bd9b4dafd518650e7ccf69c76c81b8c8e55084b470e3daa3ed5dafdc56ba0

# list the pulled images

[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.13.3             fe242e556a99        3 months ago        181MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.13.3             0482f6400933        3 months ago        146MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.13.3             98db19758ad4        3 months ago        80.3MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.13.3             3a6f709e97a0        3 months ago        79.6MB
registry.aliyuncs.com/google_containers/coredns                   1.2.6               f59dcacceff4        6 months ago        40MB
registry.aliyuncs.com/google_containers/etcd                      3.2.24              3cab8e1b9802        8 months ago        220MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        17 months ago       742kB

2.5 Configure kubectl by running (note: master only):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
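
Alternatively, if you only need kubectl for the current root session, you can point KUBECONFIG at the admin config directly instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf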

2.6 This creates the config file under your home directory (note: master only)
[root@k8s-master ~]# ls .kube/
cache config http-cache

2.7 Check the node (no pod network is installed yet, so NotReady is expected; note: master only)
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   4m    v1.13.3

 

2.8 Install the Pod network add-on (CNI)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Make sure you can reach the quay.io registry.
If the pull fails, you can switch to this mirror image:
docker pull lizhenliang/flannel:v0.11.0-amd64

Because the image is hosted overseas and may be unreachable without a proxy, save it as a tar file on a machine that can reach it, upload the file to the servers, and load it there.

[root@k8s-master ~]# rz -E
rz waiting to receive.
[root@k8s-master ~]# ll
total 54116
-rw-------. 1 root root 1589 Apr 30 20:16 anaconda-ks.cfg
-rw-r--r-- 1 root root 55390720 May 28 22:34 flannelv0.11.0-amd64.tar
-rw-r--r-- 1 root root 12306 May 28 22:07 kube-flannel.yml

##################################################

# on another server that has access, pull the image and save it to a tar file

docker pull lizhenliang/flannel:v0.11.0-amd64
docker images
docker image save lizhenliang/flannel:v0.11.0-amd64 >flannelv0.11.0-amd64.tar
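
# copy the tar to the other machines, assuming root SSH access (adjust the IPs to your environment)
for ip in 192.168.56.52 192.168.56.53; do
  scp flannelv0.11.0-amd64.tar root@$ip:~/
done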

###################################################

# load the image tar on the deployment servers (note: load it on all three servers)

[root@k8s-master ~]# docker load <flannelv0.11.0-amd64.tar
7bff100f35cb: Loading layer [==================================================>] 4.672MB/4.672MB
5d3f68f6da8f: Loading layer [==================================================>] 9.526MB/9.526MB
9b48060f404d: Loading layer [==================================================>] 5.912MB/5.912MB
3f3a4ce2b719: Loading layer [==================================================>] 35.25MB/35.25MB
9ce0bb155166: Loading layer [==================================================>] 5.12kB/5.12kB
Loaded image: lizhenliang/flannel:v0.11.0-amd64

# check the newly loaded flannel image
[root@k8s-master ~]# docker images
REPOSITORY                                                         TAG             IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                 v1.13.3         98db19758ad4   3 months ago    80.3MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.13.3         fe242e556a99   3 months ago    181MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.13.3         0482f6400933   3 months ago    146MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.13.3         3a6f709e97a0   3 months ago    79.6MB
lizhenliang/flannel                                                v0.11.0-amd64   ff281650a721   3 months ago    52.6MB   # this is the image we just loaded
registry.aliyuncs.com/google_containers/coredns                    1.2.6           f59dcacceff4   6 months ago    40MB
registry.aliyuncs.com/google_containers/etcd                       3.2.24          3cab8e1b9802   8 months ago    220MB
registry.aliyuncs.com/google_containers/pause                      3.1             da86e6ba6ca1   17 months ago   742kB

# now re-run this command and the install will succeed

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

# if a previous attempt did not succeed, remove it first with the command below

# delete the resources defined in the yaml
[root@k8s-master ~]# kubectl delete -f kube-flannel.yml
podsecuritypolicy.extensions "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.extensions "kube-flannel-ds-amd64" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted

 

# check the status: all pods are Running
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-78d4cf999f-pwxc2             1/1     Running   0          51m
coredns-78d4cf999f-qgc94             1/1     Running   0          51m
etcd-k8s-master                      1/1     Running   3          50m
kube-apiserver-k8s-master            1/1     Running   5          50m
kube-controller-manager-k8s-master   1/1     Running   4          50m
kube-flannel-ds-amd64-fvgfs          1/1     Running   0          93s
kube-proxy-6sb9v                     1/1     Running   3          51m
kube-scheduler-k8s-master            1/1     Running   3          50m

 

# check the node again: it is now Ready
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   54m   v1.13.3

 

# the following shows how to find the image pull address used above

# download the manifest
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2019-05-28 22:07:05--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12306 (12K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[========================================================>] 12,306      --.-K/s   in 0.004s  

2019-05-28 22:07:06 (2.71 MB/s) - ‘kube-flannel.yml’ saved [12306/12306]

[root@k8s-master ~]# ll
total 20
-rw-------. 1 root root  1589 Apr 30 20:16 anaconda-ks.cfg
-rw-r--r--  1 root root 12306 May 28 22:07 kube-flannel.yml

# view the contents (note: the image: lines give the pull address)
[root@k8s-master ~]# cat kube-flannel.yml 
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unsed in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

# at this point the master deployment is complete


 

3. Deploy the Kubernetes Worker Nodes

3.1 Join the worker nodes to the cluster

To add new nodes to the cluster, run the kubeadm join command that kubeadm init printed.

Join the nodes to the cluster (note: run only on the two worker nodes; the procedure is the same on each):
kubeadm join 192.168.56.51:6443 --token fwcgkb.1g65pag18m86w71e \
--discovery-token-ca-cert-hash sha256:980bd9b4dafd518650e7ccf69c76c81b8c8e55084b470e3daa3ed5dafdc56ba0
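
The bootstrap token printed by kubeadm init is only valid for 24 hours. If it has expired by the time you add a node, generate a fresh join command on the master:

kubeadm token create --print-join-command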

 

# check whether the nodes joined successfully (note: run on the master node)

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h11m   v1.14.0
k8s-node1    Ready    <none>   3h5m    v1.14.0   # joined successfully
k8s-node2    Ready    <none>   3h      v1.14.0   # joined successfully

4. Test the Kubernetes Cluster

Create a pod in the cluster to verify everything runs correctly.

# on another server with internet access, pull and save the nginx image (note: it must then be loaded on all three servers)

docker pull nginx

docker image save nginx:latest >nginx.tar

# load it on the target servers (note: all three servers)

[root@k8s-master ~]# docker load < nginx.tar
6270adb5794c: Loading layer [==================================================>] 58.45MB/58.45MB
6ba094226eea: Loading layer [==================================================>] 54.59MB/54.59MB
332fa54c5886: Loading layer [==================================================>] 3.584kB/3.584kB
Loaded image: nginx:latest

 

# create the deployment and expose its port

kubectl create deployment nginx --image=nginx

kubectl expose deployment nginx --port=80 --type=NodePort

# verify the deployment was created and the port is exposed

# list pods and services
[root@k8s-master ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-65f88748fd-6859x   1/1     Running   0          133m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        3h31m
service/nginx        NodePort    10.1.255.203   <none>        80:31118/TCP   3h16m

# show detailed pod information
[root@k8s-master ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
pod/nginx-65f88748fd-6859x   1/1     Running   0          133m   10.244.2.2   k8s-node2   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        3h31m   <none>
service/nginx        NodePort    10.1.255.203   <none>        80:31118/TCP   3h16m   app=nginx

# access the site

URL format: http://NodeIP:Port

or

http://192.168.56.51:31118/
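
You can also check from the command line; any node's IP works, because a NodePort is opened on every node (31118 is the port shown above):

curl -I http://192.168.56.51:31118/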

5. Deploy the Dashboard

5.1 Deploy the Dashboard manifest (it creates the service account, RBAC role, deployment, and service):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
The default image cannot be pulled from inside China; change it to: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
By default the Dashboard is reachable only from inside the cluster; change the Service type to NodePort to expose it externally:

# edit the manifest

kubernetes-dashboard.yaml

Only two changes are needed:
# ------------------- Dashboard Deployment ------------------- #
        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # replace the image

# ------------------- Dashboard Service ------------------- #
spec:
  type: NodePort          # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # add this line

#####################################################

Because the image source is overseas, download it elsewhere and load it onto the servers (note: do this on all three servers):

[root@k8s-master ~]# ll
total 338072
-rw-------. 1 root root 1589 Apr 30 20:16 anaconda-ks.cfg
-rw-r--r-- 1 root root 55390720 May 28 22:34 flannelv0.11.0-amd64.tar
-rw-r--r-- 1 root root 55390720 May 28 22:34 flannelv0.11.0-amd64.tar.0
-rw-r--r-- 1 root root 10599 May 29 21:29 kube-flannel.yml
-rw-r--r-- 1 root root 122310656 May 29 21:46 kubernetes-dashboard-amd64_v1.10.1.tar
-rw-r--r-- 1 root root 4595 May 29 21:34 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root 113059840 May 29 21:19 nginx.tar

# load on all three servers
[root@k8s-master ~]# docker load < kubernetes-dashboard-amd64_v1.10.1.tar
fbdfe08b001c: Loading layer [==================================================>] 122.3MB/122.3MB
Loaded image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1

# list the images

[root@k8s-master ~]# docker images
REPOSITORY                                                         TAG             IMAGE ID       CREATED         SIZE
nginx                                                              latest          53f3fd8007f7   3 weeks ago     109MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.14.0         5cd54e388aba   2 months ago    82.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.14.0         b95b1efa0436   2 months ago    158MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.14.0         00638a24688b   2 months ago    81.6MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.14.0         ecf910f40d6e   2 months ago    210MB
lizhenliang/flannel                                                v0.11.0-amd64   ff281650a721   4 months ago    52.6MB
quay.io/coreos/flannel                                             v0.11.0-amd64   ff281650a721   4 months ago    52.6MB
registry.aliyuncs.com/google_containers/coredns                    1.3.1           eb516548c180   4 months ago    40.3MB
lizhenliang/kubernetes-dashboard-amd64                             v1.10.1         f9aed6605b81   5 months ago    122MB
registry.aliyuncs.com/google_containers/etcd                       3.3.10          2c4adeb21b4f   5 months ago    258MB
registry.aliyuncs.com/google_containers/pause                      3.1             da86e6ba6ca1   17 months ago   742kB

#####################################################

# now the actual deployment begins

Create the kubernetes-dashboard resources (note: master only)
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

 

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-hldfb             1/1     Running   0          120m
coredns-8686dcc4fd-tnxhp             1/1     Running   0          120m
etcd-k8s-master                      1/1     Running   0          119m
kube-apiserver-k8s-master            1/1     Running   0          119m
kube-controller-manager-k8s-master   1/1     Running   0          119m
kube-flannel-ds-amd64-7kn8b          1/1     Running   0          114m
kube-flannel-ds-amd64-gpqp6          1/1     Running   0          116m
kube-flannel-ds-amd64-pq9cw          1/1     Running   1          110m
kube-proxy-95rcx                     1/1     Running   0          114m
kube-proxy-mr58b                     1/1     Running   0          120m
kube-proxy-npw2h                     1/1     Running   0          110m
kube-scheduler-k8s-master            1/1     Running   0          119m

 

# if the Dashboard pod fails to appear (as above), delete the resources and re-apply the manifest:
[root@k8s-master ~]# kubectl delete -f kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" deleted
serviceaccount "kubernetes-dashboard" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
deployment.apps "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted

 

5.2 Create an account (note: master only)
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created

[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

 

5.3 Get the token used to log in (note: master only)

[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-cgk7f
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: ce06ce47-821a-11e9-a6d0-0050563847cd

Type: kubernetes.io/service-account-token

Data
====
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiWRtaW4tdG9rZW4tY2drN2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJa3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiY2UwNmNlNDctODIxYS0xMWU5LWE2ZDAtMDic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.bn1PQo1VV5h8lMHa7Y3oebqJze_ApNbhchn5Kl6tLA-5f6M50r7ZS5_FrMyIE5Ay22hpE5yXc907jcztrbvPz6_J4V1NPKqW1HDtuvQ3DhqkcVBJTC01nvHJJ6gdU9mQIOHPgCHb8cOT67CWEX8MAvf6LqyNZ61rT-zxW1qg69Ee5Er33wJmwZSPgk-Sbi9w5mDhCC2-OYpjQZFoEmD4lefvvHCmyfIxOLbJtZyoPR8TCYj5jlX_iH4k9k8lLSOYwTN9aQaut-vmXlolcZO5MDaKJsr_v-3HbNA
ca.crt: 1025 bytes

 

5.4 Create a service account and cluster role binding manifest

vi dashboard-adminuser.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

# apply the authorization

kubectl apply -f dashboard-adminuser.yaml

# without this authorization, the Dashboard reports permission errors after login

# check the port the kubernetes-dashboard Service exposes

[root@k8s-master ~]# kubectl get service -n kube-system
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP   3h45m
kubernetes-dashboard   NodePort    10.1.117.120   <none>        443:30001/TCP            50m   # note: port 443, so access it over https

 

5.5 List the secrets to find the login token

[root@k8s-master ~]# kubectl get secret -n kube-system
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-6qb84              kubernetes.io/service-account-token   3      3h49m
bootstrap-signer-token-drvcz                     kubernetes.io/service-account-token   3      3h49m
bootstrap-token-lshbrx                           bootstrap.kubernetes.io/token         7      3h50m
certificate-controller-token-dm5zh               kubernetes.io/service-account-token   3      3h49m
clusterrole-aggregation-controller-token-7s7mn   kubernetes.io/service-account-token   3      3h49m
coredns-token-b2hh4                              kubernetes.io/service-account-token   3      3h50m
cronjob-controller-token-5nbqm                   kubernetes.io/service-account-token   3      3h49m
daemon-set-controller-token-x2664                kubernetes.io/service-account-token   3      3h49m
dashboard-admin-token-cgk7f                      kubernetes.io/service-account-token   3      108m
default-token-tsc8b                              kubernetes.io/service-account-token   3      3h49m
deployment-controller-token-vnd4g                kubernetes.io/service-account-token   3      3h49m
disruption-controller-token-zph4w                kubernetes.io/service-account-token   3      3h49m
endpoint-controller-token-k666g                  kubernetes.io/service-account-token   3      3h49m
expand-controller-token-vj4ng                    kubernetes.io/service-account-token   3      3h49m
flannel-token-2kgdh                              kubernetes.io/service-account-token   3      3h46m
generic-garbage-collector-token-zr6hw            kubernetes.io/service-account-token   3      3h49m
horizontal-pod-autoscaler-token-r9p9m            kubernetes.io/service-account-token   3      3h50m
job-controller-token-zx8ld                       kubernetes.io/service-account-token   3      3h49m
kube-proxy-token-94b87                           kubernetes.io/service-account-token   3      3h50m
kubernetes-dashboard-admin-token-6kp9h           kubernetes.io/service-account-token   3      54m     # this is the token that carries admin rights
kubernetes-dashboard-certs                       Opaque                                0      55m
kubernetes-dashboard-key-holder                  Opaque                                2      111m
kubernetes-dashboard-token-mzqmk                 kubernetes.io/service-account-token   3      55m
namespace-controller-token-bh4fj                 kubernetes.io/service-account-token   3      3h50m
node-controller-token-4kgk7                      kubernetes.io/service-account-token   3      3h50m
persistent-volume-binder-token-2blr5             kubernetes.io/service-account-token   3      3h49m
pod-garbage-collector-token-4zd82                kubernetes.io/service-account-token   3      3h49m
pv-protection-controller-token-j85rh             kubernetes.io/service-account-token   3      3h49m
pvc-protection-controller-token-scmqx            kubernetes.io/service-account-token   3      3h49m
replicaset-controller-token-8llf7                kubernetes.io/service-account-token   3      3h49m
replication-controller-token-kzgcp               kubernetes.io/service-account-token   3      3h49m
resourcequota-controller-token-5kpt7             kubernetes.io/service-account-token   3      3h49m
service-account-controller-token-d8j8s           kubernetes.io/service-account-token   3      3h49m
service-controller-token-hzg78                   kubernetes.io/service-account-token   3      3h49m
statefulset-controller-token-cdrc8               kubernetes.io/service-account-token   3      3h50m
token-cleaner-token-jg8j7                        kubernetes.io/service-account-token   3      3h49m
ttl-controller-token-lw567                       kubernetes.io/service-account-token   3      3h49m

5.6 Copy the token below and choose Token to sign in.

[root@k8s-master ~]# kubectl describe secret kubernetes-dashboard-admin-token-6kp9h -n kube-system
Name:         kubernetes-dashboard-admin-token-6kp9h
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: 5d4c6518-8222-11e9-a6d0-0050563847cd

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:    # copy the token below and choose Token to sign in
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmV
ybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi02a3A5aCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWU
iOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjVkNGM2NTE4LTgyMjItMTFlOS1hNmQwLTAwNTA1NjM4NDdjZCIsInN1YiI6InN
5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Vhxr_IxRyLuNZ5TDRUcspf1hnOuQ2P-_HlNRkmvdEc0s6TnxhvYn6G1sevJXQ671jBAIefWowoWGQpT-TTXUe8iOE7
yjG04xqRgqJhodvyF1Tv5T5praLP9FNzBcvR5V2BA9u3gd66-9UEsJ2cvwyODlIcbpNKGLLP9K-jw-cDRlpWGFasAJZjFSliJ7HpwZd5zLsaB4BElLZXlPu98TvkcJVVt0S9slWIbWgtlL7wq9zdSoMRZQLfEba54INofE1fWiLI
jkqaW9fmX3vH4FFe7qP0LBiSY_EYMw_Hk_B7Ck6EkFVrtlggkLeEry7UsYO3HoKhqt1MQts55hP8uy7g
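
Instead of reading the token out of kubectl describe, it can be extracted in one line (a sketch that looks up the secret belonging to the kubernetes-dashboard-admin service account created in 5.4):

kubectl -n kube-system get secret $(kubectl -n kube-system get sa kubernetes-dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d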

5.7 Log in to the site

Note: open the site in Firefox; other browsers may reject the Dashboard's self-signed certificate and refuse the connection.

https://NodeIP:Port

or

https://192.168.56.51:30001

After a successful login you will see the Dashboard overview page.

 

