A quick overview:
By default Docker is single-host: containers cannot communicate across hosts and there is no high availability, so production environments rarely run applications on Docker alone. k8s is used mainly as an orchestrator on top of Docker: it lets containers communicate across nodes, and when a node running containers fails, its containers are automatically rescheduled onto other available nodes so the service keeps running. k8s is currently the most widely used option.
k8s architecture: a k8s cluster consists of master nodes and worker (node) nodes. Day-to-day operations are performed on the master, which acts as the control node; the nodes provide the compute capacity.
master:
kubectl: the command-line client; virtually all k8s operations are issued through kubectl commands
REST API: the interface k8s exposes to external services (served by kube-apiserver); clients such as the web UI or kubectl control k8s by sending their instructions through this REST API
scheduler: the scheduler; when a pod is created, for example, the scheduler decides which node the pod is placed on
controller-manager: monitors the health of pods and nodes and keeps pods running in their desired state; if a pod fails, the controller-manager repairs the situation automatically, for example by starting another pod replica
kubelet: the node agent; instructions issued from the master to a node are relayed to the local components through the kubelet (it runs on every node, including the master)
etcd: the database; all cluster configuration data is stored in etcd
kube-proxy: runs on every node; when you later create a Service (svc) to expose pods to the outside, traffic that arrives at the svc IP is routed and forwarded to the pods by kube-proxy
node:
pod: the smallest deployable unit in a k8s environment; a pod can contain one or more containers (see the quick sketch below)
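As a quick sketch of the pod/node relationship (runnable once the cluster built below is up; the deployment name nginx-demo and the nginx image are arbitrary examples, not part of this install):
kubectl create deployment nginx-demo --image=nginx   # creates a pod via a deployment
kubectl get pods -o wide                             # the NODE column shows where the scheduler placed the pod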
Installation methods:
kubeadm (production)
yum
installation from source (for learning; gives a deeper understanding)
===================================================================================
Node planning
Installation:
Initialize the configuration on every node:
Install common tools
yum -y install wget telnet net-tools lrzsz vim zip unzip
Change the hostname
Rename the master node to k8s-master01 and the worker nodes to k8s-node01, k8s-node02, and so on, following this naming scheme.
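For example, run the matching command on each machine:
hostnamectl set-hostname k8s-master01   # on the master
hostnamectl set-hostname k8s-node01     # on the first node, and so on for the others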
Disable the firewall
[root@k8s-node01 ~]# systemctl stop firewalld && systemctl disable firewalld
Permanently disable SELinux
[root@k8s-node01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config;cat /etc/selinux/config
Temporarily disable SELinux
[root@k8s-node01 ~]# setenforce 0
Configure /etc/hosts
[root@k8s-node01 ~]# cat >> /etc/hosts <<EOF
10.1.1.100 k8s-master01
10.1.1.110 k8s-node01
10.1.1.120 k8s-node02
EOF
Temporarily disable swap
[root@k8s-node01 ~]# swapoff -a
Permanently disable swap
[root@k8s-node01 ~]# vim /etc/fstab    # comment out the swap entry
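If you prefer a non-interactive alternative, a one-liner such as the following comments out every swap entry (a sketch; double-check /etc/fstab afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab   # prefix any line mentioning swap with '#'
free -h                               # Swap should read 0 after swapoff -a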
Configure the yum repositories (the kubernetes version in the stock repos is too old)
rm -rf /etc/yum.repos.d/*
cd /etc/yum.repos.d/
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
yum clean all && yum makecache
Install docker
[root@k8s-master01 ~]#yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum -y install docker
Because the kernel does not support overlay2, you must either upgrade the kernel or disable the option (we choose to disable it). After installing docker, start it to test whether your kernel is affected; if docker starts without errors, skip this step.
vim /etc/sysconfig/docker    # change the option to --selinux-enabled=false
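On CentOS 7 the stock docker package's /etc/sysconfig/docker usually carries an OPTIONS line similar to the one below (the exact defaults may differ by package version; treat this as an assumed example, the only change being =false):
OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false'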
Start the docker service
[root@k8s-master01 ~]# systemctl start docker && systemctl enable docker
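A quick sanity check of which storage driver docker actually picked:
docker info | grep -i 'storage driver'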
Set the server time zone
timedatectl set-timezone Asia/Shanghai
Set the kernel parameters k8s requires
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master01 ~]# sysctl --system    # 'sysctl -p' alone only reloads /etc/sysctl.conf; --system also picks up /etc/sysctl.d/k8s.conf
[root@k8s-master01 ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
Install the k8s packages. Version 1.15.1 is installed here; you can pin whatever version you need, provided your yum repo carries it. Use yum list kube\* to check the newest version your repo offers.
[root@k8s-master01 ~]#yum -y install kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1
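To see every version the repo carries rather than only the newest:
yum list --showduplicates kubeadm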
Start the service
[root@k8s-master01 ~]# systemctl restart kubelet && systemctl enable kubelet
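Do not worry if kubelet restarts in a loop at this point; until kubeadm init (or kubeadm join) has generated its configuration, that is expected. You can watch its state with:
systemctl status kubelet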
====================================The steps above run on ALL nodes; any node added later must run them as well================================================================
Initialize the k8s cluster: run this on the master
# --apiserver-advertise-address is the API address the master advertises; use the master's IP,
# or pick the appropriate interface IP on a multi-homed machine
$ kubeadm init \
    --apiserver-advertise-address=10.1.1.100 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.15.1 \
    --service-cidr=10.2.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
Once initialization finishes, the kubelet service should be running normally (assuming no errors). Then set up kubectl access for your user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
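A quick check that kubectl can now reach the cluster (the master will report NotReady until the flannel network below is installed):
kubectl get nodes
kubectl cluster-info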
Install the flannel network
vim kube-flannel.yml and add the content below

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

(The original manifest repeats this same DaemonSet four more times for the other CPU architectures, changing only the resource name, the beta.kubernetes.io/arch nodeSelector, and the image tag: kube-flannel-ds-arm64 / v0.11.0-arm64, kube-flannel-ds-arm / v0.11.0-arm, kube-flannel-ds-ppc64le / v0.11.0-ppc64le, and kube-flannel-ds-s390x / v0.11.0-s390x. On a pure x86_64 cluster only the amd64 DaemonSet is scheduled.)
kubectl apply -f kube-flannel.yml
Problem encountered: kubectl get pods -n kube-system showed the flannel pod stuck on an image pull error. After pulling the image manually on the master and running kubectl apply again, the flannel network came up and ran normally.
docker pull quay.io/coreos/flannel:v0.11.0-amd64
kubectl apply -f kube-flannel.yml
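To confirm the flannel pods are healthy (the label matches the manifest above):
kubectl get pods -n kube-system -l app=flannel -o wide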
=====================================================master node deployment complete==================================================================================
Join the nodes to the master's cluster; run this command on every node that should join the k8s cluster
kubeadm join 10.1.1.100:6443 --token jib7mk.mybh3vpbfcf2tgw0 \
    --discovery-token-ca-cert-hash sha256:e5ab611a21d0442c1f25b9f88eaf5cddc66ad906d206163f2630d6228efa733
If you forget the parameters for kubeadm join, you can regenerate them with the command below
[root@k8s-master01 opt]# kubeadm token create --print-join-command
Problem 1:
After kubeadm join finished, kubectl get nodes on the master showed node k8s-node01 in NotReady state
Fix: inspect the logs on the node with tail -f /var/log/messages or journalctl -f -u kubelet; the following error showed up
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The log points to the network plugin. Running docker images on the node showed that the flannel image had not been pulled yet. You can wait a little while, or, if it still has not arrived (or you do not want to wait), pull it manually
docker pull quay.io/coreos/flannel:v0.11.0-amd64
A minute or two later, verify on the master node
[root@k8s-master01 opt]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   15h     v1.15.1
k8s-node01     Ready    <none>   6m42s   v1.15.1
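Optionally, label the worker so its ROLES column no longer shows <none> (purely cosmetic; kubectl derives ROLES from node-role.kubernetes.io/* labels):
kubectl label node k8s-node01 node-role.kubernetes.io/worker=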
Deploy the k8s-dashboard component on the master node; it provides the k8s web UI
vim kubernetes-dashboard.yml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Deploy the service
[root@k8s-master01 opt]# kubectl apply -f kubernetes-dashboard.yml
Check which port the dashboard service is exposed on; use the master node's IP as the access address
[root@k8s-master01 opt]# kubectl get svc kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.103.18.237   <none>        443:31072/TCP   2m33s
Access check: https://10.1.1.100:31072/ (the certificate is self-signed, so accept the browser warning)
How to log in with a token:
[root@k8s-master01 opt]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@k8s-master01 opt]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master01 opt]# kubectl get secret -n kube-system | grep dashboard
dashboard-admin-token-pbg4s        kubernetes.io/service-account-token   3      96s
kubernetes-dashboard-certs         Opaque                                0      48m
kubernetes-dashboard-key-holder    Opaque                                2      47m
kubernetes-dashboard-token-ksd9f   kubernetes.io/service-account-token   3      48m
View the token
[root@k8s-master01 opt]# kubectl describe secret dashboard-admin-token-pbg4s -n kube-system
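To print only the token itself instead of the full describe output (a sketch; the secret name matches the one created above):
kubectl -n kube-system get secret dashboard-admin-token-pbg4s -o jsonpath='{.data.token}' | base64 -d; echo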
Log in with the token; deployment complete.