1. Introduction
- Kubernetes (K8S for short) is an open-source container cluster management system that automates the deployment, scaling, and maintenance of container clusters. It is both a container orchestration tool and a leading distributed-architecture solution built on container technology. On top of Docker, it gives containerized applications deployment, scheduling, service discovery, and dynamic scaling, making large container clusters far easier to manage. [In short: Kubernetes is a container cluster management tool.]
2. Kubernetes Architecture Diagram
3. Key Concepts
3.1 cluster
- A cluster is a pool of compute, storage, and network resources; k8s uses these resources to run container-based applications.
3.2 master
- The master is the brain of the cluster. Its main job is scheduling: deciding where each application should run. The master runs a Linux operating system and can be a physical or a virtual machine. For high availability, multiple masters can be run.
3.3 node
- A node's job is to run container applications. Nodes are managed by the master; each node monitors and reports container status and manages container lifecycles as instructed by the master. Nodes also run Linux and can be physical or virtual machines.
3.4 pod
- The pod is the smallest unit of work in k8s. Each pod contains one or more containers, and the containers in a pod are scheduled by the master onto a node as a single unit.
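For illustration, a minimal single-container pod might be declared and created like this (the name and the nginx image are arbitrary examples, not from the original article):
cat > nginx-pod.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.17
EOF
kubectl apply -f nginx-pod.yaml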
3.5 controller
- k8s usually does not create pods directly; it manages them through controllers. A controller defines a pod's deployment characteristics, such as how many replicas to run and what kind of node to run them on. To cover different business scenarios, k8s provides several kinds of controller, including deployment, replicaset, daemonset, statefulset, and job.
3.6 deployment
- The most commonly used controller. A deployment can manage multiple replicas of a pod and ensures the pods keep running in the desired state.
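As a sketch, a deployment that keeps 2 replicas of a pod running might look like this (names and image are again illustrative):
cat > nginx-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                 # desired number of pod replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
EOF
kubectl apply -f nginx-deployment.yaml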
3.7 replicaset
- Implements multi-replica management of pods. Using a deployment automatically creates a replicaset; in other words, a deployment manages its pod replicas through a replicaset, so we normally never need to use replicasets directly.
3.8 daemonset
- Used when each node should run at most one replica of a pod. As the name suggests, a daemonset is typically used to run daemons (the flannel manifest in section 4.11 is a real example).
3.9 statefulset
- Guarantees that each pod replica keeps the same name for its entire lifecycle, which no other controller provides. With other controllers, when a pod fails and is deleted and restarted, its name changes; a statefulset also guarantees that replicas are started, updated, and deleted in a fixed order.
3.10 job
- Used for applications that run to completion and are then deleted, whereas pods under other controllers typically run continuously.
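A sketch of a job that runs one command to completion (illustrative names):
cat > hello-job.yaml << EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello from a one-off job"]
      restartPolicy: Never    # a job's pods must not be restarted after completing
EOF
kubectl apply -f hello-job.yaml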
3.11 service
- A k8s service defines how the outside world accesses a particular set of pods. A service has its own IP and port, and it load-balances across the pods it fronts.
- Running containerized pods and providing access to them are thus two separate tasks, handled by controllers and services respectively.
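A sketch of a service that load-balances across the pods labeled app=nginx from the deployment sketch above (ports are illustrative):
cat > nginx-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # the set of pods this service fronts
  ports:
  - port: 80          # the service's own port
    targetPort: 80    # the container port traffic is forwarded to
EOF
kubectl apply -f nginx-service.yaml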
3.12 namespace
- Namespaces logically divide one physical cluster into multiple virtual clusters, each of which is a namespace. Resources in different namespaces are completely isolated from each other.
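For example (the namespace name is arbitrary):
kubectl create namespace dev    # create a virtual cluster named "dev"
kubectl get namespaces          # list namespaces, including the built-in default and kube-system
kubectl get pod -n dev          # resources are always addressed per namespace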
4. Installing and Deploying a K8S Cluster
Note: steps 4.2 through 4.9 must be run on all nodes; steps 4.10 and 4.11 only on the Master node; step 12 only on the worker nodes.
If step 4.10, 4.11, or 12 fails, the environment can be cleaned up with kubeadm reset and then reinstalled.
4.1 Machine preparation
Machine IP | hostname
192.168.182.135 | k8s-master
192.168.182.136 | k8s-node1
192.168.182.137 | k8s-node2
4.2 Disable the firewall
- systemctl stop firewalld
- systemctl disable firewalld
4.3 Disable SELinux
- setenforce 0 # temporary
- sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # permanent (takes effect after reboot)
4.4 Disable swap
- swapoff -a # temporary; swap is disabled mainly for performance reasons
- free # verify that swap is now off
- sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
4.5 Map hostnames to IPs
- $ vim /etc/hosts
- Add the following lines:
192.168.182.135 k8s-master
192.168.182.136 k8s-node1
192.168.182.137 k8s-node2
Set the hostnames
k8s-master:
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostname k8s-master ## takes effect immediately, lost on reboot
[root@localhost ~]# hostnamectl set-hostname k8s-master ## persists across reboots
k8s-node1:
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostname k8s-node1 ## takes effect immediately, lost on reboot
[root@localhost ~]# hostnamectl set-hostname k8s-node1 ## persists across reboots
k8s-node2:
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# hostname k8s-node2 ## takes effect immediately, lost on reboot
[root@localhost ~]# hostnamectl set-hostname k8s-node2 ## persists across reboots
4.6 Pass bridged IPv4 traffic to the iptables chains
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
4.7 Install Docker
# Add the docker yum repository
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install
$ yum -y install docker-ce
# Enable at boot
$ systemctl enable docker
# Start docker
$ systemctl start docker
4.8 Add the Aliyun YUM repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
4.9 Install kubeadm, kubelet, and kubectl
kubelet # runs on every cluster node; responsible for starting pods and containers.
kubeadm # used to bootstrap (initialize) the cluster.
kubectl # the Kubernetes command-line tool; with kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.
- When deploying kubernetes, the master and worker nodes must run the same version; a version mismatch causes strange, hard-to-diagnose problems. This section shows how to install a specific Kubernetes version with yum on CentOS.
- To install a pinned version, use the following yum syntax (a concrete example follows this list):
- yum install -y kubelet-<version> kubectl-<version> kubeadm-<version>
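For example, to pin all three components to the v1.17.0 release that this article ends up using:
$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0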
# Here the latest version is installed
$ yum install kubelet kubeadm kubectl -y
# Versions installed at the time of writing: kubeadm.x86_64 0:1.17.0-0; kubectl.x86_64 0:1.17.0-0; kubelet.x86_64 0:1.17.0-0
Start kubelet
# kubelet cannot actually be started yet, because the cluster configuration does not exist; for now, just enable it at boot
$ systemctl enable kubelet
4.10 Deploy the Kubernetes Master
1) Initialize with kubeadm
$ kubeadm init \
--apiserver-advertise-address=192.168.182.135 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.17.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
# --image-repository string: where to pull images from (available since 1.13). The default is k8s.gcr.io; here it is pointed at the Chinese mirror registry.aliyuncs.com/google_containers.
# --kubernetes-version string: the kubernetes version to install. The default, stable-1, makes kubeadm fetch the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (v1.17.0 here) skips that network request.
# --apiserver-advertise-address: which of the Master's interfaces to use when talking to the other cluster nodes. If the Master has multiple interfaces, it is best to specify one explicitly; otherwise kubeadm picks the interface with the default gateway.
# --pod-network-cidr: the pod network range. Kubernetes supports several network add-ons, each with its own requirements for --pod-network-cidr; it is set to 10.244.0.0/16 here because the flannel add-on installed below requires exactly this CIDR.
Note:
- At least 2 CPUs and 2 GB of RAM are recommended. This is not a hard requirement; a cluster can be built with 1 CPU / 1 GB, but:
- with a single CPU, initializing the master reports [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
- and deploying add-ons or pods may produce warnings such as FailedScheduling: Insufficient cpu, Insufficient memory
- If you see these messages, the VM has only 1 core; give the master VM more cores.
When initialization succeeds, the output ends with a join command like the following:
# To be run later on the node machines; if you re-initialize, the token and hash change, so always use the values from your own kubeadm init output (node)
kubeadm join 192.168.182.135:6443 --token 7rpjfp.n3vg39zrgstzr0rs \
--discovery-token-ca-cert-hash sha256:8c5aa1a4e82e70fed62b02e8d7bff54c801251b5ee40c7cec68a8c214dcc1234
# The control-plane images pulled during init can be listed with:
$ docker images
2) Configure the kubectl tool
Copy and run the following commands as-is (master):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
After this, kubectl commands can be used directly (master).
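A quick sanity check:
$ kubectl get nodes
# The master should be listed; its STATUS stays NotReady until the network add-on from step 4.11 is installed.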
4.11 Install a Pod network add-on (CNI) (master)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
or:
kubectl apply -f kube-flannel.yml
$ cat kube-flannel.yml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
# The original manifest continues with four more DaemonSets -- kube-flannel-ds-arm64,
# kube-flannel-ds-arm, kube-flannel-ds-ppc64le and kube-flannel-ds-s390x -- identical
# to the amd64 one above except for the arch value in the nodeAffinity and the matching
# image tag (e.g. quay.io/coreos/flannel:v0.11.0-arm64). They are omitted here for
# brevity; the full file is available at the URL above.
2) Check whether the deployment succeeded
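For example, all pods in kube-system, including the kube-flannel-ds-* pods, should eventually reach Running:
$ kubectl get pods -n kube-system -o wide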
3) Check the nodes again; their status should now be Ready
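For example:
$ kubectl get nodes
# The master should now report STATUS Ready.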
12. Join the worker nodes to the cluster (node)
- To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:
- Copy the command and run it on each node, substituting your own IP.
- The --token here comes from the earlier kubeadm init output; if you did not record it, you can look it up with kubeadm token list.
kubeadm join 192.168.182.135:6443 --token 7rpjfp.n3vg39zrgstzr0rs \
--discovery-token-ca-cert-hash sha256:8c5aa1a4e82e70fed62b02e8d7bff54c801251b5ee40c7cec68a8c214dcc1234
Check from the master:
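For example:
$ kubectl get nodes
# k8s-node1 and k8s-node2 should appear in the list and become Ready once flannel is running on them.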
Note: to re-initialize from scratch, run these cleanup steps first:
docker rm -f `docker ps -a -q`
rm -rf /etc/kubernetes/*
rm -rf /var/lib/etcd/
kubeadm reset
Problem 1: No networks found in /etc/cni/net.d
- Fix: make sure /etc/cni/net.d/10-flannel.conflist exists with the following content (cat /etc/cni/net.d/10-flannel.conflist):
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
Problem 2: kubelet warns that Docker's cgroup driver (cgroupfs) differs from the recommended systemd driver:
- Fix:
# Change docker's startup parameters
$ vim /usr/lib/systemd/system/docker.service
#ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
# Restart docker
$ systemctl daemon-reload
$ systemctl restart docker
13. Supplement: removing a node (run on the master)
Step 1: put the node into maintenance mode (k8s-node1 is the node name)
[root@k8s-master ~]# kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
node/k8s-node1 cordoned
node/k8s-node1 drained
Step 2: delete the node
[root@k8s-master ~]# kubectl delete node k8s-node1
node "k8s-node1" deleted
Step 3: check the nodes
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   18m    v1.17.0
k8s-node2    Ready    <none>   5m7s   v1.17.0
You can see that the k8s-node1 node has been removed.
14. To add that node back into the cluster afterwards, the following steps are needed
Step 1: stop kubelet (on the node being added back)
[root@k8s-node1 ~]# systemctl stop kubelet
Step 2: delete the leftover files
[root@k8s-node1 ~]# rm -rf /etc/kubernetes/*
Step 3: join the node
kubeadm join 192.168.182.135:6443 --token 7rpjfp.n3vg39zrgstzr0rs \
--discovery-token-ca-cert-hash sha256:8c5aa1a4e82e70fed62b02e8d7bff54c801251b5ee40c7cec68a8c214dcc1234
Step 4: check the nodes
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   24m   v1.17.0
k8s-node1    NotReady   <none>   6s    v1.17.0
k8s-node2    Ready      <none>   10m   v1.17.0
15. Rejoining the cluster when the token was not recorded
Step 1: on the master node, get the token
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
7rpjfp.n3vg39zrgstzr0rs   23h   2019-12-30T20:01:50+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
Step 2: get the sha256 hash of the CA certificate
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
8c5aa1a4e82e70fed62b02e8d7bff54c801251b5ee40c7cec68a8c214dcc1234
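As a side note (assuming kubeadm v1.9 or later), both lookups can be skipped by asking kubeadm to print a ready-made join command:
[root@k8s-master ~]# kubeadm token create --print-join-command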
Step 3: on the worker node, stop kubelet
[root@k8s-node1 ~]# systemctl stop kubelet
Step 4: delete the leftover files
[root@k8s-node1 ~]# rm -rf /etc/kubernetes/*
Step 5: join the cluster
Use the master node's IP; the port is 6443.
Note that the generated certificate hash is prefixed with sha256::
kubeadm join 192.168.182.135:6443 --token 7rpjfp.n3vg39zrgstzr0rs --discovery-token-ca-cert-hash sha256:8c5aa1a4e82e70fed62b02e8d7bff54c801251b5ee40c7cec68a8c214dcc1234
16. Common commands
16.1 View nodes
# -o wide shows additional detail columns (internal IP, OS image, kernel version, etc.)
kubectl get node -o wide
16.2 Create a deployment
kubectl run net-test --image=alpine --replicas=2 sleep 10
16.3 View deployment details
kubectl describe deployment net-test
16.4 Delete a deployment
kubectl delete deployment net-test -n default
16.5 View pods
kubectl get pod -o wide
16.6 View pod details
kubectl describe pod net-test-5767cb94df-7lwtq
16.7 Scale up / down manually
# Scale a deployment out by raising the replica count:
$ kubectl scale deployment net-test --replicas=4
# To scale in, lower the replica count:
$ kubectl scale deployment net-test --replicas=2
17. Setting up the K8S Dashboard
1. Download the dashboard manifest:
curl -o kubernetes-dashboard.yaml https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
If that address is unreachable, use the kubernetes-dashboard.yaml listing below; the UI is exposed externally through a nodePort.

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
    verbs: ["get", "update", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: 192.168.182.135:9090/library/kubernetes-dashboard-amd64:v1.8.3
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31080
  selector:
    k8s-app: kubernetes-dashboard
2. Download the image archive [k8s.gcr.io#kubernetes-dashboard-amd64.tar]:
Link: https://pan.baidu.com/s/1vQUGpx89TZw-HyxCVE972w (extraction code: 5pui)
3. Create kubernetes-dashboard. Note: first edit the image address in kubernetes-dashboard.yaml to a registry you can actually pull from.
kubectl create -f kubernetes-dashboard.yaml
4. Because I had installed it once before, creation failed with:

Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.apps "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
5. Remove the previous installation:
kubectl delete -f kubernetes-dashboard.yaml
6. Reinstall the dashboard:
kubectl create -f kubernetes-dashboard.yaml
7. Get the login token:
kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token
8. If Chrome refuses the page (self-signed certificate), open it in Firefox instead: https://nodeIp:nodePort
Check which node the dashboard pod was scheduled on: $ kubectl get pods -n kube-system -o wide
9. On the login page, paste the token obtained above into the token field.
You can then sign in with the token.
Fixing the certificate problem:
# Define an alias
$ alias ksys='kubectl -n kube-system'
# Inspect the secrets
$ ksys get secret
# Delete the old certs secret
$ ksys delete secret kubernetes-dashboard-certs
# Create a new certificate and secret
$ openssl genrsa -out dashboard.key 2048
$ openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.182.139'
$ openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
$ openssl x509 -in dashboard.crt -text -noout
$ ksys create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt
~~~ To be continued ~~~