1. System Requirements
1.1 Installation requirements
Operating system:
Ubuntu 16.04+
Debian 9
CentOS 7
RHEL 7
Fedora 25/26 (best-effort)
Other requirements:
2 GB of RAM or more, 2 or more CPU cores
1.2 Three servers are required, each with a static IP address:
192.168.56.51
192.168.56.52
192.168.56.53
[root@k8s-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=3e8c39d8-c82e-46b4-ac8d-7d331e1360c6
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.56.52
PREFIX=24
GATEWAY=192.168.56.2
DNS1=192.168.56.2
DNS2=8.8.8.8
IPV6_PRIVACY=no
1.3 Configure hostnames
# k8s-master
hostname k8s-master                    # set for the current session
hostnamectl set-hostname k8s-master    # persist across reboots
hostname                               # verify
k8s-master
# k8s-node1
hostname k8s-node1
hostnamectl set-hostname k8s-node1
# k8s-node2
hostname k8s-node2
hostnamectl set-hostname k8s-node2
1.4 Configure name resolution (this must be done on all servers)
echo "192.168.56.51 k8s-master" >>/etc/hosts
echo "192.168.56.52 k8s-node1" >>/etc/hosts
echo "192.168.56.53 k8s-node2" >>/etc/hosts
[root@k8s-master ~]# tail -3 /etc/hosts
192.168.56.51 k8s-master
192.168.56.52 k8s-node1
192.168.56.53 k8s-node2
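An optional quick check (assuming ICMP is permitted between the hosts): confirm each name resolves and responds from every server:
for h in k8s-master k8s-node1 k8s-node2; do ping -c 1 $h; done   # each name should resolve to its 192.168.56.x address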
1.5 Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent, takes effect after reboot
setenforce 0   # temporary, takes effect immediately
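To confirm the change took effect, getenforce (part of the standard SELinux tools) should no longer report Enforcing:
getenforce   # prints Permissive now, and Disabled after the next reboot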
1.6 Disable swap
swapoff -a   # temporary
free   # the Swap row should now show all zeros
vi /etc/fstab   # permanent: comment out the swap line, as below
#/dev/mapper/centos_template-swap swap swap defaults 0 0   # comment out this line
free   # check again: the Swap row should remain all zeros
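If you prefer a non-interactive edit over vi, a sed one-liner can comment out the swap entry; this is a sketch, so verify /etc/fstab afterwards:
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab   # prefix '#' to any active fstab line with a swap entry
grep swap /etc/fstab                           # confirm the swap line is commented out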
1.7 Install NTP and configure time synchronization
[root@k8s-node2 ~]# yum install ntp wget -y
[root@k8s-master ~]# ntpdate ntp.api.bz
28 May 19:15:43 ntpdate[10356]: step time server 114.118.7.161 offset 0.908151 sec
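The ntpdate run above fixes the clock only once. To keep the clocks synchronized continuously, you can enable the ntpd service that ships with the ntp package (a sketch using the default /etc/ntp.conf):
systemctl enable ntpd
systemctl start ntpd
ntpq -p   # lists the upstream servers ntpd is polling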
1.8 Pass bridged IPv4 traffic to iptables chains
cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply the settings
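If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module has not been loaded yet; load it first and re-apply:
modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables   # should print 1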
1.9 Add China-mirror yum and repo sources
1) Configure the Aliyun base repo:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
cat /etc/yum.repos.d/CentOS-Base.repo
2) Configure the Aliyun EPEL repo:
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat /etc/yum.repos.d/epel.repo
3) Rebuild the yum cache to test:
yum clean all
yum makecache fast
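An optional check that the new repos are active:
yum repolist enabled | egrep -i 'base|epel'   # both mirrors should appear with non-zero package counts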
# log out and log back in
exit
2. Install the Kubernetes Cluster
2.1 Install Docker
# install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# configure the Aliyun repo
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# install Docker (pin this specific version)
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
# check the version
[root@k8s-master ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
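As an optional extra (an assumption of mine, not part of the original steps): pulls from Docker Hub can be sped up with a registry mirror. The mirror URL below is only an example placeholder; substitute one you trust:
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker   # restart the daemon so the mirror takes effect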
2.2 Add the Aliyun Kubernetes YUM repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.3 Install kubeadm, kubelet, and kubectl (note: run on all three servers)
kubeadm: the command that bootstraps the cluster
kubelet: the agent that runs workloads on every node in the cluster
kubectl: the command-line management tool
Because releases change frequently, pin the version number here:
yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0
# do not start kubelet at this point; just enable it at boot
systemctl enable kubelet
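An optional check that all three components landed at the pinned version (flag spellings per kubeadm/kubectl 1.14):
kubeadm version -o short            # v1.14.0
kubelet --version                   # Kubernetes v1.14.0
kubectl version --client --short    # Client Version: v1.14.0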
2.4 Initialize the Kubernetes master (note: run only on the master node; keep a second terminal window open, as the token printed here is needed shortly)
kubeadm init \
--apiserver-advertise-address=192.168.56.51 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.14.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Note: change --apiserver-advertise-address to the IP address of the host you are deploying on.
When initialization completes, a kubeadm join command with a token is printed; copy it, it is needed later to join the nodes:
kubeadm join 192.168.56.51:6443 --token fwcgkb.1g65pag18m86w71e \
--discovery-token-ca-cert-hash sha256:980bd9b4dafd518650e7ccf69c76c81b8c8e55084b470e3daa3ed5dafdc56ba0
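The token expires after 24 hours by default. If it expires or the join command is lost before the nodes join, generate a fresh one on the master:
kubeadm token create --print-join-command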
# list the pulled images (this capture is from an earlier v1.13.3 run; with --kubernetes-version v1.14.0 the tags will read v1.14.0)
[root@k8s-master ~]# docker images
REPOSITORY                                                         TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver             v1.13.3   fe242e556a99   3 months ago    181MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.13.3   0482f6400933   3 months ago    146MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.13.3   98db19758ad4   3 months ago    80.3MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.13.3   3a6f709e97a0   3 months ago    79.6MB
registry.aliyuncs.com/google_containers/coredns                    1.2.6     f59dcacceff4   6 months ago    40MB
registry.aliyuncs.com/google_containers/etcd                       3.2.24    3cab8e1b9802   8 months ago    220MB
registry.aliyuncs.com/google_containers/pause                      3.1       da86e6ba6ca1   17 months ago   742kB
2.5 Configure the kubectl tool by running (note: master only):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
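If you are working as root, the kubeadm init output also offers an alternative: point kubectl at the admin kubeconfig directly instead of copying it:
export KUBECONFIG=/etc/kubernetes/admin.conf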
2.6 This generates a config file under root's home directory (note: master only)
[root@k8s-master ~]# ls .kube/
cache config http-cache
2.7 Check the nodes (NotReady is normal at this point because the container network is not installed yet; note: master only)
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 4m v1.13.3
2.8 Install the pod network add-on (CNI)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Make sure the servers can reach the quay.io registry.
If the download fails, pull from this mirror image instead:
docker pull lizhenliang/flannel:v0.11.0-amd64
Because the image is hosted abroad, it cannot be pulled without a proxy; instead, upload a pre-packaged image tarball to the servers and load it into Docker.
[root@k8s-master ~]# rz -E
rz waiting to receive.
[root@k8s-master ~]# ll
total 54116
-rw-------. 1 root root 1589 Apr 30 20:16 anaconda-ks.cfg
-rw-r--r-- 1 root root 55390720 May 28 22:34 flannelv0.11.0-amd64.tar
-rw-r--r-- 1 root root 12306 May 28 22:07 kube-flannel.yml
##################################################
# on another server with internet access, pull the image, then save it to a tarball
docker pull lizhenliang/flannel:v0.11.0-amd64
docker images
docker image save lizhenliang/flannel:v0.11.0-amd64 >flannelv0.11.0-amd64.tar
###################################################
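To copy the tarball to the two worker nodes, scp works (a sketch; adjust paths and credentials to your environment):
scp flannelv0.11.0-amd64.tar root@192.168.56.52:~/
scp flannelv0.11.0-amd64.tar root@192.168.56.53:~/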
# load the image tarball on the deployment servers (note: load it on all three servers)
[root@k8s-master ~]# docker load <flannelv0.11.0-amd64.tar
7bff100f35cb: Loading layer [==================================================>] 4.672MB/4.672MB
5d3f68f6da8f: Loading layer [==================================================>] 9.526MB/9.526MB
9b48060f404d: Loading layer [==================================================>] 5.912MB/5.912MB
3f3a4ce2b719: Loading layer [==================================================>] 35.25MB/35.25MB
9ce0bb155166: Loading layer [==================================================>] 5.12kB/5.12kB
Loaded image: lizhenliang/flannel:v0.11.0-amd64
# check the newly loaded flannel image (the CREATED/SIZE columns were clipped in this capture)
[root@k8s-master ~]# docker images
REPOSITORY                                                         TAG             IMAGE ID
registry.aliyuncs.com/google_containers/kube-proxy                 v1.13.3         98db19758ad4
registry.aliyuncs.com/google_containers/kube-apiserver             v1.13.3         fe242e556a99
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.13.3         0482f6400933
registry.aliyuncs.com/google_containers/kube-scheduler             v1.13.3         3a6f709e97a0
lizhenliang/flannel                                                v0.11.0-amd64   ff281650a721   # this is the loaded image
registry.aliyuncs.com/google_containers/coredns                    1.2.6           f59dcacceff4
registry.aliyuncs.com/google_containers/etcd                       3.2.24          3cab8e1b9802
registry.aliyuncs.com/google_containers/pause                      3.1             da86e6ba6ca1
# now re-run this command and the installation succeeds
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
# if a previous attempt did not succeed, remove it first with the command below
# delete the resources defined in the yaml
[root@k8s-master ~]# kubectl delete -f kube-flannel.yml
podsecuritypolicy.extensions "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.extensions "kube-flannel-ds-amd64" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted
# check the status: everything is Running
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78d4cf999f-pwxc2 1/1 Running 0 51m
coredns-78d4cf999f-qgc94 1/1 Running 0 51m
etcd-k8s-master 1/1 Running 3 50m
kube-apiserver-k8s-master 1/1 Running 5 50m
kube-controller-manager-k8s-master 1/1 Running 4 50m
kube-flannel-ds-amd64-fvgfs 1/1 Running 0 93s
kube-proxy-6sb9v 1/1 Running 3 51m
kube-scheduler-k8s-master 1/1 Running 3 50m
# check the node status again; it is now Ready
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 54m v1.13.3
# the following shows how to find the image pull addresses used by the manifest
# download the manifest
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2019-05-28 22:07:05--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12306 (12K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[========================================================>] 12,306      --.-K/s   in 0.004s

2019-05-28 22:07:06 (2.71 MB/s) - ‘kube-flannel.yml’ saved [12306/12306]

[root@k8s-master ~]# ll
total 20
-rw-------. 1 root root  1589 Apr 30 20:16 anaconda-ks.cfg
-rw-r--r--  1 root root 12306 May 28 22:07 kube-flannel.yml
# view the contents and locate the image: pull addresses (output abridged here; the full manifest
# also defines a PodSecurityPolicy, RBAC objects, a ServiceAccount, the kube-flannel-cfg ConfigMap,
# and parallel DaemonSets for the arm64, arm, ppc64le, and s390x architectures)
[root@k8s-master ~]# cat kube-flannel.yml
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
...
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64   # <-- the image pull address
...
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64   # <-- the image pull address
...
# at this point the master is deployed successfully.
3. Deploy the Kubernetes Worker Nodes
3.1 Join the worker nodes
To add new nodes to the cluster, run the kubeadm join command that kubeadm init printed.
Join the nodes to the cluster (note: run only on the two worker nodes; the procedure is the same for each):
kubeadm join 192.168.56.51:6443 --token fwcgkb.1g65pag18m86w71e --discovery-token-ca-cert-hash sha256:980bd9b4dafd518650e7ccf69c76c81b8c8e55084b470e3daa3ed5dafdc56ba0
# check whether the nodes were added successfully (note: run on the master)
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h11m v1.14.0
k8s-node1    Ready    <none>   3h5m    v1.14.0   # added successfully
k8s-node2    Ready    <none>   3h      v1.14.0   # added successfully
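If a node stays NotReady after joining, the kubelet log on that node usually shows why (standard systemd tooling):
journalctl -u kubelet -f   # follow the kubelet log on the affected node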
4. Test the Kubernetes Cluster
Create a pod in the Kubernetes cluster and verify that it runs normally:
# on another server with internet access, pull the nginx image and save it to a tarball
docker pull nginx
docker image save nginx:latest >nginx.tar
# load it on the deployment servers (note: load it on all three servers)
[root@k8s-master ~]# docker load < nginx.tar
6270adb5794c: Loading layer [==================================================>] 58.45MB/58.45MB
6ba094226eea: Loading layer [==================================================>] 54.59MB/54.59MB
332fa54c5886: Loading layer [==================================================>] 3.584kB/3.584kB
Loaded image: nginx:latest
# create the deployment and expose its port
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
# verify the deployment was created and the port is exposed
# check the pod and service
[root@k8s-master ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-65f88748fd-6859x   1/1     Running   0          133m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        3h31m
service/nginx        NodePort    10.1.255.203   <none>        80:31118/TCP   3h16m

# show detailed pod information
[root@k8s-master ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
pod/nginx-65f88748fd-6859x   1/1     Running   0          133m   10.244.2.2   k8s-node2   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        3h31m   <none>
service/nginx        NodePort    10.1.255.203   <none>        80:31118/TCP   3h16m   app=nginx
# access the website
URL: http://NodeIP:Port, for example http://192.168.56.51:31118 (with a NodePort service, any node's IP works)
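A quick headless check from any of the servers (assumes curl is installed; port 31118 is taken from the kubectl get svc output above):
curl -I http://192.168.56.51:31118   # an HTTP/1.1 200 OK response means nginx is serving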
5. Deploy the Dashboard
5.1 Create the service account and bind the default cluster-admin cluster role; apply the manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
The default image cannot be pulled from inside China, so change the image address to: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
By default the Dashboard is only reachable from inside the cluster; change the Service to type NodePort to expose it externally:
# edit the manifest kubernetes-dashboard.yaml; only the following two places need to change
# ------------------- Dashboard Deployment ------------------- #
        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # swap in the mirror image
# ------------------- Dashboard Service ------------------- #
spec:
  type: NodePort   # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # add this line
#####################################################
Because the image registry is abroad, download the image elsewhere and load it onto the servers (note: run on all three servers)
[root@k8s-master ~]# ll
total 338072
-rw-------. 1 root root 1589 Apr 30 20:16 anaconda-ks.cfg
-rw-r--r-- 1 root root 55390720 May 28 22:34 flannelv0.11.0-amd64.tar
-rw-r--r-- 1 root root 55390720 May 28 22:34 flannelv0.11.0-amd64.tar.0
-rw-r--r-- 1 root root 10599 May 29 21:29 kube-flannel.yml
-rw-r--r-- 1 root root 122310656 May 29 21:46 kubernetes-dashboard-amd64_v1.10.1.tar
-rw-r--r-- 1 root root 4595 May 29 21:34 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root 113059840 May 29 21:19 nginx.tar
# load it on all three servers
[root@k8s-master ~]# docker load < kubernetes-dashboard-amd64_v1.10.1.tar
fbdfe08b001c: Loading layer [==================================================>] 122.3MB/122.3MB
Loaded image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
# list the images
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 53f3fd8007f7 3 weeks ago 109MB
registry.aliyuncs.com/google_containers/kube-proxy v1.14.0 5cd54e388aba 2 months ago 82.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.14.0 b95b1efa0436 2 months ago 158MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.14.0 00638a24688b 2 months ago 81.6MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.14.0 ecf910f40d6e 2 months ago 210MB
lizhenliang/flannel v0.11.0-amd64 ff281650a721 4 months ago 52.6MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 4 months ago 52.6MB
registry.aliyuncs.com/google_containers/coredns 1.3.1 eb516548c180 4 months ago 40.3MB
lizhenliang/kubernetes-dashboard-amd64 v1.10.1 f9aed6605b81 5 months ago 122MB
registry.aliyuncs.com/google_containers/etcd 3.3.10 2c4adeb21b4f 5 months ago 258MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 17 months ago 742kB
#####################################################
# now the actual deployment:
Create kubernetes-dashboard (note: run only on the master)
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-8686dcc4fd-hldfb 1/1 Running 0 120m
coredns-8686dcc4fd-tnxhp 1/1 Running 0 120m
etcd-k8s-master 1/1 Running 0 119m
kube-apiserver-k8s-master 1/1 Running 0 119m
kube-controller-manager-k8s-master 1/1 Running 0 119m
kube-flannel-ds-amd64-7kn8b 1/1 Running 0 114m
kube-flannel-ds-amd64-gpqp6 1/1 Running 0 116m
kube-flannel-ds-amd64-pq9cw 1/1 Running 1 110m
kube-proxy-95rcx 1/1 Running 0 114m
kube-proxy-mr58b 1/1 Running 0 120m
kube-proxy-npw2h 1/1 Running 0 110m
kube-scheduler-k8s-master 1/1 Running 0 119m
# the dashboard pod is not listed above, so the deployment did not come up; delete it before retrying
[root@k8s-master ~]# kubectl delete -f kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" deleted
serviceaccount "kubernetes-dashboard" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted
deployment.apps "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted
5.2 Create the account (note: master only)
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
5.3 Get the login token (note: master only)
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-cgk7f
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: ce06ce47-821a-11e9-a6d0-0050563847cd
Type: kubernetes.io/service-account-token
Data
====
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiWRtaW4tdG9rZW4tY2drN2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJa3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiY2UwNmNlNDctODIxYS0xMWU5LWE2ZDAtMDic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.bn1PQo1VV5h8lMHa7Y3oebqJze_ApNbhchn5Kl6tLA-5f6M50r7ZS5_FrMyIE5Ay22hpE5yXc907jcztrbvPz6_J4V1NPKqW1HDtuvQ3DhqkcVBJTC01nvHJJ6gdU9mQIOHPgCHb8cOT67CWEX8MAvf6LqyNZ61rT-zxW1qg69Ee5Er33wJmwZSPgk-Sbi9w5mDhCC2-OYpjQZFoEmD4lefvvHCmyfIxOLbJtZyoPR8TCYj5jlX_iH4k9k8lLSOYwTN9aQaut-vmXlolcZO5MDaKJsr_v-3HbNA
ca.crt: 1025 bytes
5.4 Create the service account and ClusterRoleBinding manifest
vi dashboard-adminuser.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
# apply the authorization
kubectl apply -f dashboard-adminuser.yaml
# without this authorization, the Dashboard reports "forbidden" errors after you log in
# check the port exposed by kubernetes-dashboard
[root@k8s-master ~]# kubectl get service -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.1.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h45m
kubernetes-dashboard   NodePort    10.1.117.120   <none>        443:30001/TCP            50m   # note: the service port is 443, so the Dashboard is served over https
5.5 Find the login token
[root@k8s-master ~]# kubectl get secret -n kube-system
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-6qb84              kubernetes.io/service-account-token   3      3h49m
bootstrap-signer-token-drvcz                     kubernetes.io/service-account-token   3      3h49m
bootstrap-token-lshbrx                           bootstrap.kubernetes.io/token         7      3h50m
certificate-controller-token-dm5zh               kubernetes.io/service-account-token   3      3h49m
clusterrole-aggregation-controller-token-7s7mn   kubernetes.io/service-account-token   3      3h49m
coredns-token-b2hh4                              kubernetes.io/service-account-token   3      3h50m
cronjob-controller-token-5nbqm                   kubernetes.io/service-account-token   3      3h49m
daemon-set-controller-token-x2664                kubernetes.io/service-account-token   3      3h49m
dashboard-admin-token-cgk7f                      kubernetes.io/service-account-token   3      108m
default-token-tsc8b                              kubernetes.io/service-account-token   3      3h49m
deployment-controller-token-vnd4g                kubernetes.io/service-account-token   3      3h49m
disruption-controller-token-zph4w                kubernetes.io/service-account-token   3      3h49m
endpoint-controller-token-k666g                  kubernetes.io/service-account-token   3      3h49m
expand-controller-token-vj4ng                    kubernetes.io/service-account-token   3      3h49m
flannel-token-2kgdh                              kubernetes.io/service-account-token   3      3h46m
generic-garbage-collector-token-zr6hw            kubernetes.io/service-account-token   3      3h49m
horizontal-pod-autoscaler-token-r9p9m            kubernetes.io/service-account-token   3      3h50m
job-controller-token-zx8ld                       kubernetes.io/service-account-token   3      3h49m
kube-proxy-token-94b87                           kubernetes.io/service-account-token   3      3h50m
kubernetes-dashboard-admin-token-6kp9h           kubernetes.io/service-account-token   3      54m    # this is the token with admin privileges
kubernetes-dashboard-certs                       Opaque                                0      55m
kubernetes-dashboard-key-holder                  Opaque                                2      111m
kubernetes-dashboard-token-mzqmk                 kubernetes.io/service-account-token   3      55m
namespace-controller-token-bh4fj                 kubernetes.io/service-account-token   3      3h50m
node-controller-token-4kgk7                      kubernetes.io/service-account-token   3      3h50m
persistent-volume-binder-token-2blr5             kubernetes.io/service-account-token   3      3h49m
pod-garbage-collector-token-4zd82                kubernetes.io/service-account-token   3      3h49m
pv-protection-controller-token-j85rh             kubernetes.io/service-account-token   3      3h49m
pvc-protection-controller-token-scmqx            kubernetes.io/service-account-token   3      3h49m
replicaset-controller-token-8llf7                kubernetes.io/service-account-token   3      3h49m
replication-controller-token-kzgcp               kubernetes.io/service-account-token   3      3h49m
resourcequota-controller-token-5kpt7             kubernetes.io/service-account-token   3      3h49m
service-account-controller-token-d8j8s           kubernetes.io/service-account-token   3      3h49m
service-controller-token-hzg78                   kubernetes.io/service-account-token   3      3h49m
statefulset-controller-token-cdrc8               kubernetes.io/service-account-token   3      3h50m
token-cleaner-token-jg8j7                        kubernetes.io/service-account-token   3      3h49m
ttl-controller-token-lw567                       kubernetes.io/service-account-token   3      3h49m
5.6 Copy the token below and choose token-based sign-in.
[root@k8s-master ~]# kubectl describe secret kubernetes-dashboard-admin-token-6kp9h -n kube-system
Name:         kubernetes-dashboard-admin-token-6kp9h
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: 5d4c6518-8222-11e9-a6d0-0050563847cd

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      # copy the token below and use it to sign in
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmV
ybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi02a3A5aCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWU
iOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjVkNGM2NTE4LTgyMjItMTFlOS1hNmQwLTAwNTA1NjM4NDdjZCIsInN1YiI6InN
5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Vhxr_IxRyLuNZ5TDRUcspf1hnOuQ2P-_HlNRkmvdEc0s6TnxhvYn6G1sevJXQ671jBAIefWowoWGQpT-TTXUe8iOE7
yjG04xqRgqJhodvyF1Tv5T5praLP9FNzBcvR5V2BA9u3gd66-9UEsJ2cvwyODlIcbpNKGLLP9K-jw-cDRlpWGFasAJZjFSliJ7HpwZd5zLsaB4BElLZXlPu98TvkcJVVt0S9slWIbWgtlL7wq9zdSoMRZQLfEba54INofE1fWiLI
jkqaW9fmX3vH4FFe7qP0LBiSY_EYMw_Hk_B7Ck6EkFVrtlggkLeEry7UsYO3HoKhqt1MQts55hP8uy7g
5.7 Log in to the Dashboard
Note: use Firefox to log in; other browsers may reject the Dashboard's self-signed certificate, and the page will not open.
https://NodeIP:Port
https://192.168.56.51:30001
A successful login lands on the Dashboard overview page.