Background
This environment uses kubeadm to build a Kubernetes 1.17.2 cluster and then upgrade it to 1.17.4 with kubeadm. The image registry is a self-hosted Harbor, and the web UI is provided by Dashboard. This document is intended for a learning/lab environment only.
https://github.com/kubernetes/kubernetes
https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/#
Host environment used for the k8s cluster
IP | Hostname | Role |
---|---|---|
10.203.124.230 | master1.linux.com | master |
10.203.124.231 | master2.linux.com | master |
10.203.124.232 | master3.linux.com | master |
10.203.124.233 | node1.linux.com | node |
10.203.124.234 | node2.linux.com | node |
10.203.124.235 | node3.linux.com | node |
10.203.124.236 | harbor.linux.com | harbor |
10.203.124.237 | ha1.linux.com | HA |
10.203.124.238 | keepalived-vip | VIP |
Environment preparation
Synchronize time on all hosts
root@node3:~# timedatectl set-timezone Asia/Shanghai
root@node3:~# timedatectl set-ntp on
root@node3:~# timedatectl
Local time: Sat 2020-06-06 00:18:27 CST
Universal time: Fri 2020-06-05 16:18:27 UTC
RTC time: Fri 2020-06-05 16:18:27
Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
Stop AppArmor
root@k8s-master1:~# /etc/init.d/apparmor stop
Remove AppArmor from the system
root@k8s-master1:~# apt purge apparmor
Disable the firewall
ufw disable
Make sure net.ipv4.ip_forward = 1 on all machines
root@master1:~# sysctl -p
net.ipv4.ip_forward = 1
Tune the kernel parameters
root@master1:~# cat /etc/sysctl.conf
Append the following at the end:
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_timestamps = 1
# keepalive conn
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001 65000
# swap
vm.overcommit_memory = 0
vm.swappiness = 10
#net.ipv4.conf.eth1.rp_filter = 0
#net.ipv4.conf.lo.arp_ignore = 1
#net.ipv4.conf.lo.arp_announce = 2
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2
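kubeadm's preflight checks also expect bridged traffic to be visible to iptables. This was not part of the original notes, but a minimal sketch of enabling it on every machine would be:
modprobe br_netfilter
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p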
- Install the specified versions of kubeadm, kubelet, kubectl and docker on all master nodes
- Install the specified versions of kubeadm, kubelet and docker on all node nodes; kubectl is optional on the node nodes
- Run the kubeadm init initialization command on a master node
- Verify the master node status
- On each node, run the kubeadm command to join it to the k8s master (requires the token generated on the master for authentication)
- Verify the node status
- Create pods and test network communication
- Deploy the Dashboard web service
- Upgrade the k8s cluster
- Deploy HA and Harbor
harbor
Install docker and docker-compose
root@master1:~# cat docker-install_\(2\).sh
#!/bin/bash
# step 1: install required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the apt repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce docker-ce-cli
root@harbor:~# sh docker-install.sh
root@harbor:~# apt install docker-compose
Start docker
root@harbor:/usr/local/src/harbor# systemctl start docker
Deploy Harbor
Download the Harbor offline installer to /usr/local/src/ on the system
root@harbor:~# cd /usr/local/src/
root@harbor:/usr/local/src# ls
harbor-offline-installer-v1.1.2.tgz
root@harbor:/usr/local/src# tar xvf harbor-offline-installer-v1.1.2.tgz
root@harbor:/usr/local/src# ls
harbor harbor-offline-installer-v1.1.2.tgz
root@harbor:/usr/local/src# cd harbor/
Edit harbor.cfg and set the hostname and the Harbor admin password
root@harbor:/usr/local/src/harbor# vim harbor.cfg
root@harbor:/usr/local/src/harbor# cat harbor.cfg | grep hostname
#The IP address or hostname to access admin UI and registry service.
hostname = harbor.linux.com
root@harbor:/usr/local/src/harbor# cat harbor.cfg | grep harbor_admin_password
harbor_admin_password = xxxxx
Modify the empty_subj parameter in the prepare file
root@harbor:/usr/local/src/harbor# cat prepare | grep empty
#empty_subj = "/C=/ST=/L=/O=/CN=/"
empty_subj = "/C=US/ST=California/L=Palo Alto/O=VMware, Inc./OU=Harbor/CN=notarysigner"
Install Harbor
root@harbor:/usr/local/src/harbor# ./install.sh
From a host that can resolve the Harbor domain name, test Harbor by visiting its hostname and log in with the admin password configured in harbor.cfg
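If there is no DNS record for harbor.linux.com, a hosts entry can be added on each client host (the same entry appears later on the master and node hosts), for example:
echo "10.203.124.236 harbor harbor.linux.com" >> /etc/hosts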
docker login harbor
If docker login fails with the following error, install the gnupg2 and pass packages (apt install gnupg2 pass)
root@harbor:/images/kubeadm_images/quay.io/coreos# docker login harbor.linux.com
Username: admin
Password:
Error response from daemon: Get http://harbor.linux.com/v2/: dial tcp 10.203.124.236:80: connect: connection refused
root@harbor:/images# apt install gnupg2 pass
root@harbor:/images# docker login harbor.linux.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
HA
The following installation steps must be performed on both HA hosts
keepalived
root@ha1:~# apt install haproxy keepalived
root@ha1:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@ha1:~# vim /etc/keepalived/keepalived.conf
root@ha1:~# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 10.203.124.230
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface ens33
garp_master_delay 10
smtp_alert
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.203.124.238 dev ens33 label ens33:1
}
}
root@ha1:~# systemctl restart keepalived.service
root@ha1:~# systemctl enable keepalived.service
Synchronizing state of keepalived.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable keepalived
haproxy
root@ha1:~# vim /etc/haproxy/haproxy.cfg
Append a listener on the keepalived VIP to the end of haproxy.cfg, load balancing to the three master servers
listen k8s-api-6443
bind 10.203.124.238:6443
mode tcp
server master1 10.203.124.230:6443 check inter 3s fall 3 rise 5
server master2 10.203.124.231:6443 check inter 3s fall 3 rise 5
server master3 10.203.124.232:6443 check inter 3s fall 3 rise 5
root@ha1:~# systemctl restart haproxy.service
root@ha1:~# systemctl enable haproxy.service
Synchronizing state of haproxy.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable haproxy
root@ha1:~# ss -ntl | grep 6443
LISTEN 0 128 10.203.124.238:6443 0.0.0.0:*
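To confirm that keepalived has actually brought up the VIP that haproxy binds to, the address can be checked on the interface; a quick verification sketch (not taken from the original output):
ip addr show ens33 | grep 10.203.124.238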
Kubernetes deployment
Install components on the master and node hosts
Aliyun k8s package mirror configuration (https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.3e221b11wJY3XG)
Docker installation
Install docker 19.03 on the master and node hosts
root@master1:~# cat docker-install.sh
#!/bin/bash
# step 1: install required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the apt repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce docker-ce-cli
root@master1:~# sh docker-install.sh
root@node1:~# systemctl start docker
root@node1:~# systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Configure a docker registry mirror
Configure the registry mirror on the masters and nodes
root@master1:~# mkdir -p /etc/docker
root@master1:~# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"]
}
root@master1:~# systemctl daemon-reload && sudo systemctl restart docker
Install kubeadm
root@master1:~# apt-get update && apt-get install -y apt-transport-https
root@master1:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF    // type EOF manually
root@master1:~# apt-get update
List the available kubeadm versions; the version to install is 1.17.2
root@k8s-master1:~# apt-cache madison kubeadm
Installation on the master nodes
root@k8s-master1:~# apt-get install -y kubeadm=1.17.2-00 kubectl=1.17.2-00 kubelet=1.17.2-00
Installation on the node nodes
root@k8s-node1:~# apt-get install -y kubeadm=1.17.2-00 kubelet=1.17.2-00
Configure the kubelet drop-in file on each node and start the kubelet service
Add Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false" to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
root@k8s-master1:~# vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
root@k8s-master1:~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Start kubelet
root@k8s-master1:~# systemctl daemon-reload
root@k8s-master1:~# systemctl restart kubelet
root@k8s-master1:~# systemctl enable kubelet
Install the bash command-completion script on the master node
root@k8s-master1:~# mkdir /data/scripts -p
root@k8s-master1:~# kubeadm completion bash > /data/scripts/kubeadm_completion.sh
root@k8s-master1:~# source /data/scripts/kubeadm_completion.sh
root@k8s-master1:~# cat /etc/profile
source /data/scripts/kubeadm_completion.sh
root@k8s-master1:~# chmod a+x /data/scripts/kubeadm_completion.sh
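kubectl supports the same completion mechanism; a sketch assuming the same /data/scripts layout:
kubectl completion bash > /data/scripts/kubectl_completion.sh
chmod a+x /data/scripts/kubectl_completion.sh
source /data/scripts/kubectl_completion.sh
echo "source /data/scripts/kubectl_completion.sh" >> /etc/profile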
Preparation before initialization
Run the cluster initialization on any one of the three masters; the cluster only needs to be initialized once.
kubeadm command usage
Available Commands:
alpha       # kubeadm commands that are still in the experimental stage
completion  # bash command completion; requires bash-completion
            #mkdir /data/scripts -p
            #kubeadm completion bash > /data/scripts/kubeadm_completion.sh
            #source /data/scripts/kubeadm_completion.sh
            #vim /etc/profile
            source /data/scripts/kubeadm_completion.sh
config      # manage the kubeadm cluster configuration, which is kept in a ConfigMap in the cluster
            #kubeadm config print init-defaults
help        Help about any command
init        # bootstrap a Kubernetes control-plane (master) node
join        # join a node to an existing k8s master
reset       # revert the changes made to the system by kubeadm init or kubeadm join
token       # manage tokens
upgrade     # upgrade the k8s version
version     # show version information
Overview of the kubeadm init command
Command usage:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm/
Cluster initialization:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
root@docker-node1:~# kubeadm init --help
--apiserver-advertise-address string   # the local IP address the K8S API Server will listen on
--apiserver-bind-port int32            # the port the API Server binds to, 6443 by default
--apiserver-cert-extra-sans stringSlice # optional extra Subject Alternative Names for the API Server serving certificate; can be IP addresses or DNS names
--cert-dir string                      # path where certificates are stored, /etc/kubernetes/pki by default
--certificate-key string               # key used to encrypt the control-plane certificates in the kubeadm-certs Secret
--config string                        # path to a kubeadm configuration file
--control-plane-endpoint string        # a stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain name; k8s multi-master HA is built on this option
--cri-socket string                    # path of the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to detect it automatically. Use this option only when multiple CRIs are installed or the CRI socket is non-standard
--dry-run                              # do not apply any changes, only print what would be done, i.e. a test run
--experimental-kustomize string        # path to the kustomize patches for the static pod manifests
--feature-gates string                 # a set of key=value pairs describing feature gates; options: IPv6DualStack=true|false (ALPHA - default=false)
--ignore-preflight-errors strings      # ignore errors found during the preflight checks, e.g. swap; 'all' ignores everything
--image-repository string              # image repository to pull from, k8s.gcr.io by default
--kubernetes-version string            # the k8s version to install, stable-1 by default
--node-name string                     # the node name
--pod-network-cidr                     # the pod IP address range
--service-cidr                         # the service network address range
--service-dns-domain string            # the internal k8s domain, cluster.local by default; the DNS service (kube-dns/coredns) resolves the records generated under it
--skip-certificate-key-print           # do not print the certificate key used for encryption
--skip-phases strings                  # phases to skip
--skip-token-print                     # skip printing the token
--token                                # specify the token
--token-ttl                            # token lifetime, 24 hours by default, 0 means never expire
--upload-certs                         # upload the control-plane certificates
# Global options:
--add-dir-header                       # if true, add the file directory to the log header
--log-file string                      # if non-empty, use this log file
--log-file-max-size uint               # maximum size of the log file in megabytes, 1800 by default, 0 means unlimited
--rootfs                               # the absolute path to the host root filesystem
--skip-headers                         # if true, avoid header prefixes in log messages
--skip-log-headers                     # if true, avoid headers when opening log files
Verify the current kubeadm version
# kubeadm version    # show the current kubeadm version
root@k8s-master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:27:49Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Prepare the k8s images
List the images required to install the specified k8s version
# kubeadm config images list --kubernetes-version v1.17.2
k8s.gcr.io/kube-apiserver:v1.17.2
k8s.gcr.io/kube-controller-manager:v1.17.2
k8s.gcr.io/kube-scheduler:v1.17.2
k8s.gcr.io/kube-proxy:v1.17.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
Download the images on the master nodes
It is recommended to download the images on the master nodes in advance to reduce the installation wait time. The images come from Google's registry by default, which cannot be reached directly from mainland China, but they can be pulled ahead of time from the Aliyun mirror registry; this avoids k8s deployment failures caused by image download problems later on.
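As an alternative to the pull script below, kubeadm can pre-pull the images itself from a mirror repository; a sketch using the Aliyun repository that is later passed to kubeadm init:
kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.17.2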
Download the k8s images
root@k8s-master1:~# cat k8s-1.17.3-images-download.sh
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
root@k8s-master1:~# bash k8s-1.17.3-images-download.sh
root@k8s-master1:~# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.3 ae853e93800d 3 months ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.17.3 b0f1517c1f4b 3 months ago 161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.17.3 90d27391b780 3 months ago 171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.17.3 d109c0821a2b 3 months ago 94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.2 cba2a99699bd 4 months ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.17.2 da5fd66c4068 4 months ago 161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.17.2 41ef50a5f06a 4 months ago 171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.17.2 f52d4c527ef2 4 months ago 94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.5 70f311871ae1 7 months ago 41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 7 months ago 288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
Master initialization
Single-master initialization with the kubeadm init command
root@k8s-master1:~# kubeadm init --apiserver-advertise-address=10.203.124.230 --control-plane-endpoint=10.203.124.238 --apiserver-bind-port=6443 --ignore-preflight-errors=swap --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=v1.17.2 --pod-network-cidr=10.10.0.0/16 --service-cidr=172.16.0.0/16 --service-dns-domain=linux.com
Initialization result
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.203.124.238:6443 --token ytnfvw.2m0wg4v22lq51mdw \
--discovery-token-ca-cert-hash sha256:0763db2d6454eeb4a25a8a91159a565f8bf81b9ffe69ba90fa3967652a00964f \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.203.124.238:6443 --token ytnfvw.2m0wg4v22lq51mdw \
--discovery-token-ca-cert-hash sha256:0763db2d6454eeb4a25a8a91159a565f8bf81b9ffe69ba90fa3967652a00964f
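The bootstrap token printed above expires after 24 hours by default. If it has expired by the time a node joins, a fresh join command can be generated on any master; a sketch:
kubeadm token create --print-join-command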
Initialize the highly available masters from a file
Edit the kubeadm-init.yml file
Dump the default kubeadm config parameters to a yml file
root@k8s-master1:~# kubeadm config print init-defaults > kubeadm-init.yml
W0606 16:11:52.170765 79373 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0606 16:11:52.170940 79373 validation.go:28] Cannot validate kubelet config - no validator is available
Modify advertiseAddress, dnsDomain, serviceSubnet and imageRepository in the yml file, and add controlPlaneEndpoint and podSubnet
root@k8s-master1:~# cat kubeadm-init.yml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 48h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.203.124.230
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: k8s-master1.linux.com
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.203.124.238:6443
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
networking:
dnsDomain: linux.com
podSubnet: 10.10.0.0/16
serviceSubnet: 172.16.0.0/16
scheduler: {}
Initialize the cluster from the yml file
root@k8s-master1:~# kubeadm init --config kubeadm-init.yml --ignore-preflight-errors=swap
Output
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.203.124.238:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c5ae436a1bd5d7c128f7db7060d51551d8ee3903c35dd16d780c38653f937a06 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.203.124.238:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c5ae436a1bd5d7c128f7db7060d51551d8ee3903c35dd16d780c38653f937a06
Configure the kubeconfig file
The kubeconfig file contains the kube-apiserver address and the related authentication information
Configure the kubeconfig file on master1
root@k8s-master1:~# mkdir -p $HOME/.kube
root@k8s-master1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master1:~# chown $(id -u):$(id -g) $HOME/.kube/config
Until the flannel network component is deployed on the masters, kubectl get nodes shows the status as NotReady
List the cluster nodes
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com NotReady master 22m v1.17.2
Deploy the flannel network component
https://github.com/coreos/flannel/
Prepare the kube-flannel.yml file
Download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml to master1
[root@JevonWei ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@k8s-master1:~# ls kube-flannel.yml
kube-flannel.yml
Modify the pod network in kube-flannel.yml to match the --pod-network-cidr=10.10.0.0/16 specified at kubeadm init
root@k8s-master1:~# cat kube-flannel.yml
net-conf.json: |
{
"Network": "10.10.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
Prepare the flannel image
Applying kube-flannel.yml pulls the quay.io/coreos/flannel:v0.12.0-amd64 image. If the image cannot be pulled directly from quay.io, pull it by other means first and load it onto all of the masters.
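A sketch of how the flannel.tar used below could be produced, assuming a machine that can reach quay.io:
docker pull quay.io/coreos/flannel:v0.12.0-amd64
docker save quay.io/coreos/flannel:v0.12.0-amd64 > flannel.tar
scp flannel.tar root@k8s-master1:/root/images/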
root@k8s-master1:~/images# docker load < flannel.tar
256a7af3acb1: Loading layer [==================================================>] 5.844MB/5.844MB
d572e5d9d39b: Loading layer [==================================================>] 10.37MB/10.37MB
57c10be5852f: Loading layer [==================================================>] 2.249MB/2.249MB
7412f8eefb77: Loading layer [==================================================>] 35.26MB/35.26MB
05116c9ff7bf: Loading layer [==================================================>] 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64
root@k8s-master1:~/images# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.12.0-amd64 4e9f801d2217 2 months ago 52.8MB
Deploy flannel
Apply kube-flannel.yml on master1
root@k8s-master1:~# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Confirm the node status
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 3h25m v1.17.2
k8s-master2.linux.com Ready master 164m v1.17.2
k8s-master3.linux.com Ready master 159m v1.17.2
Add new master nodes
On the current master1, upload the certificates and generate the certificate key used to add new control-plane nodes
root@k8s-master1:~# kubeadm init phase upload-certs --upload-certs
I0606 16:38:29.449089 84654 version.go:251] remote version is much newer: v1.18.3; falling back to: stable-1.17
W0606 16:38:34.434978 84654 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0606 16:38:34.435026 84654 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
de15087cf5e3e5986c5e1577f9d041ed02f95ceb911a20d41eba792b9afc507c
Join the new master nodes
Run the following on master2 and master3, which already have docker, kubeadm and kubelet installed:
root@k8s-master2:~# kubeadm join 10.203.124.238:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c5ae436a1bd5d7c128f7db7060d51551d8ee3903c35dd16d780c38653f937a06 \
--control-plane --certificate-key de15087cf5e3e5986c5e1577f9d041ed02f95ceb911a20d41eba792b9afc507c --ignore-preflight-errors=swap
// The key passed to --certificate-key is the "Using certificate key" value generated by the kubeadm init phase upload-certs --upload-certs step above
kubeadm join output
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
Check the current node status on master1
Run kubectl get nodes on master1
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com NotReady master 46m v1.17.2
k8s-master2.linux.com NotReady master 6m18s v1.17.2
k8s-master3.linux.com NotReady master 36s v1.17.2
Once the quay.io/coreos/flannel image is present on the master nodes, the STATUS changes from NotReady to Ready
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 3h25m v1.17.2
k8s-master2.linux.com Ready master 164m v1.17.2
k8s-master3.linux.com Ready master 159m v1.17.2
Add the node nodes
Run kubeadm join on all nodes to add them to the k8s cluster as worker nodes
root@node1:~# kubeadm join 10.203.124.238:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:c5ae436a1bd5d7c128f7db7060d51551d8ee3903c35dd16d780c38653f937a06 --ignore-preflight-errors=swap
Output:
W0606 15:47:49.095318 5123 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the node status
After node1 has been added, if kubectl get nodes on master1 shows the node STATUS as NotReady, the quay.io/coreos/flannel:v0.12.0-amd64 image also needs to be loaded manually onto the node
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 3h45m v1.17.2
k8s-master2.linux.com Ready master 3h5m v1.17.2
k8s-master3.linux.com Ready master 179m v1.17.2
node1.linux.com NotReady <none> 3m58s v1.17.2
node2.linux.com NotReady <none> 2m30s v1.17.2
node3.linux.com NotReady <none> 2m24s v1.17.2
The flannel image can be copied manually from a master node to the nodes and imported there
root@node3:~# docker load < images/flannel.tar
256a7af3acb1: Loading layer [==================================================>] 5.844MB/5.844MB
d572e5d9d39b: Loading layer [==================================================>] 10.37MB/10.37MB
57c10be5852f: Loading layer [==================================================>] 2.249MB/2.249MB
7412f8eefb77: Loading layer [==================================================>] 35.26MB/35.26MB
05116c9ff7bf: Loading layer [==================================================>] 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64
root@node3:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.12.0-amd64 4e9f801d2217 2 months ago 52.8MB
Confirm on master1 that the node status is now Ready
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 3h52m v1.17.2
k8s-master2.linux.com Ready master 3h11m v1.17.2
k8s-master3.linux.com Ready master 3h6m v1.17.2
node1.linux.com Ready <none> 10m v1.17.2
node2.linux.com Ready <none> 9m1s v1.17.2
node3.linux.com Ready <none> 8m55s v1.17.2
Test
Verify the master status
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 5h29m v1.17.2
k8s-master2.linux.com Ready master 5h24m v1.17.2
k8s-master3.linux.com Ready master 5h23m v1.17.2
node1.linux.com Ready <none> 5h23m v1.17.2
node2.linux.com Ready <none> 5h22m v1.17.2
node3.linux.com Ready <none> 5h22m v1.17.2
Verify the k8s cluster status
root@k8s-master1:~# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
Current CSR certificate status
root@k8s-master1:~# kubectl get csr
Test the network between pods
Create containers in the k8s cluster and test network communication
Create an alpine pod
root@k8s-master1:~# kubectl run net-test1 --image=alpine --replicas=3 sleep 360000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/net-test1 created
The pods have been created
root@k8s-master1:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
net-test1-5fcc69db59-7s5nc 1/1 Running 0 42s
net-test1-5fcc69db59-9v95p 1/1 Running 0 42s
net-test1-5fcc69db59-vq5wq 1/1 Running 0 42s
root@k8s-master1:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
net-test1-5fcc69db59-7s5nc 1/1 Running 0 77s 10.10.3.2 node1.linux.com <none> <none>
net-test1-5fcc69db59-9v95p 1/1 Running 0 77s 10.10.4.2 node2.linux.com <none> <none>
net-test1-5fcc69db59-vq5wq 1/1 Running 0 77s 10.10.5.2 node3.linux.com <none> <none>
Test communication between the pods and the external network
The container can communicate with both the external network and the other pods
root@k8s-master1:~# kubectl exec -it net-test1-5fcc69db59-9v95p sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr FA:37:E5:7B:61:BA
inet addr:10.10.4.2 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:26 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1968 (1.9 KiB) TX bytes:42 (42.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping 10.10.3.2
PING 10.10.3.2 (10.10.3.2): 56 data bytes
64 bytes from 10.10.3.2: seq=0 ttl=62 time=1.934 ms
64 bytes from 10.10.3.2: seq=1 ttl=62 time=0.402 ms
/ # ping www.baidu.com
PING www.baidu.com (36.152.44.95): 56 data bytes
64 bytes from 36.152.44.95: seq=0 ttl=55 time=9.423 ms
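Cluster DNS can be checked from inside the same pod; a sketch (the service domain follows the --service-dns-domain=linux.com used at init):
nslookup kubernetes.default.svc.linux.com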
Deploy the Dashboard
Push the Dashboard images to the Harbor registry
Log in to Harbor and create a project named baseimages
Pull the Dashboard images
root@k8s-master1:~# docker pull kubernetesui/dashboard:v2.0.0-rc6
v2.0.0-rc6: Pulling from kubernetesui/dashboard
1f45830e3050: Pull complete
Digest: sha256:61f9c378c427a3f8a9643f83baa9f96db1ae1357c67a93b533ae7b36d71c69dc
Status: Downloaded newer image for kubernetesui/dashboard:v2.0.0-rc6
docker.io/kubernetesui/dashboard:v2.0.0-rc6
root@k8s-master1:~# docker pull kubernetesui/metrics-scraper:v1.0.3
v1.0.3: Pulling from kubernetesui/metrics-scraper
75d12d4b9104: Pull complete
fcd66fda0b81: Pull complete
53ff3f804bbd: Pull complete
Digest: sha256:40f1d5785ea66609b1454b87ee92673671a11e64ba3bf1991644b45a818082ff
Status: Downloaded newer image for kubernetesui/metrics-scraper:v1.0.3
docker.io/kubernetesui/metrics-scraper:v1.0.3
root@k8s-master1:~# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
harbor.linux.com/baseimages/dashboard v2.0.0-rc6 cdc71b5a8a0e 2 months ago 221MB
kubernetesui/dashboard v2.0.0-rc6 cdc71b5a8a0e 2 months ago 221MB
quay.io/coreos/flannel v0.12.0-amd64 4e9f801d2217 2 months ago 52.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.3 ae853e93800d 3 months ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.17.3 b0f1517c1f4b 3 months ago 161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.17.3 90d27391b780 3 months ago 171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.17.3 d109c0821a2b 3 months ago 94.4MB
kubernetesui/metrics-scraper v1.0.3 3327f0dbcb4a 4 months ago 40.1MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.17.2 cba2a99699bd 4 months ago 116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.17.2 41ef50a5f06a 4 months ago 171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.17.2 da5fd66c4068 4 months ago 161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.17.2 f52d4c527ef2 4 months ago 94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.5 70f311871ae1 7 months ago 41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 7 months ago 288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
Push the images to Harbor
Modify the hosts file on all master and node hosts so that the Harbor host can be resolved
root@k8s-master1:~# cat /etc/hosts | grep harbor
10.203.124.236 harbor harbor.linux.com
Allow docker to use the HTTP protocol
On all hosts, add --insecure-registry harbor.linux.com to the docker startup parameters so that the registry can be accessed over HTTP
root@k8s-master1:~# dockerd --help | grep ins
--authorization-plugin list Authorization plugins to load
--dns-search list DNS search domains to use
--insecure-registry list Enable insecure registry communication
root@k8s-master1:~# cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry harbor.linux.com
Restart docker for the change to take effect
root@k8s-master1:~# systemctl daemon-reload
root@k8s-master1:~# systemctl restart docker
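An equivalent approach (not the one used here) is to declare the registry in /etc/docker/daemon.json instead of editing the systemd unit, for example:
{
  "registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor.linux.com"]
}
followed by the same systemctl daemon-reload && systemctl restart docker.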
Re-tag the Dashboard images with the Harbor registry information
root@k8s-master1:~# docker tag 3327f0dbcb4a harbor.linux.com/baseimages/metrics-scraper:v1.0.3
root@k8s-master1:~# docker tag cdc71b5a8a0e harbor.linux.com/baseimages/dashboard:v2.0.0-rc6
root@k8s-master1:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kubernetesui/dashboard v2.0.0-rc6 cdc71b5a8a0e 2 months ago 221MB
harbor.linux.com/baseimages/dashboard v2.0.0-rc6 cdc71b5a8a0e 2 months ago 221MB
Push the images to Harbor
root@k8s-master1:~# docker login harbor.linux.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
root@k8s-master1:~# docker push harbor.linux.com/baseimages/dashboard:v2.0.0-rc6
The push refers to repository [harbor.linux.com/baseimages/dashboard]
f6419c845e04: Pushed
v2.0.0-rc6: digest: sha256:7d7273c38f37c62375bb8262609b746f646da822dc84ea11710eed7082482b12 size: 529
root@k8s-master1:~# docker push harbor.linux.com/baseimages/metrics-scraper:v1.0.3
The push refers to repository [harbor.linux.com/baseimages/metrics-scraper]
4e247d9378a1: Pushed
0aec45b843c5: Pushed
3ebaca24781b: Pushed
v1.0.3: digest: sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 size: 946
Confirm in Harbor that the images have been uploaded
harbor pull image
Pull an image from Harbor
root@node1:~# docker pull harbor.linux.com/baseimages/dashboard:v2.0.0-rc6
v2.0.0-rc6: Pulling from baseimages/dashboard
1f45830e3050: Pull complete
Digest: sha256:7d7273c38f37c62375bb8262609b746f646da822dc84ea11710eed7082482b12
Status: Downloaded newer image for harbor.linux.com/baseimages/dashboard:v2.0.0-rc6
harbor.linux.com/baseimages/dashboard:v2.0.0-rc6
Deploy the Dashboard
Prepare the yml files
Prepare the dashboard-2.0.0-rc6.yml file with the image repository pointing to Harbor and nodePort set to 30002
root@k8s-master1:~# cat dashboard-2.0.0-rc6.yml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30002
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: harbor.linux.com/baseimages/dashboard:v2.0.0-rc6
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: harbor.linux.com/baseimages/metrics-scraper:v1.0.3
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
Edit the admin-user.yml RBAC file
root@k8s-master1:~# cat admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
Deploy
root@k8s-master1:~# kubectl apply -f dashboard-2.0.0-rc6.yml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
root@k8s-master1:~# kubectl apply -f admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
List the cluster services; the Dashboard port is 30002
root@k8s-master1:~# kubectl get service -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 172.16.0.1 <none> 443/TCP 16m
kube-system kube-dns ClusterIP 172.16.0.10 <none> 53/UDP,53/TCP,9153/TCP 16m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 172.16.213.211 <none> 8000/TCP 65s
kubernetes-dashboard kubernetes-dashboard NodePort 172.16.65.18 <none> 443:30002/TCP 66s
All master and node hosts listen on the Dashboard port 30002
root@node1:~# ss -ntl | grep 30002
LISTEN 0 128 *:30002 *:*
Log in from a browser via port 30002 on any node (https://10.203.124.233:30002)
Log in to the Dashboard with a token
On a master node, look up the kubernetes-dashboard token, copy it, and use the token to log in to the Dashboard in the browser
root@k8s-master1:~# kubectl get secret -A | grep admin
kubernetes-dashboard admin-user-token-t8fjg kubernetes.io/service-account-token 3 7m9s
root@k8s-master1:~# kubectl describe secret admin-user-token-t8fjg -n kubernetes-dashboard
Name: admin-user-token-t8fjg
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: c1d4a1b1-5082-447c-b29b-f0730290ae9f
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkpiYks5aGpXUV84eHV2ZElGRVZ4emxuNEUyWl9PbUU0WkIzNk1tRjVNZHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXQ4ZmpnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMWQ0YTFiMS01MDgyLTQ0N2MtYjI5Yi1mMDczMDI5MGFlOWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RN5AFQI_gZ5IxwHQ-yfkfyQDL4UXmyFSq8gfi_JAEUBNvKW-jEeIVri2ZF8O1PqJpkoiKouVxdgc3MwJDdBpszxFv1bub0GeMdKidiJ4hifo1QJOHV8V84B51PshFpMvDLYpX_JkK40tiYnjH5CnRrNgeM-SalOo-QlfpbK1mDyBostsuz8-9lOJGXqOZLv6TZPlBfA31rcfmo2G7eDreE4kR61EhW2q5pYmEgmATIBJwAZXSy0-W3oIvSwxekXzgGByzrObmczpW_wdcP81T83Uzhhz5cst1n3xUhP8a-WepeiNfHQbh46UK310KpqivLEEcDS6Aa1HdQTxpv1aKA
Log in to kubernetes-dashboard in the browser using the token
Log in
Log in to the Dashboard with a kubeconfig file
Look up the token
root@k8s-master1:~# kubectl get secret -A | grep admin
kubernetes-dashboard admin-user-token-t8fjg kubernetes.io/service-account-token 3 10h
root@k8s-master1:~# kubectl describe secret admin-user-token-t8fjg -n kubernetes-dashboard
Name: admin-user-token-t8fjg
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: c1d4a1b1-5082-447c-b29b-f0730290ae9f
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkpiYks5aGpXUV84eHV2ZElGRVZ4emxuNEUyWl9PbUU0WkIzNk1tRjVNZHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXQ4ZmpnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMWQ0YTFiMS01MDgyLTQ0N2MtYjI5Yi1mMDczMDI5MGFlOWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RN5AFQI_gZ5IxwHQ-yfkfyQDL4UXmyFSq8gfi_JAEUBNvKW-jEeIVri2ZF8O1PqJpkoiKouVxdgc3MwJDdBpszxFv1bub0GeMdKidiJ4hifo1QJOHV8V84B51PshFpMvDLYpX_JkK40tiYnjH5CnRrNgeM-SalOo-QlfpbK1mDyBostsuz8-9lOJGXqOZLv6TZPlBfA31rcfmo2G7eDreE4kR61EhW2q5pYmEgmATIBJwAZXSy0-W3oIvSwxekXzgGByzrObmczpW_wdcP81T83Uzhhz5cst1n3xUhP8a-WepeiNfHQbh46UK310KpqivLEEcDS6Aa1HdQTxpv1aKA
Make a copy of the kube config file to modify
root@k8s-master1:~# cp /root/.kube/config /opt/kubeconfig
Append the token to the end of the config file
Save the kubeconfig file with the token added on your local computer; selecting this file at each login lets you log in to the Dashboard
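A sketch of appending the token (the indentation must place it under the user: entry of /opt/kubeconfig; the token value is the admin-user token shown above):
echo '    token: <paste the admin-user token here>' >> /opt/kubeconfig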
K8s cluster upgrade
Upgrade k8s from 1.17.2 to 1.17.4
To upgrade a k8s cluster, kubeadm must first be upgraded to the target k8s version; in other words, kubeadm is the "permit" for a k8s upgrade.
Upgrade the k8s master services
Perform the upgrade on all k8s masters to upgrade the control-plane services kube-controller-manager, kube-apiserver, kube-scheduler and kube-proxy.
Verify the current k8s version
root@k8s-master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:27:49Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Install the specified new kubeadm version on each master
List the available k8s versions
root@k8s-master1:~# apt-cache madison kubeadm
kubeadm | 1.18.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.18.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.18.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.18.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.17.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.17.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.17.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.17.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
Install the new kubeadm version on each master
root@k8s-master1:~# apt-get install kubeadm=1.17.4-00
Verify the kubeadm version
root@k8s-master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:01:11Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
kubeadm upgrade command help
root@k8s-master1:~# kubeadm upgrade --help
Upgrade your cluster smoothly to a newer version with this command
Usage:
kubeadm upgrade [flags]
kubeadm upgrade [command]
Available Commands:
apply Upgrade your Kubernetes cluster to the specified version
diff Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
node Upgrade commands for a node in the cluster
plan Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter
Flags:
-h, --help help for upgrade
Global Flags:
--add-dir-header If true, adds the file directory to the header
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
-v, --v Level number for the log level verbosity
Use "kubeadm upgrade [command] --help" for more information about a command.
Check the upgrade plan
root@k8s-master1:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.2
[upgrade/versions] kubeadm version: v1.17.4
I0607 11:59:53.319376 33051 version.go:251] remote version is much newer: v1.18.3; falling back to: stable-1.17
[upgrade/versions] Latest stable version: v1.17.6
[upgrade/versions] Latest version in the v1.17 series: v1.17.6
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 6 x v1.17.2 v1.17.6
Upgrade to the latest version in the v1.17 series:
COMPONENT CURRENT AVAILABLE
API Server v1.17.2 v1.17.6
Controller Manager v1.17.2 v1.17.6
Scheduler v1.17.2 v1.17.6
Kube Proxy v1.17.2 v1.17.6
CoreDNS 1.6.5 1.6.5
Etcd 3.4.3 3.4.3-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.17.6
Note: Before you can perform this upgrade, you have to update kubeadm to v1.17.6.
_____________________________________________________________________
Master upgrade
Upgrade of the non-primary masters
Before upgrading, modify the haproxy load-balancing configuration to stop sending requests to the node being upgraded
root@ha1:~# cat /etc/haproxy/haproxy.cfg
listen k8s-api-6443
bind 10.203.124.238:6443
mode tcp
server master1 10.203.124.230:6443 check inter 3s fall 3 rise 5
#server master2 10.203.124.231:6443 check inter 3s fall 3 rise 5
#server master3 10.203.124.232:6443 check inter 3s fall 3 rise 5
root@ha1:~# systemctl restart haproxy.service
Upgrade master3 and master2
Upgrade master3 and master2 first; upgrade master1 last
root@k8s-master3:~# kubeadm upgrade apply v1.17.4
Upgrade the primary master, master1
Preparation before upgrading master1
On master1, the 1.17.4 images needed for the upgrade can be downloaded to the system in advance to shorten the upgrade time
root@k8s-master1:~# cat k8s-1.17.4-images-download.sh
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
Download the images
root@k8s-master1:~# sh k8s-1.17.4-images-download.sh
Modify the haproxy configuration
Before upgrading master1, adjust haproxy.cfg again: remove master1 from the backend and add master2 and master3 back in
root@ha1:~# cat /etc/haproxy/haproxy.cfg
listen k8s-api-6443
bind 10.203.124.238:6443
mode tcp
#server master1 10.203.124.230:6443 check inter 3s fall 3 rise 5
server master2 10.203.124.231:6443 check inter 3s fall 3 rise 5
server master3 10.203.124.232:6443 check inter 3s fall 3 rise 5
root@ha1:~# systemctl restart haproxy.service
Upgrade master1
root@k8s-master1:~# kubeadm upgrade apply v1.17.4
Verify the images
Get the pod information on the host
root@k8s-master1:~# kubectl get pod -A
Upgrade kubelet and kubectl on the masters
root@k8s-master1:~# apt-cache madison kubelet
root@k8s-master1:~# apt-cache madison kubectl
root@k8s-master1:~# apt install kubelet=1.17.4-00 kubectl=1.17.4-00
Check the node information
If, after kubelet and kubectl have been upgraded, the VERSION shown is still the old 1.17.2, check the pod status; if the etcd pod has not been recreated, delete that pod manually and restart the kubelet service, after which VERSION shows the upgraded version
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 113m v1.17.2
k8s-master2.linux.com Ready master 110m v1.17.4
k8s-master3.linux.com Ready master 109m v1.17.2
node1.linux.com Ready <none> 108m v1.17.2
node2.linux.com Ready <none> 108m v1.17.2
node3.linux.com Ready <none> 108m v1.17.2
root@k8s-master2:~# systemctl status kubelet
root@k8s-master1:~# kubectl get pod -A
root@k8s-master1:~# kubectl delete pod etcd-k8s-master3.linux.com -n kube-system
pod "etcd-k8s-master3.linux.com" deleted
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 115m v1.17.4
k8s-master2.linux.com Ready master 111m v1.17.4
k8s-master3.linux.com Ready master 111m v1.17.4
node1.linux.com Ready <none> 110m v1.17.2
node2.linux.com Ready <none> 110m v1.17.2
node3.linux.com Ready <none> 110m v1.17.2
Upgrade the k8s node services
Verify the current node version information
The node nodes are still on the old 1.17.2 version
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 115m v1.17.4
k8s-master2.linux.com Ready master 111m v1.17.4
k8s-master3.linux.com Ready master 111m v1.17.4
node1.linux.com Ready <none> 110m v1.17.2
node2.linux.com Ready <none> 110m v1.17.2
node3.linux.com Ready <none> 110m v1.17.2
Upgrade the kubelet on each node
Run the upgrade on the node nodes
root@node1:~# kubeadm upgrade node --kubelet-version 1.17.4
Upgrade kubectl and kubelet on the nodes
The kubectl and kubelet packages, including those on the master nodes, also need to be upgraded
root@node1:~# apt install kubelet=1.17.4-00 kubeadm=1.17.4-00
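In a production-style upgrade, the node would normally be drained before the package upgrade and uncordoned afterwards; this lab skips that step, but a sketch would be:
kubectl drain node1.linux.com --ignore-daemonsets
# upgrade the kubelet/kubeadm packages and restart kubelet, then:
kubectl uncordon node1.linux.com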
Verify the node versions
root@k8s-master1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1.linux.com Ready master 131m v1.17.4
k8s-master2.linux.com Ready master 128m v1.17.4
k8s-master3.linux.com Ready master 127m v1.17.4
node1.linux.com Ready <none> 126m v1.17.4
node2.linux.com Ready <none> 126m v1.17.4
node3.linux.com Ready <none> 126m v1.17.4
Test running Nginx + Tomcat
https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/
Run a test Nginx and implement static/dynamic request separation
Download the Nginx and Tomcat images and push them to Harbor
Nginx
root@k8s-master1:~# docker pull nginx:1.14.2
1.14.2: Pulling from library/nginx
27833a3ba0a5: Pull complete
0f23e58bd0b7: Pull complete
8ca774778e85: Pull complete
Digest: sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
Status: Downloaded newer image for nginx:1.14.2
docker.io/library/nginx:1.14.2
root@k8s-master1:~# docker tag nginx:1.14.2 harbor.linux.com/baseimages/nginx:1.14.2
root@k8s-master1:~# docker push harbor.linux.com/baseimages/nginx:1.14.2
The push refers to repository [harbor.linux.com/baseimages/nginx]
82ae01d5004e: Pushed
b8f18c3b860b: Pushed
5dacd731af1b: Pushed
1.14.2: digest: sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078 size: 948
Tomcat (rebuild the Tomcat image with a Dockerfile)
root@k8s-master1:~# docker pull tomcat
Using default tag: latest
latest: Pulling from library/tomcat
376057ac6fa1: Pull complete
5a63a0a859d8: Pull complete
496548a8c952: Pull complete
2adae3950d4d: Pull complete
0a297eafb9ac: Pull complete
09a4142c5c9d: Pull complete
9e78d9befa39: Pull complete
18f492f90b9c: Pull complete
7834493ec6cd: Pull complete
216b2be21722: Pull complete
Digest: sha256:ce753be7b61d86f877fe5065eb20c23491f783f283f25f6914ba769fee57886b
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
Based on the pulled image, use a Dockerfile to add a Tomcat web directory, rebuild the image, and push it to Harbor
root@k8s-master1:/usr/local/src/kubeadm# mkdir tomcat-dockerfile
root@k8s-master1:/usr/local/src/kubeadm# cd tomcat-dockerfile/
root@k8s-master1:/usr/local/src/kubeadm/tomcat-dockerfile# vim Dockerfile
FROM tomcat
ADD ./app /usr/local/tomcat/webapps/app/
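The ./app directory referenced by the Dockerfile must exist in the build context; its actual content is not shown in the original, so a placeholder sketch:
mkdir app
echo "tomcat app test page" > app/index.jsp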
Build the image from the Dockerfile in the current directory
root@k8s-master1:/usr/local/src/kubeadm/tomcat-dockerfile# docker build -t harbor.linux.com/danran/tomcat:app .
Sending build context to Docker daemon 3.584kB
Step 1/2 : FROM tomcat
---> 1b6b1fe7261e
Step 2/2 : ADD ./app /usr/local/tomcat/webapps/app/
---> 1438161fa122
Successfully built 1438161fa122
Successfully tagged harbor.linux.com/danran/tomcat:app
root@k8s-master1:/usr/local/src/kubeadm/tomcat-dockerfile# docker push harbor.linux.com/danran/tomcat:app
The push refers to repository [harbor.linux.com/danran/tomcat]
2b7a658772fe: Pushed
b0ac242ce8d3: Mounted from baseimages/tomcat
5e71d8e4cd3d: Mounted from baseimages/tomcat
eb4497d7dab7: Mounted from baseimages/tomcat
bfbfe00b44fc: Mounted from baseimages/tomcat
d39111fb2602: Mounted from baseimages/tomcat
155d997ed77c: Mounted from baseimages/tomcat
88cfc2fcd059: Mounted from baseimages/tomcat
760e8d95cf58: Mounted from baseimages/tomcat
7cc1c2d7e744: Mounted from baseimages/tomcat
8c02234b8605: Mounted from baseimages/tomcat
app: digest: sha256:68a9021fa5ed55c75d12abef17d55333d80bd299d867722b9dbd86a3799f8210 size: 2628
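Before running it in the cluster, the rebuilt image can be smoke-tested locally; a minimal sketch (the container name and host port are assumptions):

# run the rebuilt image locally and check that /app is served
docker run -d --name tomcat-app-test -p 8081:8080 harbor.linux.com/danran/tomcat:app
sleep 10   # give Tomcat a moment to start
curl -I http://127.0.0.1:8081/app/
docker rm -f tomcat-app-test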
Run Nginx
root@k8s-master1:~# cd /usr/local/src/
root@k8s-master1:/usr/local/src# mkdir kubeadm
root@k8s-master1:/usr/local/src# cd kubeadm/
root@k8s-master1:/usr/local/src/kubeadm# mkdir nginx-yml
root@k8s-master1:/usr/local/src/kubeadm# cd nginx-yml/
nginx.yml points the image at the harbor registry
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: harbor.linux.com/baseimages/nginx:1.14.2
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
labels:
app: danran-nginx-service-label
name: danran-nginx-service
namespace: default
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
nodePort: 30004
selector:
app: nginx
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# kubectl apply -f nginx.yml
deployment.apps/nginx-deployment created
service/danran-nginx-service created
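The pod listing below shows three nginx pods even though the manifest above sets replicas: 1; a minimal sketch of scaling the deployment up, assuming that is how the extra replicas were created:

# scale the deployment so three pods back the service
kubectl scale deployment nginx-deployment --replicas=3
kubectl get pod -l app=nginx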
Write different content to index.html in each of the three nginx pods and observe which pod is scheduled when nginx is accessed
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-c55f88b6b-5wkhr 1/1 Running 0 52s
nginx-deployment-c55f88b6b-g2kjs 1/1 Running 0 52s
nginx-deployment-c55f88b6b-z226z 1/1 Running 0 52s
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# kubectl exec -it nginx-deployment-c55f88b6b-5wkhr bash
root@nginx-deployment-c55f88b6b-5wkhr:/# ls /usr/share/nginx/html/
50x.html index.html
root@nginx-deployment-c55f88b6b-5wkhr:/# echo pod1 > /usr/share/nginx/html/index.html
root@nginx-deployment-c55f88b6b-5wkhr:/# exit
exit
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# kubectl exec -it nginx-deployment-c55f88b6b-g2kjs bash
root@nginx-deployment-c55f88b6b-g2kjs:/# echo pod2 > /usr/share/nginx/html/index.html
root@nginx-deployment-c55f88b6b-g2kjs:/# exit
exit
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# kubectl exec -it nginx-deployment-c55f88b6b-z226z bash
root@nginx-deployment-c55f88b6b-z226z:/# echo pod3 > /usr/share/nginx/html/index.html
root@nginx-deployment-c55f88b6b-z226z:/# exit
exit
Access the nginx pod in a browser via a node address and the Nginx NodePort (192.168.1.163 is the address of one of the nodes)
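Besides the browser, a curl loop against the NodePort makes the scheduling across the three pods easy to see; a minimal sketch (10.203.124.233, node1 from the host table, is used as an example address):

# request the page repeatedly; the pod1/pod2/pod3 responses show which pod served each request
for i in $(seq 1 6); do curl -s http://10.203.124.233:30004/; done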
Modify the haproxy configuration file to add an Nginx reverse proxy
Add 10.203.124.239 in keepalived as a VIP for Nginx
root@ha1:~# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 10.203.124.230
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface ens33
garp_master_delay 10
smtp_alert
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.203.124.238 dev ens33 label ens33:1
10.203.124.239 dev ens33 label ens33:1
}
}
root@ha1:~# systemctl restart keepalived.service
Add a reverse-proxy load-balancing listener for Nginx in haproxy
root@ha1:~# cat /etc/haproxy/haproxy.cfg
listen k8s-nginx
bind 10.203.124.239:80
mode tcp
    server node1 10.203.124.233:30004 check inter 3s fall 3 rise 5
    server node2 10.203.124.234:30004 check inter 3s fall 3 rise 5
    server node3 10.203.124.235:30004 check inter 3s fall 3 rise 5
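Before restarting haproxy, the edited configuration can be syntax-checked; a minimal sketch:

# validate the configuration file without applying it
haproxy -c -f /etc/haproxy/haproxy.cfg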
After restarting haproxy, 10.203.124.239:80 is listening
root@ha1:~# systemctl restart haproxy.service
root@ha1:~# ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 10.203.124.238:6443 0.0.0.0:*
LISTEN 0 128 10.203.124.239:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Test by accessing the Nginx VIP from a browser
Run Tomcat
root@k8s-master1:/usr/local/src/kubeadm/tomcat-dockerfile# cd ..
root@k8s-master1:/usr/local/src/kubeadm# mkdir tomcat-yml
root@k8s-master1:/usr/local/src/kubeadm# cd tomcat-yml/
root@k8s-master1:/usr/local/src/kubeadm/tomcat-yml# cp ../nginx-yml/nginx.yml tomcat.yml
root@k8s-master1:/usr/local/src/kubeadm/tomcat-yml# vim tomcat.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-deployment
labels:
app: tomcat
spec:
replicas: 1
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
containers:
- name: tomcat
image: harbor.linux.com/danran/tomcat:app
ports:
- containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
labels:
app: danran-tomcat-service-label
name: danran-tomcat-service
namespace: default
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: tomcat
Create the Tomcat Pod
root@k8s-master1:/usr/local/src/kubeadm/tomcat-yml# kubectl apply -f tomcat.yml
deployment.apps/tomcat-deployment created
service/danran-tomcat-service created
root@k8s-master1:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-c55f88b6b-z226z 1/1 Running 0 57m
tomcat-deployment-678684f574-x6g4h 1/1 Running 0 7m1s
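Since tomcat.yml does not pin a nodePort, Kubernetes allocates one; a minimal sketch for looking it up and testing the app path directly (the node address is an example, and 30716 is the port allocated in this environment, as the service listing further below shows):

# look up the allocated NodePort for the tomcat service
kubectl get service danran-tomcat-service
# test the tomcat app through a node address and that port
curl -I http://10.203.124.233:30716/app/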
Use the dashboard to enter the Nginx Pod
Add the Tomcat reverse-proxy configuration to the Pod
Get the Tomcat Service name, danran-tomcat-service, and the tomcat pod's IP address, 10.10.5.4
root@k8s-master1:~# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
danran-nginx-service NodePort 172.16.190.128 <none> 80:30004/TCP 95m
danran-tomcat-service NodePort 172.16.105.29 <none> 80:30716/TCP 45m
kubernetes ClusterIP 172.16.0.1 <none> 443/TCP 5h31m
root@k8s-master1:~# kubectl describe service danran-tomcat-service
Name: danran-tomcat-service
Namespace: default
Labels: app=danran-tomcat-service-label
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"danran-tomcat-service-label"},"name":"danran-tomcat-serv...
Selector: app=tomcat
Type: NodePort
IP: 172.16.105.29
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30716/TCP
Endpoints: 10.10.5.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
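The proxy_pass http://danran-tomcat-service directive used below relies on the cluster DNS resolving the Service name from inside the nginx pod; a minimal sketch to confirm that, assuming getent is available in the nginx image (the pod name is the one from the earlier listing):

# resolve the tomcat Service name from inside one of the nginx pods
kubectl exec nginx-deployment-c55f88b6b-z226z -- getent hosts danran-tomcat-service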
Copy the /etc/nginx/conf.d/default.conf file out of the danran-nginx-service Pod started from the Nginx image, edit it, and bake the edited file into a new image with a Dockerfile; alternatively, edit the configuration file directly inside the danran-nginx-service Pod, commit the result as an image, and restart the pod.
Edit the /etc/nginx/conf.d/default.conf file that will be copied into the image (pointing the Nginx proxy configuration at the Tomcat Service)
root@k8s-master1:/usr/local/src/kubeadm/Nginx-dockerfile# pwd
/usr/local/src/kubeadm/Nginx-dockerfile
root@k8s-master1:/usr/local/src/kubeadm/Nginx-dockerfile# ls
default.conf
Add a proxy_pass to the Tomcat Service in default.conf
root@k8s-master1:/usr/local/src/kubeadm/Nginx-dockerfile# cat default.conf
location /app {
proxy_pass http://danran-tomcat-service;
}
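Only the added location block is shown above; in the actual file it has to sit inside the server block of the stock default.conf shipped in the nginx:1.14.2 image. A minimal sketch of the complete file under that assumption:

# write a complete default.conf: stock server block plus the added /app proxy
cat > default.conf <<'EOF'
server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    # forward dynamic requests to the Tomcat Service by its DNS name
    location /app {
        proxy_pass http://danran-tomcat-service;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
EOF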
Edit the Dockerfile and build the new Nginx image
root@k8s-master1:/usr/local/src/kubeadm/Nginx-dockerfile# ls
default.conf Dockerfile
root@k8s-master1:/usr/local/src/kubeadm/Nginx-dockerfile# vim Dockerfile
FROM harbor.linux.com/baseimages/nginx:1.14.2
ADD default.conf /etc/nginx/conf.d/
root@k8s-master1:/usr/local/src/kubeadm/Nginx-dockerfile# docker build -t harbor.linux.com/baseimages/nginx-tomcat:v1 .
Sending build context to Docker daemon 4.096kB
Step 1/2 : FROM harbor.linux.com/baseimages/nginx:1.14.2
---> 295c7be07902
Step 2/2 : ADD default.conf /etc/nginx/conf.d/
---> 356359fcd741
Successfully built 356359fcd741
Successfully tagged harbor.linux.com/baseimages/nginx-tomcat:v1
root@k8s-master1:~# docker push harbor.linux.com/baseimages/nginx-tomcat:v1
The push refers to repository [harbor.linux.com/baseimages/nginx-tomcat]
1bd551b21d90: Pushed
82ae01d5004e: Mounted from baseimages/nginx
b8f18c3b860b: Mounted from baseimages/nginx
5dacd731af1b: Mounted from baseimages/nginx
v1: digest: sha256:eb7d1cce19182025765d49734c1936f273355fdc71a0d02b65cf67aa1734d1f5 size: 1155
Re-deploy Nginx
Change the image in nginx.yml to harbor.linux.com/baseimages/nginx-tomcat:v1
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: harbor.linux.com/baseimages/nginx-tomcat:v1
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
labels:
app: danran-nginx-service-label
name: danran-nginx-service
namespace: default
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
nodePort: 30004
selector:
app: nginx
root@k8s-master1:/usr/local/src/kubeadm/nginx-yml# kubectl apply -f nginx.yml
deployment.apps/nginx-deployment configured
service/danran-nginx-service unchanged
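A minimal sketch to confirm the rollout picked up the new image before checking the dashboard:

# wait for the deployment to roll out and print the image now in use
kubectl rollout status deployment/nginx-deployment
kubectl get pod -l app=nginx -o jsonpath='{.items[*].spec.containers[*].image}'; echo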
Confirm in the dashboard that the Nginx Pod is now using the nginx-tomcat:v1 image
Accessing the Nginx VIP in a browser (the /app path) now reaches the Tomcat page
kubeadm commands
Token management
# kubeadm token --help
create    # create a token; valid for 24 hours by default
delete    # delete a token
generate  # generate and print a token without creating it on the server, so it can be used in other operations
list      # list all tokens on the server
Show existing tokens
root@k8s-master1:~# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef 1d 2020-06-09T11:03:47+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
Create a token
root@k8s-master1:~# kubeadm token create
W0607 17:12:33.444298 5914 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0607 17:12:33.444396 5914 validation.go:28] Cannot validate kubelet config - no validator is available
rolz9i.aeoaj86ga4p8g03k
root@k8s-master1:~# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
abcdef.0123456789abcdef 1d 2020-06-09T11:03:47+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
rolz9i.aeoaj86ga4p8g03k 23h 2020-06-08T17:12:33+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
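When a token is created in order to join a new node, kubeadm can also print the full join command; a minimal sketch:

# create a token and print the matching kubeadm join command (includes the CA cert hash)
kubeadm token create --print-join-command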
Issues
-
Delete and re-apply flannel
root@k8s-master1:~# kubectl delete -f kube-flannel.yml
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds-amd64" deleted
daemonset.apps "kube-flannel-ds-arm64" deleted
daemonset.apps "kube-flannel-ds-arm" deleted
daemonset.apps "kube-flannel-ds-ppc64le" deleted
daemonset.apps "kube-flannel-ds-s390x" deleted
root@k8s-master1:~# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
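After re-applying the manifest, the flannel DaemonSet pods can be checked; a minimal sketch (assuming the app=flannel label used by the upstream kube-flannel.yml):

# confirm the flannel pods are running again on every node
kubectl get pod -n kube-system -l app=flannel -o wide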