Table of Contents
Kubernetes
I. Environment Preparation (run on all nodes) 🐱💻🐱💻🐱💻
1. Server environment requirements
1》Node CPU/memory must be at least 2 cores and 2 GB RAM, otherwise k8s will not start; if a node falls short, append this flag during cluster initialization: --ignore-preflight-errors=NumCPU
2》DNS: point each node at a DNS server reachable from the local network, otherwise some images cannot be downloaded
3》Linux kernel: must be version 4.x or newer, ideally above 4.4, so the kernel has to be upgraded first (a quick check script follows the machine list below)
4》Prepare three virtual machines (or three cloud servers)
k8s-m01: # this machine runs the k8s master environment
k8s-nod01: # this machine runs a k8s node environment
k8s-nod02: # this machine runs a k8s node environment
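A quick pre-flight sanity check, run on every machine; this is a convenience sketch added here (the 2-core / 2 GB / kernel 4.x thresholds come from the requirements above, the commands are standard coreutils):
[root@m01 ~]# nproc                             # CPU cores, want >= 2
[root@m01 ~]# free -h | awk '/^Mem/{print $2}'  # total memory, want >= 2G
[root@m01 ~]# uname -r                          # kernel version, want 4.x or newer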
Architecture diagram:
2. Machine inventory
Server | IP | Hostname |
---|---|---|
k8s-master | 192.168.15.55 | m01 |
k8s-node1 | 192.168.15.56 | nod01 |
k8s-node2 | 192.168.15.57 | nod02 |
3. Set hostnames and add hosts entries (run on all nodes)
# Set the hostnames
[root@m01 ~]# hostnamectl set-hostname m01
[root@nod01 ~]# hostnamectl set-hostname nod01
[root@nod02 ~]# hostnamectl set-hostname nod02
# Add hosts entries (run on all three machines)
[root@m01 ~]# cat >> /etc/hosts << EOF
192.168.15.55 m01
192.168.15.56 nod01
192.168.15.57 nod02
EOF
# Verify the entries (just in case resolution is broken)
[root@m01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.15.55 m01
192.168.15.56 nod01
192.168.15.57 nod02
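Optionally confirm that every hostname resolves and answers before moving on; a small loop like this (added here for convenience, using standard ping flags) checks all three in one shot:
[root@m01 ~]# for i in m01 nod01 nod02; do ping -c1 -W1 $i >/dev/null && echo "$i ok" || echo "$i FAILED"; done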
4. System tuning (run on all nodes)
1》# Disable the firewall
[root@m01 ~]# systemctl disable --now firewalld
[root@nod01 ~]# systemctl disable --now firewalld
[root@nod02 ~]# systemctl disable --now firewalld
2》# Disable SELinux
[root@m01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@nod01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@nod02 ~]# setenforce 0
setenforce: SELinux is disabled
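setenforce only turns SELinux off until the next reboot ("SELinux is disabled" above just means it was already off). To keep it off permanently, a common companion step (an addition here, not shown in the original run) is to edit the config file as well:
[root@m01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config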
3》# Disable the swap partition
(turn swap off temporarily)
[root@m01 ~]# swapoff -a
(disable it permanently)
[root@m01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
(tell kubelet to tolerate swap, in case it ever comes back)
[root@m01 ~]# echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
4》# Check the swap partition (confirm it is off)
[root@m01 ~]# free -h
total used free shared buff/cache available
Mem: 1.9G 1.0G 77M 9.5M 843M 796M
Swap: 0B 0B 0B
5. Set up passwordless SSH between the hosts (all of them, including each host to itself)
# Generate and distribute keys (cluster hosts should reach each other without prompts)
[root@m01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
.......
....
[root@m01 ~]# for i in m01 nod01 nod02;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i; done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)
.........
......
# Test that passwordless login works
[root@m01 ~]# ssh m01 #connect to m01 by hostname
Last login: Sun Aug 1 15:40:54 2021 from 192.168.15.1
[root@m01 ~]# exit
logout
Connection to m01 closed.
[root@m01 ~]# ssh nod01 #connect to nod01 by hostname
Last login: Sun Aug 1 15:40:56 2021 from 192.168.15.1
[root@nod01 ~]# exit
logout
Connection to nod01 closed.
[root@m01 ~]# ssh nod02 #connect to nod02 by hostname
Last login: Sun Aug 1 15:40:58 2021 from 192.168.15.1
[root@nod02 ~]# exit
logout
Connection to nod02 closed.
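As a final check, a one-liner loop (a convenience sketch; BatchMode makes ssh fail instead of prompting for a password) verifies all three hosts at once:
[root@m01 ~]# for i in m01 nod01 nod02; do ssh -o BatchMode=yes $i hostname; done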
6. Configure a yum mirror (pick one)
# Add the Aliyun mirror (the default choice) ~\(^o^)/~
[root@m01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Or add the Huawei mirror
[root@m01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo
[root@m01 ~]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base docker-ce-stable elrepo epel extras kubernetes updates
Cleaning up list of fastest mirrors
Other repos take up 11 M of disk space (use --verbose for details)
[root@m01 ~]# yum makecache
7. Install common tool packages (run on all nodes)
1) # Update the system
[root@m01 ~]# yum update -y --exclude=kernel*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 8.9 kB 0
..........
......
2) # Install common utilities
[root@m01 ~]# yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirror-hk.koddos.net
* epel: mirror.sjtu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
........
....
8. Synchronize system and hardware time (cluster clocks must agree)
1》# Sync time on every node (option 1)
[root@m01 ~]# yum install ntpdate -y
[root@m01 ~]# ntpdate ntp1.aliyun.com
1 Aug 17:32:28 ntpdate[55595]: adjust time server 120.25.115.20 offset 0.045773 sec
[root@m01 ~]# hwclock --systohc
[root@m01 ~]# hwclock
Sun 01 Aug 2021 05:34:05 PM CST  -0.428788 seconds
[root@m01 ~]# date
Sun Aug  1 17:34:20 CST 2021
2》# Set the system timezone to Asia/Shanghai (option 2)
[root@m01 ~]# timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
[root@m01 ~]# timedatectl set-local-rtc 0
# Restart services that depend on the system time
[root@nod01 ~]# systemctl restart rsyslog
[root@m01 ~]# systemctl restart crond
9. Kernel upgrade (bring the Linux kernel above 4.4)
Docker is demanding on the kernel; 4.4 or newer is strongly recommended
【repository used for the kernel】
1》# Fetch the packages (either approach works)
✨✨
[root@m01 ~]# wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.136-1.el7.elrepo.x86_64.rpm
[root@m01 ~]# wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-4.4.245-1.el7.elrepo.x86_64.rpm
✨✨
[root@m01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
......
...
2》# Install the kernel
[root@m01 ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirror-hk.koddos.net
* elrepo-kernel: mirror-hk.koddos.net
* epel: mirror.sjtu.edu.cn
* extras: mirrors.aliyun.com
.......
...
3》# List every installed kernel version
[root@m01 ~]# cat /boot/grub2/grub.cfg | grep menuentry
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
menuentry_id_option=""
export menuentry_id_option
menuentry 'CentOS Linux (5.4.137-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-5.4.137-1.el7.elrepo.x86_64-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
menuentry 'CentOS Linux (3.10.0-1160.36.2.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1160.36.2.el7.x86_64-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
menuentry 'CentOS Linux (3.10.0-693.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-693.el7.x86_64-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
menuentry 'CentOS Linux (0-rescue-b9c18819be20424b8f84a2cad6ddf12e) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-b9c18819be20424b8f84a2cad6ddf12e-advanced-507fc260-78cc-4ce0-8310-af00334de578' {
4》# Show the kernel entry currently set to boot
[root@m01 ~]# grub2-editenv list
saved_entry=CentOS Linux (3.10.0-693.el7.x86_64) 7 (Core)
5》# Change the boot kernel: make the machine boot the new kernel by default (entry name matching the 5.4.137 kernel installed above)
grub2-set-default 'CentOS Linux (5.4.137-1.el7.elrepo.x86_64) 7 (Core)'
# Note: after setting the kernel, a reboot is required for it to take effect
6》# Or point the default at entry 0 and regenerate the grub config
[root@nod01 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.137-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.4.137-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1160.36.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1160.36.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-693.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-b9c18819be20424b8f84a2cad6ddf12e
Found initrd image: /boot/initramfs-0-rescue-b9c18819be20424b8f84a2cad6ddf12e.img
done
# Check the current default kernel
[root@m01 ~]# grubby --default-kernel
7》# Reboot, then confirm the running kernel
[root@nod01 ~]# reboot
[root@nod01 ~]# uname -r
5.4.137-1.el7.elrepo.x86_64
10. Install command completion (note: the kubectl lines below only work once kubectl is installed in section IV)
[root@m01 ~]# yum install -y bash-completion
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* epel: mirrors.bfsu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
[root@m01 ~]# source /usr/share/bash-completion/bash_completion
[root@m01 ~]# source <(kubectl completion bash)
[root@m01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
11. Configure persistent log storage (this step can be skipped)
1) # Create the directory that will hold the logs
[root@m01 ~]# mkdir /var/log/journal
2) # Create the directory for the drop-in config
[root@m01 ~]# mkdir /etc/systemd/journald.conf.d
3) # Create the config file
[root@m01 ~]# cat > /etc/systemd/journald.conf.d/99-prophet.conf << EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
4) # Restart systemd-journald to pick up the config
[root@m01 ~]# systemctl restart systemd-journald
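To confirm journald is now persisting under /var/log/journal, one optional check (an addition here; standard journalctl flag) is:
[root@m01 ~]# journalctl --disk-usage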
II. Install IPVS and load its modules (run on all nodes) ✨✨✨
1》# Install IPVS and the conntrack tooling
[root@nod01 ~]# yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirror-hk.koddos.net
* epel: mirror.sjtu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
2》# Load the IPVS kernel modules
[root@nod01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ \$? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
EOF
3》# Make the module script executable and run it
[root@m01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Check the loaded modules
[root@nod01 ~]# lsmod | grep ip_vs
ip_vs_ftp 16384 0
nf_nat 40960 5 ip6table_nat,xt_nat,iptable_nat,xt_MASQUERADE,ip_vs_ftp
ip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 155648 25 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack 147456 6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
libcrc32c 16384 5 nf_conntrack,nf_nat,btrfs,xfs,ip_vs
4》# Set kernel runtime parameters
[root@m01 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
5》# Apply the added kernel parameters immediately
[root@nod01 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_max_syn_backlog = 16384
net.core.somaxconn = 16384
* Applying /etc/sysctl.conf ...
III. Install Docker (run on all nodes)
1》# Remove any previously installed Docker (skip if none was installed)
[root@m01 ~]# sudo yum remove docker docker-common docker-selinux docker-engine
2》# Install Docker's dependencies (skip if already done earlier)
[root@nod01 ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirror-hk.koddos.net
* epel: mirror.sjtu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
··········
......
3》# Add a Docker package repository
(add the Docker repository; here we switch to the domestic Aliyun yum mirror)
[root@nod01 ~]# yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
(or install the Huawei mirror)
[root@nod01 ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
--2021-08-01 18:06:21--  https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
Resolving repo.huaweicloud.com (repo.huaweicloud.com)... 218.92.219.17, 58.222.56.24, 117.91.188.35, ...
Connecting to repo.huaweicloud.com (repo.huaweicloud.com)|218.92.219.17|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1919 (1.9K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/docker-ce.repo'
100%[=====================================================================================================>] 1,919       --.-K/s   in 0s
2021-08-01 18:06:21 (612 MB/s) - '/etc/yum.repos.d/docker-ce.repo' saved [1919/1919]
4》# Install Docker
[root@nod01 ~]# yum install docker-ce -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* epel: mirror.sjtu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
..........
....
5》# Configure a registry mirror (image download accelerator)
[root@m01 ~]# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://hahexyip.mirror.aliyuncs.com"]
}
EOF
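The node-join step later warns that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. An optional variant of daemon.json (a sketch, not what the original run used) silences that warning by also setting the driver; restart Docker afterwards:
[root@m01 ~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://hahexyip.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@m01 ~]# systemctl restart docker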
6》# Start Docker and enable it at boot
[root@m01 ~]# systemctl enable docker && systemctl start docker
7》# Show detailed Docker info (also reveals its running state)
[root@nod01 ~]# docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
scan: Docker Scan (Docker Inc., v0.8.0)
Server:
Containers: 7
Running: 6
........
...
IV. Install kubeadm, kubelet and kubectl (run on all nodes)
1》# Add the Kubernetes package repository
[root@nod01 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2》# Install kubeadm, kubelet and kubectl (versions change quickly, so pin the version number)
🐱🐉 (default install, latest versions): yum install -y kubelet kubeadm kubectl
[root@nod01 ~]# yum install -y kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2 (pin the versions; leaving them off installs the latest)
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* elrepo: mirrors.tuna.tsinghua.edu.cn
* epel: mirror.sjtu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
.........
.....
3》# Enable kubelet and start it at boot
[root@m01 ~]# systemctl enable --now kubelet
V. Deploy the Kubernetes cluster 👨💻👨💻👨💻
1. Initialize the master node (run on the master)
1》# Initialize the master (option 1)
[root@m01 ~]# kubectl version #check the installed version (optional)
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
[root@m01 ~]# kubeadm init \
--apiserver-advertise-address=192.168.15.55 \
--image-repository registry.aliyuncs.com/google_containers/k8sos \
--kubernetes-version v1.21.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
# Optionally append --ignore-preflight-errors=all. Do not put comments after the trailing backslashes; they break the line continuation. Each flag is explained in the reference below.
ps: you can pre-pull the images by hand first: kubeadm config images pull --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
2》# Initialize the master (option 2)
[root@m01 ~]# vi kubeadm.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
# Initialize from the config file
[root@m01 ~]# kubeadm init --config kubeadm.conf --ignore-preflight-errors=all
3》# Check the downloaded images
[root@m01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 08b152afcfae 10 days ago 133MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-proxy v1.21.2 adb2816ea823 2 weeks ago 103MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-apiserver v1.21.2 106ff58d4308 6 weeks ago 126MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-scheduler v1.21.2 f917b8c8f55b 6 weeks ago 50.6MB
registry.cn-hangzhou.aliyuncs.com/k8sos/kube-controller-manager v1.21.2 ae24db9aa2cc 6 weeks ago 120MB
quay.io/coreos/flannel v0.14.0 8522d622299c 2 months ago 67.9MB
registry.cn-hangzhou.aliyuncs.com/k8sos/pause 3.4.1 0f8457a4c2ec 6 months ago 683kB
registry.cn-hangzhou.aliyuncs.com/k8sos/coredns v1.8.0 7916bcd0fd70 9 months ago 42.5MB
registry.cn-hangzhou.aliyuncs.com/k8sos/etcd 3.4.13-0 8855aefc3b26 11 months ago 253MB
--------------------------------------------------------------------------------------------------------------------------------------
# Flag reference:
--apiserver-advertise-address   # the address the cluster advertises
--image-repository              # the default k8s.gcr.io registry is unreachable from China, so point at an Aliyun mirror instead
--kubernetes-version            # the K8s version, matching what was installed
--service-cidr                  # the cluster's internal virtual Service network, the unified access entry for Pods
--pod-network-cidr              # the Pod network; must match the CNI network component's YAML deployed below
----------------------------------------------------------------------------------------------------------------------------------
# Note: if the hardware is below spec, append --ignore-preflight-errors=NumCPU to the command above
ps: if initialization fails, reset kubeadm and start over: kubeadm reset
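A slightly fuller reset sketch (optional; kubeadm reset leaves CNI configs, the kubeconfig and iptables/IPVS state behind, and these cleanup commands are an assumption added here, not part of the original run):
[root@m01 ~]# kubeadm reset -f
[root@m01 ~]# rm -rf /etc/cni/net.d $HOME/.kube/config
[root@m01 ~]# iptables -F && iptables -t nat -F && ipvsadm --clear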
2. Configure the kubernetes user credentials (run on the master)
# Install the cluster's admin kubeconfig
[root@m01 ~]# mkdir -p $HOME/.kube
[root@m01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
ps: as root you can instead use export KUBECONFIG=/etc/kubernetes/admin.conf (session-only; not recommended)
# Check the current nodes
[root@m01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
m01 Ready control-plane,master 10m v1.21.3
3. Install the cluster network plugin (flannel)
Kubernetes relies on a third-party network plugin for its networking:
There are several, commonly flannel, calico and canal (flannel+calico); they all provide the basic network functions and hand each Node its IP network, etc.
1》# Download the manifest (option 1)
[root@m01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@m01 ~]# kubectl apply -f kube-flannel.yml #deploy the cluster network from the local file
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds unchanged
2》# Apply the manifest straight from the URL (option 2)
[root@m01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds unchanged
3》# Check the cluster status
[root@m01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
m01 Ready control-plane,master 10m v1.21.3
【kube-flannel.yml】
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: registry.cn-hangzhou.aliyuncs.com/alvinos/flanned:v0.13.1-rc1
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: registry.cn-hangzhou.aliyuncs.com/alvinos/flanned:v0.13.1-rc1
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
4. Join the Kubernetes nodes (token generated on the master)
(Before joining, mind the details; one wrong step cascades, so watch carefully!) 👨💻👨💻👨💻👨💻👨💻👨💻
1》# Generate the join command (the kubeadm join line that kubeadm init printed) ----> 👀 👀 👀【run on the master】
[root@m01 ~]# kubeadm token create --print-join-command #generate the join command on the master
kubeadm join 192.168.15.55:6443 --token 750r73.ae9c3uhcy4hueyn9 --discovery-token-ca-cert-hash sha256:09ba151096839d7a9b4f363462f8f9d3e12682bca0ee56bcdd1114fabeca0868
ps: copy the generated command and run it on the node machines
Note: a token is valid for 24 hours by default; once it expires it is unusable and a new one must be created, as shown below:
2》# Or simply run the command recorded in the install log (skippable)
# view the log file: cat kubeadm-init.log
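kubeadm init does not write that log file by itself; a common convention (an assumption added here, shown with the option-2 config file) is to capture the output at init time:
[root@m01 ~]# kubeadm init --config kubeadm.conf | tee kubeadm-init.log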
-----------------------------------------------------------------------------------------------------------
# Creating a token:
Option 1: (generate the full join command in one shot, as shown above)
[root@m01 ~]# kubeadm token create --print-join-command
Option 2: (create a bare token)
[root@m01 ~]# kubeadm token create
[root@m01 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
750r73.ae9c3uhcy4hueyn9 18h 2021-08-02T16:11:49+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
sbzppu.xtedbbjwz3qu9agc 21h 2021-08-02T19:07:01+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
x4nurb.h7naw7lb7btzm194 18h 2021-08-02T15:56:02+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
[root@m01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' #derive the CA cert hash used by --discovery-token-ca-cert-hash
09ba151096839d7a9b4f363462f8f9d3e12682bca0ee56bcdd1114fabeca0868
--------------------------------------------------------------------------------------------------------------------------------------------
3》# Join the nodes to the cluster (paste the command generated above) -----> 👀 👀 👀【run on the nodes】
[root@nod01 ~]# kubeadm join 192.168.15.55:6443 --token 750r73.ae9c3uhcy4hueyn9 --discovery-token-ca-cert-hash sha256:09ba151096839d7a9b4f363462f8f9d3e12682bca0ee56bcdd1114fabeca0868
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
.............
......
[root@nod02 ~]# kubeadm join 192.168.15.55:6443 --token 750r73.ae9c3uhcy4hueyn9 --discovery-token-ca-cert-hash sha256:09ba151096839d7a9b4f363462f8f9d3e12682bca0ee56bcdd1114fabeca0868
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
...........
......
----------------------------------------------------------------------------------------------------------------------------------------
################################## Check the cluster state ####################################
1》# Node status (only viewable on the master), option 1:
[root@m01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
m01 Ready control-plane,master 28m v1.21.3
nod01 Ready <none> 9m36s v1.21.3
nod02 Ready <none> 9m33s v1.21.3
2》# System pod status (only viewable on the master), option 2:
[root@m01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-978bbc4b6-6p2zv 1/1 Running 0 12m
coredns-978bbc4b6-qg2g6 1/1 Running 0 12m
etcd-m01 1/1 Running 0 12m
kube-apiserver-m01 1/1 Running 0 12m
kube-controller-manager-m01 1/1 Running 0 12m
kube-flannel-ds-d8zjs 1/1 Running 0 7m49s
kube-proxy-5thp5 1/1 Running 0 12m
kube-scheduler-m01 1/1 Running 0 12m
3》# Verify cluster DNS directly, option 3:
[root@m01 ~]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
5. Troubleshooting (if nothing is wrong, skip this, OK)
1) # Error 1:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
# Analysis: (environment variable)
Cause: kubectl on this machine is not bound to the cluster because the kubeconfig was never set after initialization; pointing an environment variable at admin.conf fixes it
# Fix:
1》Add the environment variable
Option 1: edit the file
[root@m01 ~]# vim /etc/profile #append the new environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf
Option 2: append it directly with a command
[root@m01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
2》Reload the profile
[root@m01 ~]# source /etc/profile
----------------------------------------------------------------------------------------------------------------------------------------
2) # Error 2:
After deploying the master, the component status check reports Unhealthy (check with: kubectl get cs)
[root@m01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
# Analysis: (port problem)
This state usually comes from kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/: both set --port=0 by default, and commenting that line out fixes the health check
# Fix as shown below (after making the edits, restart the service)
[root@m01 ~]# systemctl restart kubelet.service
# Re-check the component status
[root@m01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
1》Edit kube-controller-manager.yaml: comment out the line - --port=0
2》Edit kube-scheduler.yaml: likewise comment out - --port=0
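A minimal sketch of that edit done with sed instead of an editor (assumes the default manifest paths; kubelet notices the changed manifests and restarts the static pods on its own):
[root@m01 ~]# sed -i 's/^\(\s*\)- --port=0/\1#- --port=0/' \
/etc/kubernetes/manifests/kube-controller-manager.yaml \
/etc/kubernetes/manifests/kube-scheduler.yaml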
6. Test the Kubernetes cluster
Verify that Pods run
Verify Pod network connectivity
Verify DNS resolution
# Option 1:
1》# Create an nginx service in the cluster as a test
[root@m01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created #nginx deployment created
2》# Expose the created instance on a port
[root@m01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed #the created nginx is exposed
3》# Check pod and service status
[root@m01 ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-pp4lk 1/1 Running 0 95s
pod/test 1/1 Running 0 5h21m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE #service status
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h42m
service/nginx NodePort 10.101.203.98 <none> 80:30779/TCP 59s
ps: 1) pod: a Pod (like a pod of whales, or a pea pod) is a group of containers sharing one context; a Pod is the "logical host" of the container world
2) svc: short for Service; one svc represents one service, fronting a set of pods behind a stable virtual IP
--------------------------------------------------------------------------------------------------------------------------------------------------------
# Option 2: (the simpler way, as below 😉😉😉)
✨# First pull the image with docker
[root@m01 ~]# docker pull nginx
✨# Then check that the image arrived (latest nginx is fetched when no version is specified)
docker.io/library/nginx:latest
[root@m01 ~]# docker images |grep nginx
nginx latest 08b152afcfae 10 days ago 133MB
✨# Next create the Pods: run an image on the master: --image=nginx, start 2 replicas: --replicas=2, set the port: --port=80
[root@m01 ~]# kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/my-nginx created
(note: on kubectl v1.18+ the --replicas flag was removed from kubectl run, so on this 1.21 cluster create a Deployment instead, e.g. kubectl create deployment my-nginx --image=nginx --replicas=2)
✨# Keep checking until the pods are up
[root@m01 ~]# kubectl get pod
✨# Finally, that's it; just run the test below (☞゚ヮ゚)☞
------------------------------------------------------------------------------------------------------------------------------------------------
4》# Browser test (visit: http://NodeIP:Port)
#local test
[root@m01 ~]# curl http://10.101.203.98:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
#browser test: http://192.168.15.55:30779
7. Using kubectl
【common kubectl commands】
🐱🚀 # The steps above look simple but starting services is the easy part; when a pod dies you need to diagnose what happened, not just restart it, and that is what the commands below are for
🐱🏍 Show a pod's details:
Format: kubectl describe pod [pod-name]
Example:
[root@m01 ~]# kubectl describe pod nginx
Name: nginx-6799fc88d8-pp4lk
Namespace: default
Priority: 0
Node: nod02/192.168.15.57
Start Time: Sun, 01 Aug 2021 21:36:52 +0800
Labels: app=nginx
pod-template-hash=6799fc88d8
Annotations: <none>
Status: Running
IP: 10.244.2.2
IPs:
IP: 10.244.2.2
........
.....
🐱🏍 Enter a pod (the commands are nine-tenths the same as docker's; same medicine, different bottle, so focus on the concepts):
Format:
kubectl exec -it [pod-name] -n default bash (use the full pod name, or you won't get in)
Example:
[root@m01 ~]# kubectl exec -it nginx-6799fc88d8-pp4lk -n default bash #enter the pod, i.e. the container
root@nginx-6799fc88d8-pp4lk:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@nginx-6799fc88d8-pp4lk:/# exit
[root@m01 ~]#
🐱🏍 Delete a pod (exit the pod first, or open a second terminal):
Format:
kubectl delete deployment [deployment-name]
Example:
[root@m01 ~]# kubectl get deployment #look at the deployed services first
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 5h44m
redis 1/1 1 1 75m
[root@m01 ~]# kubectl delete deployment redis #delete one with a command
deployment.apps "redis" deleted
[root@m01 ~]# kubectl get deployment #list again; redis is gone
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 5h56m
ps: kubernetes can leave garbage or zombie pods behind; deleting an rc may not delete its pods, and a manually deleted pod comes right back because deployment.yaml pins the replica count, so delete the associated resources first
(correct order: delete the deployment first, then the pod)
🐱🏍 Deleting pods, same idea as above (use these if the deletion above leaves leftovers):
Format:
kubectl delete rc <name>
kubectl delete rs <name>
🐱🏍 List the cluster's controllers:
[root@m01 ~]# kubectl get rc # or
[root@m01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-6799fc88d8 1 1 1 7h29m
# Note:
1> Replication Controller (RC): another core K8S concept; once an app is hosted on K8S, K8S has to keep it running, and that is the RC's job.
Its main duty is maintaining pod count: an RC manages one or more Pods, and after the RC is created the system keeps the number of Pods at the declared replica count
2> ReplicaSet (RS) is considered the "upgraded" RC; it likewise keeps the number of pods matching its label selector at the desired state
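A quick way to see that reconciliation in action (an illustrative demo; the pod name and the app=nginx label are taken from the earlier describe output):
[root@m01 ~]# kubectl delete pod nginx-6799fc88d8-pp4lk   # kill a managed pod
[root@m01 ~]# kubectl get pod -l app=nginx                # a replacement pod appears almost immediately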
VI. Deploy the Dashboard web UI
Kubernetes Dashboard is the cluster's Web UI; through it users can manage every resource object in the cluster
1》# Download the Dashboard manifest
[root@m01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
--2021-08-01 22:10:54--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7552 (7.4K) [text/plain]
Saving to: 'recommended.yaml'
100%[===================================================================================================>] 7,552       --.-K/s   in 0s
2021-08-01 22:10:55 (33.9 MB/s) - 'recommended.yaml' saved [7552/7552]
[root@m01 ~]# ll |grep recommended.yaml
-rw-r--r-- 1 root root 7552 Aug  1 22:10 recommended.yaml
2》Using the dashboard manifest
Option 1: patch the svc to the NodePort type
[root@m01 ~]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kubernetes-dashboard (change it directly with a command)
(by default the Dashboard is reachable only inside the cluster; switching the Service to NodePort exposes it externally)
Option 2: edit the manifest
[root@m01 ~]# vim recommended.yaml #edit the manifest
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30443 #set the external port
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
...........
......
3》Apply the dashboard manifest to create/update it
[root@m01 ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged
4》# Check the launched pods
[root@m01 ~]# kubectl get pods -n kubernetes-dashboard #all Running
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-5594697f48-ccdf4 1/1 Running 0 103s
kubernetes-dashboard-5c785c8bcf-rzjp9 1/1 Running 0 103s
5》The Dashboard supports two authentication methods, Kubeconfig and Token:
######################################################################################
1) # Token login (recommended; option 1)
[root@m01 ~]# cat > dashboard-adminuser.yaml << EOF
> apiVersion: v1
> kind: ServiceAccount
> metadata:
> name: admin-user
> namespace: kubernetes-dashboard
>
> ---
> apiVersion: rbac.authorization.k8s.io/v1
> kind: ClusterRoleBinding
> metadata:
> name: admin-user
> roleRef:
> apiGroup: rbac.authorization.k8s.io
> kind: ClusterRole
> name: cluster-admin
> subjects:
> - kind: ServiceAccount
> name: admin-user
> namespace: kubernetes-dashboard
> EOF
2) # Create the login user
[root@m01 ~]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user unchanged
clusterrolebinding.rbac.authorization.k8s.io/admin-user unchanged
# Note: the above creates a service account called admin-user in the kubernetes-dashboard namespace and binds the cluster-admin role to it, so admin-user now has administrator rights. kubeadm already created the cluster-admin role when it built the cluster, so we just bind to it
####################################################################################
# Deploy by applying the YAML straight from its URL (option 2)
1) # Apply the manifest
[root@m01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
2) # Expose the dashboard outside the cluster as a NodePort on port 30443 (custom port: 30443)
[root@m01 ~]# kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard \
-p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'
service/kubernetes-dashboard patched (no change)
3) # Check that the service's pods are running
[root@m01 ~]# kubectl -n kubernetes-dashboard get pods
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-5594697f48-ccdf4 1/1 Running 0 84m
kubernetes-dashboard-5c785c8bcf-rzjp9 1/1 Running 0 84m
4) # Check the Service (the exposed service is now the NodePort type)
[root@m01 ~]# kubectl -n kubernetes-dashboard get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.47.198 <none> 8000/TCP 40m
kubernetes-dashboard NodePort 10.106.194.136 <none> 443:30443/TCP 82m
############################################################################################
6》# Log in from a browser: url: https://NodeIP:30443
#local test: (note https, not http; see the error note further down)
[root@m01 ~]# curl -k https://192.168.15.55:30443
#log in to the dashboard
https://192.168.15.55:30443
----------------------------------------------------------------------------------------------------------------------
Reinstalling the Dashboard
(from the directory containing kubernetes-dashboard.yaml)
[root@m01 ~]# kubectl delete -f kubernetes-dashboard.yaml
[root@m01 ~]# kubectl create -f kubernetes-dashboard.yaml
Check the running state of all pods
[root@m01 ~]# kubectl get pod --all-namespaces
Check the port the dashboard is mapped to
[root@m01 ~]# kubectl -n kube-system get service kubernetes-dashboard
----------------------------------------------------------------------------------------------------------------------------------
<<<<<<< If the install is healthy, skip this page ✌ >>>>>>>>>>>
#Get the admin-user account's token
[root@m01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-594mg
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 877f6ca3-6b33-4781-86df-ece578e95f03
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjBIblhSN0ZReklRdE1tckdhQnRiSEZkX3V4S0w4alByYnBmWmUxYUNONFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTU5NG1nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4NzdmNmNhMy02YjMzLTQ3ODEtODZkZi1lY2U1NzhlOTVmMDMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.TwDSy944pskNd1bI5da9IH0EIK3ziPse1QGu5sA3iIJuS2jQiM01L8YFquL7Ro9CqK2VrIFhGcx5m8bWQDcls3_VuBv-BeBwPYmUdYKmB2brT64FixY1ziE8bD2LhYCjAuR0wh0jSsN4hu3lVaS2q_3t3xVAjZmNSQGHxR7TmZWobd1OHqFCtoPX8DQzhnZbxkQ_6kDqXU7Tc8cQ7y63az4h15vESwcd6mx-OJgGC61lo6POTR0z9sy-mRRhii9b2lFwt0-KHORftCQ_KY8oIHboK7DlEJBMyRJ0c7zSZ000CJQQcXCO0UVW8-YFdGJpnvUIfbo7ZmsOYGj0b4_gFg
#Copy the token into the Token box on the login page and you are in
#Create a service account and bind the default cluster-admin cluster role
Create the user
[root@m01 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
Grant the user permissions
[root@m01 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
Get the user's token
[root@m01 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-q2hh5
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 56eb680e-97c6-4684-a90a-5f2a96034cee
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjBIblhSN0ZReklRdE1tckdhQnRiSEZkX3V4S0w4alByYnBmWmUxYUNONFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcTJoaDUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTZlYjY4MGUtOTdjNi00Njg0LWE5MGEtNWYyYTk2MDM0Y2VlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Bc7mRXcRYU-5oSi3VAb0sBUnau2AAe4Gubrke62nAXaTwW9USzdW_q1s-P9wX-zD3OQ797yCfV-trel_E5gBp490syLcGKBNgGAT0RU1iIrTwJr_Hlyq9QKUBBv7Sm6A6Ln6CHpRohrBNZvc1yrobDYvORbJA1rJ8huPdnzuU30yMdlfilyN4YEyDf100MpTso6TR74tH4E-2ELaZEXU1ApISTgHZ5LSti-iUX1mRwgFqCUa_m_Vbrziu30YzpgWLZvfbisOn00fuHqRrub3dmqdBRQSdCywxvwluwliUEZ4fInh2Sp7mTO6M09SXza7fwM4WOKx2UhmQUiKwzIfig
ca.crt: 1066 bytes
namespace: 11 bytes
------------------------------------------------------------------------------------------------------------------------------------
Deleting the dashboard:
1》# Delete the dashboard via its pod
[root@m01 /opt]# kubectl -n kube-system delete $(kubectl -n kube-system get pod -o name | grep dashboard)
pod "kubernetes-dashboard-65ff5d4cc8-4t4cc" deleted
2》# Force-delete the dashboard pod
[root@m01 /opt]# kubectl delete pod kubernetes-dashboard-59f548c4c7-6b9nj -n kube-system --force --grace-period=0
3》# Uninstall kubernetes-dashboard completely
[root@m01 /opt]# kubectl delete deployment kubernetes-dashboard --namespace=kube-system
[root@m01 /opt]# kubectl delete service kubernetes-dashboard --namespace=kube-system
[root@m01 /opt]# kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
[root@m01 /opt]# kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
[root@m01 /opt]# kubectl delete sa kubernetes-dashboard --namespace=kube-system
[root@m01 /opt]# kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
[root@m01 /opt]# kubectl delete secret kubernetes-dashboard-csrf --namespace=kube-system
[root@m01 /opt]# kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
4》# Or put the same deletions in a script (identical to the above)
[root@m01 /opt]# cat > dashboard_delete.sh << EOF
#!/bin/bash
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete service kubernetes-dashboard --namespace=kube-system
kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete sa kubernetes-dashboard --namespace=kube-system
kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
kubectl delete secret kubernetes-dashboard-csrf --namespace=kube-system
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
EOF
------------------------------------------------------------------------------------------------------
(If the browser shows the error below, change the request URL to https://ip:port)
Client sent an HTTP request to an HTTPS server.
1》》Get the token 😎😎😎
2》》Open the dashboard login page
3》》Enter the token
4》》The dashboard after login 🤔🤔🤔
5》》Current node control status
《《《《《 The dashboard manages a great many workloads; if anything is unclear, walk back through the steps above and rebuild 》》》》》 🤞 🤞 🤞
【Chinese-language dashboard】
1》Download the dashboard manifest
[root@m01 /opt]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml #template manifest
2》Change the image address:
[root@m01 /opt]# cat kubernetes-dashboard.yaml
.........
containers:
- name: kubernetes-dashboard
#image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
image: registry.cn-shanghai.aliyuncs.com/hzl-images/kubernetes-dashboard-amd64:v1.6.3
........
.....
3》By default the Dashboard is only reachable inside the cluster, so change the Service to the NodePort type to expose it externally:
[root@m01 /opt]# cat kubernetes-dashboard.yaml
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30003 #change the port
selector:
k8s-app: kubernetes-dashboard
4》Get the token with a command
[root@m01 /opt]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
5》Browser access (a gotcha here: only Firefox opens it directly; 360 (both modes), Chrome and Edge all refuse)
In Firefox, visit:
https://192.168.15.55:30003
Config changes:
1. Adjust the browser's security policy
2. Mark the certificate as system-trusted
-----------------------------------------------------------------------------------------
Errors hit when accessing from the browser:
1》Permission problem: (set up the user with this command)
[root@m01 /opt]# kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/test:anonymous created
2》Get the token
[root@m01 /opt]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
3》Then the browser access brings up the Chinese-language page