1 Node Management
1.1 Node Isolation (Method 1)
During hardware upgrades, hardware maintenance, and similar situations, certain Nodes sometimes need to be isolated so that they leave the scheduling scope of the Kubernetes cluster. Kubernetes provides a mechanism that can both bring a Node into the scheduling scope and take it out again.
Create the configuration file unschedule_node.yaml and set unschedulable to true in the spec section:
[root@k8smaster01 study]# vi unschedule_node.yaml
apiVersion: v1
kind: Node
metadata:
  name: k8snode03
  labels:
    kubernetes.io/hostname: k8snode03
spec:
  unschedulable: true
[root@k8smaster01 study]# kubectl replace -f unschedule_node.yaml
[root@k8smaster01 study]# kubectl get nodes #check the node that was taken out of scheduling
After the isolation, Pods created later will no longer be scheduled onto this Node.
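To confirm the flag took effect, the node can be checked directly; a minimal sketch, assuming the k8snode03 name used above:
[root@k8smaster01 study]# kubectl get node k8snode03 -o jsonpath='{.spec.unschedulable}' #prints true when the node is out of the scheduling scope
[root@k8smaster01 study]# kubectl get node k8snode03 #STATUS also shows SchedulingDisabled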
Tip: the Node can also be isolated with the following command:
kubectl patch node k8snode03 -p '{"spec":{"unschedulable":true}}'
Note: taking a Node out of the scheduling scope does not automatically stop the Pods already running on it; the administrator has to stop the Pods on that Node manually.
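If the existing Pods should be evicted in the same step, kubectl drain combines the cordon with eviction; a sketch using the flag names from kubectl v1.15 (the --delete-local-data flag was renamed in later releases):
[root@k8smaster01 study]# kubectl drain k8snode03 --ignore-daemonsets --delete-local-data #cordon the node and evict its Pods, skipping DaemonSet Pods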
1.2 Node Recovery (Method 1)
[root@k8smaster01 study]# vi schedule_node.yaml
apiVersion: v1
kind: Node
metadata:
  name: k8snode03
  labels:
    kubernetes.io/hostname: k8snode03
spec:
  unschedulable: false
[root@k8smaster01 study]# kubectl replace -f schedule_node.yaml
[root@k8smaster01 study]# kubectl get nodes #verify the node is back in the scheduling scope
Tip: the Node can also be restored with the following command:
kubectl patch node k8snode03 -p '{"spec":{"unschedulable":false}}'
1.3 Node Isolation (Method 2)
[root@k8smaster01 study]# kubectl cordon k8snode01
[root@k8smaster01 study]# kubectl get nodes | grep -E 'NAME|node01'
NAME STATUS ROLES AGE VERSION
k8snode01 Ready,SchedulingDisabled <none> 47h v1.15.6
1.4 Node Recovery (Method 2)
[root@k8smaster01 study]# kubectl uncordon k8snode01
[root@k8smaster01 study]# kubectl get nodes | grep -E 'NAME|node01'
NAME STATUS ROLES AGE VERSION
k8snode01 Ready <none> 47h v1.15.6
1.5 Node Scale-Out
In production it is often necessary to scale out the Nodes so that the application layer can be scaled horizontally. To add a new Node to a Kubernetes cluster, install Docker, kubelet, and kube-proxy on that Node, configure the kubelet and kube-proxy startup parameters so that the Master URL points to the Master of the current Kubernetes cluster, and then start these services. Through kubelet's default auto-registration mechanism, the new Node joins the existing Kubernetes cluster automatically.
Once the Kubernetes Master has accepted the new Node's registration, it brings the Node into the scheduling scope of the cluster, and containers created afterwards can be scheduled onto the new Node. This is how Kubernetes scales out the Nodes of a cluster.
Example 1: adding a Node to a Kubernetes cluster deployed with kubeadm.
[root@k8smaster01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8snode04
[root@localhost study]# vi k8sinit.sh #node initialization script
# Modify Author: xhy
# Modify Date: 2019-06-23 22:19
# Version:
#***************************************************************#
# Initialize the machine. This needs to be executed on every machine.

# Add host domain name.
cat >> /etc/hosts << EOF
172.24.8.71 k8smaster01
172.24.8.72 k8smaster02
172.24.8.73 k8smaster03
172.24.8.74 k8snode01
172.24.8.75 k8snode02
172.24.8.76 k8snode03
172.24.8.41 k8snode04
EOF

# Add docker user
useradd -m docker

# Disable the SELinux.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Turn off and disable the firewalld.
systemctl stop firewalld
systemctl disable firewalld

# Modify related kernel parameters & Disable the swap.
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf >&/dev/null
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
modprobe br_netfilter

# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

# Install rpm
yum install -y conntrack git ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget gcc gcc-c++ make openssl-devel

# Install Docker Compose
sudo curl -L "https://get.daocloud.io/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m`" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Update kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --disablerepo="*" --enablerepo="elrepo-kernel" install -y kernel-ml-5.4.1-1.el7.elrepo
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
yum update -y

# Reboot the machine.
reboot
[root@localhost study]# vi dockerinit.sh #Docker installation and initialization script
# Modify Author: xhy
# Modify Date: 2019-06-23 22:19
# Version:
#***************************************************************#
# Install Docker CE from the Aliyun mirror.
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-18.09.9-3.el7.x86_64

# Configure the Docker daemon (registry mirror, systemd cgroup driver, overlay2 storage).
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://dbzucv6w.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
systemctl restart docker
systemctl enable docker
[root@localhost study]# vi kubeinit.sh #kubeadm and kubelet installation script
# Modify Author: xhy
# Modify Date: 2019-06-23 22:19
# Version:
#***************************************************************#
# Add the Kubernetes yum repository (Aliyun mirror).
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet and kubectl pinned to 1.15.6.
yum install -y kubeadm-1.15.6-0.x86_64 kubelet-1.15.6-0.x86_64 kubectl-1.15.6-0.x86_64 --disableexcludes=kubernetes
systemctl enable kubelet
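The three scripts above have to run on the new node itself; a minimal sketch, assuming they were prepared on the master and k8snode04 is reachable over SSH:
[root@k8smaster01 study]# scp k8sinit.sh dockerinit.sh kubeinit.sh root@k8snode04:/root/
[root@k8smaster01 study]# ssh root@k8snode04 'bash /root/k8sinit.sh' #reboots the node when it finishes
[root@k8smaster01 study]# ssh root@k8snode04 'bash /root/dockerinit.sh && bash /root/kubeinit.sh' #run after the node is back up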
[root@k8smaster01 study]# kubeadm token create #create a token
dzqqnn.ar4w7xcz9byenf7i
[root@k8smaster01 study]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' #compute the discovery-token CA cert hash
d8cf7c0384fff8779227f1a913d981d02b9f8f79a70365ba76a909e7160899a9
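Instead of creating the token and computing the hash separately, kubeadm can print the complete join command in one step; a sketch:
[root@k8smaster01 study]# kubeadm token create --print-join-command #outputs a ready-to-run "kubeadm join ..." line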
[root@k8snode04 study]# kubeadm join --token dzqqnn.ar4w7xcz9byenf7i 172.24.8.100:16443 --discovery-token-ca-cert-hash sha256:d8cf7c0384fff8779227f1a913d981d02b9f8f79a70365ba76a909e7160899a9
[root@k8smaster01 study]# kubectl get nodes
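A freshly joined worker shows <none> in the ROLES column; if a role label is wanted, it can be added manually. A sketch, where the worker role name is only a convention, not a requirement:
[root@k8smaster01 study]# kubectl label node k8snode04 node-role.kubernetes.io/worker= #ROLES now shows worker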
2 Updating Labels
2.1 Managing Resource Labels
[root@k8smaster01 study]# kubectl label pod kubernetes-dashboard-66cb8889-6ssqh role=mydashboard -n kube-system #add a label
[root@k8smaster01 study]# kubectl get pods -L role -n kube-system #view the label
[root@k8smaster01 study]# kubectl label pod kubernetes-dashboard-66cb8889-6ssqh role=yourdashboard --overwrite -n kube-system #modify the label
[root@k8smaster01 study]# kubectl get pods -L role -n kube-system #view the label
[root@k8smaster01 study]# kubectl label pod kubernetes-dashboard-66cb8889-6ssqh role- -n kube-system #delete the label
[root@k8smaster01 study]# kubectl get pods -L role -n kube-system #view the label
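Labels are mostly consumed through selectors; a short sketch of filtering by the label set above:
[root@k8smaster01 study]# kubectl get pods -l role=yourdashboard -n kube-system #only Pods whose role label equals yourdashboard
[root@k8smaster01 study]# kubectl get pods -l role -n kube-system #any Pod that carries a role label at all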
3 Namespace Management
3.1 Creating Namespaces
Kubernetes uses namespaces together with Context settings to separate different teams, so that they can share the services of the same Kubernetes cluster while not interfering with each other.
[root@k8smaster01 study]# vi namespace-dev.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
[root@k8smaster01 study]# vi namespace-pro.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: pro
[root@k8smaster01 study]# kubectl create -f namespace-dev.yaml
[root@k8smaster01 study]# kubectl create -f namespace-pro.yaml #create the two namespaces above
[root@k8smaster01 study]# kubectl get namespaces #list the namespaces
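The same namespaces can also be created without YAML files; an equivalent sketch:
[root@k8smaster01 study]# kubectl create namespace dev
[root@k8smaster01 study]# kubectl create namespace pro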
4 Context Management
4.1 Defining Contexts
Define a Context, i.e. a runtime environment, for each of the two teams; each runtime environment is bound to a specific namespace.
[root@k8smaster01 study]# kubectl config set-cluster kubernetes --server=https://172.24.8.100:16443 #define the cluster entry referenced by the contexts below
[root@k8smaster01 study]# kubectl config set-context ctx-dev --namespace=dev --cluster=kubernetes --user=devuser
[root@k8smaster01 study]# kubectl config set-context ctx-prod --namespace=pro --cluster=kubernetes --user=produser
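The devuser and produser referenced here are only names inside the kubeconfig; for the contexts to be usable, credentials have to be defined for them. A hedged sketch with hypothetical certificate paths (substitute whatever authentication the cluster actually issues):
[root@k8smaster01 study]# kubectl config set-credentials devuser --client-certificate=/etc/kubernetes/pki/devuser.crt --client-key=/etc/kubernetes/pki/devuser.key #hypothetical paths
[root@k8smaster01 study]# kubectl config set-credentials produser --client-certificate=/etc/kubernetes/pki/produser.crt --client-key=/etc/kubernetes/pki/produser.key #hypothetical paths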
[root@k8smaster01 study]# kubectl config view #view the defined Contexts
4.2 Setting the Team Environment
Use the kubectl config use-context <context_name> command to set the current runtime environment.
[root@k8smaster01 ~]# kubectl config use-context ctx-dev #set the current environment to ctx-dev
Note: with the setting above, the current runtime environment is the one required by the development team, and all subsequent operations will be performed in the dev namespace.
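To confirm which context is active at any time, a sketch:
[root@k8smaster01 ~]# kubectl config current-context #prints the name of the context in use, e.g. ctx-dev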
4.3 Creating Resources
[root@k8smaster01 ~]# vi redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: kubeguide/guestbook-redis-slave
        ports:
        - containerPort: 6379
[root@k8smaster01 ~]# kubectl create -f redis-slave-controller.yaml #create the application under the ctx-dev context
[root@k8smaster01 ~]# kubectl get pods #view the Pods
[root@k8smaster01 ~]# kubectl config use-context ctx-prod #switch to the ctx-prod context
[root@k8smaster01 ~]# kubectl get pods #view again (the dev team's Pods are not listed here)
Conclusion: a separate runtime environment has been defined for each of the two teams. Once the current runtime environment is set, the teams' work does not interfere with each other, and both can work in the same Kubernetes cluster at the same time.