Blog.094 K8S Cluster Architecture: kubeadm, Dashboard, and Harbor Private Registry Deployment


Table of Contents

1. Deploying a K8S Cluster with kubeadm
  1.1 Deployment Steps
  1.2 Environment Preparation
  1.3 Deployment Process
2. Deploying the Dashboard
  2.1 Deployment Process
3. Installing the Harbor Private Registry
  3.1 Installation Process
4. Kernel Parameter Tuning

1. Deploying a K8S Cluster with kubeadm
  1.1 Deployment Steps

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes master
  3. Deploy a container network plugin
  4. Deploy the Kubernetes nodes and join them to the cluster
  5. Deploy the Dashboard web UI to view Kubernetes resources visually
  6. Deploy a Harbor private registry to store images


  1.2 Environment Preparation

  • master (2C/4G; at least 2 CPU cores required): 192.168.229.90: docker, kubeadm, kubelet, kubectl, flannel
  • node01 (2C/2G): 192.168.229.80: docker, kubeadm, kubelet, kubectl, flannel
  • node02 (2C/2G): 192.168.229.70: docker, kubeadm, kubelet, kubectl, flannel
  • Harbor node (hub.ly.com): 192.168.229.60: docker, docker-compose, harbor-offline-v1.2.2

 

  1.3 Deployment Process

    (1) On all nodes, disable firewall rules, SELinux, and swap

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

#The swap partition must be turned off
swapoff -a

#Permanently disable the swap entry; in sed, & refers to the matched text
sed -ri 's/.*swap.*/#&/' /etc/fstab

#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
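
    To confirm that the ip_vs modules actually loaded, a quick check:

lsmod | grep ip_vs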

    (2) Set the hostnames (run each command on its corresponding node)

hostnamectl set-hostname master
hostnamectl set-hostname node01
hostnamectl set-hostname node02

    (3) On all nodes, edit the hosts file

vim /etc/hosts
192.168.229.90 master
192.168.229.80 node01
192.168.229.70 node02
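
    The same entries are needed on every machine; a non-interactive alternative to the vim edit (a minimal sketch) is:

cat >> /etc/hosts << EOF
192.168.229.90 master
192.168.229.80 node01
192.168.229.70 node02
EOF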

    (4) Tune kernel parameters

cat > /etc/sysctl.d/kubernetes.conf << EOF
#Enable bridge mode so bridge traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
#Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

 

    (5) Apply the parameters

sysctl --system
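
    A quick check that the new values are active:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward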

    (6) Install Docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
#Use the systemd-managed cgroup driver for resource control; compared with cgroupfs, systemd is simpler, more mature, and more stable at limiting CPU, memory, and other resources.
#Logs are stored in json-file format, capped at 100 MB, under /var/log/containers, which makes them easy for ELK and other logging systems to collect.

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

    (7) Install kubeadm, kubelet, and kubectl on all nodes

  • Define the Kubernetes repo, then install kubeadm, kubelet, and kubectl
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1

  • Enable kubelet to start on boot
systemctl enable kubelet.service

 

    Everything kubeadm installs for K8S runs as Pods, i.e. as containers underneath, so kubelet must be enabled to start on boot.

    (8) Deploy the K8S cluster

  • List the images required for initialization
kubeadm config images list
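
    For reference, with v1.15.1 the list typically looks like this (exact tags may vary with the kubeadm build):

k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1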

  • On the master node, upload the kubeadm-basic.images.tar.gz archive to the /opt directory
cd /opt
tar zxvf kubeadm-basic.images.tar.gz

for i in $(ls /opt/kubeadm-basic.images/*.tar); do docker load -i $i; done

  • Copy the images and the script to the node machines, then run bash /opt/load-images.sh on each node (a sketch of the script follows the commands below)
scp -r kubeadm-basic.images root@node01:/opt
scp -r kubeadm-basic.images root@node02:/opt
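
    The load-images.sh script itself is not shown here; a minimal sketch of what it presumably does (load every image tarball under /opt/kubeadm-basic.images, mirroring the loop run on the master) would be:

#!/bin/bash
#load-images.sh - hypothetical sketch: load every kubeadm image tarball into Docker
for tarball in /opt/kubeadm-basic.images/*.tar; do
  docker load -i "$tarball"
done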

    (9) Initialize with kubeadm

    Method 1:

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.229.90   #the master node's IP address
13   bindPort: 6443
......
34 kubernetesVersion: v1.15.1     #the Kubernetes version
35 networking:
36   dnsDomain: cluster.local
37   podSubnet: "10.244.0.0/16"   #the pod subnet; 10.244.0.0/16 matches flannel's default subnet
38   serviceSubnet: 10.96.0.0/16  #the service subnet
39 scheduler: {}
--- #append the following at the end of the file
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   #change the default service scheduling mode to ipvs


kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
#--experimental-upload-certs automatically distributes certificates when more nodes join later; from K8S v1.16 it was replaced by --upload-certs
#tee kubeadm-init.log saves the output as a log

    TIP: because of network conditions in mainland China, kubeadm init may hang, sometimes for half an hour, and then fail with an error like this:

 

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

 

    This happens because the Docker images come from k8s.gcr.io, and https://k8s.gcr.io/v2/ is unreachable from mainland China without a proxy.
    Solution: use the Aliyun mirror instead:

 

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.15.1

  • View the kubeadm-init log
less kubeadm-init.log

 

  • The Kubernetes configuration directory
ls /etc/kubernetes/

  • The directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki

    Method 2:

  • Initialize directly with command-line flags
kubeadm init \
--apiserver-advertise-address=0.0.0.0 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.15.1 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

 

    A cluster is initialized with the kubeadm init command, either by passing parameters directly or by pointing it at a configuration file.

    Common options:

  • --apiserver-advertise-address: the IP address the apiserver advertises to the other components; normally the master node's address on the cluster-internal network. 0.0.0.0 means all available addresses on the node
  • --apiserver-bind-port: the apiserver's listening port; defaults to 6443
  • --cert-dir: the directory for the SSL certificates; defaults to /etc/kubernetes/pki
  • --control-plane-endpoint: the shared endpoint of the control plane, either a load-balancer IP or a DNS name; required for high-availability clusters
  • --image-repository: the registry to pull images from; defaults to k8s.gcr.io
  • --kubernetes-version: the Kubernetes version
  • --pod-network-cidr: the pod subnet, which must match the network plugin's setting; flannel defaults to 10.244.0.0/16, Calico to 192.168.0.0/16
  • --service-cidr: the service subnet
  • --service-dns-domain: the suffix for service FQDNs; defaults to cluster.local

 

    After initializing with method 2, you must edit the kube-proxy ConfigMap to enable ipvs (see the restart note after the output below).

kubectl edit cm kube-proxy -n kube-system
#change to: mode: ipvs

On success, kubeadm init prints a message like:
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.229.90:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
--discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2
 

    (10) Set up kubectl

    kubectl can perform management operations only after it has been authenticated and authorized by the API server. A kubeadm-deployed cluster generates an admin-privileged kubeconfig file, /etc/kubernetes/admin.conf, which kubectl loads from the default path $HOME/.kube/config.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

 

  • On the node machines, run the kubeadm join command to join the cluster
kubeadm join 192.168.229.90:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
--discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2
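
    If the token has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command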

  • Deploy the flannel network plugin on all nodes

    Method 1:

  • On all nodes, upload the flannel image flannel.tar to /opt; on the master node, also upload the kube-flannel.yml file
cd /opt
docker load < flannel.tar

 

  • On the master node, create the flannel resources
kubectl apply -f kube-flannel.yml

    Method 2:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

 

  • On the master node, check node status (give it a few minutes)
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   71m   v1.15.1
node01   Ready    <none>   99s   v1.15.1
node02   Ready    <none>   96s   v1.15.1

kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-c9w6l          1/1     Running   0          71m
coredns-bccdc95cf-nql5j          1/1     Running   0          71m
etcd-master                      1/1     Running   0          71m
kube-apiserver-master            1/1     Running   0          70m
kube-controller-manager-master   1/1     Running   0          70m
kube-flannel-ds-amd64-kfhwf      1/1     Running   0          2m53s
kube-flannel-ds-amd64-qkdfh      1/1     Running   0          46m
kube-flannel-ds-amd64-vffxv      1/1     Running   0          2m56s
kube-proxy-558p8                 1/1     Running   0          2m53s
kube-proxy-nwd7g                 1/1     Running   0          2m56s
kube-proxy-qpz8t                 1/1     Running   0          71m
kube-scheduler-master            1/1     Running   0          70m

  • Test pod creation
kubectl create deployment nginx --image=nginx

kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-zr2xs   1/1     Running   0          14m   10.244.1.2   node01   <none>           <none>

  • Expose a port to serve traffic
kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        25h
nginx        NodePort    10.96.15.132   <none>        80:32698/TCP   4s

 

  • Test access
curl http://node01:32698  #test with node01's or node02's IP, using the NodePort shown above

  • Scale to 3 replicas
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-9kh4s   1/1     Running   0          66s   10.244.1.3   node01   <none>           <none>
nginx-554b9c67f9-rv77q   1/1     Running   0          66s   10.244.2.2   node02   <none>           <none>
nginx-554b9c67f9-zr2xs   1/1     Running   0          17m   10.244.1.2   node01   <none>           <none>

2. Deploying the Dashboard
  2.1 Deployment Process

    (1) Install the Dashboard on all nodes

    Method 1:

  • On all nodes, upload the dashboard image dashboard.tar to /opt; on the master node, also upload the kubernetes-dashboard.yaml file
cd /opt/
docker load < dashboard.tar

kubectl apply -f kubernetes-dashboard.yaml

    Method 2:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

 

  • Check the status of all containers

 

[root@master opt]# kubectl get pods,svc -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
pod/coredns-5c98db65d4-2txjt                1/1     Running   0          62m     10.244.1.2       node01   <none>           <none>
pod/coredns-5c98db65d4-bgh4j                1/1     Running   0          62m     10.244.1.3       node01   <none>           <none>
pod/etcd-master                             1/1     Running   0          61m     192.168.229.90   master   <none>           <none>
pod/kube-apiserver-master                   1/1     Running   0          61m     192.168.229.90   master   <none>           <none>
pod/kube-controller-manager-master          1/1     Running   0          61m     192.168.229.90   master   <none>           <none>
pod/kube-flannel-ds-amd64-fpglh             1/1     Running   0          36m     192.168.229.70   node02   <none>           <none>
pod/kube-flannel-ds-amd64-nrx8l             1/1     Running   0          36m     192.168.229.90   master   <none>           <none>
pod/kube-flannel-ds-amd64-xt8sx             1/1     Running   0          36m     192.168.229.80   node01   <none>           <none>
pod/kube-proxy-b6c97                        1/1     Running   0          53m     192.168.229.70   node02   <none>           <none>
pod/kube-proxy-pf68q                        1/1     Running   0          62m     192.168.229.90   master   <none>           <none>
pod/kube-proxy-rvnxc                        1/1     Running   0          53m     192.168.229.80   node01   <none>           <none>
pod/kube-scheduler-master                   1/1     Running   0          62m     192.168.229.90   master   <none>           <none>
pod/kubernetes-dashboard-859b87d4f7-flkrm   1/1     Running   0          2m54s   10.244.2.4       node02   <none>           <none>

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   62m     k8s-app=kube-dns
service/kubernetes-dashboard   NodePort    10.96.128.46   <none>        443:30001/TCP            2m54s   k8s-app=kubernetes-dashboard

 

    (2) Access with Firefox or the 360 browser

https://node02:30001/
https://192.168.229.80:30001/    #use node01's or node02's address

 

  • Create a service account and bind it to the default cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

  • Retrieve the login token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-xf4dk
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 736a7c1e-0fa1-430a-9244-71cda7899293

Type: kubernetes.io/service-account-token

Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4teGY0ZGsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzM2YTdjMWUtMGZhMS00MzBhLTkyNDQtNzFjZGE3ODk5MjkzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.uNyAUOqejg7UOVCYkP0evQzG9_h-vAReaDtmYuCPdnvAf150eBsfpRPL1QmsDRsWF0xbI2Yb9m1VajMgKGneHCYFBqD-bsw0ffvbYRwM-roRnLtX-qN1kGMUyMU3iB8y_L6x-ZhiLXwjxUYZzO4WurY-e0h3yI0O2n9qQQmencEoz4snUKK4p_nBIcQrexMzO-aqhuQU_6JJQlN0q5jKHqnB11TfNQX1CNmTqN_dpZy0Wm1JzujVEd-6GQg7xawJkoSZjPYKgmN89z3o2o4cRydshUyLlb6Rmw_FSRvRWiobzL6xhWeGND4i7LgDCAr9YPRJ8LMjJYh_dPbN2Dnpxg
ca.crt: 1025 bytes
namespace: 11 bytes

  • Copy the token and log in to the site directly

3. Installing the Harbor Private Registry
  3.1 Installation Process

    (1) Set the hostname

hostnamectl set-hostname hub.ly.com

    (2) Add the hostname mapping on all nodes

echo '192.168.229.60 hub.ly.com' >> /etc/hosts

    (3) Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
#the configuration below must also be re-applied on the master and node machines, because the Harbor registry address was not set there before
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.ly.com"]
}
EOF

systemctl start docker
systemctl enable docker

    (4) On all node machines, edit the Docker configuration to add the private registry

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.ly.com"]
}
EOF

systemctl daemon-reload
systemctl restart docker

 

    (5) Upload harbor-offline-installer-v1.2.2.tgz and the docker-compose binary to the /opt directory

cd /opt
cp docker-compose /usr/local/bin/
chmod +x /usr/local/bin/docker-compose

tar zxvf harbor-offline-installer-v1.2.2.tgz
cd harbor/
vim harbor.cfg
5  hostname = hub.ly.com
9  ui_url_protocol = https
24 ssl_cert = /data/cert/server.crt
25 ssl_cert_key = /data/cert/server.key
59 harbor_admin_password = Harbor12345

    (6) Generate certificates

mkdir -p /data/cert
cd /data/cert

 

  • Generate the private key
openssl genrsa -des3 -out server.key 2048

    Enter the passphrase twice: 123456

  • Generate the certificate signing request
openssl req -new -key server.key -out server.csr

    Enter the private key passphrase: 123456
    Country Name: CN
    State or Province Name: BJ
    Locality Name: BJ
    Organization Name: LV
    Organizational Unit Name: LV
    Common Name (the domain): hub.ly.com
    Email Address: admin@ly.com
    Press Enter for all remaining prompts

  • Back up the private key
cp server.key server.key.org

  • Strip the passphrase from the private key
openssl rsa -in server.key.org -out server.key  

  • Sign the certificate
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt

chmod +x /data/cert/*

cd /opt/harbor/
./install.sh

    Browse to: https://hub.ly.com
    Username: admin
    Password: Harbor12345
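
    To sanity-check the self-signed certificate (for example, that the subject matches hub.ly.com and the validity window is as expected), it can be inspected with:

openssl x509 -in /data/cert/server.crt -noout -subject -dates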

    (7) Log in to Harbor from one of the nodes

docker login -u admin -p Harbor12345 https://hub.ly.com

    (8) Push an image

docker tag nginx:latest hub.ly.com/library/nginx:v1
docker push hub.ly.com/library/nginx:v1
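
    To verify the push, the image can be pulled back from any machine that trusts the registry (Harbor's library project is public by default):

docker pull hub.ly.com/library/nginx:v1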

    (9) On the master node, delete the nginx resources created earlier, then redeploy using the Harbor image

kubectl delete deployment nginx

kubectl run nginx-deployment --image=hub.ly.com/library/nginx:v1 --port=80 --replicas=3

kubectl expose deployment nginx-deployment --port=30000 --target-port=80
kubectl get svc,pods
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP     10m
service/nginx-deployment   ClusterIP   10.96.222.161   <none>        30000/TCP   3m15s

NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-77bcbfbfdc-bv5bz   1/1     Running   0          16s
pod/nginx-deployment-77bcbfbfdc-fq8wr   1/1     Running   0          16s
pod/nginx-deployment-77bcbfbfdc-xrg45   1/1     Running   0          3m39s


yum install ipvsadm -y
ipvsadm -Ln

curl 10.96.222.161:30000


kubectl edit svc nginx-deployment
25   type: NodePort   #change the service type to NodePort

kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP           29m
service/nginx-deployment   NodePort    10.96.222.161   <none>        30000:32340/TCP   22m

    (10) Access via browser
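
    Using the NodePort shown above (32340), the nginx service should now be reachable from a browser at any node's IP, for example:

http://192.168.229.80:32340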

4. Kernel Parameter Tuning

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0                     #forbid the use of swap, allowing it only when the system runs out of memory (OOM)
vm.overcommit_memory=1              #do not check whether enough physical memory is available
vm.panic_on_oom=0                   #do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963                #maximum number of file handles
fs.nr_open=52706963                 #supported only on kernel 4.4 and later
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
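
    Apply the settings the same way as before:

sysctl --system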
