Blog.094 K8S Cluster Architecture: kubeadm, Dashboard, and Harbor Private Registry Deployment


Contents

1. Deploying the K8S Cluster with Kubeadm
  1.1 Deployment Steps
  1.2 Environment Preparation
  1.3 Deployment Procedure
2. Dashboard Deployment
  2.1 Deployment Procedure
3. Installing the Harbor Private Registry
  3.1 Installation Procedure
4. Kernel Parameter Optimization

1. Deploying the K8S Cluster with Kubeadm
  1.1 Deployment Steps

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes Master
  3. Deploy the container network plugin
  4. Deploy the Kubernetes Nodes and join them to the cluster
  5. Deploy the Dashboard web UI to visualize Kubernetes resources
  6. Deploy a Harbor private registry to store images


  1.2 Environment Preparation

  • master (2C/4G; at least 2 CPU cores required): 192.168.229.90: docker, kubeadm, kubelet, kubectl, flannel
  • node01 (2C/2G): 192.168.229.80: docker, kubeadm, kubelet, kubectl, flannel
  • node02 (2C/2G): 192.168.229.70: docker, kubeadm, kubelet, kubectl, flannel
  • Harbor node (hub.ly.com): 192.168.229.60: docker, docker-compose, harbor-offline-v1.2.2

 

  1.3 Deployment Procedure

    (1) On all nodes: flush firewall rules, disable SELinux, and turn off swap

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# Swap must be turned off
swapoff -a

# Permanently disable the swap entry in fstab; in sed, & stands for the matched text
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Load the ip_vs kernel modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs | grep -o "^[^.]*"); do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i; done
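
    To confirm the ip_vs modules actually loaded, a quick check (a minimal sketch; the module names come from the loop above):

lsmod | grep -E 'ip_vs|nf_conntrack'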

    (2) Set the hostnames

hostnamectl set-hostname master    # on the master node
hostnamectl set-hostname node01    # on node01
hostnamectl set-hostname node02    # on node02

    (3) On all nodes, edit the hosts file

vim /etc/hosts
192.168.229.90 master
192.168.229.80 node01
192.168.229.70 node02
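
    To confirm the mappings resolve on each node, a quick check with the standard getent tool (sketch):

getent hosts master node01 node02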

    (4) Tune kernel parameters

cat > /etc/sysctl.d/kubernetes.conf << EOF
# Enable bridge mode so bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1

# Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

 

    (5) Apply the parameters

sysctl --system

    (6) Install Docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
# Use the systemd-managed cgroup driver: compared with cgroupfs, systemd's CPU and memory limits are simpler and more mature and stable.
# Store logs in json-file format capped at 100M under /var/log/containers, which makes collection by ELK and similar log systems easier.

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

    (7) Install kubeadm, kubelet, and kubectl on all nodes

  • Define the Kubernetes repo, then install kubeadm, kubelet, and kubectl
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1

  • Enable kubelet to start on boot
systemctl enable kubelet.service

 

    After a kubeadm installation, all K8S components run as Pods, i.e., as containers underneath, so kubelet must be enabled to start on boot. A quick check that this took effect is sketched below.
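
systemctl is-enabled kubelet.service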

    (8) Deploy the K8S cluster

  • List the images required for initialization
kubeadm config images list

  • On the master node, upload the kubeadm-basic.images.tar.gz archive to /opt
cd /opt
tar zxvf kubeadm-basic.images.tar.gz

for i in $(ls /opt/kubeadm-basic.images/*.tar); do docker load -i $i; done

  • Copy the images and the script to the node nodes, then run bash /opt/load-images.sh on each node (a sketch of the script follows below)
scp -r kubeadm-basic.images root@node01:/opt
scp -r kubeadm-basic.images root@node02:/opt
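
    The load-images.sh script itself does not appear in this post; a minimal sketch consistent with the load loop used on the master might look like this (the /opt/kubeadm-basic.images path is carried over from the step above):

#!/bin/bash
# load-images.sh (hypothetical helper): load every image tarball under /opt/kubeadm-basic.images
for i in /opt/kubeadm-basic.images/*.tar; do
    echo "loading $i"
    docker load -i "$i"
done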

    (9) Initialize with kubeadm

    Method 1:

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.229.90   # the master node's IP address
13   bindPort: 6443
......
34 kubernetesVersion: v1.15.1     # the Kubernetes version
35 networking:
36   dnsDomain: cluster.local
37   podSubnet: "10.244.0.0/16"   # pod subnet; 10.244.0.0/16 matches flannel's default network
38   serviceSubnet: 10.96.0.0/16  # service subnet
39 scheduler: {}
---    # append the following at the end of the file
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs    # switch the default service proxy mode to ipvs


kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
# --experimental-upload-certs distributes the certificate files automatically when nodes join later; from k8s v1.16 it is replaced by --upload-certs
# tee kubeadm-init.log captures the output as a log

    TIPS: Because of network restrictions in mainland China, kubeadm init may hang for a long time (often half an hour) and then fail with an error like:

 

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

 

    This happens because the docker images must be pulled from k8s.gcr.io, and https://k8s.gcr.io/v2/ is unreachable from mainland China without a proxy.
    Workaround: use the Aliyun mirror instead:

 

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.15.1

  • Review the kubeadm-init log
less kubeadm-init.log

 

  • Kubernetes configuration file directory
ls /etc/kubernetes/

  • Directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki

    Method 2:

  • Initialize with command-line flags
kubeadm init \
--apiserver-advertise-address=0.0.0.0 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.15.1 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

 

    A cluster is initialized with the kubeadm init command, either by passing flags directly or by supplying a configuration file.

    Common options:

  • --apiserver-advertise-address: the IP address the apiserver advertises to the other components; normally the Master node's IP on the cluster-internal network. 0.0.0.0 means all available addresses on the node
  • --apiserver-bind-port: the apiserver listen port; defaults to 6443
  • --cert-dir: directory for the SSL certificates; defaults to /etc/kubernetes/pki
  • --control-plane-endpoint: shared endpoint for the control plane, either a load-balancer IP or a DNS name; needed for a highly available cluster
  • --image-repository: the registry to pull images from; defaults to k8s.gcr.io
  • --kubernetes-version: the Kubernetes version to install
  • --pod-network-cidr: the pod network range; must match the network plugin's setting. Flannel defaults to 10.244.0.0/16, Calico to 192.168.0.0/16
  • --service-cidr: the service network range
  • --service-dns-domain: the suffix for service FQDNs; defaults to cluster.local

 

    With method 2, after initialization you must edit the kube-proxy ConfigMap to enable ipvs (see the restart sketch after the sample output below).

kubectl edit cm kube-proxy -n kube-system
# change to: mode: "ipvs"


Sample output from a successful kubeadm init:
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.229.90:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
    --discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2
 

    (10) Set up kubectl

    kubectl must be authenticated and authorized by the API server before it can perform management operations. A kubeadm-deployed cluster generates an admin-privileged kubeconfig, /etc/kubernetes/admin.conf, which kubectl loads from the default path $HOME/.kube/config.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

 

  • On the node nodes, run the kubeadm join command to join the cluster
kubeadm join 192.168.229.90:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
--discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2
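
    Join tokens expire after 24 hours by default; if the token above no longer works, a fresh join command can be printed on the master with the standard kubeadm subcommand:

kubeadm token create --print-join-command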

  • Deploy the flannel network plugin on all nodes

    Method 1:

  • Upload the flannel image flannel.tar to /opt on all nodes, and the kube-flannel.yml file to the master node
cd /opt
docker load < flannel.tar

 

  • Create the flannel resources on the master node
kubectl apply -f kube-flannel.yml

    Method 2:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

 

  • Check node status on the master (this takes a few minutes)
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 71m v1.15.1
node01 Ready <none> 99s v1.15.1
node02 Ready <none> 96s v1.15.1

kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-c9w6l 1/1 Running 0 71m
coredns-bccdc95cf-nql5j 1/1 Running 0 71m
etcd-master 1/1 Running 0 71m
kube-apiserver-master 1/1 Running 0 70m
kube-controller-manager-master 1/1 Running 0 70m
kube-flannel-ds-amd64-kfhwf 1/1 Running 0 2m53s
kube-flannel-ds-amd64-qkdfh 1/1 Running 0 46m
kube-flannel-ds-amd64-vffxv 1/1 Running 0 2m56s
kube-proxy-558p8 1/1 Running 0 2m53s
kube-proxy-nwd7g 1/1 Running 0 2m56s
kube-proxy-qpz8t 1/1 Running 0 71m
kube-scheduler-master 1/1 Running 0 70m

  • Test pod creation
kubectl create deployment nginx --image=nginx

kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-554b9c67f9-zr2xs 1/1 Running 0 14m 10.244.1.2 node01 <none> <none>

  • Expose a port to serve traffic
kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx NodePort 10.96.15.132 <none> 80:32698/TCP 4s

 

  • Test access
curl http://node01:32698    # test with node01's or node02's IP, using the NodePort shown by kubectl get svc

  • Scale to 3 replicas
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-554b9c67f9-9kh4s 1/1 Running 0 66s 10.244.1.3 node01 <none> <none>
nginx-554b9c67f9-rv77q 1/1 Running 0 66s 10.244.2.2 node02 <none> <none>
nginx-554b9c67f9-zr2xs 1/1 Running 0 17m 10.244.1.2 node01 <none> <none>

2. Dashboard Deployment
  2.1 Deployment Procedure

    (1) Install the dashboard on all nodes

    Method 1:

  • Upload the dashboard image dashboard.tar to /opt on all nodes, and the kubernetes-dashboard.yaml file to the master node
cd /opt/
docker load < dashboard.tar

kubectl apply -f kubernetes-dashboard.yaml

    Method 2:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

 

  • Check the status of all containers

 

[root@master opt]# kubectl get pods,svc -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
pod/coredns-5c98db65d4-2txjt                1/1     Running   0          62m     10.244.1.2       node01   <none>           <none>
pod/coredns-5c98db65d4-bgh4j                1/1     Running   0          62m     10.244.1.3       node01   <none>           <none>
pod/etcd-master                             1/1     Running   0          61m     192.168.229.90   master   <none>           <none>
pod/kube-apiserver-master                   1/1     Running   0          61m     192.168.229.90   master   <none>           <none>
pod/kube-controller-manager-master          1/1     Running   0          61m     192.168.229.90   master   <none>           <none>
pod/kube-flannel-ds-amd64-fpglh             1/1     Running   0          36m     192.168.229.70   node02   <none>           <none>
pod/kube-flannel-ds-amd64-nrx8l             1/1     Running   0          36m     192.168.229.90   master   <none>           <none>
pod/kube-flannel-ds-amd64-xt8sx             1/1     Running   0          36m     192.168.229.80   node01   <none>           <none>
pod/kube-proxy-b6c97                        1/1     Running   0          53m     192.168.229.70   node02   <none>           <none>
pod/kube-proxy-pf68q                        1/1     Running   0          62m     192.168.229.90   master   <none>           <none>
pod/kube-proxy-rvnxc                        1/1     Running   0          53m     192.168.229.80   node01   <none>           <none>
pod/kube-scheduler-master                   1/1     Running   0          62m     192.168.229.90   master   <none>           <none>
pod/kubernetes-dashboard-859b87d4f7-flkrm   1/1     Running   0          2m54s   10.244.2.4       node02   <none>           <none>

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   62m     k8s-app=kube-dns
service/kubernetes-dashboard   NodePort    10.96.128.46   <none>        443:30001/TCP            2m54s   k8s-app=kubernetes-dashboard

 

    (2) Access with Firefox or the 360 browser

https://node02:30001/
https://192.168.229.80:30001/    # use node01's or node02's IP

 

  • Create a service account and bind it to the default cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

  • Retrieve the login token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-xf4dk
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 736a7c1e-0fa1-430a-9244-71cda7899293

Type: kubernetes.io/service-account-token

Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4teGY0ZGsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzM2YTdjMWUtMGZhMS00MzBhLTkyNDQtNzFjZGE3ODk5MjkzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.uNyAUOqejg7UOVCYkP0evQzG9_h-vAReaDtmYuCPdnvAf150eBsfpRPL1QmsDRsWF0xbI2Yb9m1VajMgKGneHCYFBqD-bsw0ffvbYRwM-roRnLtX-qN1kGMUyMU3iB8y_L6x-ZhiLXwjxUYZzO4WurY-e0h3yI0O2n9qQQmencEoz4snUKK4p_nBIcQrexMzO-aqhuQU_6JJQlN0q5jKHqnB11TfNQX1CNmTqN_dpZy0Wm1JzujVEd-6GQg7xawJkoSZjPYKgmN89z3o2o4cRydshUyLlb6Rmw_FSRvRWiobzL6xhWeGND4i7LgDCAr9YPRJ8LMjJYh_dPbN2Dnpxg
ca.crt: 1025 bytes
namespace: 11 bytes

  • Paste the token to log in to the dashboard

3. Installing the Harbor Private Registry
  3.1 Installation Procedure

    (1) Set the hostname

hostnamectl set-hostname hub.ly.com

    (2) Add the hostname mapping on all nodes

echo '192.168.229.60 hub.ly.com' >> /etc/hosts

    (3) Install docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
# The following must also be re-run on the master and node nodes, since their earlier config did not include the Harbor registry address
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.ly.com"]
}
EOF

systemctl start docker
systemctl enable docker

    (4) On all node nodes, update the docker config file to add the private registry

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.ly.com"]
}
EOF

systemctl daemon-reload
systemctl restart docker
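
    To confirm docker picked up the registry entry after the restart, a quick check against the docker info output (sketch):

docker info | grep -A 1 "Insecure Registries"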

 

    (5) Upload harbor-offline-installer-v1.2.2.tgz and the docker-compose binary to /opt

cd /opt
cp docker-compose /usr/local/bin/
chmod +x /usr/local/bin/docker-compose

tar zxvf harbor-offline-installer-v1.2.2.tgz
cd harbor/
vim harbor.cfg
# edit these lines (the numbers are positions in harbor.cfg):
5  hostname = hub.ly.com
9  ui_url_protocol = https
24 ssl_cert = /data/cert/server.crt
25 ssl_cert_key = /data/cert/server.key
59 harbor_admin_password = Harbor12345

    (6) Generate certificates

mkdir -p /data/cert
cd /data/cert

 

  • Generate the private key
openssl genrsa -des3 -out server.key 2048

    Enter the passphrase twice: 123456

  • Generate the certificate signing request
openssl req -new -key server.key -out server.csr

    Enter the private-key passphrase: 123456
    Country Name: CN
    State or Province Name: BJ
    Locality Name: BJ
    Organization Name: LV
    Organizational Unit Name: LV
    Common Name (the domain): hub.ly.com
    Email Address: admin@ly.com
    Press Enter for all remaining prompts

  • Back up the private key
cp server.key server.key.org

  • Strip the passphrase from the private key
openssl rsa -in server.key.org -out server.key

  • Sign the certificate
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt

chmod +x /data/cert/*

cd /opt/harbor/
./install.sh

    Browse to: https://hub.ly.com
    Username: admin
    Password: Harbor12345
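
    Before relying on it, the generated certificate can be sanity-checked with standard openssl options (sketch):

openssl x509 -in /data/cert/server.crt -noout -subject -dates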

    (7) Log in to harbor from one of the node nodes

docker login -u admin -p Harbor12345 https://hub.ly.com

    (8) Push an image

docker tag nginx:latest hub.ly.com/library/nginx:v1
docker push hub.ly.com/library/nginx:v1
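
    To confirm the push succeeded, the image can be pulled back from another logged-in node (sketch):

docker pull hub.ly.com/library/nginx:v1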

    (9) On the master node, delete the nginx resources created earlier and redeploy from Harbor

kubectl delete deployment nginx

kubectl run nginx-deployment --image=hub.ly.com/library/nginx:v1 --port=80 --replicas=3

kubectl expose deployment nginx-deployment --port=30000 --target-port=80
kubectl get svc,pods
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10m
service/nginx-deployment ClusterIP 10.96.222.161 <none> 30000/TCP 3m15s

NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-77bcbfbfdc-bv5bz 1/1 Running 0 16s
pod/nginx-deployment-77bcbfbfdc-fq8wr 1/1 Running 0 16s
pod/nginx-deployment-77bcbfbfdc-xrg45 1/1 Running 0 3m39s


yum install ipvsadm -y
ipvsadm -Ln

curl 10.96.222.161:30000


kubectl edit svc nginx-deployment
25   type: NodePort    # change the service type to NodePort (line 25 in the editor)

kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
service/nginx-deployment NodePort 10.96.222.161 <none> 30000:32340/TCP 22m

    (10) Access in a browser via any node IP on the NodePort shown above (32340)

4. Kernel Parameter Optimization

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
# tcp_tw_recycle was removed in Linux 4.12; drop this line on newer kernels
net.ipv4.tcp_tw_recycle=0
# Forbid the use of swap; allow it only when the system is out of memory (OOM)
vm.swappiness=0
# Do not check whether physical memory is sufficient before allocating
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
# Maximum number of file handles
fs.file-max=52706963
# Supported only on kernel 4.4 and above
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
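
    As in section 1, apply the new settings with sysctl and spot-check a value afterwards (sketch):

sysctl --system
sysctl vm.swappiness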
