Kubernetes (K8s) (2): Building a Kubernetes Container Cluster Management System with yum


  K8s Chinese community documentation: http://docs.kubernetes.org.cn/

  A note up front: this walkthrough installs Kubernetes with yum, which gives version 1.5.2 (quite old). I will look into upgrading and into other installation methods later.

(1).Configuration overview

Node role        IP address         CPU      Memory
master, etcd     192.168.128.110    4 cores  2 GB
node1/minion1    192.168.128.111    4 cores  2 GB
node2/minion2    192.168.128.112    4 cores  2 GB

(2).Building the Kubernetes container cluster management system

 1) Install common packages on all three hosts

  bash-completion enables <Tab> completion, vim is an improved version of the vi editor, and wget is used to download the Aliyun yum repository file.

# yum -y install bash-completion vim wget

 2) Configure the Aliyun yum repository on all three hosts

# mkdir /etc/yum.repos.d/backup
# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup/
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# yum clean all && yum list
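
  As a quick optional check (not in the original article), confirm the new repository is active before continuing:

# yum repolist enabled   #the base/extras/updates repos from the Aliyun mirror should appear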

 3) Edit the hosts file

[root@kube-master ~]# vim /etc/hosts
192.168.128.110 kube-master
192.168.128.110 etcd
192.168.128.111 kube-node1
192.168.128.112 kube-node2
[root@kube-master ~]# scp /etc/hosts 192.168.128.111:/etc/
[root@kube-master ~]# scp /etc/hosts 192.168.128.112:/etc/
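
  The prompts above assume each machine already has its hostname set. If not, a minimal sketch (not shown in the original article) would be:

# on 192.168.128.110:
hostnamectl set-hostname kube-master
# on 192.168.128.111:
hostnamectl set-hostname kube-node1
# on 192.168.128.112:
hostnamectl set-hostname kube-node2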

 4) Install and configure the components on the master/etcd node

  First, install the K8s components on the master node

[root@kube-master ~]# yum install -y kubernetes etcd flannel ntp

  Disable the firewall, or open the ports the K8s components need: etcd listens on port 2379 by default, and the API Server's insecure HTTP port is 8080.

[root@kube-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

#How to open the ports instead of disabling the firewall
[root@kube-master ~]# firewall-cmd --permanent --zone=public --add-port={2379,8080}/tcp
success
[root@kube-master ~]# firewall-cmd --reload
success
[root@kube-master ~]# firewall-cmd --zone=public --list-ports 
2379/tcp 8080/tcp

  Edit the etcd configuration file, then start etcd and check its status

[root@kube-master ~]# vim /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  #line 3: directory where etcd stores its data
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.128.110:2379"  #line 6: addresses etcd listens on for client traffic, default port 2379; setting 0.0.0.0 listens on all interfaces
ETCD_NAME="default"  #line 9: node name; with a single-node etcd this can stay commented out, it defaults to "default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.128.110:2379"  #line 21: client URL advertised to the cluster
[root@kube-master ~]# systemctl start etcd && systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@kube-master ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since 二 2020-01-14 14:02:31 CST; 11min ago
 Main PID: 12573 (etcd)
   CGroup: /system.slice/etcd.service
           └─12573 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://localhost:2379,http://192.168.128.110:2379

1月 14 14:02:31 kube-master etcd[12573]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
1月 14 14:02:31 kube-master etcd[12573]: setting up the initial cluster version to 3.3
1月 14 14:02:31 kube-master etcd[12573]: set the initial cluster version to 3.3
1月 14 14:02:31 kube-master etcd[12573]: enabled capabilities for version 3.3
1月 14 14:02:31 kube-master etcd[12573]: published {Name:default ClientURLs:[http://192.168.128.110:2379]} to cluster cdf818194e3a8c32
1月 14 14:02:31 kube-master etcd[12573]: ready to serve client requests
1月 14 14:02:31 kube-master etcd[12573]: ready to serve client requests
1月 14 14:02:31 kube-master etcd[12573]: serving insecure client requests on 192.168.128.110:2379, this is strongly discouraged!
1月 14 14:02:31 kube-master etcd[12573]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
1月 14 14:02:31 kube-master systemd[1]: Started Etcd Server.
[root@kube-master ~]# yum -y install net-tools  #netstat comes from the net-tools package
[root@kube-master ~]# netstat -antup | grep 2379
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      12573/etcd          
tcp        0      0 192.168.128.110:2379    0.0.0.0:*               LISTEN      12573/etcd          
tcp        0      0 192.168.128.110:2379    192.168.128.110:49240   ESTABLISHED 12573/etcd          
tcp        0      0 127.0.0.1:2379          127.0.0.1:35638         ESTABLISHED 12573/etcd          
tcp        0      0 192.168.128.110:49240   192.168.128.110:2379    ESTABLISHED 12573/etcd          
tcp        0      0 127.0.0.1:35638         127.0.0.1:2379          ESTABLISHED 12573/etcd 
[root@kube-master ~]# etcdctl cluster-health  #check the health of the etcd cluster
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.128.110:2379
cluster is healthy
[root@kube-master ~]# etcdctl member list  #list the etcd cluster members
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.128.110:2379 isLeader=true
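
  As an extra sanity check (not in the original walkthrough), you can write and read back a throwaway key to confirm etcd accepts writes; the key name below is arbitrary:

[root@kube-master ~]# etcdctl set /test/ping pong   #write a throwaway key
pong
[root@kube-master ~]# etcdctl get /test/ping        #read it back
pong
[root@kube-master ~]# etcdctl rm /test/ping         #clean up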

  Edit the common Kubernetes configuration file on the master

[root@kube-master ~]# vim /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"  #line 13: send error logs to standard error; if false, they are written to a log file instead
KUBE_LOG_LEVEL="--v=0"  #line 16: log level
KUBE_ALLOW_PRIV="--allow-privileged=false"  #line 19: whether privileged containers are allowed; false means they are not
KUBE_MASTER="--master=http://192.168.128.110:8080"  #line 22: address of the API Server

  Edit the API Server configuration file

[root@kube-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"  #line 8: bind the insecure API on all addresses
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.128.110:2379"  #line 17: etcd endpoint
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"  #line 20: cluster IP range allocated to Services
#The default admission-control plug-ins are: NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"  #line 23: which admission plug-ins to enable; AlwaysAdmit imposes no restrictions
KUBE_API_ARGS=""  #line 26: extra API Server arguments

  The Controller Manager configuration file needs no changes; just take a look at it

[root@kube-master ~]# vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@kube-master ~]# rpm -qf /etc/kubernetes/controller-manager
kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64

  Edit the Scheduler configuration file

[root@kube-master ~]# vim /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"  #listen on all addresses

  Edit the flanneld (overlay network) configuration file

[root@kube-master ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.128.110:2379"  #line 4: etcd endpoint
FLANNEL_ETCD_PREFIX="/k8s/network"  #line 8: etcd prefix under which flannel stores its configuration
FLANNEL_OPTIONS="--iface=ens33"  #line 11: physical NIC used for inter-host traffic
[root@kube-master ~]# mkdir -p /k8s/network
[root@kube-master ~]# etcdctl set /k8s/network/config '{"Network":"10.255.0.0/16"}'  #store the Pod network range in etcd
{"Network":"10.255.0.0/16"}
[root@kube-master ~]# etcdctl get /k8s/network/config  #flanneld on each node will read this key and carve a docker0 subnet out of this range
{"Network":"10.255.0.0/16"}
[root@kube-master ~]# systemctl start flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@kube-master ~]# ip a sh  #success: the flannel0 interface is up
......
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none 
    inet 10.255.29.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::723e:875f:5995:76d0/64 scope link flags 800 
       valid_lft forever preferred_lft forever
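
  Two more hedged checks: flanneld records its lease in a subnet file (the path below is flanneld's default), and the lease is also visible in etcd under the configured prefix (the subnets subkey is flannel's standard layout; the nodes will show up there too once they join in step 5):

[root@kube-master ~]# cat /run/flannel/subnet.env        #FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ
[root@kube-master ~]# etcdctl ls /k8s/network/subnets    #one entry per host running flanneld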

  Restart the API Server, Controller Manager and Scheduler on the master and enable them at boot. Note: you can restart each component right after editing its configuration, or, as here, restart them all at once after everything has been changed.

[root@kube-master ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler
[root@kube-master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
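
  To confirm the control plane is answering, two quick hedged checks (the curl endpoint is the insecure port configured above):

[root@kube-master ~]# curl -s http://192.168.128.110:8080/version   #prints the API Server version as JSON
[root@kube-master ~]# kubectl get componentstatuses                 #health of scheduler, controller-manager and etcd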

 5) Install and configure the components on node1/minion1

  First, install the K8s components on node1/minion1

[root@kube-node1 ~]# yum -y install kubernetes flannel ntp

  Disable the firewall, or open the ports the K8s components need: kube-proxy uses port 10249 by default, and kubelet uses 10248, 10250 and 10255 (an alternative to disabling firewalld is sketched after the commands below).

[root@kube-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
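
  If you would rather keep firewalld running on the nodes, the ports listed above could be opened instead, mirroring the master example (a sketch only; this walkthrough disables the firewall):

[root@kube-node1 ~]# firewall-cmd --permanent --zone=public --add-port={10248,10249,10250,10255}/tcp
[root@kube-node1 ~]# firewall-cmd --reload
[root@kube-node1 ~]# firewall-cmd --zone=public --list-ports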

  Configure the network with flanneld (overlay network), then restart flanneld and enable it at boot

[root@kube-node1 ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.128.110:2379"  #etcd endpoint
FLANNEL_ETCD_PREFIX="/k8s/network"  #etcd prefix used by flannel
FLANNEL_OPTIONS="--iface=ens33"  #physical NIC used for inter-host traffic
[root@kube-node1 ~]# systemctl restart flanneld && systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

  Edit the common K8s configuration

[root@kube-node1 ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.128.110:8080"  #line 22: point to the master node

  Take a look at the kube-proxy configuration; the defaults are fine here, so nothing needs to change.

[root@kube-node1 ~]# grep -v '^#' /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""  #no extra arguments; the defaults are used

  Edit the kubelet configuration file. Note: KUBELET_POD_INFRA_CONTAINER points at the Pod infrastructure ("pause") image. Every Pod starts a container from this image when it is created; if the image is not present locally, kubelet pulls it from the registry over the network (a pre-pull sketch follows the service restart below).

[root@kube-node1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"  #line 5: listen on all IPs, because kubectl needs to reach kubelet remotely to inspect Pods and the containers inside them
KUBELET_HOSTNAME="--hostname-override=kube-node1"  #line 11: override the node name with this host's hostname
KUBELET_API_SERVER="--api-servers=http://192.168.128.110:8080"  #line 14: point to the API Server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"  #line 17: Pod infrastructure image location
KUBELET_ARGS=""  #line 20: extra kubelet arguments

  Restart kube-proxy, kubelet and docker (none of them were actually running yet) and enable them at boot

[root@kube-node1 ~]# systemctl restart kube-proxy kubelet docker
[root@kube-node1 ~]# systemctl enable kube-proxy kubelet docker
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
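
  Now that docker is running, the Pod infrastructure image mentioned earlier can optionally be pre-pulled so the first Pod does not have to wait for the download. This is an extra step, not part of the original walkthrough; the Red Hat registry can be slow or unreachable from some networks:

[root@kube-node1 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@kube-node1 ~]# docker images | grep pod-infrastructure   #confirm the image is now local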

  Check the results

[root@kube-node1 ~]# ip a sh
......
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none 
    inet 10.255.42.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::a721:7a65:54ea:c2b/64 scope link flags 800 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:5c:5b:ae:8c brd ff:ff:ff:ff:ff:ff
    inet 10.255.42.1/24 scope global docker0
       valid_lft forever preferred_lft forever
[root@kube-node1 ~]# yum -y install net-tools
[root@kube-node1 ~]# netstat -antup | grep proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      1473/kube-proxy     
tcp        0      0 192.168.128.111:55342   192.168.128.110:8080    ESTABLISHED 1473/kube-proxy     
tcp        0      0 192.168.128.111:55344   192.168.128.110:8080    ESTABLISHED 1473/kube-proxy
[root@kube-node1 ~]# netstat -antup | grep kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1698/kubelet        
tcp        0      0 192.168.128.111:55350   192.168.128.110:8080    ESTABLISHED 1698/kubelet        
tcp        0      0 192.168.128.111:55351   192.168.128.110:8080    ESTABLISHED 1698/kubelet        
tcp        0      0 192.168.128.111:55354   192.168.128.110:8080    ESTABLISHED 1698/kubelet        
tcp        0      0 192.168.128.111:55356   192.168.128.110:8080    ESTABLISHED 1698/kubelet        
tcp6       0      0 :::4194                 :::*                    LISTEN      1698/kubelet        
tcp6       0      0 :::10250                :::*                    LISTEN      1698/kubelet        
tcp6       0      0 :::10255                :::*                    LISTEN 
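
  A rough check that the overlay network actually spans hosts (addresses taken from the ip output above; yours will differ):

[root@kube-node1 ~]# ping -c 3 10.255.29.0   #master's flannel0 address
[root@kube-master ~]# ping -c 3 10.255.42.1  #node1's docker0 address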

 6) Install and configure the components on node2/minion2

  Repeat the steps from the node1/minion1 section, using kube-node2 as the kubelet hostname override.

 7) Test: check the state of the whole cluster from the master node

[root@kube-master ~]# kubectl get nodes
NAME         STATUS    AGE
kube-node1   Ready     1h
kube-node2   Ready     2m
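
  As an optional smoke test (not part of the original article), schedule a couple of containers and watch where they land. In Kubernetes 1.5 kubectl run creates a Deployment; the first run can take a while because the infra and nginx images have to be pulled:

[root@kube-master ~]# kubectl run test-nginx --image=nginx --replicas=2
[root@kube-master ~]# kubectl get pods -o wide               #Pods should be spread across kube-node1/kube-node2
[root@kube-master ~]# kubectl delete deployment test-nginx   #clean up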

  At this point the K8s container cluster management system is up. There is no web UI yet, though; the cluster can only be managed with the kubectl command.

(3).Extra: flannel

  flannel is the default network plug-in used with K8s in this setup. It is a networking tool designed by the CoreOS team as an L3 overlay: all Pods on a given host share one subnet, while Pods on different hosts live in different subnets.

  flannel runs an agent named flanneld on every host. The agent pre-allocates a subnet for its host and hands out IP addresses to the Pods on it. Flannel stores the network configuration, the allocated subnets, and the hosts' public IPs in Kubernetes or etcd, and forwards packets through a backend such as VXLAN, UDP, or host-gw.
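
  The backend type is part of the same network configuration key stored in etcd; when it is omitted, as in this walkthrough, flannel falls back to the UDP backend. A hedged example of selecting VXLAN instead (syntax taken from flannel's documented configuration format; flanneld on every host would then need a restart):

[root@kube-master ~]# etcdctl set /k8s/network/config '{"Network":"10.255.0.0/16","Backend":{"Type":"vxlan"}}'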

  Overview of how flannel operates inside Kubernetes: (the diagram from the original article is not reproduced here)

Reference: https://www.kubernetes.org.cn/4105.html

