Kubernetes Environment Setup


Installation Method 1 (official packages)

This method uses the official packages (note that they are not the latest version).

Prepare a CentOS 7.x environment

Check the kernel version:

[root@k8s-master kube-yaml]# uname -r
3.10.0-514.6.1.el7.x86_64
[root@k8s-master kube-yaml]#

 

A kernel of version 3.10 or later is recommended for this installation.
Four servers are used to form the cluster:

[root@k8s-master kube-yaml]# cat /etc/hosts
10.200.102.93 k8s-master
10.200.102.92 k8s-node-1
10.200.102.81 k8s-node-2
10.200.102.82 k8s-node-3
[root@k8s-master kube-yaml]#

Configure the official Kubernetes yum repository:

[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

 

Configure the Aliyun yum repository:

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

 

Refresh the local yum cache:

yum clean all
yum makecache

 

Install the Kubernetes environment (Master)

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

(screenshot: Kubernetes installation in progress)

The installation takes a while because the packages have to be downloaded and installed. If downloads or package updates fail along the way, it is almost always a yum mirror problem; switching to a mirror hosted by a large domestic provider usually fixes it.

(screenshot: Kubernetes installation result)

At this point the download and installation are complete.

Edit the local hosts file to set up name resolution: vim /etc/hosts

[root@k8s-master kube-yaml]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.200.102.93 k8s-master
10.200.102.92 k8s-node-1
10.200.102.81 k8s-node-2
10.200.102.82 k8s-node-3


Edit the Kubernetes common configuration file: vi /etc/kubernetes/config (screenshot: /etc/kubernetes/config)
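
The actual values were only shown in the screenshot. As a hedged sketch, a typical /etc/kubernetes/config for this layout (the master name k8s-master is taken from /etc/hosts; everything else is an assumption) looks roughly like:

# /etc/kubernetes/config -- hypothetical example, adjust to your environment
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
# every component (apiserver, controller-manager, scheduler, kubelet, proxy) reads this master URL
KUBE_MASTER="--master=http://k8s-master:8080"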

CentOS 7.x ships with the firewall enabled by default, so the firewall and SELinux both need to be adjusted.
Disable SELinux: setenforce 0
1. Temporary change (no reboot required):

setenforce 0   # switch SELinux to permissive mode
setenforce 1   # switch SELinux back to enforcing mode

2. Disable the firewall:

systemctl stop firewalld.service
systemctl disable firewalld.service

Edit the etcd configuration file: vi /etc/etcd/etcd.conf (screenshot: /etc/etcd/etcd.conf)
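
The etcd settings were also captured only as a screenshot. A minimal single-node sketch, assuming etcd runs on the master and listens on all interfaces, might be:

# /etc/etcd/etcd.conf -- hypothetical example for a single etcd member on the master
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://k8s-master:2379"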

Edit the API server configuration: vi /etc/kubernetes/apiserver (screenshot: /etc/kubernetes/apiserver)
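
Again the concrete values were in the screenshot. A hedged example of /etc/kubernetes/apiserver, assuming etcd on the master and the service IP range used by the stock CentOS packages, could look like:

# /etc/kubernetes/apiserver -- hypothetical example
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""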

Start the etcd service: systemctl start etcd

Create the network key in etcd and set the network configuration:

etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{\"Network\":\"172.30.0.0/16\",\"SubnetLen\":24,\"Backend\":{\"Type\":\"vxlan\"}}"

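A quick check (not part of the original steps) that the key was written correctly:

# print the flannel network configuration stored in etcd
etcdctl get /kube-centos/network/config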

 

Configure flanneld: vi /etc/sysconfig/flanneld (screenshot: /etc/sysconfig/flanneld)
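
The flanneld values were shown in the screenshot. A hedged sketch that reuses the etcd key created above (variable names as used by the CentOS flannel package of that era; newer packages call them FLANNEL_ETCD_ENDPOINTS and FLANNEL_ETCD_PREFIX):

# /etc/sysconfig/flanneld -- hypothetical example
FLANNEL_ETCD="http://k8s-master:2379"
FLANNEL_ETCD_KEY="/kube-centos/network"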

Start and enable all master services:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done

(screenshots: services starting and running)

This completes the installation and startup of k8s-master.

Install the Kubernetes environment (Minion/Node):

For the kernel version and yum repository configuration, follow the same steps described above.

yum -y install --enablerepo=virt7-docker-common-release kubernetes flannel

(screenshot: Kubernetes installation in progress)

As before, the installation takes a while; download or update failures are usually a yum mirror issue and are fixed by switching to a domestic mirror.

(screenshot: Kubernetes installation result)

At this point the download and installation are complete.

Edit the local hosts file to set up name resolution: vim /etc/hosts

[root@k8s-master kube-yaml]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.200.102.93 k8s-master
10.200.102.92 k8s-node-1
10.200.102.81 k8s-node-2
10.200.102.82 k8s-node-3


 

CentOS 7.x ships with the firewall enabled by default, so the firewall and SELinux both need to be adjusted.
Disable SELinux: setenforce 0
1. Temporary change (no reboot required):

setenforce 0   # switch SELinux to permissive mode
setenforce 1   # switch SELinux back to enforcing mode

2. Disable the firewall:

systemctl stop firewalld.service
systemctl disable firewalld.service

 

Edit the Kubernetes common configuration file: vi /etc/kubernetes/config (screenshot: /etc/kubernetes/config)

Configure the kubelet: vi /etc/kubernetes/kubelet (screenshot: /etc/kubernetes/kubelet)
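
The kubelet settings were in the screenshot. A hedged example for k8s-node-1 (the hostname comes from /etc/hosts; the rest is an assumption) might be:

# /etc/kubernetes/kubelet -- hypothetical example for k8s-node-1
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
KUBELET_ARGS=""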

Configure flanneld: vi /etc/sysconfig/flanneld (screenshot: /etc/sysconfig/flanneld)

Start and enable all node services:

for SERVICES in kube-proxy kubelet flanneld docker; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done

(screenshot: node services running)

Configure kubectl to point at the master:

kubectl config set-cluster default-cluster --server=http://k8s-master:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context

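If everything is wired up correctly, the nodes should now register with the master. A quick check (not shown in the original), run on the master:

# all three nodes should eventually report a Ready status
kubectl get nodes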

 

Installation Method 2 (tar installation)

Server environment:

10.15.206.120 vip
10.15.206.105 master
10.15.206.106 node

10.15.206.107 etcd1 node
10.15.206.108 etcd2 node
10.15.206.109 etcd3

 

Step 1: Configure the flannel network. First register the flannel subnet in etcd:

etcdctl set /coreos.com/network/config '{"network": "172.16.0.0/16"}'

Step 2: Install flannel on all nodes

yum install -y flannel

Step 3: Edit the flannel configuration file /etc/sysconfig/flanneld

FLANNEL_ETCD="http://10.15.206.107:2379,http://10.15.206.108:2379,http://10.15.206.109:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"

Restart flannel:

systemctl start flanneld
systemctl enable flanneld

 

Note that for Docker to use the flannel network, Docker must start after flannel, so Docker needs to be restarted:

systemctl restart docker
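
To confirm Docker actually picked up the flannel network, one extra check (not in the original text) is to compare flannel's subnet file with the docker0 bridge address:

# flanneld writes the allocated subnet here; docker0 should sit inside it after the restart
cat /run/flannel/subnet.env
ip addr show docker0 | grep inet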

 

Step 4: Download URLs

kubernetes-client:
https://storage.googleapis.com/kubernetes-release/release/v1.5.3/kubernetes-client-linux-amd64.tar.gz

kubernetes-server:
https://storage.googleapis.com/kubernetes-release/release/v1.5.3/kubernetes-server-linux-amd64.tar.gz

Step 5: Extract the server package on the master

tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin

Then copy the binaries to /usr/local/bin:

# ls -F appends "*" to executable files; strip the marker and copy each binary
for i in $(ls -F | grep '\*' | awk -F '*' '{print $1}'); do
    cp "$i" /usr/local/bin/
done

 

Step 6: Start the master components

Start kube-apiserver:

kube-apiserver \
  --address=0.0.0.0 \
  --insecure-port=8080 \
  --service-cluster-ip-range='10.15.206.120/24' \
  --log_dir=/usr/local/kubernetes/logs/kube \
  --kubelet_port=10250 \
  --v=0 \
  --logtostderr=false \
  --etcd_servers=http://10.15.206.107:2379,http://10.15.206.108:2379,http://10.15.206.109:2379 \
  --allow_privileged=false >> /usr/local/kubernetes/logs/kube-apiserver.log 2>&1 &

 

Start kube-controller-manager:

kube-controller-manager \
  --v=0 \
  --logtostderr=false \
  --log_dir=/usr/local/kubernetes/logs/kube \
  --master=10.15.206.120:8080 >> /usr/local/kubernetes/logs/kube-controller-manager 2>&1 &

 

Start kube-scheduler:

kube-scheduler \
  --master='10.15.206.120:8080' \
  --v=0 \
  --log_dir=/usr/local/kubernetes/logs/kube >> /usr/local/kubernetes/logs/kube-scheduler.log 2>&1 &

 

Step 7: Verify that the components are healthy

kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

Step 8: Set up the client (node)

tar zxvf kubernetes-client-linux-amd64.tar.gz
cd kubernetes/client/bin
cp * /usr/local/bin/

Step 9: Start the node components

Start the kubelet:

kubelet \
  --logtostderr=false \
  --v=0 \
  --allow-privileged=false \
  --log_dir=/usr/local/kubernetes/logs/kube \
  --address=0.0.0.0 \
  --port=10250 \
  --hostname_override=10.15.206.120 \
  --api_servers=http://10.15.206.120:8080 >> /usr/local/kubernetes/logs/kube-kubelet.log 2>&1 &

 

Start kube-proxy:

kube-proxy \
  --logtostderr=false \
  --v=0 \
  --master=http://10.15.206.120
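
Once the kubelet and kube-proxy are running, the node should register with the master. A quick, hedged check run on the master:

# the node started above (registered via --hostname_override) should appear here
kubectl get nodes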

 

Installation Method 3 (Calico)

Environment:

Server       IP              Hosts
CentOS-7.3   10.200.102.95   k8s-master
CentOS-7.3   10.200.102.94   k8s-node-1
CentOS-7.3   10.200.102.85   k8s-node-2
CentOS-7.3   10.200.102.90   k8s-node-3

Make sure the operating system kernel is version 3.10 or later, and disable the firewall and SELinux:

setenforce 0
systemctl stop firewalld.service
systemctl disable firewalld.service

 

Configure the necessary yum repositories as needed; the repository configuration above can be used as a reference.

Install the etcd environment (a clustered deployment can be used)

Server       IP              Hosts
CentOS-7.3   10.200.102.85   Echo0
CentOS-7.3   10.200.102.86   Echo1
CentOS-7.3   10.200.102.84   Echo2

Install etcd (screenshot: etcd installation)

Configure etcd (screenshots: etcd configuration)
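
Since the etcd configuration only appears in screenshots, here is a hedged sketch of /etc/etcd/etcd.conf for a three-member static cluster built from the hosts above (the member names etcd0/etcd1/etcd2 are assumptions):

# /etc/etcd/etcd.conf on Echo0 (10.200.102.85) -- hypothetical example; repeat with the matching name and IP on Echo1 and Echo2
ETCD_NAME=etcd0
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.200.102.85:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.200.102.85:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.200.102.85:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.200.102.85:2379"
ETCD_INITIAL_CLUSTER="etcd0=http://10.200.102.85:2380,etcd1=http://10.200.102.86:2380,etcd2=http://10.200.102.84:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"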

Start the etcd service (screenshot: starting etcd)
Apply the same configuration on every etcd node.

Install the Kubernetes master environment

yum install kubernetes-master docker -y

Configure the corresponding Kubernetes settings (screenshots: /etc/kubernetes/config and /etc/kubernetes/apiserver)

Configure Docker (screenshot: Docker configuration)

Check the cluster status (screenshot: cluster status)

Install the Kubernetes node environment

yum install kubernetes-node docker -y

Configure Kubernetes and Docker on each node (a hedged kubelet example follows this list):
Configure the kubelet (screenshot: calico-k8s-kubectl)
Configure the proxy (screenshot: calico-k8s-proxy)
Configure the common config (screenshot: calico-k8s-config)
Configure the Docker image pull settings (screenshot: calico-docker2)
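
The node-side files were only shown as screenshots. As a hedged sketch, /etc/kubernetes/kubelet on a Calico (CNI) node, assuming the master at 10.200.102.95 and the CNI directory used later in this guide, might look like:

# /etc/kubernetes/kubelet -- hypothetical example for k8s-node-1 using the CNI network plugin
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
KUBELET_API_SERVER="--api-servers=http://10.200.102.95:8080"
# --network-plugin-dir points at the CNI network definition added in the Calico section below
KUBELET_ARGS="--network-plugin=cni --network-plugin-dir=/etc/cni/net.d"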

Check the cluster status (screenshot: cluster status)

Install the kube-dns environment (master node)

Download the kube-dns binary:
# wget https://dl.k8s.io/v1.5.2/kubernetes-server-linux-amd64.tar.gz
# tar -xf kubernetes-server-linux-amd64.tar.gz
# mv /opt/docker/src/kubernetes/server/bin/kube-dns /usr/bin/

Create the kube-dns configuration file:
# vi /etc/kubernetes/kube-dns
KUBE_DNS_PORT="--dns-port=53"
KUBE_DNS_DOMAIN="--domain=cluster.local"
KUBE_DNS_MASTER="--kube-master-url=http://10.200.102.95:8080"
KUBE_DNS_ARGS=""
Create the kube-dns.service unit file:
# cat /usr/lib/systemd/system/kube-dns.service
[Unit]
Description=Kubernetes Kube-dns Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
WorkingDirectory=/var/lib/kube-dns
EnvironmentFile=-/etc/kubernetes/kube-dns
ExecStart=/usr/bin/kube-dns \
        $KUBE_DNS_PORT \
        $KUBE_DNS_DOMAIN \
        $KUBE_DNS_MASTER \
        $KUBE_DNS_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start it on the master:
# mkdir -p /var/lib/kube-dns
# systemctl enable kube-dns
# systemctl restart kube-dns

On the master, update /etc/resolv.conf:
# cat /etc/resolv.conf
# Generated by NetworkManager
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.200.102.95
nameserver 223.5.5.5
nameserver 202.96.128.86

On each node, edit the kubelet configuration so pods use kube-dns (screenshot: kubelet configuration).
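
What that edit typically adds (a hedged sketch; the DNS address follows the resolv.conf entry above) is the cluster DNS flags in KUBELET_ARGS:

# /etc/kubernetes/kubelet -- hypothetical addition so pods resolve names through kube-dns
KUBELET_ARGS="--cluster-dns=10.200.102.95 --cluster-domain=cluster.local"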

Verify that kube-dns is working (screenshot: kube-dns verification).
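
One way to verify it (a hedged example; the original relied on a screenshot) is to resolve a service name from a throwaway pod:

# run a temporary busybox pod and look up the kubernetes service through kube-dns
kubectl run -it --rm dns-test --image=busybox -- nslookup kubernetes.default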

Install the Calico environment

Configure the Docker environment on each node (screenshots: Docker configuration on node 1, node 2, and node 3).

After configuring, remember to reload systemd and restart Docker:

# systemctl daemon-reload
# systemctl restart docker

 

Download the Calico components

Master node:
# wget https://github.com/projectcalico/calicoctl/releases/download/v1.1.0/calicoctl
# chmod +x calicoctl
# mv calicoctl /usr/bin/
# docker pull docker.io/calico/node:v1.1.0
# docker tag docker.io/calico/node:v1.1.0 quay.io/calico/node:v1.1.0
# wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.6.0/calico
# wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.6.0/calico-ipam
# chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam

Worker nodes:
# docker pull docker.io/calico/node:v1.1.0
# docker tag docker.io/calico/node:v1.1.0 quay.io/calico/node:v1.1.0
# wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.6.0/calico
# wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v1.6.0/calico-ipam
# chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam

 

Configuration files (all nodes) (screenshots: Calico configuration files); a hedged sketch follows.
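
The contents of those files are not recoverable from the text, so here is a hedged sketch of the CNI network definition that the calico and calico-ipam binaries read; the file name 10-calico.conf and the etcd endpoint are assumptions:

# hypothetical example -- write the CNI config that points Calico at etcd (run on every node)
mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-calico.conf <<EOF
{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_endpoints": "http://10.200.102.85:2379",
    "log_level": "info",
    "ipam": {
        "type": "calico-ipam"
    }
}
EOF

The calico-node service used below is normally a small systemd unit that runs the quay.io/calico/node:v1.1.0 container with the same etcd endpoints; its exact contents were only in the screenshots.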

On the master, download the policy controller manifest:

wget http://docs.projectcalico.org/v1.5/getting-started/kubernetes/installation/policy-controller.yaml

Edit the etcd address in policy-controller.yaml (screenshot: policy-controller.yaml).

Start it:

# service etcd restart
# kubectl create -f policy-controller.yaml

Start the calico-node service on each node (ETCD_AUTHORITY can list multiple endpoints for a clustered etcd):
# systemctl enable calico-node
# systemctl start calico-node
# export ETCD_AUTHORITY=10.200.102.85:2379

Verify that Calico started correctly:
calicoctl node status
calicoctl get nodes --out=wide

 

(screenshots: calicoctl node status and node list)

Add an IP pool (screenshots: pool creation and pool listing); a hedged example follows.
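
With calicoctl v1.1 the pool shown in the screenshots can be created and listed roughly as follows (a hedged sketch; the CIDR 192.168.0.0/16 is an assumption, not taken from the original):

# create an IP pool for pod addresses (hypothetical CIDR) and list the configured pools
cat <<EOF | calicoctl create -f -
apiVersion: v1
kind: ipPool
metadata:
  cidr: 192.168.0.0/16
spec:
  nat-outgoing: true
EOF
calicoctl get ipPool -o wide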

At this point the Calico-based Kubernetes setup is complete.

