Introduction:
Docker: an open-source application container engine that packages an application into a lightweight, portable, self-contained container.
Kubernetes: a Docker container cluster management system open-sourced by Google; it provides resource scheduling, deployment, service discovery, and scaling for containerized applications.
Etcd: a highly available key-value store developed and maintained by CoreOS, used mainly for shared configuration and service discovery.
Flannel: an overlay network tool designed by the CoreOS team for Kubernetes; its purpose is to give every host in the cluster its own complete subnet.
Goals:
- build an etcd cluster;
- install and configure docker (briefly);
- install and configure flannel (briefly);
- deploy the k8s cluster.
Preparation:
Host | Services | Role
---|---|---
172.20.30.19 (centos7.1) | etcd docker flannel kube-apiserver kube-controller-manager kube-scheduler | k8s-master
172.20.30.21 (centos7.1) | etcd docker flannel kubelet kube-proxy | minion
172.20.30.18 (centos7.1) | etcd docker flannel kubelet kube-proxy | minion
172.20.30.20 (centos7.1) | etcd docker flannel kubelet kube-proxy | minion
Installation:
Download the rpm packages for etcd, docker, and flannel, for example:
etcd:

```
etcd-2.2.5-2.el7.0.1.x86_64.rpm
```

flannel:

```
flannel-0.5.3-9.el7.x86_64.rpm
```

docker:

```
device-mapper-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-event-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-libs-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-persistent-data-0.5.5-1.el7.x86_64.rpm
docker-1.10.3-44.el7.centos.x86_64.rpm
docker-common-1.10.3-44.el7.centos.x86_64.rpm
docker-forward-journald-1.10.3-44.el7.centos.x86_64.rpm
docker-selinux-1.10.3-44.el7.centos.x86_64.rpm
libseccomp-2.2.1-1.el7.x86_64.rpm
lvm2-2.02.130-5.el7_2.5.x86_64.rpm
lvm2-libs-2.02.130-5.el7_2.5.x86_64.rpm
oci-register-machine-1.10.3-44.el7.centos.x86_64.rpm
oci-systemd-hook-1.10.3-44.el7.centos.x86_64.rpm
yajl-2.0.4-4.el7.x86_64.rpm
```
Installing etcd and flannel is straightforward, since they have no dependencies. The docker packages depend on each other, so the dependencies must be installed first for the installation to succeed. This is not the focus of this article, so it is not covered in detail; a short sketch of one way to do it follows below.
etcd, docker, and flannel must be installed on all four machines.
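If you would rather not work out the docker dependency order by hand, yum can resolve it from the local RPMs. A minimal sketch, assuming all of the packages listed above sit in the current directory:

```
# etcd and flannel have no extra dependencies and install directly
rpm -ivh etcd-2.2.5-2.el7.0.1.x86_64.rpm
rpm -ivh flannel-0.5.3-9.el7.x86_64.rpm

# let yum sort out the ordering among the docker RPMs and their dependencies
yum localinstall -y *.rpm
```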
Download the Kubernetes 1.3 binary release package.
After the download finishes, run the following (using 172.20.30.19 as the example):
```
# tar zxvf kubernetes1.3.tar.gz                  # unpack the release tarball
# cd kubernetes/server
# tar zxvf kubernetes-server-linux-amd64.tar.gz  # unpack the server binaries
# cd kubernetes/server/bin/
# cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/bin   # copy the programs the master needs into /usr/bin (setting PATH works just as well)
# scp kubelet kube-proxy root@172.20.30.21:~     # send the programs the minions need to each minion
# scp kubelet kube-proxy root@172.20.30.18:~
# scp kubelet kube-proxy root@172.20.30.20:~
```
Configuration and deployment:
1. etcd configuration and deployment
Modify /etc/etcd/etcd.conf on every machine; the file shown below is the one used on 172.20.30.21 (etcd-2):

```
# [member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/data/etcd/"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"     # default value commented out, replaced below
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:7001"
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"   # default value commented out, replaced below
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.30.21:7001"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
# the line below declares an etcd cluster made up of the 4 machines
ETCD_INITIAL_CLUSTER="etcd-1=http://172.20.30.19:7001,etcd-2=http://172.20.30.21:7001,etcd-3=http://172.20.30.18:7001,etcd-4=http://172.20.30.20:7001"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.20.30.21:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
```
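The file above is the one used on 172.20.30.21 (etcd-2). The other three nodes take the same file with only the node-specific values changed; as a sketch, on the master 172.20.30.19 (etcd-1 in the ETCD_INITIAL_CLUSTER list) the changed lines would be:

```
# node-specific values in /etc/etcd/etcd.conf on 172.20.30.19
ETCD_NAME="etcd-1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.30.19:7001"
ETCD_ADVERTISE_CLIENT_URLS="http://172.20.30.19:4001"
# ETCD_INITIAL_CLUSTER and the 0.0.0.0 listen URLs stay the same on every node
```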
Then modify the etcd systemd unit on all four machines: /usr/lib/systemd/system/etcd.service. After the change the file reads:
```
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Run on every machine:
```
# systemctl enable etcd.service
# systemctl start etcd.service
```
Then pick one machine and run on it:
```
# etcdctl set /cluster "example-k8s"
```
Pick a different machine and run:
```
# etcdctl get /cluster
```
If it returns "example-k8s", the etcd cluster has been deployed successfully.
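Besides the set/get round trip, etcdctl can report the state of the cluster directly:

```
# all four members should be listed and report as healthy
etcdctl member list
etcdctl cluster-health
```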
2. docker configuration and deployment
On every machine, add the local registry settings to /etc/sysconfig/docker:

```
ADD_REGISTRY="--add-registry docker.midea.registry.hub:10050"
DOCKER_OPTS="--insecure-registry docker.midea.registry.hub:10050"
INSECURE_REGISTRY="--insecure-registry docker.midea.registry.hub:10050"
```
These entries hold the address and service port of the local registry and are picked up by the docker service startup options below. For how to build the registry itself, see the previous article.
Next, modify the docker systemd unit, /usr/lib/systemd/system/docker.service. Note a pitfall on CentOS: when docker starts, systemd may fail to obtain docker's pid, which can later keep the flanneld service from starting; write the ExecStart line as shown below so that systemd can track docker's pid.

```
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
ExecStart=/bin/sh -c 'exec -a docker /usr/bin/docker-current daemon \
          --exec-opt native.cgroupdriver=systemd \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          2>&1 | /usr/bin/forward-journald -tag docker'
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
MountFlags=slave
Restart=on-abnormal
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target
```
Then, on each machine, run:
```
# systemctl enable docker.service
# systemctl start docker
```
Checking that docker is running is simple: run `docker ps` and confirm that the usual container columns are listed (no containers are running yet, so only the header line appears):
```
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```
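`docker info` gives a fuller view of the daemon (storage driver, cgroup driver, and the registries picked up from /etc/sysconfig/docker). Pulling through the local registry is another quick check; `busybox` below is only a placeholder for an image that actually exists in your registry:

```
# inspect daemon-level settings, including the configured registries
docker info

# placeholder image name; substitute one that exists in the local registry
docker pull docker.midea.registry.hub:10050/busybox
```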
3. flannel configuration and deployment
On every machine, edit /etc/sysconfig/flanneld:

```
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://172.20.30.21:4001"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/k8s/network"    # this is a directory inside etcd

# Any additional options that you want to pass
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://172.20.30.21:4001"
```
然后執行:
```
# etcdctl mkdir /k8s/network
# etcdctl set /k8s/network/config '{"Network":"172.100.0.0/16"}'
```
The second command declares that all container instances started by docker should receive addresses from the 172.100.0.0/16 range.
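Reading the key back confirms that flanneld will find the expected network configuration:

```
# etcdctl get /k8s/network/config
{"Network":"172.100.0.0/16"}
```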
```
# systemctl enable flanneld.service
# systemctl stop docker            # stop docker for now; starting flanneld will pull docker back up
# systemctl start flanneld.service
```
Once these commands complete, if nothing went wrong, docker is pulled back up automatically. Check the interfaces:
```
# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.100.28.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:86ff:fe81:6892  prefixlen 64  scopeid 0x20<link>
        ether 02:42:86:81:68:92  txqueuelen 0  (Ethernet)
        RX packets 29  bytes 2013 (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25  bytes 1994 (1.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.20.30.21  netmask 255.255.255.0  broadcast 172.20.30.255
        inet6 fe80::f816:3eff:fe43:21ac  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:43:21:ac  txqueuelen 1000  (Ethernet)
        RX packets 13790001  bytes 3573763877 (3.3 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13919888  bytes 1320674626 (1.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.100.28.0  netmask 255.255.0.0  destination 172.100.28.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 120 (120.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 65311  bytes 5768287 (5.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 65311  bytes 5768287 (5.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
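Note that docker0 now has an address inside the 172.100.0.0/16 range written to etcd. flanneld also records the subnet it leased for this host in a small environment file (the path below is the usual default for this flannel package), and since every host now routes into the overlay, the docker0 address of one node should answer pings from another node:

```
# subnet leased to this host by flannel (assumed default path)
cat /run/flannel/subnet.env
# FLANNEL_SUBNET=172.100.28.1/24
# FLANNEL_MTU=1472
# ...

# run from a different node: the docker0 address above should be reachable through the overlay
ping -c 3 172.100.28.1
```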
With that, the base environment is in place. The next step is to deploy and start the kubernetes services.
4. kubernetes deployment
On the master (172.20.30.19), create a startup script, start_k8s_master.sh:

```
#! /bin/sh

# firstly, start etcd
systemctl restart etcd

# secondly, start flanneld
systemctl restart flanneld

# then, start docker
systemctl restart docker

# start the main server of k8s master
nohup kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --cors_allowed_origins=.* --etcd_servers=http://172.20.30.19:4001 --v=1 --logtostderr=false --log_dir=/var/log/k8s/apiserver --service-cluster-ip-range=172.100.0.0/16 &

nohup kube-controller-manager --master=172.20.30.19:8080 --enable-hostpath-provisioner=false --v=1 --logtostderr=false --log_dir=/var/log/k8s/controller-manager &

nohup kube-scheduler --master=172.20.30.19:8080 --v=1 --logtostderr=false --log_dir=/var/log/k8s/scheduler &
```
然后賦予執行權限:
# chmod u+x start_k8s_master.sh
The installation step above already delivered kubelet and kube-proxy to the machines acting as minions, so the membership of the k8s cluster has quietly been decided already. Create the startup script for the minions, start_k8s_minion.sh (the --hostname_override below is for 172.20.30.21; use each minion's own address):

```
#! /bin/sh

# firstly, start etcd
systemctl restart etcd

# secondly, start flanneld
systemctl restart flanneld

# then, start docker
systemctl restart docker

# start the minion
nohup kubelet --address=0.0.0.0 --port=10250 --v=1 --log_dir=/var/log/k8s/kubelet --hostname_override=172.20.30.21 --api_servers=http://172.20.30.19:8080 --logtostderr=false &

nohup kube-proxy --master=172.20.30.19:8080 --log_dir=/var/log/k8s/proxy --v=1 --logtostderr=false &
```
然后賦予執行權限:
# chmod u+x start_k8s_minion.sh
Send the script to each of the minion hosts.
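One way to distribute it, reusing the node list from the preparation table:

```
# copy the minion startup script to every minion and make sure it is executable there
for ip in 172.20.30.21 172.20.30.18 172.20.30.20; do
    scp start_k8s_minion.sh root@${ip}:~
    ssh root@${ip} 'chmod u+x ~/start_k8s_minion.sh'
done
```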
Running k8s
On the master, run:

```
# ./start_k8s_master.sh
```
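Before starting the minions, it is worth confirming that the master components came up; a small check that relies only on the insecure port 8080 opened by the script:

```
# the apiserver should answer on its insecure port
curl http://172.20.30.19:8080/version

# controller-manager and scheduler should both report Healthy
kubectl -s http://172.20.30.19:8080 get componentstatuses
```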
On each of the minion hosts, run:

```
# ./start_k8s_minion.sh
```
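On a minion you can quickly verify that both daemons survived startup before querying the master:

```
# both processes should show up with the flags from start_k8s_minion.sh
pgrep -af kubelet
pgrep -af kube-proxy

# startup errors, if any, land in the log directories configured in the script
ls /var/log/k8s/kubelet /var/log/k8s/proxy
```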
On the master host, run:

```
# kubectl get node
NAME           STATUS    AGE
172.20.30.18   Ready     5h
172.20.30.20   Ready     5h
172.20.30.21   Ready     5h
```
If the nodes are listed like this, the k8s cluster has been deployed successfully.
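As a final smoke test, run something on the cluster. The nginx image name below is only an example; if the nodes can only pull from the local registry, point --image at an image hosted there instead:

```
# create a small test deployment with two replicas (in k8s 1.3, kubectl run creates a Deployment)
kubectl run nginx --image=nginx --replicas=2

# the pods should be scheduled onto the minions and get addresses from 172.100.0.0/16
kubectl get pods -o wide

# remove the test deployment afterwards
kubectl delete deployment nginx
```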