Simple k8s Cluster Setup


Note: before using this document you should already have some understanding of the k8s components; I will not describe the role each component plays in the cluster. If you need that background, please consult the relevant material first.


1 Environment Preparation

1.1 Basic Environment

  • Operating system
CentOS Linux release 7.4.1708 (Core)
  • Software versions
Kubernetes v1.9.1 (tar package provided below)
etcd Version: 3.2.18 (installed directly via yum) (source releases: https://github.com/coreos/etcd/releases)
flanneld Version: 0.7.1 (installed directly via yum) (source releases: https://github.com/coreos/flannel/releases)
docker Version: docker://1.13.1 (installed directly via yum)
  • IP layout
master: 192.168.1.192 (kube-apiserver, kube-controller-manager, kube-scheduler, etcd, flannel (optional))
node1: 192.168.1.193 (kubelet, kube-proxy, etcd, flannel)
node2: 192.168.1.194 (kubelet, kube-proxy, etcd, flannel)

1.2 Initialization

  • /etc/hosts configuration
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.192 master
192.168.1.193 node1
192.168.1.194 node2
# copy this file to all nodes (master, node1, node2)
  • Firewall + SELinux
systemctl stop firewalld  # after running this, confirm with firewall-cmd --state; "not running" means it is off (see the sketch at the end of this section for making the changes persistent)
vim /etc/sysconfig/selinux
SELINUX=disabled
  • Repo file
cd /etc/yum.repos.d/
cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
# clear the yum cache
yum clean all
# rebuild the yum cache
yum makecache
  • Reboot the system
    reboot
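The firewall and SELinux steps above only take effect for the current boot (firewalld is stopped but not disabled), and the SELinux edit needs the reboot. A minimal sketch to make both changes persistent, assuming the standard CentOS 7 tooling:

# stop firewalld now and keep it off after reboots
systemctl stop firewalld
systemctl disable firewalld
firewall-cmd --state   # should report "not running"

# put SELinux into permissive mode immediately; the SELINUX=disabled edit
# in /etc/sysconfig/selinux still takes full effect only after the reboot
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux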

2 Procedure

2.1 Install the etcd cluster (run on master, node1, and node2)

  • Install etcd and docker via yum
yum -y install etcd
yum -y install docker
  • /etc/etcd/etcd.conf
# [member]
ETCD_NAME=infra1   
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.192:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.192:2379" # the address etcd itself listens on, i.e. which local interface and port
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.192:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.192:2379"  # the URL clients (etcdctl, curl, etc.) use when talking to etcd; under the hood etcdctl essentially issues HTTP requests much like curl
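The values above are the master's. Each node uses its own name and IPs, matching the member names in --initial-cluster below; for example, node1's /etc/etcd/etcd.conf would look roughly like this (node2 uses infra3 and 192.168.1.194):

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.193:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.193:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.193:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.193:2379"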
  • etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd \
  --name ${ETCD_NAME} \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=http://192.168.1.192:2380,infra2=http://192.168.1.193:2380,infra3=http://192.168.1.194:2380 \
  --initial-cluster-state new \  # with this option, every member of the etcd cluster must be listed in --initial-cluster (remove this inline comment in the real unit file; systemd does not allow comments inside ExecStart)
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start etcd
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
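Once etcd is running on all three machines, it is worth confirming the cluster is healthy before moving on (a quick check against the client URLs configured above):

etcdctl --endpoints=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379 cluster-health
etcdctl --endpoints=http://192.168.1.192:2379 member list
# all three members (infra1, infra2, infra3) should be listed and report "is healthy"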

2.2 Configure the master node (run on the master)

  • Download the k8s 1.9.1 release package
    The download below contains all the components needed on both the master and the nodes.
mkdir -p /root/local/bin && mkdir -p /etc/kubernetes  # run on the master and on every node; these paths are used throughout the rest of this document
# append /root/local/bin to the system PATH
cat /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
 . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/root/local/bin  # this is the line that matters
export PATH

source /root/.bash_profile


cd /usr/src
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.1/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp -r kube-apiserver kube-controller-manager kubectl kube-scheduler /root/local/bin
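A quick sanity check that the binaries landed on the PATH (assuming the .bash_profile change above has been sourced):

kube-apiserver --version   # should print Kubernetes v1.9.1
kubectl version --client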

Configure kube-apiserver

  • Configure /etc/kubernetes/apiserver
cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
 
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
 
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
  • Configure /etc/kubernetes/config
cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service  # note: this file is needed on every node, so distribute it to the same path on all nodes
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
 
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
 
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
 
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.1.192:8080"
  • Configure kube-apiserver.service (/usr/lib/systemd/system/kube-apiserver.service)
cd /usr/lib/systemd/system/
cat kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
 
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/root/local/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Configure kube-controller-manager

  • Configure /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
  • Configure /usr/lib/systemd/system/kube-controller-manager.service
cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
 
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/root/local/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Configure kube-scheduler

  • Configure /etc/kubernetes/scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
#KUBE_SCHEDULER_ARGS="--loglevel=0"
KUBE_SCHEDULER_ARGS="--address=127.0.0.1"
  • Configure /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/root/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the services

for i in kube-apiserver kube-controller-manager kube-scheduler
do
    systemctl enable $i
    systemctl start $i
done
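After the three master services are up, a quick sanity check (this assumes the apiserver's insecure port 8080 is reachable on localhost, which is the default kubectl target when no kubeconfig is set):

curl -s http://127.0.0.1:8080/healthz        # should return "ok"
kubectl get componentstatuses                # scheduler, controller-manager and the etcd members should all be Healthy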

2.3 Set up a private docker registry (run on the master; any other node would also work, as long as it is reachable)

Because TLS is not configured, we need to modify the docker.service file.

  • Modify the docker.service file
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket registries.service
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          --insecure-registry=192.168.1.192:5000 \  # the key line: this IP:PORT is accessed without https; docker on node1 and node2 needs the same setting, so the docker configuration is not repeated later (remove this inline comment in the real unit file)
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
   $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target
  • Start docker
systemctl enable docker
systemctl start docker
  • Build the private registry
    docker search registry  # look for available registry images
[root@master ~]# docker search registry  # only a few of the many results are listed
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
docker.io docker.io/registry The Docker Registry 2.0 implementation for... 2093 [OK]       
docker.io docker.io/konradkleine/docker-registry-frontend Browse and modify your Docker registry in ... 194 [OK]
docker.io docker.io/hyper/docker-registry-web Web UI, authentication service and event r... 140 [OK]
docker.io docker.io/atcol/docker-registry-ui A web UI for easy private/local Docker Reg... 106 [OK]

# we simply use the first one
docker pull docker.io/registry
# list our local images
docker images
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/registry latest b2b03e9146e1 12 days ago 33.3 MB
# start the registry container
mkdir -p /home/belle/docker_registry/
docker run -d -p 5000:5000 -v /home/belle/docker_registry/:/var/lib/registry registry
# check the running containers
[root@master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e43cd73f7f76 registry "/entrypoint.sh /e..." 5 days ago Restarting (1) 46 seconds ago suspicious_sinoussi
# next, pull the images we need and push them into the registry we just started
# pull an image that will be used later in the experiment: hello-world-nginx
docker pull docker.io/kitematic/hello-world-nginx
# tag the docker.io/kitematic/hello-world-nginx image for the private registry
docker tag docker.io/kitematic/hello-world-nginx  192.168.1.192:5000/hello-world-nginx
# push it to the private registry
docker push 192.168.1.192:5000/hello-world-nginx
The push refers to a repository [192.168.1.192:5000/hello-world-nginx]
5f70bf18a086: Preparing
5f70bf18a086: Preparing
5f70bf18a086: Preparing
b51acdd3ef48: Preparing
3f47ff454588: Preparing
f19fb69b288a: Preparing
b11278aeb507: Preparing
fb85701f3991: Preparing
15235e629864: Preparing
86882fc1175f: Preparing
fb85701f3991: Waiting
15235e629864: Waiting
86882fc1175f: Waiting
9e8c93c7ea7e: Preparing
e66f0ebc2eef: Preparing
6a15a6c08ef6: Preparing
461f75075df2: Preparing
9e8c93c7ea7e: Waiting
e66f0ebc2eef: Waiting
6a15a6c08ef6: Waiting
461f75075df2: Waiting
f19fb69b288a: Layer already exists
b11278aeb507: Layer already exists
fb85701f3991: Layer already exists
15235e629864: Layer already exists
86882fc1175f: Layer already exists
b51acdd3ef48: Layer already exists
3f47ff454588: Layer already exists
5f70bf18a086: Layer already exists
6a15a6c08ef6: Layer already exists
e66f0ebc2eef: Layer already exists
9e8c93c7ea7e: Layer already exists
461f75075df2: Layer already exists
latest: digest: sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3 size: 3226
# the "Layer already exists" lines appear because I had already pushed this image before
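To confirm the push actually landed in the private registry you can query the registry's v2 HTTP API. Also note that the kubelet configuration in section 2.4 references 192.168.1.192:5000/pause-amd64, so a pause image has to be tagged and pushed into this registry in the same way; the source image below is only an example of where it might come from, use whatever mirror you have access to.

curl http://192.168.1.192:5000/v2/_catalog
# e.g. {"repositories":["hello-world-nginx","pause-amd64"]}

# the pause image referenced by kubelet also needs to live in this registry
docker pull docker.io/kubernetes/pause        # assumption: any reachable pause image works
docker tag docker.io/kubernetes/pause 192.168.1.192:5000/pause-amd64
docker push 192.168.1.192:5000/pause-amd64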

2.4 Configure the nodes

node1 and node2 are configured almost identically; only a few IP references need to change. node1 is used as the example here.

The kubeconfig file
cat /etc/kubernetes/kubeconfig
apiVersion: v1
clusters:
- cluster:
    server: http://192.168.1.192:8080
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: ""
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users: []
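Instead of writing this file by hand, kubectl can generate an equivalent kubeconfig (a sketch; run it on the master where kubectl is installed, then copy the resulting file to the same path on each node):

kubectl config set-cluster myk8s --server=http://192.168.1.192:8080 --kubeconfig=/etc/kubernetes/kubeconfig
kubectl config set-context myk8s-context --cluster=myk8s --kubeconfig=/etc/kubernetes/kubeconfig
kubectl config use-context myk8s-context --kubeconfig=/etc/kubernetes/kubeconfig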
Prepare the required binaries
  • Distribute kubelet and kube-proxy from the master to every node
cd /usr/src/kubernetes/server/bin/
scp kubelet kube-proxy root@192.168.1.193:/root/local/bin
scp kubelet kube-proxy root@192.168.1.194:/root/local/bin
Configure kubelet
  • Configure /etc/kubernetes/kubelet
cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config
 
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.1.193"
 
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
 
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"
 
# location of the api-server
##KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
 
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.1.192:5000/pause-amd64"
 
# Add your own!
KUBELET_ARGS="--cluster-dns=10.254.0.2 --cluster-domain=cluster.local --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --fail-swap-on=false --cgroup-driver=systemd --kubeconfig=/etc/kubernetes/kubeconfig"

Note: --cluster-dns and --cluster-domain must be specified in KUBELET_ARGS, otherwise pods created by a deployment later will stay in ContainerCreating forever, with an error like: Warning MissingClusterDNS 13s (x9 over 12m) kubelet, node2 pod: "nginx-7ff779f954-j4t55_default(7dff9ffa-8afa-11e8-b3ac-000c290991f6)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
If --cgroup-driver=systemd is used, you also need to add --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice.

  • Configure kubelet.service
cd /usr/lib/systemd/system
cat kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
 
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/root/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_ARGS \
            $KUBELET_POD_INFRA_CONTAINER  # note this parameter: it was missing at first and pod creation kept failing, with the node's kubelet timing out while pulling pause-amd64 from Google even though the config file pointed at my private registry, so I added it to the start arguments directly. After restarting, use ps aux | grep kubelet to confirm the argument is really present. (Remove this inline comment in the real unit file.)
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target
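One easy thing to miss: the unit sets WorkingDirectory=/var/lib/kubelet, and systemd refuses to start the service if that directory does not exist, so create it first.

mkdir -p /var/lib/kubelet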
Configure kube-proxy
  • Configure kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
 
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/root/local/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
  • Configure /etc/kubernetes/proxy
cat /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""
Configure flannel

I installed flannel on the master, node1, and node2.

  • Install
yum -y install flannel
  • Modify /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

mkdir -p /kube-centos/network

  • Configure flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service  # note the ordering: start after etcd
Before=docker.service  # and before docker, so docker can pick up the flannel options

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
  • Create the network configuration in etcd
# run on the master (any node with etcd installed will do)
[root@master ~]# mkdir -p /kube-centos/network
[root@master ~]# etcdctl --endpoints=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379 set /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
  • Start flanneld
systemctl enable flanneld
systemctl start flanneld

After flanneld is started via systemctl, flanneld writes its subnet information and the ExecStartPost step runs mk-docker-opts.sh, which leaves behind two environment files:

  • /run/flannel/subnet.env
  • /run/flannel/docker
    Docker reads these files to pick up its container startup options.
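For reference, the two files look roughly like this on the master (an illustrative example; the actual subnet and MTU depend on what flannel assigned to the node):

# /run/flannel/subnet.env (written by flanneld)
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.6.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false

# /run/flannel/docker (written by mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS)
DOCKER_NETWORK_OPTIONS=" --bip=172.30.6.1/24 --ip-masq=true --mtu=1450"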
Configure docker

Add the following lines to /usr/lib/systemd/system/docker.service:

EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/flannel/subnet.env
  • The docker.service file then becomes
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket registries.service
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/run/flannel/docker  # added
EnvironmentFile=-/run/flannel/subnet.env  # added
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          --insecure-registry=192.168.1.192:5000 \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
   $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target
Start the services
for i in etcd flanneld docker kubelet kube-proxy   # start docker after flanneld so it picks up the flannel environment files
do
    systemctl enable $i
    systemctl start $i
done
Now query the contents stored in etcd
[root@master ~]# export ENDPOINTS=http://192.168.1.192:2379,http://192.168.1.193:2379,http://192.168.1.194:2379
[root@master ~]# etcdctl --endpoints=${ENDPOINTS} ls /kube-centos/network/subnets
/kube-centos/network/subnets/172.30.6.0-24
/kube-centos/network/subnets/172.30.61.0-24
/kube-centos/network/subnets/172.30.67.0-24
[root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/config
{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}
[root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/subnets/172.30.6.0-24
{"PublicIP":"192.168.1.192","BackendType":"vxlan","BackendData":{"VtepMAC":"22:e4:55:e3:27:f6"}}
[root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/subnets/172.30.61.0-24
{"PublicIP":"192.168.1.193","BackendType":"vxlan","BackendData":{"VtepMAC":"c6:d3:c4:4b:d1:66"}}
[root@master ~]# etcdctl --endpoints=${ENDPOINTS} get /kube-centos/network/subnets/172.30.67.0-24
{"PublicIP":"192.168.1.194","BackendType":"vxlan","BackendData":{"VtepMAC":"12:0c:91:23:08:83"}}
# if you can see the information above, flannel has been configured successfully
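A further check that the overlay really works: each node's flannel.1 and docker0 interfaces should sit in that node's assigned subnet, and containers on different nodes should be able to reach each other. An illustrative check from the master, using the subnets listed above:

ip addr show flannel.1 | grep inet    # expect an address in 172.30.6.0/24 on the master
ip addr show docker0   | grep inet    # docker0 should be in the same subnet (after docker was restarted with the flannel env files)
ping -c 2 172.30.61.1                 # docker0 on node1, per the subnet listing above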

2.5 Verification

Run the following on the master.

  • Check the status of the nodes
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready <none> 22h v1.9.1  # Ready means the node is fine
node2 Ready <none> 22h v1.9.1
  • Test the cluster
    Create an nginx service to check whether the cluster is usable.
[root@master ~]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=192.168.1.192:5000/hello-world-nginx --port=80
deployment "nginx" created
[root@master ~]# kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposed
# check the result of the pod creation
[root@master ~]# kubectl get pods 
NAME READY STATUS RESTARTS AGE
nginx-7ff779f954-9sccd 1/1 Running 1 3h  # already running. Note the READY column is 1/1: a pod can contain multiple containers, and this pod has only one, hence 1/1; the one-to-many case will be covered in a later document
nginx-7ff779f954-pbps4 1/1 Running 0 17m
# look at the deployment details and the events from pod creation
[root@master ~]# kubectl describe deployment nginx
Name: nginx
Namespace: default
CreationTimestamp: Thu, 19 Jul 2018 14:28:16 +0800
Labels: run=load-balancer-example
Annotations: deployment.kubernetes.io/revision=1
Selector: run=load-balancer-example
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable  # watch these numbers: if desired and available do not match, something has probably gone wrong
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
  Labels: run=load-balancer-example
  Containers:
   nginx:
    Image: 192.168.1.192:5000/hello-world-nginx  # pulled from our own registry
    Port: 80/TCP
    Environment: <none>
    Mounts: <none>
  Volumes: <none>
Conditions:
  Type Status Reason
  ---- ------ ------
  Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-7ff779f954 (2/2 replicas created)
Events: <none>
[root@master ~]# kubectl describe pod nginx-7ff779f954-pbps4
Name: nginx-7ff779f954-pbps4
Namespace: default
Node: node1/192.168.1.193  # the first node
Start Time: Thu, 19 Jul 2018 17:44:22 +0800
Labels: pod-template-hash=3993359510
                run=load-balancer-example
Annotations: <none>
Status: Running
IP: 172.30.6.3
Controlled By: ReplicaSet/nginx-7ff779f954
Containers:
  nginx:
    Container ID: docker://df7230d3e002c53341fe559851a4d913b77aa9bff0f2d21c656ad1ed6f0bb86d
    Image: 192.168.1.192:5000/hello-world-nginx
    Image ID: docker-pullable://192.168.1.192:5000/hello-world-nginx@sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3
    Port: 80/TCP
    State: Running
      Started: Thu, 19 Jul 2018 17:45:55 +0800
    Ready: True
    Restart Count: 0
    Environment: <none>
    Mounts: <none>
Conditions:
  Type Status
  Initialized True 
  Ready True 
  PodScheduled True 
Volumes: <none>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 21m default-scheduler Successfully assigned nginx-7ff779f954-pbps4 to node1
  Normal Pulling 19m (x4 over 21m) kubelet, node1 pulling image "192.168.1.192:5000/hello-world-nginx"  # pulling from our registry
  Normal Pulled 19m kubelet, node1 Successfully pulled image "192.168.1.192:5000/hello-world-nginx"
  Normal Created 19m kubelet, node1 Created container  # container created
  Normal Started 19m kubelet, node1 Started container
# check the ports of the service
[root@master ~]# kubectl describe svc example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.254.72.252
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30782/TCP  # we will use this port for access below
Endpoints: 172.30.6.2:80,172.30.6.3:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# the following is run on any one of the nodes
[root@node1 ~]# netstat -tlunp | grep 30782
tcp6 1 0 :::30782 :::* LISTEN 38917/kube-proxy   # the node is already listening on the port assigned by NodePort
[root@node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
df7230d3e002 192.168.1.192:5000/hello-world-nginx@sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3 "sh /start.sh" 17 hours ago Up 17 hours k8s_nginx_nginx-7ff779f954-pbps4_default_50953118-8b38-11e8-b3ac-000c290991f6_0
b5e225cd6abf 192.168.1.192:5000/pause-amd64 "/pause" 17 hours ago Up 17 hours k8s_POD_nginx-7ff779f954-pbps4_default_50953118-8b38-11e8-b3ac-000c290991f6_0
# two containers are running: one is the pod infrastructure (pause) container, the other is the hello-world-nginx container that actually serves traffic
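To check that the service really is reachable from outside, hit the NodePort on any node (the exact response depends on the image, but getting one at all confirms end-to-end connectivity):

curl -I http://192.168.1.193:30782
curl -I http://192.168.1.194:30782   # the NodePort is opened on every node, not only the ones running pods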

As shown above, the container content can be reached through a node's IP. What we really want is that when a container dies, a replacement is started immediately and service continues, so let's simply delete one pod.

  • Failure drill
# from the above we have two pods: nginx-7ff779f954-9sccd and nginx-7ff779f954-pbps4
# now delete the nginx-7ff779f954-9sccd pod
[root@master ~]# kubectl delete pod nginx-7ff779f954-9sccd
pod "nginx-7ff779f954-9sccd" deleted
[root@master ~]# kubectl get pods  # a new pod nginx-7ff779f954-clxf9 has been created automatically, and both endpoints remain reachable
NAME READY STATUS RESTARTS AGE
nginx-7ff779f954-clxf9 1/1 Running 0 4m
nginx-7ff779f954-pbps4 1/1 Running 0 34m

That wraps up this experiment. The configuration of each component here is fairly minimal and no certificates are used, so treat it as a reference only. As my understanding deepens, I will add some hardening and tuning later.

2.6 Additional notes

  • Try stopping the hello-world-nginx container on one of the nodes (node1 in this example)
[root@node1 ~]# docker stop df7230
df7230
[root@node1 ~]# docker ps # check the containers again: a new container b06f2d805a8e has already been created
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
b06f2d805a8e 192.168.1.192:5000/hello-world-nginx@sha256:583f0c9ca89415140fa80f70f8079f5138180a6dda2c3ff3920353b459e061a3 "sh /start.sh" About an hour ago Up About an hour k8s_nginx_nginx-7ff779f954-pbps4_default_50953118-8b38-11e8-b3ac-000c290991f6_1
b5e225cd6abf 192.168.1.192:5000/pause-amd64 "/pause" 20 hours ago Up 20 hours 
[root@node1 ~]# journalctl -f -t kubelet # check the kubelet log messages (excerpt)
Jul 20 11:50:02 node1 kubelet[58163]: I0720 11:50:02.811077 58163 kuberuntime_manager.go:514] Container {Name:nginx Image:192.168.1.192:5000/hello-world-nginx Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:80 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.  # this is the restart event triggered by the stop (one caveat: the container did recover, but it took several minutes)
Jul 20 11:50:02 node1 kubelet[58163]: I0720 11:50:02.811330 58163 kuberuntime_manager.go:758] checking backoff for container "nginx" in pod "nginx-7ff779f954-pbps4_default(50953118-8b38-11e8-b3ac-000c290991f6)"

This experiment was exhausting. When I found out that tools like kubeadm and minikube can install a cluster for you, I nearly collapsed. Still, building everything from scratch once gives a much better understanding of the whole k8s workflow and how the modules relate to each other, and it will make adding plugins easier later. Looking forward to the next post on kube-dns.

