Understanding Kubernetes (1): Manually Setting Up a Kubernetes Test Environment


Articles in the Understanding Kubernetes series:

  1. Setting up the environment by hand
  2. Basic concepts and operations

  

1. Preparing the Base Environment

Prepare three Ubuntu nodes running version 16.04, and configure them as follows:

  • Upgrade the system
  • Set up the /etc/hosts file identically on all nodes
  • Enable passwordless ssh from node 0 to the other two nodes (a minimal sketch follows this list)
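For the last step, a minimal sketch assuming the default ubuntu user and the host names from the table below (ssh-copy-id prompts once for each node's password):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # generate a key pair without a passphrase
ssh-copy-id ubuntu@kub-node-1                # host names resolve via the /etc/hosts entries above
ssh-copy-id ubuntu@kub-node-2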
Node name  | IP address   | etcd | flanneld | K8S components                                                           | docker
kub-node-0 | 172.23.100.4 | Y    | Y        | master: kube-apiserver, kube-controller-manager, kube-scheduler, kubectl | Y
kub-node-1 | 172.23.100.5 | Y    | Y        | node: kubelet, kube-proxy                                                | Y
kub-node-2 | 172.23.100.6 | Y    | Y        | node: kubelet, kube-proxy                                                | Y

 

2. Installation and Deployment

2.1 Installing etcd

2.1.1 Installation

Run the following commands on all three nodes to install etcd 3.2.5:
ETCD_VERSION=${ETCD_VERSION:-"3.2.5"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz
tar xzf etcd.tar.gz -C /tmp
mkdir -p /opt/bin
mv /tmp/${ETCD}/etcd /tmp/${ETCD}/etcdctl /opt/bin/

2.1.2 Configuration

Apply the following configuration on all three nodes:
  • Create the directories:
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
  • Create the /opt/config/etcd.conf file:
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME="kub-node-0"
ETCD_INITIAL_CLUSTER="kub-node-0=http://172.23.100.4:2380,kub-node-1=http://172.23.100.5:2380,kub-node-2=http://172.23.100.6:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.23.100.4:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.23.100.4:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.23.100.4:2379
ETCD_LISTEN_CLIENT_URLS=http://172.23.100.4:2379,http://127.0.0.1:2379

Notes:

(1) After the etcd cluster is up on node 0, the ETCD_INITIAL_CLUSTER_STATE value on nodes 1 and 2 must be changed to existing, meaning that they join an existing cluster. Otherwise each of them would create its own cluster instead of joining this one.
(2) On each node, the IP addresses must be changed to that node's own address (see the sketch below).
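Since only the name, state and IP fields differ per node, the file can be generated with a small script. This is an illustrative sketch (NODE_NAME, NODE_IP and STATE are hypothetical helper variables, not from the original write-up):

NODE_NAME=kub-node-0   # kub-node-1 / kub-node-2 on the other nodes
NODE_IP=172.23.100.4   # 172.23.100.5 / 172.23.100.6 on the other nodes
STATE=new              # change to "existing" on nodes 1 and 2 once the cluster is up
cat > /opt/config/etcd.conf <<EOF
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME="${NODE_NAME}"
ETCD_INITIAL_CLUSTER="kub-node-0=http://172.23.100.4:2380,kub-node-1=http://172.23.100.5:2380,kub-node-2=http://172.23.100.6:2380"
ETCD_INITIAL_CLUSTER_STATE=${STATE}
ETCD_LISTEN_PEER_URLS=http://${NODE_IP}:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://${NODE_IP}:2380
ETCD_ADVERTISE_CLIENT_URLS=http://${NODE_IP}:2379
ETCD_LISTEN_CLIENT_URLS=http://${NODE_IP}:2379,http://127.0.0.1:2379
EOF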
  • Create the /lib/systemd/system/etcd.service file:
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target
[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target

This file is identical on every node.

  • Start the service on all three nodes:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

2.1.3 Verifying the Service

  • Check the etcd cluster health:
root@kub-node-2:/home/ubuntu# /opt/bin/etcdctl cluster-health
member 664b85ff39242fbc is healthy: got healthy result from http://172.23.100.6:2379
member 9dd263662a4b6f73 is healthy: got healthy result from http://172.23.100.4:2379
member b17535572fd6a37b is healthy: got healthy result from http://172.23.100.5:2379
cluster is healthy
  • List the etcd cluster members:
root@kub-node-0:/home/ubuntu# /opt/bin/etcdctl member list
9dd263662a4b6f73: name=kub-node-0 peerURLs=http://172.23.100.4:2380 clientURLs=http://172.23.100.4:2379 isLeader=false
b17535572fd6a37b: name=kub-node-1 peerURLs=http://172.23.100.5:2380 clientURLs=http://172.23.100.5:2379 isLeader=true
e6db3cac1db23670: name=kub-node-2 peerURLs=http://172.23.100.6:2380 clientURLs=http://172.23.100.6:2379 isLeader=false
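As an extra smoke test (not in the original write-up), a key written on one node should be readable from the others; etcd 3.2's etcdctl speaks the v2 API by default, matching the cluster-health command above:

/opt/bin/etcdctl set /smoke-test hello    # run on node 0; prints the value back
/opt/bin/etcdctl get /smoke-test          # run on node 1 or 2; should print "hello"
/opt/bin/etcdctl rm /smoke-test           # clean up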

2.2 Deploying flanneld

2.2.1 Installing Version 0.8.0

On every node:

curl -L https://github.com/coreos/flannel/releases/download/v0.8.0/flannel-v0.8.0-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf flannel.tar.gz -C /tmp
mv /tmp/flanneld /opt/bin/

2.2.2 Configuration

On every node:
  • Create the /lib/systemd/system/flanneld.service file:
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service
[Service]
User=root
ExecStart=/opt/bin/flanneld \
--etcd-endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" \
--iface=172.23.100.4 \
--ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Note: on each node, set iface to that node's own IP.

  • On node 0, run:
/opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" mk /coreos.com/network/config '{"Network":"10.1.0.0/16", "Backend": {"Type": "vxlan"}}'

Confirm:

root@kub-node-0:/home/ubuntu# /opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" get /coreos.com/network/config
 {"Network":"10.1.0.0/16", "Backend": {"Type": "vxlan"}}
  • Start flanneld on all three nodes:
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld

Note: the flannel service must start before Docker. When it starts, flanneld performs these steps:

  • Fetch the network configuration from etcd.
  • Allocate a subnet for this node and register it in etcd.
  • Record the subnet information in /run/flannel/subnet.env (an example follows this list).
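On node 0, for instance, /run/flannel/subnet.env would look roughly like this (values assumed from the subnets listed below; the vxlan overlay reduces the MTU by 50 bytes relative to a 1500-byte NIC):

FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.35.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true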

At this point, the subnets are visible in etcd:

root@kub-node-0:/home/ubuntu/kub# /opt/bin/etcdctl --endpoints="http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.1.35.0-24
/coreos.com/network/subnets/10.1.1.0-24
/coreos.com/network/subnets/10.1.79.0-24

 

2.2.3 Verification

  • Check the service status with service flanneld status.
  • Check the flannel virtual NICs. Their configuration must match what is registered in etcd.
root@kub-node-0:/home/ubuntu/kub# ifconfig flannel.1
flannel.1 Link encap:Ethernet  HWaddr 22:fc:69:01:33:30
          inet addr:10.1.35.0  Bcast:0.0.0.0  Mask:255.255.255.255
 
root@kub-node-1:/home/ubuntu# ifconfig flannel.1
flannel.1 Link encap:Ethernet  HWaddr 0a:6e:a6:6f:95:04
          inet addr:10.1.1.0  Bcast:0.0.0.0  Mask:255.255.255.255
 
root@kub-node-2:/home/ubuntu# ifconfig flannel.1
flannel.1 Link encap:Ethernet  HWaddr 6e:10:b3:53:1e:f4
          inet addr:10.1.79.0  Bcast:0.0.0.0  Mask:255.255.255.255

2.3 Deploying Docker

2.3.1 Installation

Following https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-docker-ce-1, run the following commands on every node to install Docker:
   sudo apt-get update
   sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
   curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
   sudo apt-key fingerprint 0EBFCD88
   sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
   sudo apt-get update
   sudo apt-get install docker-ce

2.3.2 Verification

Create and run a hello-world container:

root@kub-node-0:/home/ubuntu/kub# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest
 
Hello from Docker!
This message shows that your installation appears to be working correctly.

2.3.3 Configuration

On every node:
  • Change into /tmp and run cp mk-docker-opts.sh /usr/bin/ to copy the script (it was extracted there from the flannel tarball)
  • Run the following commands:
root@kub-node-0:/home/ubuntu/kub# mk-docker-opts.sh -i
root@kub-node-0:/home/ubuntu/kub# source /run/flannel/subnet.env
root@kub-node-0:/home/ubuntu/kub# ifconfig docker0
docker0   Link encap:Ethernet  HWaddr 02:42:bc:71:d0:22
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:bcff:fe71:d022/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:258 (258.0 B)
 
root@kub-node-0:/home/ubuntu/kub# ifconfig docker0 ${FLANNEL_SUBNET}
root@kub-node-0:/home/ubuntu/kub# ifconfig docker0
docker0   Link encap:Ethernet  HWaddr 02:42:bc:71:d0:22
          inet addr:10.1.35.1  Bcast:10.1.35.255  Mask:255.255.255.0
          inet6 addr: fe80::42:bcff:fe71:d022/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:258 (258.0 B)
  • Modify the /lib/systemd/system/docker.service file to:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/var/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd -g /data/docker  --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
ExecReload=/bin/kill -s HUP $MAINPID
#ExecStart=/usr/bin/dockerd -H fd://
#ExecReload=/bin/kill -s HUP $MAINPID
  • Open up the iptables rules:
iptables -F
iptables -X
iptables -Z
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables-save
  • Restart the docker service:
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
  • Verification
On all three nodes, run docker run -it ubuntu bash to start an ubuntu container. Their IPs are 10.1.35.2, 10.1.79.2 and 10.1.1.2 respectively, and they can ping one another (a scripted version of this check follows).
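A minimal scripted version of that check, assuming the container IPs listed above (busybox ships a ping binary):

# On kub-node-0: start a throwaway container, then ping the container on node 1
docker run -d --name pingtest busybox sleep 3600
docker exec pingtest ping -c 3 10.1.1.2
docker rm -f pingtest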

2.4 Creating and Configuring Certificates

2.4.1 Configuration on Node 0

  • On node 0, create the master_ssl.cnf file:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master
IP.1 = 192.1.0.1
IP.2 = 172.23.100.4
  • Generate the master certificates:
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=company.com" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=master" -config master_ssl.cnf -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 10000 -extensions v3_req -extfile master_ssl.cnf -out server.crt
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=node" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
  • Copy the generated files to the /root/key folder:
root@kub-node-0:/home/ubuntu/kub# ls /root/key
ca.crt  ca.key  client.crt  client.key  server.crt  server.key
  • Copy the ca.crt and ca.key files to the /home/ubuntu/kub folder on each node (an optional verification sketch follows).
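Optionally (not part of the original steps), verify the generated certificates against the CA before distributing them:

openssl verify -CAfile /root/key/ca.crt /root/key/server.crt /root/key/client.crt
# expected: both files report OK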

2.4.2 Configuration on Nodes 1 and 2

Perform the following on nodes 1 and 2 separately. The example below is for node 2; on node 1, change the IP address accordingly.

  • Run:
CLIENT_IP=172.23.100.6
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000
  • Result:
root@kub-node-2:/home/ubuntu/kub# ls -lt
total 8908
-rw-r--r-- 1 root root  985 Dec 31 20:57 client.crt
-rw-r--r-- 1 root root   17 Dec 31 20:57 ca.srl
-rw-r--r-- 1 root root  895 Dec 31 20:57 client.csr
-rw-r--r-- 1 root root 1675 Dec 31 20:57 client.key
-rw-r--r-- 1 root root 1099 Dec 31 20:54 ca.crt
-rw-r--r-- 1 root root 1675 Dec 31 20:54 ca.key
  • Copy the client and ca .crt and .key files to the /root/key folder, which then holds four files:
root@kub-node-2:/home/ubuntu# ls /root/key
ca.crt  ca.key  client.crt  client.key
  • Create the /etc/kubernetes/kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/key/ca.crt
    server: https://172.23.100.4:6443
  name: ubuntu
contexts:
- context:
    cluster: ubuntu
    user: ubuntu
  name: ubuntu
current-context: ubuntu
kind: Config
preferences: {}
users:
- name: ubuntu
  user:
    client-certificate: /root/key/client.crt
    client-key: /root/key/client.key

2.5 Kubernetes Master Node Configuration

Perform the following operations on node 0.

2.5.1 Installing Kubernetes 1.8.5

curl -L https://dl.k8s.io/v1.8.5/kubernetes-server-linux-amd64.tar.gz -o kuber.tar.gz
mkdir -p /tmp3
tar xzf kuber.tar.gz -C /tmp3
mv /tmp3/kubernetes/server/bin/* /opt/bin

2.5.2 Configuring the Services

  • Create the /lib/systemd/system/kube-apiserver.service file:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
--secure-port=6443 \
--etcd-servers=http://172.23.100.4:2379,http://172.23.100.5:2379,http://172.23.100.6:2379 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--allow-privileged=false \
--service-cluster-ip-range=192.1.0.0/16 \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
--service-node-port-range=30000-32767 \
--advertise-address=172.23.100.4 \
--client-ca-file=/root/key/ca.crt \
--tls-cert-file=/root/key/server.crt \
--tls-private-key-file=/root/key/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Create the /lib/systemd/system/kube-controller-manager.service file:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
--master=https://172.23.100.4:6443 \
--root-ca-file=/root/key/ca.crt \
--service-account-private-key-file=/root/key/server.key \
--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Create the /lib/systemd/system/kube-scheduler.service file:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--master=https://172.23.100.4:6443 \
--kubeconfig=/etc/kubernetes/kubeconfig
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Start the services:
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable flanneld
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
  • Confirm each service's status (an extra TLS health check is sketched after these commands):
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler 
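As an additional sanity check (not in the original steps), the API server's /healthz endpoint can be queried over TLS using the client certificate from section 2.4:

curl --cacert /root/key/ca.crt --cert /root/key/client.crt --key /root/key/client.key https://172.23.100.4:6443/healthz
# expected output: ok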

2.6 Configuring kubectl

On node 0, create the /root/.kube/config file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/key/ca.crt
  name: ubuntu
contexts:
- context:
    cluster: ubuntu
    user: ubuntu
  name: ubuntu
current-context: ubuntu
kind: Config
preferences: {}
users:
- name: ubuntu
  user:
    client-certificate: /root/key/client.crt
    client-key: /root/key/client.key
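With this file in place, kubectl talks to the API server; a quick end-to-end check (not in the original write-up) is to query the control-plane component health:

kubectl get componentstatuses   # scheduler, controller-manager and the etcd members should all report Healthy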

2.7 Kubernetes Node Configuration

Nodes 1 and 2 are the K8S worker nodes. Perform the following operations on them.

2.7.1 Installation

Same as section 2.5.1.

2.7.2 Configuration

  • Perform the following on nodes 1 and 2 separately. The content below is for node 1; on node 2, change 172.23.100.5 to 172.23.100.6.
  • Create the /lib/systemd/system/kubelet.service file:
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/opt/bin/kubelet \
--hostname-override=172.23.100.5 \
--pod-infra-container-image="docker.io/kubernetes/pause" \
--cluster-domain=cluster.local \
--log-dir=/var/log/kubernetes \
--cluster-dns=192.1.0.100 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
  • Create the /lib/systemd/system/kube-proxy.service file:
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/opt/bin/kube-proxy \
--hostname-override=172.23.100.5 \
--master=https://172.23.100.4:6443 \
--log-dir=/var/log/kubernetes \
--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false
Restart=on-failure
[Install]
WantedBy=multi-user.target
  • Start the services:
systemctl daemon-reload
systemctl enable kubelet
systemctl enable kube-proxy
systemctl start kubelet
systemctl start kube-proxy
  • Confirm the status of each component:
systemctl status kubelet
systemctl status kube-proxy

3. Verification

3.1 Getting Cluster Information

Run the following commands on node 0.

  • Get the master:
root@kub-node-0:/home/ubuntu/kub# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
  • List the nodes:
root@kub-node-0:/home/ubuntu/kub# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
172.23.100.5   Ready     <none>    2d        v1.8.5
172.23.100.6   Ready     <none>    2d        v1.8.5

3.2 Deploying the First Application

  • Create the nginx4.yml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx4
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-nginx4
    spec:
      containers:
      - name: my-nginx4
        image: nginx
        ports:
        - containerPort: 80
  • Create a deployment:
root@kub-node-0:/home/ubuntu/kub# kubectl create -f nginx4.yml
deployment "my-nginx4" created
  • Check the status:
root@kub-node-0:/home/ubuntu/kub# kubectl get all
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/my-nginx4   2         2         2            2           3m

NAME                      DESIRED   CURRENT   READY     AGE
rs/my-nginx4-75bbfccc7c   2         2         2         3m

NAME                            READY     STATUS    RESTARTS   AGE
po/my-nginx4-75bbfccc7c-5frpl   1/1       Running   0          3m
po/my-nginx4-75bbfccc7c-5kr4j   1/1       Running   0          3m

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   192.1.0.1    <none>        443/TCP   2d
  • View the deployment's details:
root@kub-node-0:/home/ubuntu/kub# kubectl describe deployments my-nginx4
Name:                   my-nginx4
Namespace:              default
CreationTimestamp:      Wed, 03 Jan 2018 09:16:44 +0800
Labels:                 app=my-nginx4
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=my-nginx4
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=my-nginx4
  Containers:
   my-nginx4:
    Image:        nginx
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   my-nginx4-75bbfccc7c (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set my-nginx4-75bbfccc7c to 2
  • Viewing the pod's details shows its containers, its IP address and the node it runs on:
root@kub-node-0:/home/ubuntu/kub# kubectl describe pod my-nginx4-75bbfccc7c-5frpl
Name:           my-nginx4-75bbfccc7c-5frpl
Namespace:      default
Node:           172.23.100.5/172.23.100.5
Start Time:     Wed, 03 Jan 2018 09:16:45 +0800
Labels:         app=my-nginx4
                pod-template-hash=3166977737
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"my-nginx4-75bbfccc7c","uid":"c2d83729-f023-11e7-a605-fa163e9a22a...
Status:         Running
IP:             10.1.1.3
Created By:     ReplicaSet/my-nginx4-75bbfccc7c
Controlled By:  ReplicaSet/my-nginx4-75bbfccc7c
Containers:
  my-nginx4:
    Container ID:   docker://4a994121e309fb81181e22589982bf8c053287616ba7c92dcddc5e7fb49927b1
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:cf8d5726fc897486a4f628d3b93483e3f391a76ea4897de0500ef1f9abcd69a1
    Port:           80/TCP
    State:          Running
      Started:      Wed, 03 Jan 2018 09:16:53 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b2p4z (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-b2p4z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-b2p4z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type    Reason                 Age   From                   Message
  ----    ------                 ----  ----                   -------
  Normal  Scheduled              5m    default-scheduler      Successfully assigned my-nginx4-75bbfccc7c-5frpl to 172.23.100.5
  Normal  SuccessfulMountVolume  5m    kubelet, 172.23.100.5  MountVolume.SetUp succeeded for volume "default-token-b2p4z"
  Normal  Pulling                5m    kubelet, 172.23.100.5  pulling image "nginx"
  Normal  Pulled                 5m    kubelet, 172.23.100.5  Successfully pulled image "nginx"
  Normal  Created                5m    kubelet, 172.23.100.5  Created container
  Normal  Started                5m    kubelet, 172.23.100.5  Started container
  • On node 1 you can see the pod's containers. The pause container is special: it is a K8S infrastructure container.
root@kub-node-1:/home/ubuntu# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
4a994121e309        nginx               "nginx -g 'daemon of…"   2 minutes ago       Up 2 minutes                            k8s_my-nginx4_my-nginx4-75bbfccc7c-5frpl_default_c35b9521-f023-11e7-a605-fa163e9a22a6_0
e3f39d708800        kubernetes/pause    "/pause"                 2 minutes ago       Up 2 minutes                            k8s_POD_my-nginx4-75bbfccc7c-5frpl_default_c35b9521-f023-11e7-a605-fa163e9a22a6_0
  • Create a NodePort service to access the application:
root@kub-node-0:/home/ubuntu/kub# kubectl expose deployment my-nginx4 --type=NodePort --name=nginx-nodeport
service "nginx-nodeport" exposed
  • The port exposed on the node IPs is 31362:
root@kub-node-0:/home/ubuntu/kub# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   192.1.0.1       <none>        443/TCP        2d
nginx-nodeport   NodePort    192.1.216.223   <none>        80:31362/TCP   31s
  • Access nginx via <node-ip>:<node-port> (an extra check via node 2 follows the output):
root@kub-node-0:/home/ubuntu/kub# curl http://172.23.100.5:31362
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
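Since kube-proxy opens the same NodePort on every worker node, the service is reachable through node 2's address as well (a quick extra check, not in the original):

curl http://172.23.100.6:31362   # returns the same nginx welcome page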

4. Pitfalls Encountered

  • K8S 1.7.2 could not create pods; the kubelet kept logging the errors below. The cause is a bug in that version; switching to 1.8.5 resolved it.
W0101 20:25:25.636397   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0101 20:25:35.680877   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0101 20:25:45.728875   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
W0101 20:25:55.756455   25702 helpers.go:771] eviction manager: no observation found for eviction signal allocatableNodeFs.available
  • No logs were visible for the k8s services. The fix is to set logtostderr=false in each service's configuration, add log-dir, and create that directory manually.
  • Deploying an application from the hello-world container leaves the pods stuck in CrashLoopBackOff. The cause is that this container exits right after starting, so K8S keeps restarting the pod; a better-fitting alternative is sketched after this list.
root@kub-node-0:/home/ubuntu# kubectl get pods
NAME                          READY     STATUS             RESTARTS   AGE
hello-world-5c9bd8867-76jjg   0/1       CrashLoopBackOff   7          12m
hello-world-5c9bd8867-92275   0/1       CrashLoopBackOff   7          12m
hello-world-5c9bd8867-cn75n   0/1       CrashLoopBackOff   7          12m
  • The first deployment failed, with the pod stuck in ContainerCreating. The kubelet logs are below. The cause is that the kubelet tries to pull the pause image from gcr.io, which is blocked. The fix is to set --pod-infra-container-image="docker.io/kubernetes/pause" in the kubelet service configuration.
E0101 22:34:51.908652   29137 kuberuntime_manager.go:633] createPodSandbox for pod "my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E0101 22:34:51.908755   29137 pod_workers.go:182] Error syncing pod aedfbe1b-eefc-11e7-b10d-fa163e9a22a6 ("my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)"), skipping: failed to "CreatePodSandbox" for "my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"my-nginx3-596b5c5f58-vgvlb_default(aedfbe1b-eefc-11e7-b10d-fa163e9a22a6)\" failed: rpc error: code = Unknown desc = failed pulling image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • After creating a LoadBalancer-type service, its EXTERNAL-IP stayed Pending. The cause is that this feature requires cloud-provider support. Switching to NodePort worked.
  • After deploying the nginx application and the NodePort service, the application could not be reached through the service's nodePort. The cause was a wrong containerPort in the yml file; nginx listens on port 80. Redeploying after the fix resolved the problem.
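For the hello-world case above, a run-to-completion workload fits better than a Deployment. With kubectl 1.8, --restart=OnFailure makes kubectl run create a Job, whose pod ends in Completed instead of CrashLoopBackOff (a sketch, not part of the original setup):

kubectl run hello --image=hello-world --restart=OnFailure   # creates a Job instead of a Deployment
kubectl get jobs
kubectl get pods --show-all   # --show-all is needed in 1.8 to list completed pods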

 

 
 
 