Installing Kubernetes on Debian 8.2


Installing Kubernetes on the master with kubeadm

Add the Aliyun apt mirror (much faster from inside China), then install kubeadm:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
apt-get update && apt-get install kubeadm
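Concretely, the deb line goes into a sources-list file and the mirror's signing key must be trusted first. A minimal sketch (the key path below is the Aliyun-hosted copy of the Kubernetes apt key and is an assumption):

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -    # assumed key location on the mirror
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubeadm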

 

Create a kubeadm.yaml file, then run the install:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.12.2"

kubeadm init --config kubeadm.yaml
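The control-plane images can also be pre-pulled before running init, which surfaces network problems early; kubeadm's own preflight output suggests this step (a sketch, assuming the subcommand accepts --config as in v1.12):

kubeadm config images pull --config kubeadm.yaml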

 

Problems that came up during installation:

[ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR SystemVerification]: missing cgroups: memory
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.12.2]

 

Solutions:

1. The error is self-explanatory: disable the swap partition. A sketch of a persistent fix follows below.
Note that running only swapoff -a is not enough. In this session, kubeadm init could be executed after swapoff -a but still kept failing with:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

The kubelet logs show that swap was in fact still enabled:

➜  kubernetes  journalctl -xefu kubelet
Nov 05 22:56:28 debian kubelet[7241]: F1105 22:56:28.609272    7241 server.go:262] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority /dev/sda9 partition 3905532 0 -1]
➜  kubernetes  cat /proc/swaps
Filename    Type        Size     Used  Priority
/dev/sda9   partition   3905532  0     -1
➜  kubernetes

After commenting out the swap mount in /etc/fstab, the installation succeeded.
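A minimal sketch of disabling swap persistently (the sed pattern assumes the fstab swap entry contains the word "swap" surrounded by whitespace):

swapoff -a                               # turn swap off for the running system
sed -i '/\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap mount so it stays off after reboot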
2. Enable the memory cgroup via a kernel boot parameter, then reboot (appending the line to /etc/default/grub works because the file is sourced as shell, so the last GRUB_CMDLINE_LINUX assignment wins):

echo GRUB_CMDLINE_LINUX=\"cgroup_enable=memory\" >> /etc/default/grub && update-grub && reboot

3. From a normal network inside China the images cannot be pulled from k8s.gcr.io, so pull them from docker.io mirrors instead and retag them with the names kubeadm expects. A small script that automates the retagging of the kube-* images follows the command list.

docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2

docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag docker.io/coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2

docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2

Alternatively, a registry mirror can be configured (in a test, the 163 mirror turned out even slower than direct access). Add it as follows, then restart the docker service:

➜  kubernetes  cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
➜  kubernetes
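The promised sketch automating pull/retag/cleanup for the four kube-* images, which share a version tag (pause, etcd, and coredns have their own tags and stay as explicit commands above):

#!/bin/bash
ver=v1.12.2
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
    docker pull mirrorgooglecontainers/${img}:${ver}                           # fetch from the docker.io mirror
    docker tag mirrorgooglecontainers/${img}:${ver} k8s.gcr.io/${img}:${ver}   # retag to the name kubeadm expects
    docker rmi mirrorgooglecontainers/${img}:${ver}                            # drop the mirror tag
done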

 

Record of a successful install:

➜  kubernetes  kubeadm init --config kubeadm.yaml                               
I1205 23:08:15.852917    5188 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.12.2.txt": Get https://dl.k8s.io/release/stable-1.12.2.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 23:08:15.853144    5188 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [debian localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [debian localhost] and IPs [192.168.2.118 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [debian kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.118]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 48.078220 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node debian as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node debian as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian" as an annotation
[bootstraptoken] using token: x4p0vz.tdp1xxxx7uyerrrs
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9

➜  kubernetes

 

Deploying a network plugin

After the install succeeds, check the nodes with kubectl get nodes (kubectl must run as kubernetes-admin: copy the admin config file, or point the KUBECONFIG environment variable at it, before kubectl get nodes will work):

➜  kubernetes  kubectl get nodes                
The connection to the server localhost:8080 was refused - did you specify the right host or port?
➜  kubernetes  echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc      
➜  kubernetes  source ~/.bashrc
➜  kubernetes  kubectl get nodes                                               
NAME     STATUS     ROLES    AGE   VERSION
debian   NotReady   master   21m   v1.12.2
➜  kubernetes 

 

The node shows NotReady because no network plugin has been deployed yet:

➜  kubernetes  kubectl get pods -n kube-system 
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     Pending   0          24m
coredns-576cbf47c7-xzjk7         0/1     Pending   0          24m
etcd-debian                      1/1     Running   0          23m
kube-apiserver-debian            1/1     Running   0          23m
kube-controller-manager-debian   1/1     Running   0          23m
kube-proxy-5wb6k                 1/1     Running   0          24m
kube-scheduler-debian            1/1     Running   0          23m
➜  kubernetes  

➜  kubernetes  kubectl describe node debian
Name:               debian
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=debian
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 05 Dec 2018 23:09:19 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 05 Dec 2018 23:31:26 +0800   Wed, 05 Dec 2018 23:09:14 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.2.118
  Hostname:    debian
Capacity:
 cpu:                2
 ephemeral-storage:  4673664Ki
 hugepages-2Mi:      0
 memory:             5716924Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  4307248736
 hugepages-2Mi:      0
 memory:             5614524Ki
 pods:               110
System Info:
 Machine ID:                 4341bb45c5c84ad2827c173480039b5c
 System UUID:                05F887C4-A455-122E-8B14-8C736EA3DBDB
 Boot ID:                    ff68f27b-fba0-4048-a1cf-796dd013e025
 Kernel Version:             3.16.0-4-amd64
 OS Image:                   Debian GNU/Linux 8 (jessie)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.1
 Kubelet Version:            v1.12.2
 Kube-Proxy Version:         v1.12.2
Non-terminated Pods:         (5 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                              ------------  ----------  ---------------  -------------
  kube-system                etcd-debian                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-debian             250m (12%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-debian    200m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-5wb6k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-debian             100m (5%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       550m (27%)  0 (0%)
  memory    0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  Starting                 22m                kubelet, debian     Starting kubelet.
  Normal  NodeAllocatableEnforced  22m                kubelet, debian     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    22m (x6 over 22m)  kubelet, debian     Node debian status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet, debian     Node debian status is now: NodeHasSufficientPID
  Normal  Starting                 21m                kube-proxy, debian  Starting kube-proxy.
➜  kubernetes  

 

After deploying the plugin, all pods eventually reach Running (the plugin takes a few minutes to come up; ContainerCreating and CrashLoopBackOff appear as intermediate states):
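The transcript below does not include the deploy step itself. Judging by the weave-net pod name, the plugin is Weave Net, which at the time was typically installed with a one-liner like this (an assumption based on Weave's then-current install docs):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"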

➜  kubernetes  kubectl get pods -n kube-system                 
NAME                             READY   STATUS              RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     Pending             0          25m
coredns-576cbf47c7-xzjk7         0/1     Pending             0          25m
etcd-debian                      1/1     Running             0          25m
kube-apiserver-debian            1/1     Running             0          25m
kube-controller-manager-debian   1/1     Running             0          25m
kube-proxy-5wb6k                 1/1     Running             0          25m
kube-scheduler-debian            1/1     Running             0          25m
weave-net-nj7bk                  0/2     ContainerCreating   0          21s
➜  kubernetes  kubectl get pods -n kube-system
NAME                             READY   STATUS             RESTARTS   AGE
coredns-576cbf47c7-4vjhf         0/1     CrashLoopBackOff   2          27m
coredns-576cbf47c7-xzjk7         0/1     CrashLoopBackOff   2          27m
etcd-debian                      1/1     Running            0          27m
kube-apiserver-debian            1/1     Running            0          27m
kube-controller-manager-debian   1/1     Running            0          27m
kube-proxy-5wb6k                 1/1     Running            0          27m
kube-scheduler-debian            1/1     Running            0          27m
weave-net-nj7bk                  2/2     Running            0          2m32s
➜  kubernetes  kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-4vjhf         1/1     Running   3          27m
coredns-576cbf47c7-xzjk7         1/1     Running   3          27m
etcd-debian                      1/1     Running   0          27m
kube-apiserver-debian            1/1     Running   0          27m
kube-controller-manager-debian   1/1     Running   0          27m
kube-proxy-5wb6k                 1/1     Running   0          27m
kube-scheduler-debian            1/1     Running   0          27m
weave-net-nj7bk                  2/2     Running   0          2m42s
➜  kubernetes  
➜  kubernetes  kubectl get nodes  
NAME     STATUS   ROLES    AGE   VERSION
debian   Ready    master   38m   v1.12.2
➜  kubernetes  

 

Allowing the master to run Pods

By default, Kubernetes uses the Taint/Toleration mechanism to mark a node with a "taint":

➜  kubernetes  kubectl describe node debian | grep Taints                      
Taints:             node-role.kubernetes.io/master:NoSchedule
➜  kubernetes  

No Pod will then run on the tainted node by default, unless:

1. The Pod explicitly declares that it tolerates the taint, by adding a tolerations field to the spec section of its YAML (a sketch follows this list).
2. For a test cluster of just a few machines, the simplest option is to delete the taint:

   ➜  kubernetes  kubectl taint nodes --all node-role.kubernetes.io/master-
   node/debian untainted
   ➜  kubernetes  kubectl describe node debian | grep Taints
   Taints:             <none>
   ➜  kubernetes
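For option 1, a minimal sketch of the tolerations field (the Pod name and image are hypothetical examples):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-master                       # hypothetical example Pod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"     # tolerate the master taint shown above
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx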

 

Adding a node

kubeadm/kubelet on the master are v1.12.2, but a plain apt-get install on the worker node pulled in v1.13 by default, which made joining the cluster fail. The mismatched packages had to be removed and the matching version installed:
root@debian-vm:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
root@debian-vm:~# kubelet --version
Kubernetes v1.13.0
root@debian-vm:~# apt-get --purge remove kubeadm kubelet
root@debian-vm:~# apt-cache policy kubeadm
kubeadm:
  Installed: (none)
  Candidate: 1.13.0-00
  Version table:
     1.13.0-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.3-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
     1.12.2-00 0
        500 https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial/main amd64 Packages
root@debian-vm:~# apt-get install kubeadm=1.12.2-00 kubelet=1.12.2-00
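To keep the worker from drifting back to a newer version on a later apt-get upgrade, the packages can be pinned (a sketch using standard apt tooling):

apt-mark hold kubeadm kubelet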

 

root@debian-vm:~# kubeadm join 192.168.2.118:6443 --token x4p0vz.tdp1xxxx7uyerrrs --discovery-token-ca-cert-hash sha256:64cb13f7f004fe8dd3e6d0e246950f4cbdfa65e2a84f8988c3070abf8183b3e9
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.168.2.118:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.118:6443"
[discovery] Requesting info from "https://192.168.2.118:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.118:6443"
[discovery] Successfully established connection with API Server "192.168.2.118:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "debian-vm" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

root@debian-vm:~# 
The node joined the cluster successfully; as the output says, kubectl get nodes on the master will now list it.

 

References:

https://github.com/kubernetes/kubernetes/issues/54914
https://github.com/kubernetes/kubeadm/issues/610
https://blog.csdn.net/acxlm/article/details/79069468

 

