Kubernetes Installation and Configuration Guide (Binary Installation)


Installing a Kubernetes cluster from binary files

Kubernetes download page: https://github.com/kubernetes/kubernetes/releases
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.0/kubernetes-client-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
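The server tarball already contains all of the control-plane and node binaries. A minimal sketch of unpacking the downloads and staging the binaries, assuming the wget commands above were run in the current directory:

```shell
# Unpack the Kubernetes server tarball and the etcd release.
tar -zxf kubernetes-server-linux-amd64.tar.gz
tar -zxf etcd-v3.3.13-linux-amd64.tar.gz

# Stage the Master binaries (the server tarball also ships kubelet and
# kube-proxy under the same directory for use on Nodes).
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/bin/
cp etcd-v3.3.13-linux-amd64/{etcd,etcdctl} /usr/bin/
```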

Install the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services on the Master

1. etcd service

Download the etcd binary package, unpack it, and copy the etcd and etcdctl files to the /usr/bin/ directory.
Create the systemd unit file:

[root@common etcd]# cat /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

WorkingDirectory is the directory where etcd stores its data; it must be created before the service is started.
First add the following settings to the /etc/etcd/etcd.conf configuration file:

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_CLIENT_URLS="http://10.2.7.67:2379"
 
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://10.2.7.67:2379"

Start the etcd service:

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service

export ETCDCTL_API=3
# Check the health status
[root@common etcd]# etcdctl endpoint health
127.0.0.1:2379 is healthy: successfully committed proposal: took = 700.897µs
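Beyond the health check, a quick read/write round trip confirms that etcd is actually serving requests. A sketch against the client URL configured above (the key name is illustrative):

```shell
export ETCDCTL_API=3
# Write a test key, read it back, then clean it up.
etcdctl --endpoints=http://10.2.7.67:2379 put /sanity/check "ok"
etcdctl --endpoints=http://10.2.7.67:2379 get /sanity/check
etcdctl --endpoints=http://10.2.7.67:2379 del /sanity/check
```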

2. kube-apiserver service

Copy the kube-apiserver, kube-controller-manager, and kube-scheduler files to the /usr/bin directory, then create the systemd unit file /usr/lib/systemd/system/kube-apiserver.service with the following content:

cp kube-apiserver /usr/bin/
cp kube-controller-manager /usr/bin
cp kube-scheduler /usr/bin/

[root@common]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

The configuration file /etc/kubernetes/apiserver contains all of kube-apiserver's startup parameters; the main ones are specified in the KUBE_API_ARGS variable.

[root@common]# cat /etc/kubernetes/apiserver
KUBE_API_ARGS="--etcd-servers=http://127.0.0.1:2379 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=1-65535 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"

The startup parameters are explained below.
◎ --etcd-servers: the URL(s) of the etcd service.
◎ --storage-backend: the etcd storage backend version; since Kubernetes 1.6 the default is etcd3. Note that this parameter did not exist before Kubernetes 1.6, where kube-apiserver used etcd2 by default. For running clusters on 1.5 or earlier, etcd provides a data migration path; see the etcd documentation (https://coreos.com/etcd/docs/latest/upgrades/upgrade_3_0.html).
◎ --insecure-bind-address: the insecure IP address the API Server binds to; 0.0.0.0 means bind to all addresses.
◎ --insecure-port: the insecure port the API Server binds to; the default is 8080.
◎ --service-cluster-ip-range: the virtual IP address range for Services in the cluster, in CIDR notation, e.g. 169.169.0.0/16. This range must not overlap with the physical machines' IP addresses.
◎ --service-node-port-range: the range of host ports that Services may use; the default is 30000-32767.
◎ --enable-admission-plugins: the cluster's admission control settings; each module takes effect in turn as a plugin.
◎ --logtostderr: set to false to write logs to files instead of stderr.
◎ --log-dir: the log directory.
◎ --v: the log level.
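With the insecure port enabled as configured above, the API server can be probed directly over HTTP without authentication. A minimal sanity check once the service is running:

```shell
# Liveness of the API server on the insecure port configured above.
curl -s http://127.0.0.1:8080/healthz
# List the API versions the server exposes.
curl -s http://127.0.0.1:8080/api
```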

3. kube-controller-manager service

The kube-controller-manager service depends on the kube-apiserver service. Create the systemd unit file /usr/lib/systemd/system/kube-controller-manager.service with the following content:

[root@common]# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/Kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

[root@common]# cat /etc/kubernetes/controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"

Parameter note: --kubeconfig: the settings for connecting to the API Server.
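The /etc/kubernetes/kubeconfig file referenced by --kubeconfig is not shown until the Node section below; on the Master it can be a minimal file pointing at the local insecure port. A hedged sketch (the cluster and context names are illustrative):

```shell
# Create a minimal kubeconfig for the control-plane components,
# pointing at the local insecure API server port (8080, as configured above).
mkdir -p /etc/kubernetes
cat > /etc/kubernetes/kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: http://127.0.0.1:8080
contexts:
- context:
    cluster: kubernetes
  name: local
current-context: local
EOF
```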
4. kube-scheduler service

The kube-scheduler service also depends on the kube-apiserver service. Create the systemd unit file /usr/lib/systemd/system/kube-scheduler.service with the following content:

[root@common]# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/Kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

[root@common]# cat /etc/kubernetes/scheduler 
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"

Parameter note: --kubeconfig: the settings for connecting to the API Server.
If the file /etc/kubernetes/kubeconfig does not exist, replace the --kubeconfig
parameter with --master=http://10.2.7.67:8080.

After the configuration is complete, start the three services in order with systemctl start, and add them to the boot list with systemctl enable:

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service

systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service

Run the command kubectl get cs to check the component status:

[root@common]# ./kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
controller-manager   Healthy   ok 

Verify each service's startup state with systemctl status <service_name>; "running" means it started successfully. All the services required on the Master are now up.

Observed issues: the kube-apiserver log contains repeated "duplicate metrics collector registration attempted" errors. These appear to be harmless known warnings in this release; the service itself stays active (running):

[root@common]# service kube-apiserver status
Redirecting to /bin/systemctl status kube-apiserver.service
● kube-apiserver.service - kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2019-08-19 16:29:52 CST; 21min ago
     Docs: https://github.com/GoogleCloudPlatform/Kubernetes
 Main PID: 38789 (kube-apiserver)
    Tasks: 22
   Memory: 149.0M
   CGroup: /system.slice/kube-apiserver.service
           └─38789 /usr/bin/kube-apiserver --etcd-servers=http://10.2.7.67:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --storage-backend=etcd3 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --logtostderr=false --enable-admis...

8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.003935   38789 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.003959   38789 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.847490   38789 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.847536   38789 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.847581   38789 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.847619   38789 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.847647   38789 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:50 common.localdomain kube-apiserver[38789]: E0819 16:29:50.847671   38789 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
8月 19 16:29:52 common.localdomain systemd[1]: Started kubernetes API Server.
8月 19 16:29:52 common.localdomain kube-apiserver[38789]: E0819 16:29:52.368284   38789 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.10.10.6, Resour...AdditionalErrorMsg:
Hint: Some lines were ellipsized, use -l to show in full.

Install the kubelet and kube-proxy services on each Node

1. kubelet service

The kubelet service depends on the Docker service. Create the systemd unit file /usr/lib/systemd/system/kubelet.service with the following content:

[root@cfs-ctp]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/Kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

WorkingDirectory is the directory where kubelet stores its data; it must be created before the kubelet service is started.
The configuration file /etc/kubernetes/kubelet contains all of kubelet's startup parameters; the main ones are specified in the KUBELET_ARGS variable:

[root@cfs-ctp]# cat /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--hostname-override=10.2.7.63 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"

[root@cfs-ctp]# cat /etc/kubernetes/kubeconfig 
apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://10.2.7.67:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context
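The same kubeconfig can be generated with kubectl config subcommands instead of editing the YAML by hand. A sketch assuming kubectl from the client tarball is on the PATH:

```shell
KUBECONFIG_FILE=/etc/kubernetes/kubeconfig
# Define the cluster entry pointing at the Master's insecure port.
kubectl config set-cluster kubernetes --server=http://10.2.7.67:8080 --kubeconfig=$KUBECONFIG_FILE
# Create the context tying the cluster to the kubelet user, then select it.
kubectl config set-context service-account-context --cluster=kubernetes --user=kubelet --kubeconfig=$KUBECONFIG_FILE
kubectl config use-context service-account-context --kubeconfig=$KUBECONFIG_FILE
```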

 --kubeconfig: the settings for connecting to the API Server; this can be the same kubeconfig file used by kube-controller-manager.
 --hostname-override: the name to register this Node under.
 --logtostderr: set to false to write logs to files instead of stderr.

2. kube-proxy service

The kube-proxy service depends on the network service. Create the systemd unit file /usr/lib/systemd/system/kube-proxy.service with the following content:

[root@cfs-ctp]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/Kubernetes
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

# Configuration file
[root@cfs-ctp]# cat /etc/kubernetes/proxy 
KUBE_PROXY_ARGS="--master=http://10.2.7.67:8080 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

After the configuration is complete, start the kubelet and kube-proxy services with systemctl:

systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
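Once kube-proxy is running in its default iptables mode, it programs NAT rules for Services; inspecting those chains is a quick way to confirm it is working. A sketch (KUBE-SERVICES is the top-level chain kube-proxy creates):

```shell
# kube-proxy in iptables mode creates the KUBE-SERVICES chain and friends.
iptables -t nat -L KUBE-SERVICES -n | head
# The kubelet and kube-proxy logs land in the directory configured above.
ls /var/log/kubernetes/
```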

By default, kubelet automatically registers its Node with the Master. Check the Nodes' status on the Master; Ready means the Node registered successfully and is available:

[root@common]# ./kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
10.2.7.63   Ready    <none>   114s   v1.14.0
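If a Node takes a while to register, a small polling loop can wait until every Node reports Ready. A sketch, assuming kubectl is configured against this Master:

```shell
# Loop while any Node reports a status other than Ready.
while kubectl get nodes --no-headers | awk '{print $2}' | grep -qv '^Ready$'; do
  echo "waiting for nodes..."
  sleep 5
done
echo "all nodes Ready"
```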

Once all Nodes report Ready, the Kubernetes cluster is up. You can now create Pods, Deployments, Services, and other resource objects to deploy containerized applications.
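As a quick smoke test of the new cluster, a Deployment and a NodePort Service can be created (the nginx image name is illustrative):

```shell
# Create a Deployment, expose it on a NodePort, and inspect the results.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
```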

