Introduction to kubeadm
kubeadm is the cluster bootstrapping tool that ships with the Kubernetes project. It performs the essential steps needed to build a minimal viable cluster and bring it up, and it manages the full cluster lifecycle: deployment, upgrade, downgrade, and teardown. A cluster deployed with kubeadm runs most components as pods; for example, kube-proxy, kube-controller-manager, kube-scheduler, kube-apiserver, and flannel all run as pods.
kubeadm only takes care of initializing and starting the cluster. Everything else, such as installing the Kubernetes Dashboard, a monitoring system, a logging system, and other necessary add-ons, is outside its scope and must be deployed by the administrator.
kubeadm integrates the kubeadm init and kubeadm join subcommands. kubeadm init quickly initializes a cluster, its core job being to deploy the control-plane components on the master node, while kubeadm join quickly adds a node to an existing cluster; together they are the "fast path" best practice for creating a Kubernetes cluster. In addition, kubeadm token manages the authentication tokens used to join the cluster after it has been built, and kubeadm reset deletes the files generated during cluster construction, resetting the host to its initial state.
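For quick reference, the lifecycle commands mentioned above look roughly as follows (the angle-bracket placeholders are illustrative only; the exact invocations used in this guide appear in the later sections):
kubeadm init --config kubeadm-config.yaml --upload-certs   # initialize the first control-plane node
kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # add a node
kubeadm token list     # manage bootstrap tokens after the cluster is up
kubeadm token create
kubeadm reset          # remove everything kubeadm generated on this host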
Pre-installation planning
Software versions
Software | Version |
---|---|
CentOS | 7.6 |
kubeadm | 1.15.6 |
kubelet | 1.15.6 |
kubectl | 1.15.6 |
coredns | 1.3.1 |
dashboard | 2.0.0 |
nginx-ingress | 1.6.0 |
docker | 18.xxx |
etcd | 3.3.18 |
flannel | 0.11 |
Deployment environment
Role | IP | Components |
---|---|---|
slb | 192.168.5.200 | openresty |
m1 + etcd1 | 192.168.5.3 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd |
m2 + etcd2 | 192.168.5.4 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd |
m3 + etcd3 | 192.168.5.7 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd |
n1 | 192.168.5.5 | kubelet,kube-proxy,docker,flannel |
n2 | 192.168.5.6 | kubelet,kube-proxy,docker,flannel |
Set hostnames
Set each host's hostname according to its role in the table above.
hostnamectl set-hostname slb
hostnamectl set-hostname m1
hostnamectl set-hostname m2
hostnamectl set-hostname m3
hostnamectl set-hostname n1
hostnamectl set-hostname n2
/etc/hosts configuration on each server
Identical on every server except slb
cat /etc/hosts
192.168.5.3 m1
192.168.5.4 m2
192.168.5.7 m3
192.168.5.5 n1
192.168.5.6 n2
Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
Disable the firewall and SELinux
Apply this on every server node
systemctl stop firewalld.service
systemctl disable firewalld.service
Reboot the server after changing the SELinux configuration
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
net.bridge and IP forwarding settings
Add the following on every server except slb
cat /etc/sysctl.conf
# Enable IP forwarding; add all four lines below
net.ipv4.ip_forward = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
sysctl -p
Run this command to apply the settings
Points to note when installing the cluster
These issues came up and were summarized during the installation. Keep them in mind while installing the k8s cluster and you will get more out of it.
Look up the related material for anything that is not described in full detail here.
Cluster hostname handling
1. Method one
Write every entry into /etc/hosts.
Drawback: with many servers in the cluster, every machine's file has to be modified each time a node is added.
2. Method two
Run your own DNS service (strongly recommended); each change only needs to be added to the DNS service. A minimal sketch follows below.
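As an illustration of method two, a minimal dnsmasq sketch (dnsmasq is not part of the original procedure, so treat the package and steps as an assumption; the hostnames and IPs come from the table above). By default dnsmasq serves the entries of its local /etc/hosts over DNS:
# on the DNS host (for example the slb machine)
yum install -y dnsmasq
cat >> /etc/hosts <<EOF
192.168.5.3 m1
192.168.5.4 m2
192.168.5.7 m3
192.168.5.5 n1
192.168.5.6 n2
EOF
systemctl enable --now dnsmasq
# on every cluster node, point the resolver at the DNS host
echo "nameserver 192.168.5.200" > /etc/resolv.conf
Adding a node then only requires one new /etc/hosts line on the DNS host plus a dnsmasq reload.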
etcd cluster installation modes
Only one of the two modes can be used; both are described in detail below.
1. External mode
2. Internal (stacked) mode
Certificate expiry
Two approaches; only one of them can be used.
1. Generate the certificates yourself with a custom expiry.
2. Recompile kubeadm with a modified certificate validity.
kubelet and docker must use the same systemd cgroup driver
By default docker uses cgroupfs while kubelet here is set to systemd; the two must match, and systemd is the better choice.
Subnet in the flannel yaml file
The subnet in the flannel yaml file must be identical to the pod subnet configured in the kubeadm yaml file.
Permissions in the official dashboard yaml file
The default permissions in the official dashboard yaml are insufficient and have to be adjusted.
Installation
This installation uses the external etcd cluster mode. Because the host machine can only run a limited number of VMs, the "external" etcd still lives on the same servers as the masters; in production, etcd and the masters should be separated onto different servers.
When the external etcd is on separate servers, the etcd client certificates must be copied to the appropriate locations on the master nodes before initializing or joining additional masters; those locations must match the etcd certificate fields in the kubeadm-config.yaml shown below.
etcd cluster planning
When installing a k8s cluster with kubeadm, etcd can be installed in two ways: internal (stacked) or as an external cluster. The two differ significantly.
Internal mode
kubeadm init installs etcd by default, on the same server as the master. The advantage is convenience: etcd is installed automatically during initialization. The drawbacks: if the master server goes down, etcd becomes unavailable with it, and when the cluster sees many resource changes the master has to handle node connections as well as etcd traffic, driving up its load. The internal mode has one particularly large drawback: the etcd started by kubeadm init on the first server is a standalone instance, and forming a cluster with the etcd of masters added later via kubeadm join --control-plane still requires manually editing the etcd static-pod yaml. If the other masters are likewise created with kubeadm init, their etcd instances never form a cluster at all; each one stays a single-node etcd.
Architecture diagram
External mode
You install the etcd cluster yourself; the k8s cluster only needs to be able to reach it. This mode avoids the drawbacks of the internal mode. Its downside is that the etcd cluster has to be deployed separately, but with ansible or a prepared install script that is still quick and easy.
Architecture diagram
Install docker
Install docker on every server except slb.
Docker's default cgroup driver is cgroupfs; change it to systemd.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-18.09.9-3* docker-ce-cli-18.09.9-3*
mkdir /etc/docker/
cat >> /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://slwwbaoq.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable docker
systemctl start docker
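To confirm that daemon.json took effect, the cgroup driver can be checked once docker is running (a quick sanity check, not part of the original steps):
docker info | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd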
Install ipvsadm
Install ipvsadm on every server except slb.
kube-proxy will use ipvs as the load balancer for service forwarding.
yum install ipvsadm -y
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make the script executable, run it, and verify that the ipvs modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
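Later, once kube-proxy is switched to ipvs mode through the KubeProxyConfiguration in kubeadm-config.yaml, the virtual servers it programs can be inspected with ipvsadm (an optional verification, not in the original steps):
ipvsadm -Ln
# should list one virtual server per Service, e.g. the kubernetes Service on 10.96.0.1:443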
Install etcd
These steps are done on etcd1. The certificates can also be generated on another server and copied to the right servers afterwards.
Directories for certificates, configuration files and binaries
mkdir -p /etc/etcd/{bin,cfg,ssl}
Use cfssl to generate the self-signed certificates; download the cfssl tools first:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
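A quick check that the tools are usable (optional):
cfssl version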
Generate the certificates
Create a temporary directory for the certificate request files
mkdir /k8s-tmp
Create the following three files in /k8s-tmp (use whatever directory suits your environment)
cat /k8s-tmp/ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
cat /k8s-tmp/ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
cat /k8s-tmp/server-csr.json
{
"CN": "etcd",
"hosts": [
"192.168.5.3",
"192.168.5.5",
"192.168.5.7",
"192.168.5.200",
"192.168.5.8",
"192.168.5.9"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
Generate the CA certificate
cd /etc/etcd/ssl/
cfssl gencert -initca /k8s-tmp/ca-csr.json | cfssljson -bare ca
2020/01/10 22:08:08 [INFO] generating a new CA key and certificate from CSR
2020/01/10 22:08:08 [INFO] generate received request
2020/01/10 22:08:08 [INFO] received CSR
2020/01/10 22:08:08 [INFO] generating key: rsa-2048
2020/01/10 22:08:08 [INFO] encoded CSR
2020/01/10 22:08:08 [INFO] signed certificate with serial number 490053314682424709503949261482590717907168955991
Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=/k8s-tmp/ca-config.json -profile=www /k8s-tmp/server-csr.json | cfssljson -bare server
2020/01/10 22:11:21 [INFO] generate received request
2020/01/10 22:11:21 [INFO] received CSR
2020/01/10 22:11:21 [INFO] generating key: rsa-2048
2020/01/10 22:11:21 [INFO] encoded CSR
2020/01/10 22:11:21 [INFO] signed certificate with serial number 308419828069657306052544507320294995575828716921
2020/01/10 22:11:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Generated files
ls /etc/etcd/ssl/*
ca-key.pem ca.pem server-key.pem server.pem
Install the etcd binaries
Binary download page: https://github.com/coreos/etcd/releases/tag/v3.3.18
Download and unpack the package:
wget https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
tar zxvf etcd-v3.3.18-linux-amd64.tar.gz
mv etcd-v3.3.18-linux-amd64/{etcd,etcdctl} /etc/etcd/bin/
Create the data directory
mkdir /data/etcd-data
Configuration file
cat /etc/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data/etcd-data"
ETCD_LISTEN_PEER_URLS="https://192.168.5.3:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.5.3:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.5.3:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.5.3:2379"
ETCD_INITIAL_CLUSTER="etcd02=https://192.168.5.4:2380,etcd03=https://192.168.5.7:2380,etcd01=https://192.168.5.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
- ETCD_NAME: node name
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
- ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
- ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
- ETCD_ADVERTISE_CLIENT_URLS: advertised client address
- ETCD_INITIAL_CLUSTER: addresses of all cluster members
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one
Manage etcd with systemd:
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/cfg/etcd.conf
ExecStart=/etc/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --peer-cert-file=/etc/etcd/ssl/server.pem --peer-key-file=/etc/etcd/ssl/server-key.pem --trusted-ca-file=/etc/etcd/ssl/ca.pem --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start etcd and enable it at boot:
systemctl start etcd
systemctl enable etcd
The steps on etcd2 and etcd3 are exactly the same as on etcd1. The only differences are five fields in the etcd configuration file: ETCD_NAME, ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS and ETCD_ADVERTISE_CLIENT_URLS. A sketch for etcd2 follows below.
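For reference, a sketch of /etc/etcd/cfg/etcd.conf on etcd2 (192.168.5.4), derived from the etcd1 file above; only the five fields just mentioned change:
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/data/etcd-data"
ETCD_LISTEN_PEER_URLS="https://192.168.5.4:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.5.4:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.5.4:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.5.4:2379"
ETCD_INITIAL_CLUSTER="etcd02=https://192.168.5.4:2380,etcd03=https://192.168.5.7:2380,etcd01=https://192.168.5.3:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
etcd3 (192.168.5.7) follows the same pattern with ETCD_NAME="etcd03".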
Once all members are deployed, check the etcd cluster health:
/etc/etcd/bin/etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.5.3:2379,https://192.168.5.4:2379,https://192.168.5.7:2379" cluster-health
member 24586baafb4ab4b8 is healthy: got healthy result from https://192.168.5.7:2379
member 90b0b3dde8b183f1 is healthy: got healthy result from https://192.168.5.3:2379
member 94c0f494655271a4 is healthy: got healthy result from https://192.168.5.4:2379
cluster is healthy
If you see the output above, the cluster was deployed successfully. If something is wrong, check the logs first: /var/log/messages or journalctl -u etcd
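etcd 3.3 also ships the v3 client API; assuming the same certificate paths, an equivalent health check with the v3-style flags looks like:
ETCDCTL_API=3 /etc/etcd/bin/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/server.pem --key=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.5.3:2379,https://192.168.5.4:2379,https://192.168.5.7:2379" endpoint health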
Install openresty
Install the dependencies
yum -y install pcre-devel openssl-devel gcc curl postgresql-devel
Download the source package
cd /usr/src/
wget https://openresty.org/download/openresty-1.15.8.2.tar.gz
Build and install
tar xf /usr/src/openresty-1.15.8.2.tar.gz
cd /usr/src/openresty-1.15.8.2/
./configure --with-luajit --without-http_redis2_module --with-http_iconv_module --with-http_postgres_module
make && make install
ln -s /usr/local/openresty/nginx/sbin/nginx /usr/bin/nginx
Configure TCP (stream) load balancing
cat /usr/local/openresty/nginx/conf/nginx.conf
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
stream {
server {
listen 6443;
proxy_pass kubeadm;
}
include servers/*;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
include servers/*;
#gzip on;
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
cat /usr/local/openresty/nginx/conf/servers/k8s.com
upstream kubeadm {
server 192.168.5.3:6443 weight=10 max_fails=30 fail_timeout=10s;
server 192.168.5.4:6443 weight=10 max_fails=30 fail_timeout=10s;
server 192.168.5.7:6443 weight=10 max_fails=30 fail_timeout=10s;
}
The essential part of the configuration is:
stream {
server {
listen 6443;
proxy_pass kubeadm;
}
include servers/*;
}
Start openresty
nginx
Check the listening state
netstat -anptl | grep 6443
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 7707/nginx: master
Install kubeadm, kubectl and kubelet
All five cluster servers (everything except slb) need these packages.
Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the packages
yum install -y kubeadm-1.15.6 kubectl-1.15.6 kubelet-1.15.6
Replace kubeadm
Build the new kubeadm on m1 and replace the old binary with it on every server.
The certificates created by the stock kubeadm are only valid for one year, so they must be regenerated every year, and regenerating them is an easy way to cause a cluster outage. Instead, change the certificate validity in the source code and recompile kubeadm.
Install the Go environment
cd /usr/src
wget https://dl.google.com/go/go1.12.14.linux-amd64.tar.gz
tar -zxf /usr/src/go1.12.14.linux-amd64.tar.gz
mv go /usr/local/
echo "export PATH=$PATH:/usr/local/go/bin" >>/etc/profile
source /etc/profile
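Confirm the toolchain is on the PATH (optional):
go version
# e.g. go version go1.12.14 linux/amd64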
Recompile kubeadm
Download the source code and upload it to /usr/src on 192.168.5.3
Download page: https://github.com/kubernetes/kubernetes/releases/tag/v1.15.6
cd /usr/src && tar xf kubernetes-1.15.6.tar.gz
Modification for versions newer than 1.14
Change CertificateValidity = time.Hour * 24 * 365
to CertificateValidity = time.Hour * 24 * 365 * 10
cat /usr/src/kubernetes-1.15.6/cmd/kubeadm/app/constants/constants.go
const (
// KubernetesDir is the directory Kubernetes owns for storing various configuration files
KubernetesDir = "/etc/kubernetes"
// ManifestsSubDirName defines directory name to store manifests
ManifestsSubDirName = "manifests"
// TempDirForKubeadm defines temporary directory for kubeadm
// should be joined with KubernetesDir.
TempDirForKubeadm = "tmp"
// CertificateValidity defines the validity for all the signed certificates generated by kubeadm
//CertificateValidity = time.Hour * 24 * 365
CertificateValidity = time.Hour * 24 * 365 * 10
After the change, build kubeadm
cd /usr/src/kubernetes-1.15.6/ && make WHAT=cmd/kubeadm GOFLAGS=-v
Copy the new kubeadm binary over the existing one
cp /usr/bin/kubeadm /usr/bin/kubeadm.origin
cp /usr/src/kubernetes-1.15.6/_output/bin/kubeadm /usr/bin/kubeadm
Note: if you want to confirm that the recompiled kubeadm really issues certificates with the modified validity, run kubeadm alpha certs check-expiration after the first master has been initialized and check the certificate dates. (With an external etcd cluster this command cannot be fully used: the etcd certificates were generated by hand, so the command reports that it cannot find them.)
Example output of kubeadm alpha certs check-expiration:
Compare the EXPIRES column with the date the master was initialized.
kubeadm alpha certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
admin.conf Dec 27, 2029 15:47 UTC 9y no
apiserver Dec 27, 2029 15:47 UTC 9y no
apiserver-etcd-client Dec 27, 2029 15:47 UTC 9y no
apiserver-kubelet-client Dec 27, 2029 15:47 UTC 9y no
controller-manager.conf Dec 27, 2029 15:47 UTC 9y no
etcd-healthcheck-client Dec 27, 2029 15:47 UTC 9y no
etcd-peer Dec 27, 2029 15:47 UTC 9y no
etcd-server Dec 27, 2029 15:47 UTC 9y no
front-proxy-client Dec 27, 2029 15:47 UTC 9y no
scheduler.conf Dec 27, 2029 15:47 UTC 9y no
kubelet configuration adjustments
Add Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" to the drop-in file.
This parameter sets kubelet's cgroup driver to match docker's.
cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Enable kubelet at boot and start it
systemctl enable kubelet
systemctl start kubelet
Until the cluster is initialized (or the node has joined a cluster), kubelet sits in an error state after starting; it recovers automatically once initialization or the join completes.
Install and initialize m1
Configure kubeadm-config.yaml
Configuration reference: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- token: "783bde.3f89s0fje9f38fhf"
description: "another bootstrap token"
ttl: "0s"
usages:
- authentication
- signing
groups:
- system:bootstrappers:kubeadm:default-node-token
localAPIEndpoint:
advertiseAddress: "192.168.5.3"
bindPort: 6443
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
external:
endpoints:
- https://192.168.5.3:2379
- https://192.168.5.4:2379
- https://192.168.5.7:2379
caFile: /etc/etcd/ssl/ca.pem
certFile: /etc/etcd/ssl/server.pem
keyFile: /etc/etcd/ssl/server-key.pem
networking:
serviceSubnet: "10.96.0.0/12"
podSubnet: "10.50.0.0/16"
dnsDomain: "cluster.local"
kubernetesVersion: "v1.15.6"
controlPlaneEndpoint: "192.168.5.200:6443"
apiServer:
certSANs:
- "192.168.5.3"
- "192.168.5.4"
- "192.168.5.7"
- "192.168.5.200"
- "192.168.5.10"
- "192.168.5.11"
timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: registry.aliyuncs.com/google_containers
useHyperKubeImage: false
clusterName: kubernetes
dns:
type: CoreDNS
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
SupportIPVSProxyMode: true
mode: ipvs
Important fields explained
- bootstrapTokens.token: the token that nodes and additional masters present when joining the cluster. It can be generated with kubeadm token generate (see the sketch after this list); setting it here means later joins use this custom value rather than the one auto-generated at init time. The field can also be omitted, in which case kubeadm generates a token during initialization.
- bootstrapTokens.ttl: validity of the token; the default is 1 day. A value of 0s means the token never expires.
- localAPIEndpoint.advertiseAddress: the IP address the API server advertises.
- localAPIEndpoint.bindPort: the port the API server binds to.
- etcd: configuration for the external, self-built etcd cluster, including the endpoint addresses and certificate settings.
- networking.serviceSubnet: the service subnet.
- networking.podSubnet: the pod subnet; the flannel configuration installed later must match this value.
- networking.dnsDomain: the root of the cluster DNS domain; if you change it, the matching kubelet parameter must be changed as well.
- kubernetesVersion: the Kubernetes version to install.
- controlPlaneEndpoint: the control-plane endpoint, either a domain name or an IP; for an HA API server, point it at the SLB address.
- apiServer.certSANs: additional addresses to include in the API server certificate.
- certificatesDir: where certificates are stored; defaults to /etc/kubernetes/pki.
- imageRepository: the image registry. The default k8s.gcr.io is not reachable without a proxy, so it is changed to the Aliyun mirror registry.aliyuncs.com/google_containers.
- dns.type: the DNS add-on to install; the default is CoreDNS.
- featureGates.SupportIPVSProxyMode: enables ipvs support in kube-proxy.
- mode: makes kube-proxy use ipvs as the service load balancer.
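If you prefer to generate your own bootstrapTokens.token and certificateKey instead of reusing the values above, the following works; the openssl command is simply a convenient way to produce the 32-byte hex string that certificateKey expects (a suggestion, not part of the original procedure):
kubeadm token generate   # produces a token in the required [a-z0-9]{6}.[a-z0-9]{16} format
openssl rand -hex 32     # produces a random 64-character hex string for certificateKey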
Initialize m1
kubeadm init --config /k8s/kubeadm-config.yaml --upload-certs
Wait for output like the following, which indicates initialization has completed
Generate the kubectl configuration
The init output already prints these instructions; simply follow them. Masters and nodes added later with kubeadm join can also run the command below to use kubectl.
mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
Inspect the cluster with kubectl
Right after initialization the coredns pods stay in Pending, because no CNI network plugin has been installed yet
[root@m1 k8s]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-h5jkz 0/1 Pending 0 5m
coredns-bccdc95cf-vcgbq 0/1 Pending 0 5m
kube-apiserver-m1 1/1 Running 0 5m
kube-controller-manager-m1 1/1 Running 0 5m
kube-proxy-72cdg 1/1 Running 0 5m
kube-scheduler-m1 1/1 Running 0 5m
Install the flannel network plugin
flannel documentation:
https://github.com/coreos/flannel
Configure the flannel yaml file
Change the section below; the Network value must be the same as the podSubnet set in the kubeadm-config.yaml used to initialize the cluster
net-conf.json: |
{
"Network": "10.50.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
Install flannel
The flannel pods are created by a DaemonSet controller, so every master and node that later joins the cluster gets a flannel pod started on it
kubectl apply -f /k8s/kube-flannel.yml
Once flannel is up, coredns switches to Running, which means everything is working
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-h5jkz 1/1 Running 0
5m
coredns-bccdc95cf-vcgbq 1/1 Running 0 5m
kube-apiserver-m1 1/1 Running 0 5m
kube-flannel-ds-amd64-67tvq 1/1 Running 0 5m
kube-controller-manager-m1 1/1 Running 0 5m
kube-proxy-72cdg 1/1 Running 0 5m
kube-scheduler-m1 1/1 Running 0 5m
Add the other master nodes
Run the following on each additional master to join it directly to the cluster; the command was printed when the cluster was initialized
kubeadm join 192.168.5.200:6443 --token 783bde.3f89s0fje9f38fhf \
--discovery-token-ca-cert-hash sha256:5dd8f46b1e107e863d3d905411b591573cb65015e2c80386362599b81db09ef7 \
--control-plane --certificate-key e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204
- token: the value set in kubeadm-config.yaml.
- --discovery-token-ca-cert-hash: generated automatically during initialization.
- --certificate-key: the value set in kubeadm-config.yaml. It is used to automatically download the control-plane certificates from the initialized m1 and corresponds to the --upload-certs flag passed to kubeadm init; --certificate-key is only used when joining a master (see the sketch below for re-uploading the certificates if needed).
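The certificates uploaded by --upload-certs are stored as a short-lived secret (roughly two hours). If a master is added later and the join fails because that secret has expired, they can be re-uploaded from m1; the command below (syntax as of kubeadm 1.15, offered as a hint rather than part of the original steps) prints a fresh key to pass as --certificate-key:
kubeadm init phase upload-certs --upload-certs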
Join worker nodes
Run the following on each worker node; the command was also printed when the cluster was initialized
kubeadm join 192.168.5.200:6443 --token 783bde.3f89s0fje9f38fhf --discovery-token-ca-cert-hash sha256:5dd8f46b1e107e863d3d905411b591573cb65015e2c80386362599b81db09ef7
View the joined masters and nodes
Use the kubectl get node command
kubectl get node
NAME STATUS ROLES AGE VERSION
m1 Ready master 7d22h v1.15.6
m2 Ready master 7d22h v1.15.6
m3 Ready master 7d22h v1.15.6
n1 Ready <none> 7d22h v1.15.6
n2 Ready <none> 7d22h v1.15.6
If you did not save the command above, how do you look up the token and discovery-token-ca-cert-hash?
If you do not have a token, query the existing ones:
kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
783bde.3f89s0fje9f38fhf <forever> <never> authentication,signing another bootstrap token system:bootstrappers:kubeadm:default-node-token
By default, tokens expire after 24 hours. If you need to join a node after the current token has expired, create a new one by running the following on a master node:
kubeadm token create
ih6qhw.tbkp26l64xivcca7
If you do not have the discovery-token-ca-cert-hash, obtain it with the command below. The --discovery-token-ca-cert-hash value can be reused together with multiple tokens.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
0a737675e1e37aa4025077b27ced8053fe84c363df11c506bfb512b88408697e
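Alternatively, a complete worker join command (token plus hash) can be printed in one step:
kubeadm token create --print-join-command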
Install the dashboard
The dashboard is the web management UI that ships alongside k8s; it can manage most features of the cluster.
By default the dashboard is installed with HTTPS and token authentication; see https://github.com/kubernetes/dashboard for details. This installation uses the dashboard's HTTP mode instead. The default image listens on port 9090 for HTTP; see https://github.com/kubernetes/dashboard/blob/master/aio/Dockerfile
Configure the dashboard yaml file
Make the following changes to the file
Comment out the ClusterRole from the default file
#kind: ClusterRole
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
# labels:
# k8s-app: kubernetes-dashboard
#name: kubernetes-dashboard
#rules:
# Allow Metrics Scraper to get metrics from the Metrics server
# - apiGroups: ["metrics.k8s.io"]
# resources: ["pods", "nodes"]
# verbs: ["get", "list", "watch"]
In the ClusterRoleBinding, bind the cluster's built-in cluster-admin role, because the ClusterRole defined in the default file has insufficient permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
在容器配置字段添加
- containerPort: 9090
protocol: TCP
Add to the liveness probe section
httpGet:
scheme: HTTP
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
Change the Service type to NodePort and add the 9090 port mapping
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
name: https
- port: 9090
name: http
targetPort: 9090
type: NodePort
selector:
k8s-app: kubernetes-dashboard
The complete dashboard yaml file
cat dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
name: https
- port: 9090
name: http
targetPort: 9090
type: NodePort
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
#kind: ClusterRole
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
# labels:
# k8s-app: kubernetes-dashboard
#name: kubernetes-dashboard
#rules:
# Allow Metrics Scraper to get metrics from the Metrics server
# - apiGroups: ["metrics.k8s.io"]
# resources: ["pods", "nodes"]
# verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0-rc1
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
- containerPort: 9090
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
httpGet:
scheme: HTTP
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.1
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
Deploy the dashboard
kubectl apply -f /k8s/dashboard.yaml
Check the pods
kubectl get pod -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-6c554969c6-k6rbh 1/1 Running 0 3d4h
kubernetes-dashboard-9bff46df4-t7sn2 1/1 Running 1 4d1h
Check the services
kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.104.248.58 <none> 8000/TCP 4d1h
kubernetes-dashboard NodePort 10.100.47.122 <none> 443:30735/TCP,9090:32701/TCP 4d1h
Open 192.168.5.5:32701 in a browser and the following page appears
Install metrics-server
Since k8s 1.13, heapster is no longer used; metrics-server replaces it.
metrics-server github: https://github.com/kubernetes-incubator/metrics-server
Main changes
1. Add two arguments: --kubelet-preferred-address-types=InternalIP --kubelet-insecure-tls
2. Change imagePullPolicy: Always to imagePullPolicy: IfNotPresent,
because Always pulls from the original registry every time, which is unreachable without a proxy; with IfNotPresent the node uses a local image if it has one.
Adjust the image
The default image k8s.gcr.io/metrics-server-amd64:v0.3.1 cannot be pulled automatically; pull it onto the nodes yourself and retag it
docker pull mirrorgooglecontainers/metrics-server-amd64:v0.3.1
docker tag mirrorgooglecontainers/metrics-server-amd64:v0.3.1 k8s.gcr.io/metrics-server-amd64:v0.3.1
Create a directory for the yaml files and upload all the related yaml files into it
mkdir -p /k8s/metrics-server/
yaml file adjustments
# cat /k8s/metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
...
---
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
imagePullPolicy: Always
args:
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
volumeMounts:
- name: tmp-dir
mountPath: /tmp
Apply the manifests
$ kubectl create -f /k8s/metrics-server/
Check that the pod is running properly and that its logs show no errors
# kubectl -n kube-system get po,svc | grep metrics-server
pod/metrics-server-8665bf49db-5wv7l 1/1 Running 0 31m
service/metrics-server NodePort 10.99.222.85 <none> 443:32443/TCP 23m
# kubectl -n kube-system logs -f metrics-server-8665bf49db-5wv7l
Test fetching metrics data through kubectl
# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
{
"kind": "NodeMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
},
"items": [
{
"metadata": {
"name": "master02",
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/master02",
"creationTimestamp": "2019-01-29T10:02:00Z"
},
"timestamp": "2019-01-29T10:01:48Z",
"window": "30s",
"usage": {
"cpu": "131375532n",
"memory": "989032Ki"
}
},
...
Confirm the data with kubectl top
# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master01 200m 2% 1011Mi 3%
master02 451m 5% 967Mi 3%
master03 423m 5% 1003Mi 3%
node01 84m 1% 440Mi 1%
# kubectl top pod
NAME CPU(cores) MEMORY(bytes)
myip-7644b545d9-htg5z 0m 1Mi
myip-7644b545d9-pnwrn 0m 1Mi
myip-7644b545d9-ptnqc 0m 1Mi
tools-657d877fc5-4cfdd 0m 0Mi
ingress and ingress controllers
A brief introduction to ingress
An Ingress is roughly the server + upstream part of an nginx configuration, and an ingress controller is the nginx service itself.
They exist to solve the limitations of plain Services: a Service is an L4 (TCP) load balancer, and with NodePort you have to keep track of ports, of which only a limited range is available. Ingress plus an ingress controller solves this cleanly: once the controller is deployed, an Ingress resource references a Service in the cluster and the controller automatically picks up the configuration (provided it was granted the necessary permissions at deployment time).
Official docs: https://v1-15.docs.kubernetes.io/docs/concepts/services-networking/ingress/ and https://v1-15.docs.kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Using nginx as the ingress controller
Documentation: https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/
GitHub: https://github.com/nginxinc/kubernetes-ingress/
nginx-ingress yaml file
Change the nginx image; the default is not the stable release, so use nginx/nginx-ingress:1.6.0
cat kube-nginx-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress
namespace: nginx-ingress
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- update
- create
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- list
- watch
- get
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- k8s.nginx.org
resources:
- virtualservers
- virtualserverroutes
verbs:
- list
- watch
- get
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: nginx-ingress
roleRef:
kind: ClusterRole
name: nginx-ingress
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: default-server-secret
namespace: nginx-ingress
type: Opaque
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA4aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
namespace: nginx-ingress
data:
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: virtualservers.k8s.nginx.org
spec:
group: k8s.nginx.org
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: virtualservers
singular: virtualserver
kind: VirtualServer
shortNames:
- vs
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: virtualserverroutes.k8s.nginx.org
spec:
group: k8s.nginx.org
versions:
- name: v1
served: true
storage: true
scope: Namespaced
names:
plural: virtualserverroutes
singular: virtualserverroute
kind: VirtualServerRoute
shortNames:
- vsr
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
selector:
matchLabels:
app: nginx-ingress
template:
metadata:
labels:
app: nginx-ingress
#annotations:
#prometheus.io/scrape: "true"
#prometheus.io/port: "9113"
spec:
serviceAccountName: nginx-ingress
containers:
- image: nginx/nginx-ingress:1.6.0
imagePullPolicy: Always
name: nginx-ingress
ports:
- name: http
containerPort: 80
hostPort: 80
- name: https
containerPort: 443
hostPort: 443
#- name: prometheus
#containerPort: 9113
securityContext:
allowPrivilegeEscalation: true
runAsUser: 101 #nginx
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
- -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
#- -v=3 # Enables extensive logging. Useful for troubleshooting.
#- -report-ingress-status
#- -external-service=nginx-ingress
#- -enable-leader-election
#- -enable-prometheus-metrics
---
apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: nginx-ingress
Deploy it; the pods are managed by a DaemonSet
kubectl apply -f /k8s/kube-nginx-ingress.yaml
Check the deployment
kubectl get pod -n nginx-ingress
NAME READY STATUS RESTARTS AGE
nginx-ingress-crk7x 1/1 Running 1 3d23h
nginx-ingress-vw8mx 1/1 Running 1 3d23h
kubectl get svc -n nginx-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress NodePort 10.110.78.89 <none> 80:30526/TCP,443:30195/TCP 3d23h
Add a test case
Install a myapp
cat demo1.yaml
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: default
spec:
selector:
app: myapp
release: canary
ports:
- name: http
port: 80
targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
spec:
replicas: 2
selector:
matchLabels:
app: myapp
release: canary
template:
metadata:
labels:
app: myapp
release: canary
spec:
containers:
- name: myapp
image: ikubernetes/myapp:v2
ports:
- name: httpd
containerPort: 80
Deploy demo1
kubectl apply -f /k8s/demo1.yaml
Check the result
kubectl get pod -n default | grep myapp
myapp-deploy-67d64cb6f4-c582p 1/1 Running 1 3d23h
myapp-deploy-67d64cb6f4-j98nx 1/1 Running 0 3d6h
kubectl get svc -n default | grep myapp
myapp ClusterIP 10.104.231.115 <none> 80/TCP 3d23h
Create demo1-ingess.yaml
cat demo1-ingess.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-myapp
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: demo1.com # in production this domain should be resolvable from the public internet
http:
paths:
- path:
backend:
serviceName: myapp
servicePort: 80
Deploy demo1-ingress
kubectl apply -f /k8s/demo1-ingess.yaml
Check the result
kubectl get ingress -n default
NAME HOSTS ADDRESS PORTS AGE
ingress-myapp demo1.com 80 3d23h
On a machine that will access the demo1.com domain, add the following entry to its hosts file
192.168.5.5 demo1.com
Open http://demo1.com:30526/ in a browser.
If the myapp page appears, ingress and the ingress controller are installed successfully. The same can be tested from the command line, as shown below.
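The check can also be done without editing the hosts file by sending the Host header directly to a NodePort endpoint (30526 is the HTTP NodePort of the nginx-ingress Service shown earlier):
curl -H 'Host: demo1.com' http://192.168.5.5:30526/
# should return the myapp page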
Troubleshooting
A newly added node stays NotReady
Normally the network plugin is installed while only the master exists and nodes join afterwards, so a node may fail to load the plugin files.
Run
journalctl -f -u kubelet
It shows the following:
Nov 06 15:37:21 jupiter kubelet[86177]: W1106 15:37:21.482574 86177 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni
Nov 06 15:37:25 jupiter kubelet[86177]: E1106 15:37:25.075839 86177 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reaeady: cni config uninitialized
Investigation showed that /etc on this node had no cni directory while the other nodes did.
Copy the cni directory over from the master with scp:
scp -r master1:/etc/cni /etc/cni
Restart kubelet
systemctl restart kubelet
Back on the master the status is still NotReady (that is normal right after restarting a service, so give it a few minutes).
If the status never becomes Ready, go back to the node and run the command again:
journalctl -f -u kubelet
It shows:
Nov 06 15:36:41 jupiter kubelet[86177]: W1106 15:36:41.439409 86177 cni.go:202] Error validating CNI config &{weave 0.3.0 false [0xc000fb0c00 0xc000fb0c80] [123 10 32 32 32 32 34 99 110 105 86 101 114 115 105 111 110 34 58 32 34 48 46 51 46 48 34 44 10 32 32 32 32 34 110 97 109 101 34 58 32 34 119 101 97 118 101 34 44 10 32 32 32 32 34 112 108 117 103 105 110 115 34 58 32 91 10 32 32 32 32 32 32 32 32 123 10 32 32 32 32 32 32 32 32 32 32 32 32 34 110 97 109 101 34 58 32 34 119 101 97 118 101 34 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 119 101 97 118 101 45 110 101 116 34 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 104 97 105 114 112 105 110 77 111 100 101 34 58 32 116 114 117 101 10 32 32 32 32 32 32 32 32 125 44 10 32 32 32 32 32 32 32 32 123 10 32 32 32 32 32 32 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 112 111 114 116 109 97 112 34 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 99 97 112 97 98 105 108 105 116 105 101 115 34 58 32 123 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 32 116 114 117 101 125 44 10 32 32 32 32 32 32 32 32 32 32 32 32 34 115 110 97 116 34 58 32 116 114 117 101 10 32 32 32 32 32 32 32 32 125 10 32 32 32 32 93 10 125 10]}: [failed to find plugin "weave-net" in path [/opt/cni/bin]]
Nov 06 15:36:41 jupiter kubelet[86177]: W1106 15:36:41.439604 86177 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
This time the cause is clear: on the master, look at /opt/cni/bin and compare it with the same directory on the node; the number of files differs.
It roughly comes down to the three files below (two of them are links); if you are not sure which one is needed, just scp all three over:
scp master1:/opt/cni/bin/weave-plugin-2.5.2 ./
scp master1:/opt/cni/bin/weave-ipam ./
scp master1:/opt/cni/bin/weave-net ./
Finally, restart the service
systemctl restart kubelet
Run again
journalctl -f -u kubelet
and it now shows:
Nov 06 15:50:24 jupiter kubelet[114959]: I1106 15:50:24.546098 114959 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/7e1ce4d9-8ef6-4fda-8e10-84837b033e06-kube-proxy") pod "kube-proxy-wp5p7" (UID: "7e1ce4d9-8ef6-4fda-8e10-84837b033e06")
Nov 06 15:50:24 jupiter kubelet[114959]: I1106 15:50:24.546183 114959 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/7e1ce4d9-8ef6-4fda-8e10-84837b033e06-xtables-lock") pod "kube-proxy-wp5p7" (UID: "7e1ce4d9-8ef6-4fda-8e10-84837b033e06")
Nov 06 15:50:24 jupiter kubelet[114959]: I1106 15:50:24.546254 114959 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/7e1ce4d9-8ef6-4fda-8e10-84837b033e06-lib-modules") pod "kube-proxy-wp5p7" (UID: "7e1ce4d9-8ef6-4fda-8e10-84837b033e06")
Everything looks normal now. Back on the master node, run:
kubectl get nodes
The status may still show NotReady; don't worry, wait a minute or so and check again, and eventually it becomes Ready.
Handling the swap error
The error looks like:
[init] Using Kubernetes version: v1.15.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
Fix
[root@k8snode2 k8s_images]# swapoff -a
[root@k8snode2 k8s_images]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@k8smaster k8s_images]# free -m
total used free shared buff/cache available
Mem: 992 524 74 7 392 284
Swap: 0 0 0
Host CPU count problem
Error
the number of available CPUs 1 is less than the required 2
Fix
Give the virtual machine more than one CPU core.
iptables bridge and IP forwarding problem
Error
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
/proc/sys/net/ipv4/ip_forward contents are not set to 1
Fix
Add the following lines to /etc/sysctl.conf
cat /etc/sysctl.conf
# Enable IP forwarding; add all four lines below
net.ipv4.ip_forward = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
sysctl -p
Run this command to apply the settings
References
- dashboard: https://github.com/kubernetes/dashboard
- ingress-controllers: https://v1-15.docs.kubernetes.io/docs/concepts/services-networking/ingress-controllers/
- ingress and ingress-controllers with nginx: https://www.cnblogs.com/panwenbin-logs/p/9915927.html
- kubeadm: https://v1-15.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
- kubeadm config yaml: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
- kubeadm commands: https://v1-15.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
- flannel: https://github.com/coreos/flannel