二進制安裝k8s



1.創建多台虛擬機,安裝Linux操作系統

一台或多台機器操作系統 CentOS7.x-86 x64

硬件配置:2GB或更多RAM,2個CPU或更多CPU,硬盤30G或更多

集群中所有機器之間網絡互通

可以訪問外網,需要拉取鏡像

禁止swap分區

2.操作系統初始化

3.為etcd和apiserver自簽證書

cfssl是一個開源的證書管理工具,通過json文件描述證書請求(csr)和簽發策略(config)來簽發證書

4.部署etcd集群

5.部署master組件

kube-apiserver,kube-controller-manager,kube-scheduler,etcd

6.部署node組件

kubelet,kube-proxy,docker,etcd

7.部署集群網絡

k8s的架構


服務                      端口
etcd                      127.0.0.1:2379,2380
kubelet                   10250,10255
kube-proxy                10256
kube-apiserver            6443,127.0.0.1:8080
kube-scheduler            10251,10259
kube-controller-manager   10252,10257

環境准備

主機          ip          內存   軟件
k8s-master    10.0.0.11   1g     etcd,api-server,controller-manager,scheduler
k8s-node1     10.0.0.12   2g     etcd,kubelet,kube-proxy,docker,flannel
k8s-node2     10.0.0.13   2g     etcd,kubelet,kube-proxy,docker,flannel
k8s-node3     10.0.0.14   1g     kubelet,kube-proxy,docker,flannel
操作系統初始化
systemctl stop firewalld
systemctl disable firewalld
setenforce 0  #臨時
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config  #永久
swapoff -a  #臨時
sed -ri 's/.*swap.*/#&/' /etc/fstab  #防止開機自動掛載swap
hostnamectl set-hostname 主機名
sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0
yum -y install ntpdate  #時間同步
ntpdate time.windows.com



#master節點
[12:06:17 root@k8s-master ~]#cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
10.0.0.14 k8s-node3
EOF
scp -rp /etc/hosts root@10.0.0.12:/etc/hosts
scp -rp /etc/hosts root@10.0.0.13:/etc/hosts
scp -rp /etc/hosts root@10.0.0.14:/etc/hosts

#所有節點
#將橋接的IPv4流量傳遞到iptables鏈
cat > /etc/sysctl.d/k8s.conf<<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF
sysctl --system #生效

#node3節點
ssh-copy-id root@10.0.0.11
ssh-copy-id root@10.0.0.12
ssh-copy-id root@10.0.0.13

頒發證書

根據認證對象可以將證書分成三類:

  • 服務器證書server cert:服務端使用,客戶端以此驗證服務端身份,例如docker服務端、kube-apiserver
  • 客戶端證書client cert:用於服務端認證客戶端,例如etcdctl、etcd proxy、fleetctl、docker客戶端
  • 對等證書peer cert(表示既是server cert又是client cert):雙向證書,用於etcd集群成員間通信

kubernetes集群需要的證書如下:

  • etcd 節點需要標識自己服務的server cert,也需要client cert與etcd集群其他節點交互,因此使用對等證書peer cert
  • master 節點需要標識apiserver服務的server cert,也需要client cert連接etcd集群,這里分別指定2個證書。
  • kubectl、calico、kube-proxy 只需要client cert,因此證書請求中 hosts 字段可以為空。
  • kubelet證書比較特殊,不是手動生成,它由node節點通過TLS BootStrap向apiserver請求,由master節點的controller-manager 自動簽發,包含一個client cert 和一個server cert
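下面給出一個簡單的驗證思路(非必需,假設后文的證書已生成且位於/opt/certs):用openssl查看證書的Extended Key Usage,即可確認它是server cert、client cert還是peer cert。

#peer證書應同時包含 TLS Web Server Authentication 和 TLS Web Client Authentication
openssl x509 -in /opt/certs/etcd-peer.pem -noout -text | grep -A1 "Extended Key Usage"
#client證書(如后文的client.pem)則只包含 TLS Web Client Authentication
openssl x509 -in /opt/certs/client.pem -noout -text | grep -A1 "Extended Key Usage"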

本架構使用的證書:參考文檔

  • 一套對等證書(etcd-peer):etcd<-->etcd<-->etcd
  • 客戶端證書(client):api-server-->etcd和flanneld-->etcd
  • 服務器證書(apiserver):各客戶端(kubectl、kubelet、controller-manager等)-->api-server
  • 服務器證書(kubelet):api-server-->kubelet
  • 服務器證書(kube-proxy-client):api-server-->kube-proxy

不使用證書:

  • 如果使用證書,每次訪問etcd都必須指定證書;為了方便,etcd監聽127.0.0.1,本機訪問不使用證書。

  • api-server-->controller-manager

  • api-server-->scheduler


在k8s-node3節點基於CFSSL工具創建CA證書,服務端證書,客戶端證書。

CFSSL是CloudFlare開源的一款PKI/TLS工具。 CFSSL 包含一個命令行工具 和一個用於簽名,驗證並且捆綁TLS證書的 HTTP API 服務。 使用Go語言編寫。

Github:https://github.com/cloudflare/cfssl
官網:https://pkg.cfssl.org/
參考:http://blog.51cto.com/liuzhengwei521/2120535?utm_source=oschina-app
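cfssl的輸入輸出都是json。如果想先看看這些json文件的骨架,可以用cfssl自帶的print-defaults子命令生成模板(僅作參考,后文直接給出完整文件):

cfssl print-defaults config > sample-config.json   #對應后文的ca-config.json
cfssl print-defaults csr > sample-csr.json         #對應后文的*-csr.json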

1.准備證書頒發工具

#node3節點
mkdir /opt/softs &&  cd /opt/softs
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
mv cfssl_linux-amd64 cfssl
mv cfssljson_linux-amd64 cfssl-json
chmod +x /opt/softs/*
ln -s /opt/softs/* /usr/bin/
mkdir /opt/certs && cd /opt/certs

2.編輯ca證書配置文件

 tee  /opt/certs/ca-config.json <<-EOF
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
tee /opt/certs/ca-csr.json <<-EOF
{
    "CN": "kubernetes-ca",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
EOF

3.生成CA證書和私鑰

[root@k8s-node3 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca - 
[root@k8s-node3 certs]# ls 
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
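可以用cfssl-certinfo查看剛生成的CA證書內容(僅作驗證,輸出為json,重點關注subject和not_after到期時間):

cfssl-certinfo -cert ca.pem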

部署etcd集群

主機名        ip          角色
k8s-master    10.0.0.11   etcd leader
k8s-node1     10.0.0.12   etcd follower
k8s-node2     10.0.0.13   etcd follower

node3節點頒發etcd節點之間通信的證書

tee /opt/certs/etcd-peer-csr.json <<-EOF
{
    "CN": "etcd-peer",
    "hosts": [
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
[root@k8s-node3 certs]# ls etcd-peer*
etcd-peer.csr  etcd-peer-csr.json  etcd-peer-key.pem  etcd-peer.pem
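簡單驗證一下(可選):peer證書的SAN里應包含三個etcd節點的IP,否則etcd節點間TLS握手會失敗。

openssl x509 -in etcd-peer.pem -noout -text | grep -A1 "Subject Alternative Name"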

安裝etcd服務

etcd集群

yum install  etcd  -y

node3節點

scp -rp *.pem root@10.0.0.11:/etc/etcd/
scp -rp *.pem root@10.0.0.12:/etc/etcd/
scp -rp *.pem root@10.0.0.13:/etc/etcd/
    
master節點
[root@k8s-master ~]#ls /etc/etcd
chown -R etcd:etcd /etc/etcd/*.pem
tee  /etc/etcd/etcd.conf <<-EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF

node1節點

 chown -R etcd:etcd /etc/etcd/*.pem
tee  /etc/etcd/etcd.conf <<-EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_NAME="node2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF

node2節點

 chown -R etcd:etcd /etc/etcd/*.pem
tee  /etc/etcd/etcd.conf <<-EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_NAME="node3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF

etcd節點同時啟動

systemctl restart etcd
systemctl enable etcd

驗證

etcdctl member list
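也可以進一步驗證集群健康和讀寫(這里的etcdctl走本機127.0.0.1:2379的http監聽,使用v2 API,所以不需要帶證書;命令僅供參考):

etcdctl cluster-health
etcdctl set /test/key hello && etcdctl get /test/key && etcdctl rm /test/key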

node3節點的安裝

安裝api-server服務
rz kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# ls
cfssl  cfssl-certinfo  cfssl-json  kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# tar xf kubernetes-server-linux-amd64-v1.15.4.tar.gz 
[root@k8s-node3 softs]# ls 
cfssl  cfssl-certinfo  cfssl-json  kubernetes  kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# cd /opt/softs/kubernetes/server/bin/
[root@k8s-node3 bin]# rm -rf *.tar *.docker_tag   #刪除鏡像tar包和docker_tag文件,只保留二進制
[root@k8s-node3 bin]# scp -rp kube-apiserver kube-controller-manager kube-scheduler  kubectl root@10.0.0.11:/usr/sbin/

簽發client證書

[root@k8s-node3 bin]# cd /opt/certs/
 tee /opt/certs/client-csr.json <<-EOF
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
 2020/12/17 12:25:03 [INFO] generate received request
2020/12/17 12:25:03 [INFO] received CSR
2020/12/17 12:25:03 [INFO] generating key: rsa-2048

2020/12/17 12:25:04 [INFO] encoded CSR
2020/12/17 12:25:04 [INFO] signed certificate with serial number 533774625324057341405060072478063467467017332427
2020/12/17 12:25:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls client*
client.csr  client-csr.json  client-key.pem  client.pem

簽發kube-apiserver服務端證書

tee /opt/certs/apiserver-csr.json <<-EOF
{
    "CN": "apiserver",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver 
2020/12/17 12:26:58 [INFO] generate received request
2020/12/17 12:26:58 [INFO] received CSR
2020/12/17 12:26:58 [INFO] generating key: rsa-2048
2020/12/17 12:26:58 [INFO] encoded CSR
2020/12/17 12:26:58 [INFO] signed certificate with serial number 315139331004456663749895745137037080303885454504
2020/12/17 12:26:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").    
[root@k8s-node3 certs]# ls apiserver*
apiserver.csr  apiserver-csr.json  apiserver-key.pem  apiserver.pem

注:10.254.0.1為clusterIP網段的第一個ip,做為pod訪問api-server的內部ip

配置api-server服務

master節點

1.拷貝證書

 mkdir /etc/kubernetes -p && cd /etc/kubernetes
[root@k8s-node3 certs]#scp -rp ca*.pem apiserver*.pem client*.pem root@10.0.0.11:/etc/kubernetes
[root@k8s-master kubernetes]# ls
apiserver-key.pem  apiserver.pem  ca-key.pem  ca.pem  client-key.pem  client.pem

2.api-server審計日志規則

[root@k8s-master kubernetes]# tee /etc/kubernetes/audit.yaml <<-EOF
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
EOF



tee  /usr/lib/systemd/system/kube-apiserver.service <<-EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \\
  --audit-log-path /var/log/kubernetes/audit-log \\
  --audit-policy-file /etc/kubernetes/audit.yaml \\
  --authorization-mode RBAC \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \\
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
  --etcd-cafile /etc/kubernetes/ca.pem \\
  --etcd-certfile /etc/kubernetes/client.pem \\
  --etcd-keyfile /etc/kubernetes/client-key.pem \\
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \\
  --service-account-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --service-node-port-range 30000-59999 \\
  --kubelet-client-certificate /etc/kubernetes/client.pem \\
  --kubelet-client-key /etc/kubernetes/client-key.pem \\
  --log-dir  /var/log/kubernetes/ \\
  --logtostderr=false \\
  --tls-cert-file /etc/kubernetes/apiserver.pem \\
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
    
    
    
[root@k8s-master kubernetes]# mkdir /var/log/kubernetes
[root@k8s-master kubernetes]# systemctl daemon-reload 
[root@k8s-master kubernetes]# systemctl start kube-apiserver.service 
[root@k8s-master kubernetes]# systemctl enable kube-apiserver.service
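啟動后可以先粗略確認端口(1.15默認安全端口6443、非安全端口127.0.0.1:8080,與前文端口表一致;命令僅供參考):

netstat -tnlp | grep kube-apiserver
curl -s http://127.0.0.1:8080/healthz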

3.檢驗

[root@k8s-master kubernetes]# kubectl get cs  
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-2               Healthy     {"health":"true"}                                                                           
etcd-1               Healthy     {"health":"true"}                                                                           
etcd-0               Healthy     {"health":"true"}

#此時scheduler和controller-manager尚未部署,顯示Unhealthy屬正常現象;etcd全部Healthy即說明api-server與etcd通信正常
4.安裝controller-manager服務
tee /usr/lib/systemd/system/kube-controller-manager.service <<-EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-controller-manager \\
  --cluster-cidr 172.18.0.0/16 \\
  --log-dir /var/log/kubernetes/ \\
  --master http://127.0.0.1:8080 \\
  --service-account-private-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --root-ca-file /etc/kubernetes/ca.pem \\
  --logtostderr=false \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

#說明:--cluster-cidr 為pod網段,--service-cluster-ip-range 為service(VIP)網段,--logtostderr=false 表示不輸出到標准錯誤,日志寫入--log-dir指定目錄。unit文件不支持行內注釋,注釋必須單獨成行,否則systemd解析會失敗。

為了省事,apiserver和etcd通信,apiserver和kubelet通信共用一套client cert證書。

--audit-log-path /var/log/kubernetes/audit-log \ # 審計日志路徑
--audit-policy-file /etc/kubernetes/audit.yaml \ # 審計規則文件
--authorization-mode RBAC \                      # 授權模式:RBAC
--client-ca-file /etc/kubernetes/ca.pem \        # client ca證書
--requestheader-client-ca-file /etc/kubernetes/ca.pem \ # 請求頭 ca證書
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \ # 啟用的准入插件
--etcd-cafile /etc/kubernetes/ca.pem \          # 與etcd通信ca證書
--etcd-certfile /etc/kubernetes/client.pem \    # 與etcd通信client證書
--etcd-keyfile /etc/kubernetes/client-key.pem \ # 與etcd通信client私鑰
--etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
--service-account-key-file /etc/kubernetes/ca-key.pem \ # ca私鑰
--service-cluster-ip-range 10.254.0.0/16 \              # VIP范圍
--service-node-port-range 30000-59999 \          # VIP端口范圍
--kubelet-client-certificate /etc/kubernetes/client.pem \ # 與kubelet通信client證書
--kubelet-client-key /etc/kubernetes/client-key.pem \ # 與kubelet通信client私鑰
--log-dir  /var/log/kubernetes/ \  # 日志文件路徑
--logtostderr=false \ # 不輸出到標准錯誤,配合--log-dir寫入日志文件
--tls-cert-file /etc/kubernetes/apiserver.pem \            # api服務證書
--tls-private-key-file /etc/kubernetes/apiserver-key.pem \ # api服務私鑰
--v 2  # 日志級別 2
Restart=on-failure
-etcd-servers: etcd集群地址
-bind-address:監聽地址
-secure-port 安全端口
-allow-privileged:啟用授權
-enable-admission-plugins:准入控制模塊
-authorization-mode: 認證授權,啟用RBAC授權和節點自管理
-token-auth-file: bootfile

5.啟動服務

systemctl daemon-reload 
systemctl restart kube-controller-manager.service 
systemctl enable kube-controller-manager.service

6.安裝scheduler服務

 tee  /usr/lib/systemd/system/kube-scheduler.service <<-EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-scheduler \\
  --log-dir /var/log/kubernetes/ \\
  --master http://127.0.0.1:8080 \\
  --logtostderr=false \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

7.重啟服務驗證

systemctl daemon-reload 
systemctl start kube-scheduler.service 
systemctl enable kube-scheduler.service
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

node節點的安裝

安裝kubelet服務

node3節點簽發證書

[root@k8s-node3 bin]# cd /opt/certs/
 tee kubelet-csr.json <<-EOF
{
    "CN": "kubelet-node",
    "hosts": [
    "127.0.0.1", \\
    "10.0.0.11", \\
    "10.0.0.12", \\
    "10.0.0.13", \\
    "10.0.0.14", \\
    "10.0.0.15" \\
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF


cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
[root@k8s-node3 certs]#ls kubelet*
kubelet.csr  kubelet-csr.json  kubelet-key.pem  kubelet.pem

生成kubelet啟動所需的kube-config文件

[root@k8s-node3 certs]# ln -s /opt/softs/kubernetes/server/bin/kubectl /usr/sbin/

設置集群參數

[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
   --certificate-authority=/opt/certs/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.11:6443 \
   --kubeconfig=kubelet.kubeconfig

設置客戶端認證參數

[root@k8s-node3 certs]# kubectl config set-credentials k8s-node --client-certificate=/opt/certs/client.pem --client-key=/opt/certs/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig

生成上下文參數

[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
   --cluster=myk8s \
   --user=k8s-node \
   --kubeconfig=kubelet.kubeconfig

切換默認上下文

[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

查看生成的kube-config文件

[root@k8s-node3 certs]# ls kubelet.kubeconfig 
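可以用kubectl檢查這個kubeconfig里cluster、user、context是否都已寫入(僅作驗證,證書內容會顯示為REDACTED):

kubectl config view --kubeconfig=kubelet.kubeconfig
kubectl config get-contexts --kubeconfig=kubelet.kubeconfig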

master節點

[root@k8s-master ~]# tee  k8s-node.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
EOF
[root@k8s-master ~]# kubectl create -f k8s-node.yaml
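驗證綁定關系(可選):k8s-node這個User應被綁定到system:node這個ClusterRole上,kubelet的client證書CN即為k8s-node。

kubectl describe clusterrolebinding k8s-node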

node1節點

安裝docker-ce

rz docker1903_rpm.tar.gz
tar xf docker1903_rpm.tar.gz
cd docker1903_rpm/
yum localinstall *.rpm -y
systemctl start docker

tee /etc/docker/daemon.json <<-EOF
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker.service
systemctl enable docker.service
[root@k8s-node1 ~]# mkdir /etc/kubernetes -p && cd /etc/kubernetes

    
[root@k8s-node3 certs]# scp -rp kubelet.kubeconfig root@10.0.0.12:/etc/kubernetes
scp -rp kubelet*.pem ca*.pem root@10.0.0.12:/etc/kubernetes
scp -rp /opt/softs/kubernetes/server/bin/kubelet root@10.0.0.12:/usr/bin/

        
[root@k8s-node1 kubernetes]# mkdir  /var/log/kubernetes
tee /usr/lib/systemd/system/kubelet.service <<-EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.254.230.254 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on=false \
  --client-ca-file /etc/kubernetes/ca.pem \
  --tls-cert-file /etc/kubernetes/kubelet.pem \
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
  --hostname-override 10.0.0.12 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
  --log-dir /var/log/kubernetes/ \
  --pod-infra-container-image t29617342/pause-amd64:3.0 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF  

重啟服務

systemctl daemon-reload 
systemctl start kubelet.service 
systemctl enable kubelet.service
netstat -tnulp 

node2節點

rz docker1903_rpm.tar.gz
tar xf docker1903_rpm.tar.gz
cd docker1903_rpm/
yum localinstall *.rpm -y
systemctl start docker
[root@k8s-node3 certs]#scp -rp /opt/softs/kubernetes/server/bin/kubelet root@10.0.0.13:/usr/bin/
[15:55:01 root@k8s-node2 ~/docker1903_rpm]#
scp -rp root@10.0.0.12:/etc/docker/daemon.json /etc/docker
systemctl restart docker
docker info|grep -i cgroup
mkdir /etc/kubernetes 
scp -rp root@10.0.0.12:/etc/kubernetes/* /etc/kubernetes/
mkdir  /var/log/kubernetes
tee /usr/lib/systemd/system/kubelet.service <<-EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.254.230.254 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on=false \
  --client-ca-file /etc/kubernetes/ca.pem \
  --tls-cert-file /etc/kubernetes/kubelet.pem \
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
  --hostname-override 10.0.0.13 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
  --log-dir /var/log/kubernetes/ \
  --pod-infra-container-image t29617342/pause-amd64:3.0 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    
systemctl daemon-reload 
systemctl start kubelet.service 
systemctl enable kubelet.service
netstat -tnulp    
Requires=docker.service # 依賴服務
[Service]
ExecStart=/usr/bin/kubelet \
--anonymous-auth=false \         # 關閉匿名認證
--cgroup-driver systemd \        # 用systemd控制
--cluster-dns 10.254.230.254 \   # DNS地址
--cluster-domain cluster.local \ # DNS域名,與DNS服務配置資源指定的一致
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on=false \           # 關閉不使用swap
--client-ca-file /etc/kubernetes/ca.pem \                # ca證書
--tls-cert-file /etc/kubernetes/kubelet.pem \            # kubelet證書
--tls-private-key-file /etc/kubernetes/kubelet-key.pem \ # kubelet密鑰
--hostname-override 10.0.0.13 \  # kubelet主機名, 各node節點不一樣
--image-gc-high-threshold 20 \   # 磁盤使用率超過20,始終運行鏡像垃圾回收
--image-gc-low-threshold 10 \    # 磁盤使用率小於10,從不運行鏡像垃圾回收
--kubeconfig /etc/kubernetes/kubelet.kubeconfig \ # 客戶端認證憑據
--pod-infra-container-image t29617342/pause-amd64:3.0 \ # pod基礎容器鏡像

注意:這里的pod基礎容器鏡像使用的是官方倉庫t29617342用戶的公開鏡像!

master節點驗證

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.0.0.12   Ready    <none>   15m   v1.15.4
10.0.0.13   Ready    <none>   16s   v1.15.4
安裝kube-proxy服務

node3節點簽發證書

[root@k8s-node3 ~]# cd /opt/certs/
tee /opt/certs/kube-proxy-csr.json <<-EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
[root@k8s-node3 certs]# ls kube-proxy-c*
kube-proxy-client.csr  kube-proxy-client-key.pem  kube-proxy-client.pem  kube-proxy-csr.json

生成kube-proxy啟動所需要kube-config
[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
   --certificate-authority=/opt/certs/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.11:6443 \
   --kubeconfig=kube-proxy.kubeconfig




 kubectl config set-credentials kube-proxy \
   --client-certificate=/opt/certs/kube-proxy-client.pem \
   --client-key=/opt/certs/kube-proxy-client-key.pem \
   --embed-certs=true \
   --kubeconfig=kube-proxy.kubeconfig




 kubectl config set-context myk8s-context \
   --cluster=myk8s \
   --user=kube-proxy \
   --kubeconfig=kube-proxy.kubeconfig




 kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig




[root@k8s-node3 certs]# ls kube-proxy.kubeconfig 
kube-proxy.kubeconfig



scp -rp kube-proxy.kubeconfig  root@10.0.0.12:/etc/kubernetes/  
scp -rp kube-proxy.kubeconfig  root@10.0.0.13:/etc/kubernetes/
scp -rp  /opt/softs/kubernetes/server/bin/kube-proxy root@10.0.0.12:/usr/bin/
scp -rp /opt/softs/kubernetes/server/bin/kube-proxy root@10.0.0.13:/usr/bin/

node1節點上配置kube-proxy

[root@k8s-node1 ~]#tee /usr/lib/systemd/system/kube-proxy.service <<-EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.12 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    
    
    
systemctl daemon-reload 
systemctl start kube-proxy.service 
systemctl enable kube-proxy.service
netstat -tnulp    

node2節點配置kube-proxy

tee /usr/lib/systemd/system/kube-proxy.service <<-EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.13 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    
    
    
systemctl daemon-reload 
systemctl start kube-proxy.service 
systemctl enable kube-proxy.service
netstat -tnulp
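kube-proxy起來后,除了看端口(10256為healthz端口,與前文端口表一致),也可以直接請求健康檢查接口(僅供參考):

curl -s http://127.0.0.1:10256/healthz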

配置flannel網絡

所有節點安裝flannel

yum install flannel  -y
mkdir  /opt/certs/

node3分發證書

cd /opt/certs/
 scp -rp ca.pem client*pem root@10.0.0.11:/opt/certs/
 scp -rp ca.pem client*pem root@10.0.0.12:/opt/certs/
 scp -rp ca.pem client*pem root@10.0.0.13:/opt/certs/

master節點

etcd創建flannel的key

#通過這個key定義pod的ip地址范圍
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'

注意可能會失敗提示

      Error:  x509: certificate signed by unknown authority
      #多重試幾次就好了
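寫入成功后可以把key讀出來確認(同樣走v2 API;flanneld就是從這個key讀取Network和Backend配置的):

etcdctl get /atomic.io/network/config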

配置啟動flannel

tee  /etc/sysconfig/flanneld <<-EOF
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/opt/certs/ca.pem -etcd-certfile=/opt/certs/client.pem -etcd-keyfile=/opt/certs/client-key.pem"
EOF
systemctl restart flanneld.service 
systemctl enable flanneld.service

scp -rp /etc/sysconfig/flanneld root@10.0.0.12:/etc/sysconfig/flanneld    
scp -rp /etc/sysconfig/flanneld root@10.0.0.13:/etc/sysconfig/flanneld

systemctl restart flanneld.service 
systemctl enable flanneld.service

#驗證
[root@k8s-master ~]# ifconfig flannel.1

node1和node2節點

sed -i 's@ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock@ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock@g'  /usr/lib/systemd/system/docker.service
sed  -i "/ExecStart=/a ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT"  /usr/lib/systemd/system/docker.service
或者sed  -i "/ExecStart=/i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT"  /usr/lib/systemd/system/docker.service
 systemctl daemon-reload 
 systemctl restart docker
iptables -nL
#驗證,docker0網絡為172.18網段就ok了
ifconfig docker0
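還可以順帶看一下flanneld為本機租到的子網以及到其他節點子網的路由(文件路徑和網卡名基於flannel默認配置,僅供參考):

cat /run/flannel/subnet.env
ip route | grep flannel.1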

驗證k8s集群的安裝

node1和node2節點
rz docker_nginx1.13.tar.gz
docker load -i docker_nginx1.13.tar.gz
[root@k8s-master ~]# kubectl run nginx  --image=nginx:1.13 --replicas=2
kubectl create deployment test --image=nginx:1.13
kubectl get pod    
kubectl expose deploy nginx --type=NodePort --port=80 --target-port=80
kubectl get svc
curl -I http://10.0.0.12:35822
curl -I http://10.0.0.13:35822    

run將在未來被移除,以后用:

kubectl create deployment test --image=nginx:1.13

k8s高版本支持 -A參數

-A, --all-namespaces # 如果存在,列出所有命名空間中請求的對象

k8s的常用資源

pod資源

pod資源至少由兩個容器組成,一個基礎容器pod+業務容器

動態pod,這個pod的yaml文件從etcd獲取的yaml

靜態pod,kubelet本地目錄讀取yaml文件,啟動的pod

node1

mkdir /etc/kubernetes/manifest
tee  /usr/lib/systemd/system/kubelet.service <<-EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
  --anonymous-auth=false  \
  --cgroup-driver systemd   \
  --cluster-dns 10.254.230.254  \  
  --cluster-domain cluster.local   \
  --runtime-cgroups=/systemd/system.slice  \
  --kubelet-cgroups=/systemd/system.slice   \
  --fail-swap-on=false  \
  --client-ca-file /etc/kubernetes/ca.pem  \
  --tls-cert-file /etc/kubernetes/kubelet.pem   \
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem  \
  --hostname-override 10.0.0.12   \
  --image-gc-high-threshold 20   \
  --image-gc-low-threshold 10   \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig   \
  --log-dir /var/log/kubernetes/   \
  --pod-infra-container-image t29617342/pause-amd64:3.0  \
  --pod-manifest-path /etc/kubernetes/manifest  \
  --logtostderr=false  \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

重啟服務

systemctl daemon-reload 
systemctl restart kubelet.service

添加靜態pod

tee  /etc/kubernetes/manifest/k8s_pod.yaml<<-EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
EOF

驗證

[18:57:17 root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS        RESTARTS   AGE
nginx-6459cd46fd-4tszs   1/1     Running       0          37m
nginx-6459cd46fd-74npw   1/1     Running       0          6m31s
nginx-6459cd46fd-qbwg6   1/1     Running       0          37m
static-pod-10.0.0.13     1/1     Running       0          6m28s

污點和容忍度

節點和Pod的親和力,用來將Pod吸引到一組節點【根據拓撲域】(作為優選或硬性要求)。

污點(Taints)則相反,應用於node,它們允許一個節點排斥一組Pod。

污點taints是定義在節點之上的key=value:effect,用於讓節點拒絕將Pod調度運行於其上, 除非該Pod對象具有接納節點污點的容忍度。

容忍(Tolerations)應用於pod,允許(但不強制要求)pod調度到具有匹配污點的節點上。

容忍度tolerations是定義在 Pod對象上的鍵值型屬性數據,用於配置其可容忍的節點污點,而且調度器僅能將Pod對象調度至其能夠容忍該節點污點的節點之上。


污點(Taints)和容忍(Tolerations)共同作用,確保pods不會被調度到不適當的節點。一個或多個污點應用於節點;這標志着該節點不應該接受任何不容忍污點的Pod。

說明:我們在平常使用中發現pod不會調度到k8s的master節點,就是因為master節點存在污點。

多個Taints污點和多個Tolerations容忍判斷:

可以在同一個node節點上設置多個污點(Taints),在同一個pod上設置多個容忍(Tolerations)。

Kubernetes處理多個污點和容忍的方式就像一個過濾器:從節點的所有污點開始,忽略可以被Pod容忍匹配的污點;剩余不可忽略的污點,其effect將對Pod產生相應的效果(見下文三種類型)。


污點

污點(Taints): node節點的屬性,通過打標簽實現

污點(Taints)類型:

  • NoSchedule:不要再往該node節點調度了,不影響之前已經存在的pod。
  • PreferNoSchedule:備用。優先往其他node節點調度。
  • NoExecute:清場,驅逐。新pod不許來,老pod全趕走。適用於node節點下線。

污點(Taints)的 effect 值 NoExecute,它會影響已經在節點上運行的 pod:

  • 如果 pod 不能容忍 effect 值為 NoExecute 的 taint,那么 pod 將馬上被驅逐
  • 如果 pod 能夠容忍 effect 值為 NoExecute 的 taint,且在 toleration 定義中沒有指定 tolerationSeconds,則 pod 會一直在這個節點上運行。
  • 如果 pod 能夠容忍 effect 值為 NoExecute 的 taint,但是在toleration定義中指定了 tolerationSeconds,則表示 pod 還能在這個節點上繼續運行的時間長度。

  1. 查看node節點標簽
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME        STATUS     ROLES    AGE   VERSION   LABELS
10.0.0.12   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux
10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
  1. 添加標簽:node角色
kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node=
  1. 查看node節點標簽:10.0.0.12的ROLES變為node
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME        STATUS     ROLES    AGE   VERSION   LABELS
10.0.0.12   NotReady   node     17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux,node-role.kubernetes.io/node=
10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
  1. 刪除標簽
kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node-
  1. 添加標簽:硬盤類型
kubectl label nodes 10.0.0.12 disk=ssd
kubectl label nodes 10.0.0.13 disk=sata
  1. 清除其他pod
kubectl delete deployments --all
  1. 查看當前pod:2個
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-6459cd46fd-dl2ct   1/1     Running   1          16h   172.18.28.3   10.0.0.12   <none>           <none>
nginx-6459cd46fd-zfwbg   1/1     Running   0          16h   172.18.98.4   10.0.0.13   <none>           <none>

NoSchedule

  1. 添加污點:基於硬盤類型的NoSchedule
kubectl taint node 10.0.0.12 disk=ssd:NoSchedule
  1. 查看污點
kubectl describe nodes 10.0.0.12|grep Taint
  1. 調整副本數
kubectl scale deployment nginx --replicas=5
  1. 查看pod驗證:新增pod都在10.0.0.13上創建
kubectl get pod -o wide
  1. 刪除污點
kubectl taint node 10.0.0.12 disk-

NoExecute

  1. 添加污點:基於硬盤類型的NoExecute
kubectl taint node 10.0.0.12 disk=ssd:NoExecute
  1. 查看pod驗證:所有pod都在10.0.0.13上創建,之前10.0.0.12上的pod也轉移到10.0.0.13上
kubectl get pod -o wide
  1. 刪除污點
kubectl taint node 10.0.0.12 disk-

PreferNoSchedule

  1. 添加污點:基於硬盤類型的PreferNoSchedule
kubectl taint node 10.0.0.12 disk=ssd:PreferNoSchedule
  1. 調整副本數
kubectl scale deployment nginx --replicas=2
kubectl scale deployment nginx --replicas=5
  1. 查看pod驗證:有部分pod都在10.0.0.12上創建
kubectl get pod -o wide
  1. 刪除污點
kubectl taint node 10.0.0.12 disk-

容忍度

容忍度(Tolerations):pod.spec的屬性,設置了容忍的Pod將可以容忍污點的存在,可以被調度到存在污點的Node上。


  1. 查看解釋
kubectl explain pod.spec.tolerations
  1. 配置能夠容忍NoExecute污點的deploy資源yaml配置文件
mkdir -p /root/k8s_yaml/deploy && cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: "disk"
        operator: "Equal"
        value: "ssd"
        effect: "NoExecute"
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
  1. 創建deploy資源
kubectl delete deployments nginx
kubectl create -f k8s_deploy.yaml
  1. 查看當前pod
kubectl get pod -o wide
  1. 添加污點:基於硬盤類型的NoExecute
kubectl taint node 10.0.0.12 disk=ssd:NoExecute
  1. 調整副本數
kubectl scale deployment nginx --replicas=5
  1. 查看pod驗證:有部分pod都在10.0.0.12上創建,容忍了污點
kubectl get pod -o wide
  1. 刪除污點
kubectl taint node 10.0.0.12 disk-

pod.spec.tolerations示例

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoExecute"
  tolerationSeconds: 3600

說明:

  • 其中key、value、effect要與Node上設置的taint保持一致
  • operator的值為Exists時,將會忽略value;只要有key和effect就行
  • tolerationSeconds:表示pod能夠容忍 effect 值為 NoExecute 的 taint;當指定了 tolerationSeconds【容忍時間】,則表示 pod 還能在這個節點上繼續運行的時間長度。

不指定key值和effect值時,且operator為Exists,表示容忍所有的污點【能匹配污點所有的keys,values和effects】

tolerations:
- operator: "Exists"

不指定effect值時,則能容忍污點key對應的所有effects情況

tolerations:
- key: "key"
  operator: "Exists"

有多個Master存在時,為了防止資源浪費,可以進行如下設置:

kubectl taint nodes Node-name node-role.kubernetes.io/master=:PreferNoSchedule

常用資源

pod資源

pod資源至少由兩個容器組成:一個基礎容器pod+業務容器

  • 動態pod:從etcd獲取yaml文件。

  • 靜態pod:kubelet本地目錄讀取yaml文件。


  1. k8s-node1修改kubelet.service,指定靜態pod路徑:該目錄下只能放置靜態pod的yaml配置文件
sed -i '22a \ \ --pod-manifest-path /etc/kubernetes/manifest \\' /usr/lib/systemd/system/kubelet.service
mkdir /etc/kubernetes/manifest
systemctl daemon-reload
systemctl restart kubelet.service
  1. k8s-node1創建靜態pod的yaml配置文件:靜態pod立即被創建,其name增加后綴本機IP
cat > /etc/kubernetes/manifest/k8s_pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
EOF
  1. master查看pod
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6459cd46fd-dl2ct   1/1     Running   0          51m
nginx-6459cd46fd-zfwbg   1/1     Running   0          51m
test-8c7c68d6d-x79hf     1/1     Running   0          51m
static-pod-10.0.0.12     1/1     Running   0          3s

kubeadm部署k8s基於靜態pod。

靜態pod:

  • 創建yaml配置文件,立即自動創建pod。

  • 移走yaml配置文件,立即自動移除pod。
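可以在k8s-node1上做個小實驗驗證上面兩條(假設使用前文配置的manifest路徑,在master上觀察):

mv /etc/kubernetes/manifest/k8s_pod.yaml /tmp/     #稍等片刻,static-pod被自動刪除
mv /tmp/k8s_pod.yaml /etc/kubernetes/manifest/     #yaml放回后,static-pod又被自動創建
#master上觀察
kubectl get pod -o wide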


secret資源

secret資源是某個namespace的局部資源,含有加密的密碼、密鑰、證書等。


k8s對接harbor

首先搭建Harbor docker鏡像倉庫,啟用https,創建私有倉庫。

然后使用secrets資源管理密鑰對,用於拉取鏡像時的身份驗證。


首先:deploy在pull鏡像時調用secrets

  1. 創建secrets資源regcred
kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 --docker-email=296917342@qq.com
  1. 查看secrets資源
[root@k8s-master ~]# kubectl get secrets 
NAME                       TYPE                                  DATA   AGE
default-token-vgc4l        kubernetes.io/service-account-token   3      2d19h
regcred                    kubernetes.io/dockerconfigjson        1      114s
  1. deploy資源調用secrets資源的密鑰對pull鏡像
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_secrets.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: nginx
        image: blog.oldqiang.com/oldboy/nginx:1.13
        ports:
        - containerPort: 80
EOF
  1. 創建deploy資源
kubectl delete deployments nginx
kubectl create -f k8s_deploy_secrets.yaml
  1. 查看當前pod:資源創建成功
kubectl get pod -o wide

RBAC:deploy在pull鏡像時通過用戶調用secrets

  1. 創建secrets資源harbor-secret
kubectl create secret docker-registry harbor-secret --namespace=default --docker-username=admin --docker-password=a123456 --docker-server=blog.oldqiang.com
  1. 創建用戶和pod資源的yaml文件
cd /root/k8s_yaml/deploy
# 創建用戶
cat > /root/k8s_yaml/deploy/k8s_sa_harbor.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret
EOF
# 創建pod
cat > /root/k8s_yaml/deploy/k8s_pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80
EOF
  1. 創建資源
kubectl delete deployments nginx
kubectl create -f k8s_sa_harbor.yaml
kubectl create -f k8s_pod.yaml
  1. 查看當前pod:資源創建成功
kubectl get pod -o wide

configmap資源

configmap資源用來存放配置文件,可用掛載到pod容器上。


  1. 創建配置文件
cat > /root/k8s_yaml/deploy/81.conf <<EOF
    server {
        listen       81;
        server_name  localhost;
        root         /html;
        index      index.html index.htm;
        location / {
        }
    }
EOF
  1. 創建configmap資源(可以指定多個--from-file)
kubectl create configmap 81.conf --from-file=/root/k8s_yaml/deploy/81.conf
  1. 查看configmap資源
kubectl get cm
kubectl get cm 81.conf -o yaml
  1. deploy資源掛載configmap資源
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_cm.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
            items:
              - key: 81.conf  # 指定多個配置文件中的一個
                path: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
EOF
  1. 創建deploy資源
kubectl delete deployments nginx
kubectl create -f k8s_deploy_cm.yaml
  1. 查看當前pod
kubectl get pod -o wide
  1. 但是volumeMounts只能掛目錄,原有文件會被覆蓋,導致80端口不能訪問。

initContainers資源

在啟動pod前,先啟動initContainers容器進行初始化操作。


  1. 查看解釋
kubectl explain pod.spec.initContainers
  1. deploy資源掛載configmap資源

初始化操作:

  • 初始化容器一:掛載持久化hostPath和configmap,拷貝81.conf到持久化目錄
  • 初始化容器二:掛載持久化hostPath,拷貝default.conf到持久化目錄

最后Deployment容器啟動,掛載持久化目錄。

cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_init.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: config
          hostPath:
            path: /mnt
        - name: tmp
          configMap:
            name: 81.conf
            items:
              - key: 81.conf
                path: 81.conf
      initContainers:
      - name: cp1
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /nginx_config
          - name: tmp
            mountPath: /tmp
        command: ["cp","/tmp/81.conf","/nginx_config/"]
      - name: cp2
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /nginx_config
        command: ["cp","/etc/nginx/conf.d/default.conf","/nginx_config/"]
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
EOF
  1. 創建deploy資源
kubectl delete deployments nginx
kubectl create -f k8s_deploy_init.yaml
  1. 查看當前pod
kubectl get pod -o wide -l app=nginx
  1. 查看存在配置文件:81.conf,default.conf
kubectl exec -ti nginx-7879567f94-25g5s /bin/bash
ls /etc/nginx/conf.d

常用服務

RBAC

RBAC:Role-Based Access Control,基於角色的訪問控制。

kubernetes的認證訪問授權機制RBAC,通過apiserver啟動參數--authorization-mode=RBAC開啟。

RBAC的授權步驟分為兩步:

1)定義角色:在定義角色時會指定此角色對於資源的訪問控制的規則;

2)綁定角色:將主體與角色進行綁定,對用戶進行訪問授權。


用戶:sa(ServiceAccount)

角色:role

  • 局部角色:Role
    • 角色綁定(授權):RoleBinding
  • 全局角色:ClusterRole
    • 角色綁定(授權):ClusterRoleBinding

K8S RBAC詳解


使用流程圖

RBAC使用流程圖

  • 用戶使用:如果是用戶需求權限,則將Role與User(或Group)綁定(這需要創建User/Group);

  • 程序使用:如果是程序需求權限,將Role與ServiceAccount指定(這需要創建ServiceAccount並且在deployment中指定ServiceAccount)。
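下面是一個"定義角色+綁定角色"的最小示例(僅作演示,名稱pod-reader、read-pods為假設,綁定對象沿用前文創建的ServiceAccount docker-image),Role只授予default命名空間內讀取pod的權限:

cat > /root/k8s_yaml/deploy/rbac_example.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: docker-image
  namespace: default
EOF
kubectl create -f /root/k8s_yaml/deploy/rbac_example.yaml
kubectl describe rolebinding read-pods -n default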


部署dns服務

部署coredns,官方文檔

  1. master節點創建配置文件coredns.yaml(指定調度到node2)
mkdir -p /root/k8s_yaml/dns && cd /root/k8s_yaml/dns
cat > /root/k8s_yaml/dns/coredns.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      nodeName: 10.0.0.13
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
  1. master節點創建資源(准備鏡像:coredns/coredns:1.3.1)
kubectl create -f coredns.yaml
  1. master節點查看pod用戶
kubectl get pod -n kube-system
kubectl get pod -n kube-system coredns-6cf5d7fdcf-dvp8r -o yaml | grep -i ServiceAccount
  1. master節點查看DNS資源coredns用戶的全局角色,綁定
kubectl get clusterrole | grep coredns
kubectl get clusterrolebindings | grep coredns
kubectl get sa -n kube-system | grep coredns
  1. master節點創建tomcat+mysql的deploy資源yaml文件
mkdir -p /root/k8s_yaml/tomcat_deploy && cd /root/k8s_yaml/tomcat_deploy
cat > /root/k8s_yaml/tomcat_deploy/mysql-deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: tomcat
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'
EOF
cat > /root/k8s_yaml/tomcat_deploy/mysql-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
EOF
cat > /root/k8s_yaml/tomcat_deploy/tomcat-deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: tomcat
  name: myweb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: kubeguide/tomcat-app:v2
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: 'mysql'
          - name: MYSQL_SERVICE_PORT
            value: '3306'
EOF
cat > /root/k8s_yaml/tomcat_deploy/tomcat-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30008
  selector:
    app: myweb
EOF
  1. master節點創建資源(准備鏡像:mysql:5.7 和 kubeguide/tomcat-app:v2)
kubectl create namespace tomcat
kubectl create -f .
  1. master節點驗證
[root@k8s-master tomcat_demo]# kubectl get pod -n tomcat
NAME                     READY   STATUS    RESTARTS   AGE
mysql-94f6bbcfd-6nng8    1/1     Running   0          5s
myweb-5c8956ff96-fnhjh   1/1     Running   0          5s
[root@k8s-master tomcat_deploy]# kubectl -n tomcat exec -ti myweb-5c8956ff96-fnhjh /bin/bash
root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# ping mysql
PING mysql.tomcat.svc.cluster.local (10.254.94.77): 56 data bytes
^C--- mysql.tomcat.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# exit
exit

#說明:service的clusterIP默認不響應ICMP,ping不通屬正常;上面PING行能把mysql解析成mysql.tomcat.svc.cluster.local對應的clusterIP,即說明集群DNS工作正常。
  1. 驗證DNS
  • master節點
[root@k8s-master deploy]# kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
coredns-6cf5d7fdcf-dvp8r   1/1     Running   0          177m   172.18.98.2   10.0.0.13   <none>           <none>
yum install bind-utils -y
dig @172.18.98.2 kubernetes.default.svc.cluster.local +short
  • node節點(kube-proxy)
yum install bind-utils -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short

部署dashboard服務

  1. 官方配置文件,略作修改

k8s 1.15 的 dashboard 建議使用 dashboard 1.10.1 的 kubernetes-dashboard.yaml

mkdir -p /root/k8s_yaml/dashboard && cd /root/k8s_yaml/dashboard
cat > /root/k8s_yaml/dashboard/kubernetes-dashboard.yaml <<EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF
# Changes made relative to the upstream file:
# 1) use a domestic mirror for the image
  image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
# 2) change the Service type to NodePort and pin a host port
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  1. 創建資源(准備鏡像:registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1)
kubectl create -f kubernetes-dashboard.yaml
  1. 查看當前已存在角色admin
kubectl get clusterrole | grep admin
  1. Create a ServiceAccount and bind it to the existing cluster-admin ClusterRole (a freshly created ServiceAccount only has minimal permissions by default)
cat > /root/k8s_yaml/dashboard/dashboard_rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-admin
  namespace: kube-system
EOF
  1. 創建資源
kubectl create -f dashboard_rbac.yaml
  1. 查看admin角色用戶令牌
[root@k8s-master dashboard]# kubectl describe secrets -n kube-system kubernetes-admin-token-tpqs6 
Name:         kubernetes-admin-token-tpqs6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-admin
              kubernetes.io/service-account.uid: 17f1f684-588a-4639-8ec6-a39c02361d0e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1354 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA
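The -tpqs6 suffix of the secret name is random, so on another cluster the name has to be looked up first; a small helper, assuming the ServiceAccount is named kubernetes-admin as above:
kubectl -n kube-system get secrets | grep kubernetes-admin-token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secrets | awk '/kubernetes-admin-token/{print $1}') | awk '/^token:/{print $2}'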
  1. Open https://10.0.0.12:30001 in Firefox and log in with the token above.
  2. Generate a self-signed certificate to work around Chrome refusing to open the kubernetes dashboard:
mkdir /root/k8s_yaml/dashboard/key && cd /root/k8s_yaml/dashboard/key
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=10.0.0.11'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
  1. 刪除原有的證書secret資源
kubectl delete secret kubernetes-dashboard-certs -n kube-system
  1. 創建新的證書secret資源
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
  1. 刪除pod,自動創建新pod生效
[root@k8s-master key]# kubectl get pod -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6cf5d7fdcf-dvp8r                1/1     Running   0          4h19m
kubernetes-dashboard-5dc4c54b55-sn8sv   1/1     Running   0          41m
kubectl delete pod -n kube-system kubernetes-dashboard-5dc4c54b55-sn8sv
  1. 使用谷歌瀏覽器訪問:https://10.0.0.12:30001使用令牌登錄
  2. 令牌生成kubeconfig,解決令牌登陸快速超時的問題
DASH_TOKEN='eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA'
kubectl config set-cluster kubernetes --server=https://10.0.0.11:6443 --kubeconfig=/root/dashbord-admin.conf
kubectl config set-credentials admin --token=$DASH_TOKEN --kubeconfig=/root/dashbord-admin.conf
kubectl config set-context admin --cluster=kubernetes --user=admin --kubeconfig=/root/dashbord-admin.conf
kubectl config use-context admin --kubeconfig=/root/dashbord-admin.conf
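Before downloading the file it is worth sanity-checking it with kubectl itself; since no CA was embedded above, TLS verification is skipped in this sketch:
kubectl --kubeconfig=/root/dashbord-admin.conf --insecure-skip-tls-verify=true get nodes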
  1. 下載到主機,用於以后登錄使用
cd ~
sz dashbord-admin.conf
  1. 使用谷歌瀏覽器訪問:https://10.0.0.12:30001使用kubeconfig文件登錄,可以exec

網絡

映射(endpoints資源)

  1. master節點查看endpoints資源
[root@k8s-master ~]# kubectl get endpoints 
NAME         ENDPOINTS        AGE
kubernetes   10.0.0.11:6443   28h
... ...

Endpoints can be used to map an external service into the cluster. Every Service is automatically associated with an Endpoints object: if the Service has a label selector the endpoints are built from the matching Pods, otherwise it is paired with a manually created Endpoints object of the same name.

  1. k8s-node2准備外部數據庫
yum install mariadb-server -y
systemctl start mariadb
mysql_secure_installation

n
y
y
y
y
mysql -e "grant all on *.* to root@'%' identified by '123456';"

該項目在tomcat的index.html頁面,已經將數據庫連接寫固定了,用戶名root,密碼123456。
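Before wiring the database into the cluster, it helps to confirm it is reachable over the network from another node; a quick check, assuming the mariadb client package is installed on that node:
yum install -y mariadb          # client only
mysql -h 10.0.0.13 -uroot -p123456 -e 'show databases;'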

  1. master節點創建endpoint和svc資源yaml文件
cd /root/k8s_yaml/tomcat_deploy
cat > /root/k8s_yaml/tomcat_deploy/mysql_endpoint_svc.yaml <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
  namespace: tomcat
subsets:
- addresses:
  - ip: 10.0.0.13
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
--- 
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: tomcat
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306  
  type: ClusterIP
EOF
# 可以參考系統默認創建
kubectl get endpoints kubernetes -o yaml
kubectl get svc kubernetes -o yaml

Note: the Service must NOT use a label selector here. If it had one, Kubernetes would manage the Endpoints object itself and overwrite the manually created one.

  1. master節點創建資源
kubectl delete deployment mysql -n tomcat
kubectl delete svc mysql -n tomcat
kubectl create -f mysql_endpoint_svc.yaml
  1. master節點查看endpoints資源及其與svc的關聯
kubectl get endpoints -n tomcat
kubectl describe svc -n tomcat
  1. 瀏覽器訪問http://10.0.0.12:30008/demo/

  2. k8s-node2查看數據庫驗證

[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+
[root@k8s-node2 ~]# mysql -e 'use HPE_APP;select * from T_USERS;'
+----+-----------+-------+
| ID | USER_NAME | LEVEL |
+----+-----------+-------+
|  1 | me        | 100   |
|  2 | our team  | 100   |
|  3 | HPE       | 100   |
|  4 | teacher   | 100   |
|  5 | docker    | 100   |
|  6 | google    | 100   |
+----+-----------+-------+

kube-proxy的ipvs模式

  1. node節點安裝依賴命令
yum install ipvsadm conntrack-tools -y
  1. node節點修改kube-proxy.service增加參數
cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \\
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \\
  --cluster-cidr 172.18.0.0/16 \\
  --hostname-override 10.0.0.12 \\
  --proxy-mode ipvs \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
--proxy-mode ipvs  # 啟用ipvs模式

kube-proxy's IPVS mode uses LVS NAT forwarding. If the kernel does not meet the IPVS requirements (for example the ip_vs modules are missing), kube-proxy automatically falls back to iptables mode.

  1. node節點重啟kube-proxy並檢查LVS規則
systemctl daemon-reload 
systemctl restart kube-proxy.service 
ipvsadm -L -n 
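If ipvsadm still shows no virtual servers, the usual cause on CentOS 7 is that the ip_vs kernel modules are not loaded and kube-proxy has silently fallen back to iptables; a minimal check-and-load sketch:
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
lsmod | egrep 'ip_vs|nf_conntrack_ipv4'
systemctl restart kube-proxy.service && ipvsadm -L -n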

七層負載均衡(ingress-traefik)

Ingress 包含兩大組件:Ingress Controller 和 Ingress。

  • ingress-controller(traefik)服務組件,直接使用宿主機網絡。
  • Ingress資源是基於DNS名稱(host)或URL路徑把請求轉發到指定的Service資源的轉發規則



Ingress-Traefik

Traefik is an open-source reverse proxy and load balancer. Its biggest advantage is that it integrates directly with common microservice systems and can reconfigure itself automatically and dynamically. It supports backends such as Docker, Swarm, Mesos/Marathon, Kubernetes, Consul, Etcd, Zookeeper, BoltDB and a REST API.

Traefik observability (architecture diagram omitted)


創建rbac

  1. 創建rbac的yaml文件
mkdir -p /root/k8s_yaml/ingress && cd /root/k8s_yaml/ingress
cat > /root/k8s_yaml/ingress/ingress_rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
EOF
  1. 創建資源
kubectl create -f ingress_rbac.yaml
  1. 查看資源
kubectl get serviceaccounts -n kube-system | grep traefik-ingress-controller
kubectl get clusterrole -n kube-system | grep traefik-ingress-controller
kubectl get clusterrolebindings.rbac.authorization.k8s.io -n kube-system | grep traefik-ingress-controller

部署traefik服務

  1. 創建traefik的DaemonSet資源yaml文件
cat > /root/k8s_yaml/ingress/ingress_traefik.yaml <<EOF
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      #nodeSelector:
        #kubernetes.io/hostname: master
      # 允許使用主機網絡,指定主機端口hostPort
      hostNetwork: true
      containers:
      - image: traefik:v1.7.2
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=DEBUG
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
EOF
  1. 創建資源(准備鏡像:traefik:v1.7.2)
kubectl create -f ingress_traefik.yaml
  1. Open the traefik dashboard at http://10.0.0.12:8080 in a browser; at this point no backend servers are listed yet.

創建Ingress資源

  1. 查看要代理的svc資源的NAME和POST
[root@k8s-master ingress]# kubectl get svc -n tomcat 
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
mysql   ClusterIP   10.254.71.221    <none>        3306/TCP         4h2m
myweb   NodePort    10.254.130.141   <none>        8080:30008/TCP   8h
  1. 創建Ingress資源yaml文件
cat > /root/k8s_yaml/ingress/ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-myweb
  namespace: tomcat
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: tomcat.oldqiang.com
    http:
      paths:
      - backend:
          serviceName: myweb
          servicePort: 8080
EOF
  1. 創建資源
kubectl create -f ingress.yaml
  1. 查看資源
kubectl get ingress -n tomcat
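The rule can also be exercised from the command line, without touching any hosts file, by sending the Host header straight to the traefik node (assuming traefik runs on 10.0.0.12 with hostPort 80 as configured above):
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: tomcat.oldqiang.com' http://10.0.0.12/demo/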

測試訪問

  1. windows配置:在C:\Windows\System32\drivers\etc\hosts文件中增加10.0.0.12 tomcat.oldqiang.com

  2. 瀏覽器直接訪問tomcat:http://tomcat.oldqiang.com/demo/


  1. 瀏覽器訪問:http://10.0.0.12:8080 此時BACKENDS(后端)有Server



七層負載均衡(ingress-nginx)


Six base yaml files are used:

  • Namespace
  • ConfigMap
  • RBAC
  • Service:添加NodePort端口
  • Deployment:默認404頁面,改用國內阿里雲鏡像
  • Deployment:ingress-controller,改用國內阿里雲鏡像
  1. 准備配置文件
mkdir /root/k8s_yaml/ingress-nginx && cd /root/k8s_yaml/ingress-nginx
# 創建命名空間 ingress-nginx
cat > /root/k8s_yaml/ingress-nginx/namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
EOF
# 創建配置資源
cat > /root/k8s_yaml/ingress-nginx/configmap.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
EOF
# 如果外界訪問的域名不存在的話,則默認轉發到default-http-backend這個Service,直接返回404:
cat > /root/k8s_yaml/ingress-nginx/default-backend.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          # 改用國內阿里雲鏡像
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
EOF
# 創建Ingress的RBAC授權控制,包括:
# ServiceAccount、ClusterRole、Role、RoleBinding、ClusterRoleBinding
cat > /root/k8s_yaml/ingress-nginx/rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
EOF
# 創建ingress-controller。將新加入的Ingress進行轉化為Nginx的配置。
cat > /root/k8s_yaml/ingress-nginx/with-rbac.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          # 改用國內阿里雲鏡像
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=\$(POD_NAMESPACE)/default-http-backend
            - --configmap=\$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=\$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=\$(POD_NAMESPACE)/udp-services
            - --publish-service=\$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
EOF
# 創建Service資源,對外提供服務
cat > /root/k8s_yaml/ingress-nginx/service-nodeport.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 32080  # http
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 32443  # https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
EOF
  1. 所有node節點准備鏡像
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
docker images
  1. 創建資源
kubectl create -f namespace.yaml
kubectl create -f configmap.yaml
kubectl create -f rbac.yaml
kubectl create -f default-backend.yaml
kubectl create -f with-rbac.yaml
kubectl create -f service-nodeport.yaml
  1. 查看ingress-nginx組件狀態
kubectl get all -n ingress-nginx
  1. 訪問http://10.0.0.12:32080/
[root@k8s-master ingress-nginx]# curl 10.0.0.12:32080
default backend - 404
  1. 准備后端Service,創建Deployment資源(nginx)
cat > /root/k8s_yaml/ingress-nginx/deploy-demon.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-nginx
spec:
  selector:
    app: myapp-nginx
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: nginx-deploy
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: myapp-nginx
      release: canary
  template:
    metadata:
      labels:
        app: myapp-nginx
        release: canary
    spec:
      containers:
      - name: myapp-nginx
        image: nginx:1.13
        ports:
        - name: httpd
          containerPort: 80
EOF
  1. 創建資源(准備鏡像:nginx:1.13)
kubectl apply -f deploy-demon.yaml
  1. 查看資源
kubectl get all
  1. 創建ingress資源:將nginx加入ingress-nginx中
cat > /root/k8s_yaml/ingress-nginx/ingress-myapp.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.oldqiang.com
    http:
      paths:
      - path: 
        backend:
          serviceName: myapp-nginx
          servicePort: 80
EOF
  1. 創建資源
kubectl apply -f ingress-myapp.yaml
  1. 查看資源
kubectl get ingresses
  1. windows配置:在C:\Windows\System32\drivers\etc\hosts文件中增加10.0.0.12 myapp.oldqiang.com
  2. 瀏覽器直接訪問http://myapp.oldqiang.com:32080/,顯示nginx歡迎頁
  3. 修改nginx頁面以便區分
[root@k8s-master ingress-nginx]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-6b4c84588-crgvr   1/1     Running   0          22m
nginx-deploy-6b4c84588-krvwz   1/1     Running   0          22m
kubectl exec -ti nginx-deploy-6b4c84588-crgvr /bin/bash
echo web1 > /usr/share/nginx/html/index.html
exit
kubectl exec -ti nginx-deploy-6b4c84588-krvwz /bin/bash
echo web2 > /usr/share/nginx/html/index.html
exit
  1. 瀏覽器訪問http://myapp.oldqiang.com:32080/,刷新測試負載均衡
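The same round-robin check works from the command line, again without editing the hosts file (assuming NodePort 32080 and the two modified pages above):
for i in $(seq 6); do curl -s -H 'Host: myapp.oldqiang.com' http://10.0.0.12:32080/; done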



彈性伸縮

heapster監控

參考heapster1.5.4官方配置文件

  1. 查看已存在默認角色heapster
kubectl get clusterrole | grep heapster
  1. 創建heapster所需RBAC、Service和Deployment的yaml文件
mkdir /root/k8s_yaml/heapster/ && cd /root/k8s_yaml/heapster/
cat > /root/k8s_yaml/heapster/heapster.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: registry.aliyuncs.com/google_containers/heapster-amd64:v1.5.3
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: registry.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
EOF
  1. 創建資源
kubectl create -f heapster.yaml
  1. Newer k8s releases no longer recommend heapster-driven autoscaling; to force kube-controller-manager to keep using it, add the following flag:
kube-controller-manager \
--horizontal-pod-autoscaler-use-rest-clients=false
sed -i '8a \ \ --horizontal-pod-autoscaler-use-rest-clients=false \\' /usr/lib/systemd/system/kube-controller-manager.service
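The unit file change only takes effect once systemd reloads it and the service restarts; a short follow-up, assuming the sed above inserted the flag where intended:
grep horizontal-pod-autoscaler /usr/lib/systemd/system/kube-controller-manager.service
systemctl daemon-reload
systemctl restart kube-controller-manager.service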
  1. 創建業務資源
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy3.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
EOF
kubectl create -f k8s_deploy3.yaml
  1. 創建HPA規則
kubectl autoscale deploy nginx --max=6 --min=1 --cpu-percent=5
  1. 查看資源
kubectl get pod
kubectl get hpa
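To actually see the HPA scale out, some load has to be generated against the nginx pods; a rough sketch in which <nginx-pod-ip> is a placeholder to be replaced with a real Pod IP from kubectl get pod -o wide:
kubectl get pod -o wide        # pick one nginx Pod IP to substitute below
kubectl run load-gen --rm -ti --image=busybox:1.28 --restart=Never -- /bin/sh -c 'while true; do wget -q -O- http://<nginx-pod-ip>/ >/dev/null; done'
kubectl get hpa                # in another terminal, watch TARGETS rise and REPLICAS grow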
  1. Remove the heapster resources, as they are not compatible with metrics-server
kubectl delete -f heapster.yaml
kubectl delete hpa nginx
# 還原kube-controller-manager.service配置
  1. 當node節點NotReady時,強制刪除pod
kubectl delete -n kube-system pod Pod_Name --force --grace-period 0

metric-server

metrics-server Github 1.15

  1. 准備yaml文件,使用國內鏡像地址(2個),修改一些其他參數
mkdir -p /root/k8s_yaml/metrics/ && cd /root/k8s_yaml/metrics/
cat <<EOF > /root/k8s_yaml/metrics/auth-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
EOF
cat <<EOF > /root/k8s_yaml/metrics/metrics-apiservice.yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
EOF
cat <<EOF > /root/k8s_yaml/metrics/metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server-v0.3.3
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.3
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.3
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.3
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
        command:
        - /metrics-server
        - --metric-resolution=30s
        # These are needed for GKE, which doesn't support secure communication yet.
        # Remove these lines for non-GKE clusters, and when GKE supports token-based auth.
        #- --kubelet-port=10255
        #- --deprecated-kubelet-completely-insecure=true
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          #- --cpu=80m
          - --extra-cpu=0.5m
          #- --memory=80Mi
          #- --extra-memory=8Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
          # Specifies the smallest cluster (defined in number of nodes)
          # resources will be scaled to.
          #- --minClusterSize={{ metrics_server_min_cluster_size }}
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
EOF

下載指定配置文件:

for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.15.0/cluster/addons/metrics-server/$file;done
# 使用國內鏡像
  image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
  command:
        - /metrics-server
        - --metric-resolution=30s
# 不驗證客戶端證書
        - --kubelet-insecure-tls
# 默認解析主機名,coredns中沒有物理機的主機名解析,指定使用IP
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
... ...
# 使用國內鏡像
        image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          #- --cpu=80m
          - --extra-cpu=0.5m
          #- --memory=80Mi
          #- --extra-memory=8Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
# 添加 node/stats 權限
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats

不加上述參數,可能報錯:

unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-node02: unable to fetch metrics from Kubelet k8s-node02 (10.10.0.13): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:k8s-node01: unable to fetch metrics from Kubelet k8s-node01 (10.10.0.12): request failed - "401 Unauthorized", response: "Unauthorized"]
  1. 創建資源(准備鏡像:registry.aliyuncs.com/google_containers/addon-resizer:1.8.5和registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3)
kubectl create -f .
  1. 查看資源,使用-l指定標簽
kubectl get pod -n kube-system -l k8s-app=metrics-server
  1. 查看資源監控:報錯
kubectl top nodes
  1. Note: with a binary install, the master node must also have kubelet, kube-proxy and docker-ce installed and be joined to the cluster as a worker node; otherwise the apiserver may be unable to reach metrics-server and will report a timeout.
kubectl get apiservices v1beta1.metrics.k8s.io -o yaml
# 報錯信息:mertics無法與 apiserver服務通信
"metrics-server error "Client.Timeout exceeded while awaiting headers"
  1. 其他報錯查看api,日志
kubectl describe apiservice v1beta1.metrics.k8s.io
kubectl get pods -n kube-system | grep 'metrics'
kubectl logs metrics-server-v0.3.3-6b7c586ffd-7b4n4 metrics-server -n kube-system
  1. 修改kube-apiserver.service開啟聚合層,使用證書
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \\
  --audit-log-path /var/log/kubernetes/audit-log \\
  --audit-policy-file /etc/kubernetes/audit.yaml \\
  --authorization-mode RBAC \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \\
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
  --etcd-cafile /etc/kubernetes/ca.pem \\
  --etcd-certfile /etc/kubernetes/client.pem \\
  --etcd-keyfile /etc/kubernetes/client-key.pem \\
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \\
  --service-account-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --service-node-port-range 30000-59999 \\
  --kubelet-client-certificate /etc/kubernetes/client.pem \\
  --kubelet-client-key /etc/kubernetes/client-key.pem \\
  --proxy-client-cert-file=/etc/kubernetes/client.pem \\
  --proxy-client-key-file=/etc/kubernetes/client-key.pem \\
  --requestheader-allowed-names= \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --log-dir /var/log/kubernetes/ \\
  --logtostderr=false \\
  --tls-cert-file /etc/kubernetes/apiserver.pem \\
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart kube-apiserver.service
# 開啟聚合層,使用證書
--requestheader-client-ca-file /etc/kubernetes/ca.pem \\ # 已配置
--proxy-client-cert-file=/etc/kubernetes/client.pem \\
--proxy-client-key-file=/etc/kubernetes/client-key.pem \\
--requestheader-allowed-names= \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\

注:如果 --requestheader-allowed-names 不為空,則--proxy-client-cert-file 證書的 CN 必須位於 allowed-names 中,默認為 aggregator

  如果 kube-apiserver 主機沒有運行 kube-proxy,則還需要添加 --enable-aggregator-routing=true 參數。

注意:kube-apiserver不開啟聚合層會報錯:

I0109 05:55:43.708300       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
Error: cluster doesn't provide requestheader-client-ca-file
  1. Check and adjust kubelet.service on every node (set --hostname-override to each node's own IP); otherwise metrics-server cannot fetch node or pod resource usage
  • 刪除--read-only-port=0
  • 添加--authentication-token-webhook=true
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service multi-user.target
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \\
  --anonymous-auth=false \\
  --cgroup-driver systemd \\
  --cluster-dns 10.254.230.254 \\
  --cluster-domain cluster.local \\
  --runtime-cgroups=/systemd/system.slice \\
  --kubelet-cgroups=/systemd/system.slice \\
  --fail-swap-on=false \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --tls-cert-file /etc/kubernetes/kubelet.pem \\
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \\
  --hostname-override 10.0.0.12 \\
  --image-gc-high-threshold 90 \\
  --image-gc-low-threshold 70 \\
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \\
  --authentication-token-webhook=true \\
  --log-dir /var/log/kubernetes/ \\
  --pod-infra-container-image t29617342/pause-amd64:3.0 \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart kubelet.service
  1. 重新部署(生成自簽發證書)
cd /root/k8s_yaml/metrics/
kubectl delete -f .
kubectl create -f .
  1. 查看資源監控
[root@k8s-master metrics]# kubectl top nodes
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
10.0.0.11   99m          9%     644Mi           73%       
10.0.0.12   56m          5%     1294Mi          68%       
10.0.0.13   44m          4%     622Mi           33%

動態存儲

搭建NFS提供靜態存儲

  1. 所有節點安裝nfs-utils
yum -y install nfs-utils
  1. master節點部署nfs服務
mkdir -p /data/tomcat-db
cat > /etc/exports <<EOF
/data    10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
systemctl start nfs
  1. 所有node節點檢查掛載
showmount -e 10.0.0.11
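A manual mount from one node confirms the export really works before handing it to the provisioner; nothing is left mounted in this sketch:
mount -t nfs 10.0.0.11:/data /mnt
df -h /mnt && umount /mnt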

配置動態存儲

創建PVC時,系統自動創建PV

1. 准備存儲類SC資源及其依賴的Deployment和RBAC的yaml文件

mkdir /root/k8s_yaml/storageclass/ && cd /root/k8s_yaml/storageclass/
# 實現自動創建PV功能,提供存儲類SC
cat > /root/k8s_yaml/storageclass/nfs-client.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.11
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.11
            path: /data
EOF
# RBAC
cat > /root/k8s_yaml/storageclass/nfs-client-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
EOF
# 創建SC資源,基於nfs-client-provisioner
cat > /root/k8s_yaml/storageclass/nfs-client-class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
EOF
  1. 創建資源(准備鏡像:quay.io/external_storage/nfs-client-provisioner:latest)
kubectl create -f .
  1. 創建pvc資源:yaml文件增加屬性annotations(可以設為默認屬性)
cat > /root/k8s_yaml/storageclass/test_pvc1.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
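The claim can then be created and checked; if the provisioner is working, a PV is generated automatically and the PVC goes to Bound (expected flow, not captured output):
kubectl create -f test_pvc1.yaml
kubectl get pvc pvc1
kubectl get pv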

Jenkins對接k8s

Jenkins部署在物理機(常修改),k8s現在有了身份認證:

  • 方案一:Jenkins安裝k8s身份認證插件
  • 方案二:遠程控制k8s:同版本kubectl,指定kubelet客戶端認證憑據
kubectl --kubeconfig='kubelet.kubeconfig' get nodes

kubeadm的憑據位於/etc/kubernetes/admin.conf


kubeadm部署k8s集群

官方文檔

環境准備

主機 IP 配置 軟件
k8s-adm-master 10.0.0.15 2核2G docker-ce,kubelet,kubeadm,kubectl
k8s-adm-node1 10.0.0.16 2核2G docker-ce,kubelet,kubeadm,kubectl
  • Disable selinux, firewalld, NetworkManager and postfix (optional)

  • 修改IP地址、主機名

hostnamectl set-hostname 主機名
sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0
  • 添加hosts解析
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.15 k8s-adm-master
10.0.0.16 k8s-adm-node1
EOF
  • 修改內核參數,關閉swap分區
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
swapoff -a
sed -i 's%/dev/mapper/centos-swap%#&%g' /etc/fstab

安裝docker-ce

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-18.09.7 -y
systemctl enable docker.service
systemctl start docker.service
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker.service
docker info

安裝kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubelet-1.15.4-0 kubeadm-1.15.4-0 kubectl-1.15.4-0 -y
systemctl enable kubelet.service
systemctl start kubelet.service

使用kubeadm初始化k8s集群

  1. 選擇一個控制節點(k8s-adm-master),初始化一個k8s集群:
kubeadm init --kubernetes-version=v1.15.4 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.254.0.0/16
  1. 等待鏡像下載,可以使用docker images查看下載進度。
  2. Your Kubernetes control-plane has initialized successfully!
  3. 執行提示命令1:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  1. 執行提示命令2:node節點加入k8s集群
kubeadm join 10.0.0.15:6443 --token uwelrl.g25p8ye1q9m2sfk7 \
    --discovery-token-ca-cert-hash sha256:e598a2895a53fded82d808caf9b9fd65a04ff59a5b773696d8ceb799cac93c5e

默認 token 24H過期,需要重新生成

kubeadm token create --print-join-command

默認 證書 10年過期,查看

cfssl-certinfo -cert /etc/kubernetes/pki/ca.crt
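If cfssl is not installed on the kubeadm machines, openssl reports the same validity dates:
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -dates
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates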
  1. kubectl命令行TAB鍵補全:
echo "source <(kubectl completion bash)" >> ~/.bashrc

master節點配置flannel網絡

  1. 准備yaml文件
cat <<EOF >> /etc/hosts
199.232.4.133 raw.githubusercontent.com
EOF
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  1. 創建資源
kubectl create -f kube-flannel.yml
  1. 查看資源
kubectl get all -n kube-system
kubectl get nodes

metric-server

metrics-server Github 1.15

  1. 准備yaml文件,使用國內鏡像地址(2個),修改一些其他參數

  2. 創建資源(准備鏡像:registry.aliyuncs.com/google_containers/addon-resizer:1.8.5和registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3)

kubectl create -f .
  1. 查看資源監控
kubectl top nodes

導出所有鏡像

docker save `docker images|awk 'NR>1{print $1":"$2}'|xargs -n 50` -o docker_k8s_kubeadm.tar.gz

彈性伸縮

  1. 創建業務資源
kubectl create -f /root/k8s_yaml/deploy/k8s_deploy2.yaml
  1. 創建HPA規則
kubectl autoscale deploy nginx --max=6 --min=1 --cpu-percent=5
  1. 查看pod
kubectl get pod
  1. 創建service和ingress資源,部署dashboard服務,ab壓力測試彈性伸縮。
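For the ab stress test mentioned above, httpd-tools is enough; the URL is a placeholder for whichever Service or Ingress address ends up exposing the nginx deployment:
yum install -y httpd-tools
ab -n 100000 -c 100 http://<service-or-ingress-address>/
kubectl get hpa -w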

StatefulSet 資源

StatefulSet (PetSets):寵物應用,有狀態的應用,有數據的應用,pod名稱固定(有序 01 02 03)。

  • 適用於每個Pod中有自己的編號,需要互相訪問,以及持久存儲區分。
  • 例如數據庫應用,redis,es集群,mysql集群。

A StatefulSet manages the deployment and scaling of a set of Pods, and provides ordering and uniqueness guarantees for those Pods.

StatefulSet 為它的每個 Pod 維護了一個固定的 ID。這些 Pod 是基於相同的聲明來創建的,但是不能相互替換:無論怎么調度,每個 Pod 都有一個永久不變的 ID。

參考文檔


StatefulSets 對於需要滿足以下一個或多個需求的應用程序很有價值:

  • 穩定的、唯一的網絡標識符。$(StatefulSet 名稱)-$(序號)
  • 穩定的、持久的存儲。
  • 有序的、優雅的部署和縮放。
  • 有序的、自動的滾動更新。

使用限制

  • 給定 Pod 的存儲必須由 PersistentVolume 驅動基於所請求的 storage class 來提供,或者由管理員預先提供。
  • 刪除或者收縮 StatefulSet 並不會刪除它關聯的存儲卷。保證數據安全。
  • StatefulSet 當前需要無頭服務(不分配 ClusterIP的 svc 資源)來負責 Pod 的網絡標識。需要預先創建此服務。
  • 有序和優雅的終止 StatefulSet 中的 Pod ,在刪除前將 StatefulSet 縮放為 0。
  • 默認 Pod 管理策略(OrderedReady) 使用滾動更新,可能進入損壞狀態,需要手工修復。

  1. 搭建NFS提供靜態存儲
  2. 配置動態存儲
mkdir -p /root/k8s_yaml/sts/ && cd /root/k8s_yaml/sts/
# 實現自動創建PV功能,提供存儲類SC
cat > /root/k8s_yaml/sts/nfs-client.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.15
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.15
            path: /data
EOF
# RBAC
cat > /root/k8s_yaml/sts/nfs-client-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
EOF
# 創建SC資源,基於nfs-client-provisioner,設為默認SC
cat > /root/k8s_yaml/sts/nfs-client-class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
EOF

給sc資源,命令行打默認補丁:

kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
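Either way, the result can be confirmed by listing the storage classes; the default one is marked with (default):
kubectl get storageclass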
  1. 創建資源(准備鏡像:quay.io/external_storage/nfs-client-provisioner:latest)
kubectl create -f .
  1. 創建pvc資源yaml文件
cat > /root/k8s_yaml/sts/test_pvc1.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
  1. 創建pvc資源:測試動態存儲
kubectl create -f test_pvc1.yaml
  1. 查看資源:驗證動態存儲
kubectl get pvc
kubectl get pv
  1. 查看sts解釋
kubectl explain sts.spec.volumeClaimTemplates
kubectl explain sts.spec.volumeClaimTemplates.spec
kubectl explain sts.spec.selector.matchLabels
  1. 創建sts及其依賴svc資源yaml文件
# 創建無頭service:不分配 ClusterIP
cat > /root/k8s_yaml/sts/sts_svc.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
EOF
cat > /root/k8s_yaml/sts/sts.yaml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: nginx
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  volumeClaimTemplates:
  - metadata:
      name: html
    spec:
      resources:
        requests:
          storage: 5Gi
      accessModes: 
        - ReadWriteOnce
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
EOF
  1. 創建svc和sts資源
kubectl create -f sts_svc.yaml
kubectl create -f sts.yaml
  1. 查看資源:pod是有序的,對應pvc也是有序的,但pvc無序
kubectl get pod
kubectl get pv
kubectl get pvc
  1. Pods can be reached directly by their DNS names; because the pod name is stable, the domain name is stable too. Run this from inside a pod in the cluster (for example after kubectl exec into nginx-0):
ping nginx-0.nginx.default.svc.cluster.local
  1. 查看DNS地址
[root@k8s-adm-master sts]# kubectl get pod -n kube-system -o wide | grep coredns
coredns-bccdc95cf-9sc5f                  1/1     Running   2          20h   10.244.0.6    k8s-adm-master   <none>           <none>
coredns-bccdc95cf-k298p                  1/1     Running   2          20h   10.244.0.7    k8s-adm-master   <none>           <none>
  1. 解析域名
yum install bind-utils -y
dig @10.244.0.6 nginx-0.nginx.default.svc.cluster.local +short

nginx-0.nginx.default.svc.cluster.local

Pod 的 DNS 子域: $(主機名).$(所屬服務的 DNS 域名)

  • 主機名:$(StatefulSet 名稱)-$(序號)

  • 所屬服務的 DNS 域名: $(服務名稱).$(命名空間).svc.$(集群域名)

  • 集群域名: cluster.local

  • 服務名稱由 StatefulSet 的 serviceName 域來設定。

集群域名 服務(名字空間/名字) StatefulSet(名字空間/名字) StatefulSet 域名 Pod DNS Pod 主機名
cluster.local default/nginx default/web nginx.default.svc.cluster.local web-{0..N-1}.nginx.default.svc.cluster.local web-{0..N-1}
cluster.local foo/nginx foo/web nginx.foo.svc.cluster.local web-{0..N-1}.nginx.foo.svc.cluster.local web-{0..N-1}
kube.local foo/nginx foo/web nginx.foo.svc.kube.local web-{0..N-1}.nginx.foo.svc.kube.local web-{0..N-1}

Job資源

一次性任務,例如:清理es索引。


  1. Create the Job resource YAML file
mkdir -p /root/k8s_yaml/job/ && cd /root/k8s_yaml/job/
cat > /root/k8s_yaml/job/job.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: nginx
spec:
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
        command: ["sleep","10"]
      restartPolicy: Never
EOF
  1. Create the Job resource
kubectl create -f job.yaml
  1. Check the resources: a pod starts, finishes after 10 seconds, STATUS: Completed
kubectl get job
kubectl get pod

CronJob resources

Scheduled, recurring tasks.


  1. Create the CronJob resource YAML file
cat > /root/k8s_yaml/job/cronjob.yaml <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nginx
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: myjob
        spec:
          containers:
          - name: nginx
            image: nginx:1.13
            ports:
            - containerPort: 80
            command: ["sleep","10"]
          restartPolicy: Never
EOF
  1. Create the CronJob resource
kubectl create -f cronjob.yaml
  1. Check the resources: a new pod is created every minute (per the schedule) and completes after 10 seconds
kubectl get cronjobs
kubectl get pod

Helm package manager

Helm makes deploying applications simpler and more efficient.

Helm charts help us define, install, and upgrade Kubernetes applications.

Official installation documentation

Install the helm client

wget https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz
tar xf helm-v2.17.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

Deploy the helm server side (Tiller)

Tiller must run inside the k8s cluster so that it has permission to call the apiserver.

  1. Initialize helm (pre-pull the image: ghcr.io/helm/tiller:v2.17.0)
helm init
  1. Check the resources and verify
kubectl get pod -n kube-system
helm version

Grant the tiller container permissions

  1. Create the RBAC YAML file
mkdir -p /root/k8s_yaml/helm/ && cd /root/k8s_yaml/helm/
cat <<EOF > /root/k8s_yaml/helm/tiller_rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
  1. Create the RBAC resources
kubectl create -f .
  1. View the tiller-deploy YAML
kubectl get deploy tiller-deploy -n kube-system -o yaml
  1. Patch tiller-deploy: modify its YAML from the command line
kubectl patch -n kube-system deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
  1. Configure command-line completion
cd ~ && helm completion bash > .helmrc && echo "source ~/.helmrc" >> .bashrc
source ~/.helmrc

Deploy an application

  1. Search for an application
helm search phpmyadmin
  1. Download the chart (template) and install a release
helm install --name oldboy --namespace=oldboy stable/phpmyadmin
[root@k8s-adm-master ~]# helm install --name oldboy --namespace=oldboy stable/phpmyadmin
WARNING: This chart is deprecated
NAME:   oldboy
LAST DEPLOYED: Wed Dec 16 20:19:21 2020
NAMESPACE: oldboy
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME               READY  UP-TO-DATE  AVAILABLE  AGE
oldboy-phpmyadmin  0/1    1           0          0s

==> v1/Pod(related)
NAME                                READY  STATUS             RESTARTS  AGE
oldboy-phpmyadmin-7d65b585fb-r8cp2  0/1    ContainerCreating  0         0s

==> v1/Service
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
oldboy-phpmyadmin  ClusterIP  10.254.253.220  <none>       80/TCP   0s


NOTES:
This Helm chart is deprecated

Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`)

```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart>           # Helm 3
$ helm install --name my-release bitnami/<chart>    # Helm 2
```

To update an exisiting _stable_ deployment with a chart hosted in the bitnami repository you can execute

```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>
```

Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.

1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace oldboy -l "app=phpmyadmin,release=oldboy" -o jsonpath="{.items[0].metadata.name}")
  echo "phpMyAdmin URL: http://127.0.0.1:8080"
  kubectl port-forward --namespace oldboy svc/oldboy-phpmyadmin 8080:80

2. How to log in

phpMyAdmin has not been configure to point to a specific database. Please provide the db host,
username and password at log in or upgrade the release with a specific database:

$ helm upgrade oldboy stable/phpmyadmin --set db.host=mydb



** Please be patient while the chart is being deployed **
  1. Check the resources
kubectl get all -n oldboy
  1. Upgrade, overriding variables on the command line
helm upgrade oldboy stable/phpmyadmin --set db.host=10.0.0.13
  1. You can extract the cached tgz package to inspect the chart
[root@k8s-adm-master charts]# ls /root/.helm/cache/archive/
phpmyadmin-4.3.5.tgz

charts

  1. Create a chart
mkdir -p /root/k8s_yaml/helm/charts && cd /root/k8s_yaml/helm/charts
helm create hello-helm
[root@k8s-adm-master charts]# tree hello-helm
hello-helm
|-- charts                 # subcharts
|-- Chart.yaml             # chart metadata and version
|-- templates              # templates
|   |-- deployment.yaml
|   |-- _helpers.tpl
|   |-- ingress.yaml
|   |-- NOTES.txt           # usage notes
|   |-- serviceaccount.yaml
|   |-- service.yaml
|   `-- tests
|       `-- test-connection.yaml
`-- values.yaml             # variables
  1. Customize the chart
rm -rf /root/k8s_yaml/helm/charts/hello-helm/templates/*
echo hello! > /root/k8s_yaml/helm/charts/hello-helm/templates/NOTES.txt
cat <<EOF > /root/k8s_yaml/helm/charts/hello-helm/templates/pod.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
  1. Install the chart
cd /root/k8s_yaml/helm/charts
helm install hello-helm
  1. List releases
helm list
  1. Check the pods
kubectl get pod
  1. Debug: render only, do not deploy
helm install hello-helm --debug --dry-run
  1. Uninstall a release
helm delete oldboy
  1. Package the chart
helm package hello-helm

Configure a domestic (China) mirror repository

  1. Remove the default repo
helm repo remove stable
  1. Add a domestic mirror (only one repo can be named stable, but mirrors can also be added under different names). Official options:
helm repo add stable https://burdenbear.github.io/kube-charts-mirror/
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add stable https://mirror.azure.cn/kubernetes/charts/
  1. List repos
helm repo list
  1. Update repository information
helm repo update
  1. Search to test
helm search mysql
  1. Self-hosted repository

To build your own chart repository, see the project on GitHub; the official recommendation is to host the chart repo with GitHub Pages, as sketched below.
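A minimal sketch of what hosting a chart repo as static files involves (the GitHub Pages URL is a placeholder, not a real address):

# package the chart and (re)generate index.yaml for the repository
helm package hello-helm
helm repo index . --url https://<your-user>.github.io/charts
# commit the .tgz files and index.yaml to the gh-pages branch, then add and use the repo
helm repo add myrepo https://<your-user>.github.io/charts
helm search hello-helm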


Changes in Helm 3

Tiller and helm serve removed

The helm server side and the init command are gone in Helm 3.

helm operates on the k8s cluster directly through kubeconfig, much like kubectl.
helm uses the same access rights as the current kubectl context, so helm init is no longer needed to initialize Helm.

You only need to install the helm binary:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

In effect this just downloads the binary from GitHub, extracts it, moves it to /usr/local/bin/, and makes it executable.
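With the binary installed, charts install without any server-side component; a minimal sketch (the bitnami repo and the nginx chart are illustrative assumptions):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Helm 3 takes the release name as a positional argument instead of --name
helm install my-nginx bitnami/nginx
helm list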


Predefined repositories removed; Helm Hub added

helm search distinguishes between repo and hub

  • repo: sources you add manually yourself
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm repo add ibmstable https://raw.githubusercontent.com/IBM/charts/master/repo/stable
  • hub: Helm's central hub. Vendors must publish their latest versions to the hub before we can find them there, much like Docker Hub. For charts found via the hub you open the hub page to get the download location. The hub and a Google-hosted repo can be used together:
helm search hub mysql

Values support a JSON Schema validator

When helm install, helm upgrade, helm lint, or helm template runs, the JSON Schema validation runs automatically and fails immediately on error. In effect all the YAML is validated first, and only then are any resources created.

helm pull stable/mysql
tar -zxvf mysql-1.6.2.tgz 
cd mysql 
vim values.yaml 
# change port: 3306 to port: 3306aaa
# install to test: the port format is validated, and it really is checked before installation; if anything is wrong, no resources are created
helm install mysqlll .
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Service.spec.ports[0].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer"

helm 2 / helm 3 command differences

Reference documentation; a few common mappings are sketched below.
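A few of the commonly cited CLI changes, sketched from memory (not exhaustive; see the reference above for the full list):

# Helm 2                                 # Helm 3
helm install --name foo ./chart          helm install foo ./chart
helm delete --purge foo                  helm uninstall foo
helm fetch stable/mysql                  helm pull stable/mysql
helm inspect values stable/mysql         helm show values stable/mysql
helm init                                (removed: no Tiller)
helm serve                               (removed: host charts as static files)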


KubeSphere management platform

KubeSphere official website

Official instructions for deploying on Linux

Preparation

| Host | IP | Minimum requirements (per node) |
| --- | --- | --- |
| master | 10.0.0.21 | CPU: 2 cores, RAM: 4 G, disk: 40 G |
| node1 | 10.0.0.22 | CPU: 2 cores, RAM: 4 G, disk: 40 G |
| node2 | 10.0.0.23 | CPU: 2 cores, RAM: 4 G, disk: 40 G |
  • Disable selinux, firewalld, NetworkManager, and postfix (not strictly required)

  • Set the IP address and hostname

hostnamectl set-hostname <hostname>
sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0
  • Configure time synchronization (ntp) on all nodes
  • Check on all nodes that sshd/sudo/curl/openssl are available
  • Install docker on all nodes and configure a registry mirror; it will speed things up.

Download

Install with the KubeKey v1.0.1 tool

kubekey

wget https://github.com/kubesphere/kubekey/releases/download/v1.0.1/kubekey-v1.0.1-linux-amd64.tar.gz
tar xf kubekey-v1.0.1-linux-amd64.tar.gz

Create

  1. Generate the configuration file
./kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0
  1. Edit the sample configuration file
vim config-sample.yaml
# actual hostname, SSH address, internal IP, and the SSH user/password of each node
  hosts:
    - {name: node1, address: 10.0.0.22, internalAddress: 10.0.0.22, user: root, password: 1}
# SSH key login
    - {name: master, address: 10.0.0.21, internalAddress: 10.0.0.21, privateKeyPath: "~/.ssh/id_rsa"}
# actual hostnames
  roleGroups:
    etcd:
    - node1
    master:
    - node1
    worker:
    - node1
    - node2
  1. Create the cluster using the configuration file
./kk create cluster -f config-sample.yaml
yes

The whole installation may take 10 to 20 minutes, depending on your machine and network environment.

To add worker nodes later, edit the configuration file and run ./kk add nodes -f config-sample.yaml

Finish

  1. stdout after completion
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://10.0.0.21:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
  1. Open the KubeSphere web console in a browser
  2. Enable kubectl auto-completion
# Install bash-completion
yum install -y bash-completion

# Source the completion script in your ~/.bashrc file
echo 'source <(kubectl completion bash)' >>~/.bashrc

# Add the completion script to the /etc/bash_completion.d directory
kubectl completion bash >/etc/bash_completion.d/kubectl

Secrets resources

Method 1:

kubectl create secret docker-registry harbor-secret --namespace=default  --docker-username=admin  --docker-password=a123456 --docker-server=blog.oldqiang.com
vi  k8s_sa_harbor.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret
vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

Method 2:


kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 --docker-email=296917342@qq.com
​
# verify
[root@k8s-master ~]# kubectl get secrets 
NAME                       TYPE                                  DATA   AGE
default-token-vgc4l        kubernetes.io/service-account-token   3      2d19h
regcred                    kubernetes.io/dockerconfigjson        1      114s
​
[root@k8s-master ~]# cat k8s_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  nodeName: 10.0.0.12
  imagePullSecrets:
    - name: regcred
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

3.3 ConfigMap resources

vi /opt/81.conf
    server {
        listen       81;
        server_name  localhost;
        root         /html;
        index      index.html index.htm;
        location / {
        }
    }
​
kubectl create configmap 81.conf --from-file=/opt/81.conf
# verify
kubectl get cm
​
vi k8s_deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
            items:
              - key: 81.conf
                path: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2

4: Common k8s services

4.1 Deploy the DNS service


vi coredns.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      nodeName: 10.0.0.13
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
​
# test
yum install bind-utils.x86_64 -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short

4.2 Deploy the dashboard service

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
​
vi kubernetes-dashboard.yaml
# change the image address
image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
# change the Service type to NodePort
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
​
​
kubectl create -f kubernetes-dashboard.yaml
# open https://10.0.0.12:30001 in Firefox
​
vim dashboard_rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system

5: Network access in k8s

5.1 Mapping an external service into k8s

# prepare the database
yum install mariadb-server -y
systemctl start  mariadb
mysql_secure_installation
mysql>grant all on *.* to root@'%' identified by '123456';
​
# delete the existing mysql rc and svc
kubectl  delete  rc  mysql
kubectl delete  svc mysql
​
# create the Endpoints and svc
[root@k8s-master yingshe]# cat mysql_endpoint.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
subsets:
- addresses:
  - ip: 10.0.0.13
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
​
[root@k8s-master yingshe]# cat mysql_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306  
  type: ClusterIP
​
# re-visit tomcat/demo from the web page
# verify
[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+

5.2 kube-proxy in ipvs mode

yum install conntrack-tools -y
yum install ipvsadm.x86_64 -y
​
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.12 \
  --proxy-mode ipvs \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

systemctl daemon-reload 
systemctl restart kube-proxy.service 
ipvsadm -L -n

5.3 Ingress
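A minimal Ingress manifest sketch using the extensions/v1beta1 API (matching the older API versions used elsewhere in this document); the host name and the backend Service name/port are illustrative assumptions, and an ingress controller such as ingress-nginx must already be running in the cluster:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.oldqiang.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80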

6: k8s autoscaling

Autoscaling

--horizontal-pod-autoscaler-use-rest-clients=false
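The flag above is passed to kube-controller-manager; when set to false, the HPA controller falls back to the legacy Heapster-style metrics client instead of the metrics API. Once metrics are available, a minimal usage sketch (the nginx deployment name and the thresholds are illustrative assumptions):

# create an HPA for an existing deployment
kubectl autoscale deployment nginx --min=1 --max=5 --cpu-percent=80
# watch its status
kubectl get hpa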

7: Dynamic storage

cat nfs-client.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.13
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.13
            path: /data
vi nfs-client-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
vi nfs-client-class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
Modify the PVC configuration file (a complete example follows the snippet below):
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
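For reference, a complete PVC using that annotation might look like the following (a minimal sketch: the tomcat namespace and pvc-01 name come from the snippet above, while the access mode and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi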

8: Adding compute nodes

Compute node services: docker, kubelet, kube-proxy, flannel. A rough checklist is sketched below.
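A sketch of bringing a new node online, assuming the systemd units created for the existing nodes earlier in this document (the flannel unit name in particular may differ in your setup):

# copy the binaries, certificates and kubeconfig files from an existing node first, then:
systemctl daemon-reload
systemctl enable docker flanneld kubelet kube-proxy
systemctl start docker flanneld kubelet kube-proxy
# on the master, confirm the new node has registered
kubectl get nodes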

9: Taints and tolerations

Taints are applied to nodes.

Taint effects:
NoSchedule
PreferNoSchedule
NoExecute

# example: add a taint
kubectl taint node 10.0.0.14  node-role.kubernetes.io=master:NoExecute
# check
[root@k8s-master ~]# kubectl describe nodes 10.0.0.14|grep -i taint
Taints:             node-role.kubernetes.io=master:NoExecute

Tolerations

# add under the pod's spec (operator must be "Equal" when a value is given; "Exists" requires an empty value)
tolerations:
- key: "node-role.kubernetes.io"
  operator: "Equal"
  value: "master"
  effect: "NoExecute"

