Shell Script Implementation: Kubernetes Single-Cluster Binary Deployment
There are three ways to set up a Kubernetes cluster environment:
1. Minikube
Minikube is a tool that quickly runs a single-node Kubernetes locally, aimed at users trying out Kubernetes or doing day-to-day development. It is suitable only for learning and test deployments, not for production.
2. Kubeadm
kubeadm is the official Kubernetes tool for quickly installing and initializing a cluster built around best practices; it provides kubeadm init and kubeadm join for rapid cluster deployment. At the time of writing kubeadm was still in beta/alpha; studying this deployment method is a good way to absorb the design and best practices the Kubernetes project recommends, but it is rarely used in large production environments.
3. Binary packages (the recommended way for production)
Download the release binaries from the official site and deploy each component by hand to assemble the cluster. This approach meets enterprise production standards for a Kubernetes cluster and can be used for production deployments.
# wget https://dl.k8s.io/v1.16.1/kubernetes-server-linux-amd64.tar.gz
# wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
Software environment
OS: CentOS 7.8
Docker: docker-ce 18.06
Kubernetes: v1.16.1
Etcd: v3.3.13
Flannel: v0.11.0
Server plan
IP | Hostname | Role | Components |
192.168.10.10 | k8s-master | master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl |
192.168.10.11 | k8s-node1 | node1 | kubelet, kube-proxy, docker, flannel, etcd |
192.168.10.12 | k8s-node2 | node2 | kubelet, kube-proxy, docker, flannel, etcd |
1. Environment preparation (all machines)
1.1 Disable the firewall and SELinux (omitted)
1.2 Set up mutual name resolution (omitted); push the master's SSH public key to the node machines
https://www.cnblogs.com/user-sunli/p/13889477.html
1.3 Configure cluster-wide time synchronization (omitted)
1.4 Update the kernel (not needed on CentOS 7.6 or later)
# yum update
1.5 Disable swap
# swapoff -a
# sed -i.bak 's/^.*swap/#&/' /etc/fstab
1.6 Configure kernel parameters
# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# sysctl -p /etc/sysctl.d/kubernetes.conf
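After loading the parameters it is worth confirming they are active. A quick check that reads /proc/sys directly (the bridge-nf keys only exist once the br_netfilter module is loaded, hence the guard):

```shell
#!/bin/sh
# Read the kernel parameters back from /proc/sys; the bridge-nf keys only
# appear after the br_netfilter module is loaded, so guard with -f.
for key in net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
    [ -f /proc/sys/$key ] && echo "$key = $(cat /proc/sys/$key)"
done
```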
1.7 Reboot (required if the kernel was upgraded)
# shutdown -r now
1.8 Verify
# uname -r
3.10.0-1062.4.1.el7.x86_64
# free -m
total used free shared buff/cache available
Mem: 1980 120 1371 9 488 1704
Swap: 0 0 0
2. Write the scripts and upload the binary packages
All of the following files live in the /root/ directory.

Main script
vim main.sh
#!/bin/bash
#author:sunli
#mail:<1916989848@qq.com>
k8s_master=192.168.10.10
k8s_node1=192.168.10.11
k8s_node2=192.168.10.12
sh ansible_docker.sh $k8s_node1 $k8s_node2 || { echo "docker install error"; exit 1; }
sh etcd_install.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "etcd install error"; exit 1; }
sh flannel_install.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "flannel install error"; exit 1; }
sh master.sh $k8s_master || { echo "master install error"; exit 1; }
sh node.sh $k8s_master $k8s_node1 $k8s_node2 || { echo "node install error"; exit 1; }
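A note on the `||` error handling above: `exit` inside `( )` only terminates that subshell, so only the `{ ...; exit 1; }` grouping actually aborts main.sh when a step fails. A runnable sketch of the difference:

```shell
#!/bin/sh
# 'exit' in a ( ) subshell does not stop the caller; in a { } group it does.
run_steps() {
    false || ( echo "step failed (subshell)"; exit 1 )
    echo "reached after subshell exit"      # still runs: the script went on
    false || { echo "step failed (group)"; exit 1; }
    echo "unreachable"                      # never runs: the group aborted
}
out=$(run_steps)
echo "$out"
```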
Docker installation script
vim docker_install.sh
#!/bin/bash
curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.0.ce
[ ! -d /etc/docker ] && mkdir /etc/docker
cat > /etc/docker/daemon.json <<- EOF
{
"registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}
EOF
systemctl enable docker
systemctl start docker
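daemon.json must be written with `>` (truncate) rather than `>>` (append): appending on a re-run leaves two concatenated JSON objects in the file and dockerd refuses to parse it. A self-contained sketch, using a temp directory instead of /etc/docker:

```shell
#!/bin/sh
# Simulate running the installer twice: with '>' the file is rewritten
# each time, so the JSON stays valid no matter how often this runs.
conf_dir=$(mktemp -d)
for run in 1 2; do
    cat > "$conf_dir/daemon.json" << EOF
{
  "registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}
EOF
done
grep -c registry-mirrors "$conf_dir/daemon.json"   # 1 (a '>>' would give 2)
```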
vim ansible_docker.sh
#!/bin/bash
[ ! -x /usr/bin/ansible ] && yum -y install ansible
cat >> /etc/ansible/hosts << EOF
[docker]
$1
$2
EOF
ansible docker -m script -a 'creates=/root/docker_install.sh /root/docker_install.sh'
CA signing script
vim CA.sh
#!/bin/bash
#author:sunli
#mail:<1916989848@qq.com>
#description: build a self-hosted CA with cfssl/cfssljson, generating ca-key.pem (private key), ca.pem (certificate) and ca.csr (certificate signing request)
CFSSL() {
    # Download the three cfssl binaries, install them as system commands and make them executable
    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    mv cfssl* /usr/local/bin/
    chmod +x /usr/local/bin/cfssl*
}
# Install cfssl only if the command is missing
which cfssljson_linux-amd64
[ $? -ne 0 ] && CFSSL
# Determine the service
service=$1
[ ! -d /etc/$service/ssl ] && mkdir -p /etc/$service/ssl
CA_DIR=/etc/$service/ssl
# CA configuration file
CA_CONFIG() {
cat > $CA_DIR/ca-config.json <<- EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "$service": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
}
# CA certificate signing request
CA_CSR() {
cat > $CA_DIR/ca-csr.json <<- EOF
{
  "CN": "$service",
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "GuangDong", "ST": "GuangZhou", "O": "$service", "OU": "System"}]
}
EOF
}
# Server CSR submitted to the CA (generic services such as etcd)
SERVER_CSR() {
host1=192.168.10.10
host2=192.168.10.11
host3=192.168.10.12
host4=192.168.10.13
host5=192.168.10.14
host6=192.168.10.15
cat > $CA_DIR/server-csr.json <<- EOF
{
  "CN": "$service",
  "hosts": ["127.0.0.1", "$host1", "$host2", "$host3", "$host4", "$host5"],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "GuangDong", "ST": "GuangZhou", "O": "$service", "OU": "System"}]
}
EOF
}
# Server CSR for the kubernetes apiserver (extra service IP and DNS SANs)
SERVER_CSR1() {
host1=192.168.10.10
host2=192.168.10.20
host3=192.168.10.30
host4=192.168.10.40
cat > $CA_DIR/server-csr.json <<- EOF
{
  "CN": "$service",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "$host1",
    "$host2",
    "$host3",
    "$host4",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "GuangDong", "ST": "GuangZhou", "O": "$service", "OU": "System"}]
}
EOF
}
CA_CONFIG && CA_CSR
[ "$service" == "kubernetes" ] && SERVER_CSR1 || SERVER_CSR
# Generate the CA files ca-key.pem (private key), ca.pem (certificate) and ca.csr (CSR, usable for cross-signing or re-signing)
cd $CA_DIR/
cfssl_linux-amd64 gencert -initca ca-csr.json | cfssljson_linux-amd64 -bare ca
# Issue the server certificate
cfssl_linux-amd64 gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=$service server-csr.json | cfssljson_linux-amd64 -bare server
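CA.sh picks the SAN list from its single argument: the apiserver CSR (`SERVER_CSR1`, with the `kubernetes.default...` DNS names) is used only when the script is called as `sh CA.sh kubernetes`; every other service gets the generic `SERVER_CSR`. The selection one-liner in isolation:

```shell
#!/bin/sh
# The one-liner at the end of CA.sh, extracted into a function: the
# kubernetes service gets SERVER_CSR1, everything else gets SERVER_CSR.
pick_csr() {
    [ "$1" = "kubernetes" ] && echo SERVER_CSR1 || echo SERVER_CSR
}
pick_csr kubernetes   # SERVER_CSR1
pick_csr etcd         # SERVER_CSR
```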

Install etcd
vim etcd_install.sh
#!/bin/bash
# Unpack the etcd binary package and install the binaries as system commands
etcd_01=$1
etcd_02=$2
etcd_03=$3
sh CA.sh etcd || { echo "etcd CA not built"; exit 1; }
# The binary package must have been copied into the current directory beforehand
dir=./
pkgname=etcd-v3.3.13-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "no package" && exit
tar xf $dir/$pkgname.tar.gz
cp -p $dir/$pkgname/etc* /usr/local/bin/
# Create the etcd configuration file
ETCD_CONFIG() {
cat > /etc/etcd/etcd.conf <<- EOF
#[Member]
ETCD_NAME="etcd-01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$etcd_01:2380"
ETCD_LISTEN_CLIENT_URLS="https://$etcd_01:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$etcd_01:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$etcd_01:2379"
ETCD_INITIAL_CLUSTER="etcd-01=https://$etcd_01:2380,etcd-02=https://$etcd_02:2380,etcd-03=https://$etcd_03:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}
# Create the systemd unit for etcd
ETCD_SERVICE() {
cat > /usr/lib/systemd/system/etcd.service <<- EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--peer-cert-file=/etc/etcd/ssl/server.pem \
--peer-key-file=/etc/etcd/ssl/server-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
}
ETCD_CONFIG && ETCD_SERVICE
# Copy the configuration from the master to the node machines (SSH trust and name resolution must already be in place)
scp /usr/local/bin/etcd* $etcd_02:/usr/local/bin/
scp -r /etc/etcd/ $etcd_02:/etc/
scp /usr/lib/systemd/system/etcd.service $etcd_02:/usr/lib/systemd/system/
scp /usr/local/bin/etcd* $etcd_03:/usr/local/bin/
scp -r /etc/etcd/ $etcd_03:/etc/
scp /usr/lib/systemd/system/etcd.service $etcd_03:/usr/lib/systemd/system/
# Install ansible and record the node IPs in its inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[etcd]" >> /etc/ansible/hosts
echo "$etcd_02" >> /etc/ansible/hosts
echo "$etcd_03" >> /etc/ansible/hosts
# Adjust etcd.conf on etcd-02
cat > /tmp/etcd-02.sh <<- EOF
#!/bin/bash
sed -i "s#\"etcd-01\"#\"etcd-02\"#g" /etc/etcd/etcd.conf
sed -i "s#\"https://$etcd_01#\"https://$etcd_02#g" /etc/etcd/etcd.conf
EOF
ansible $etcd_02 -m script -a 'creates=/tmp/etcd-02.sh /tmp/etcd-02.sh'
# Adjust etcd.conf on etcd-03
#ansible $etcd_03 -m lineinfile -a "dest=/etc/etcd/etcd.conf regexp='ETCD_NAME=\"etcd-01\"' line='ETCD_NAME=\"etcd-03\"' backrefs=yes"
cat > /tmp/etcd-03.sh <<- EOF
#!/bin/bash
sed -i "s#\"etcd-01\"#\"etcd-03\"#g" /etc/etcd/etcd.conf
sed -i "s#\"https://$etcd_01#\"https://$etcd_03#g" /etc/etcd/etcd.conf
EOF
ansible $etcd_03 -m script -a 'creates=/tmp/etcd-03.sh /tmp/etcd-03.sh'
# Start etcd-02 and etcd-03
ansible etcd -m service -a "name=etcd state=started enabled=yes"
# Start etcd-01
systemctl enable etcd
systemctl start etcd
# Alias to shorten the etcdctl command line
cat > /etc/profile.d/alias_etcd.sh <<- EOF
alias etcdctld='etcdctl --cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--ca-file=/etc/etcd/ssl/ca.pem \
--endpoint=https://$etcd_01:2379,https://$etcd_02:2379,https://$etcd_03:2379'
EOF
source /etc/profile.d/alias_etcd.sh
# Copy the alias to the nodes
scp /etc/profile.d/alias_etcd.sh $etcd_02:/etc/profile.d/
scp /etc/profile.d/alias_etcd.sh $etcd_03:/etc/profile.d/
ansible etcd -m shell -a "source /etc/profile.d/alias_etcd.sh"
# Print the etcd cluster health
etcdctld cluster-health
echo "Check etcd cluster health (re-open the terminal for the alias to take effect): etcdctld cluster-health"
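The per-node rewrite that the script pushes out through ansible is just two `sed` substitutions over the copied etcd-01 config. Demonstrated locally against a temp file (node IPs as in the server plan above):

```shell
#!/bin/sh
# Reproduce the /tmp/etcd-02.sh rewrite: rename the member and swap the
# listen/advertise IPs from etcd-01's address to etcd-02's.
etcd_01=192.168.10.10
etcd_02=192.168.10.11
conf=$(mktemp)
cat > "$conf" << EOF
ETCD_NAME="etcd-01"
ETCD_LISTEN_PEER_URLS="https://$etcd_01:2380"
ETCD_LISTEN_CLIENT_URLS="https://$etcd_01:2379"
EOF
sed -i "s#\"etcd-01\"#\"etcd-02\"#g" "$conf"
sed -i "s#\"https://$etcd_01#\"https://$etcd_02#g" "$conf"
grep ETCD_NAME "$conf"   # ETCD_NAME="etcd-02"
```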
Install flannel
vim flannel_install.sh
#!/bin/bash
flannel_01=$1
flannel_02=$2
flannel_03=$3
# The binary package must have been copied into the current directory beforehand
dir=./
pkgname=flannel-v0.11.0-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "error: no package" && exit
tar xf $dir/$pkgname.tar.gz
mv $dir/{flanneld,mk-docker-opts.sh} /usr/local/bin/
# Write the cluster Pod network information into etcd (run on any one etcd node)
cd /etc/etcd/ssl/
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoint=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 \
set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
# Create flannel.conf
FLANNEL_CONFIG() {
cat > /etc/flannel.conf <<- EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 -etcd-cafile=/etc/etcd/ssl/ca.pem -etcd-certfile=/etc/etcd/ssl/server.pem -etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
}
# Create flanneld.service
FLANNEL_SERVICE() {
cat > /usr/lib/systemd/system/flanneld.service <<- EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/flannel.conf
ExecStart=/usr/local/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
# Rebuild docker.service so dockerd picks up the flannel subnet options
DOCKER_SERVICE() {
cat > /usr/lib/systemd/system/docker.service <<- EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
}
FLANNEL_CONFIG && FLANNEL_SERVICE && DOCKER_SERVICE
# Restart the docker service
systemctl daemon-reload
systemctl restart docker
scp /usr/local/bin/{flanneld,mk-docker-opts.sh} $flannel_02:/usr/local/bin/
scp /usr/local/bin/{flanneld,mk-docker-opts.sh} $flannel_03:/usr/local/bin/
scp /etc/flannel.conf $flannel_02:/etc/
scp /etc/flannel.conf $flannel_03:/etc/
scp /usr/lib/systemd/system/flanneld.service $flannel_02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/flanneld.service $flannel_03:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service $flannel_02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service $flannel_03:/usr/lib/systemd/system/
# Install ansible and record the node IPs in its inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[flannel]" >> /etc/ansible/hosts
echo "$flannel_02" >> /etc/ansible/hosts
echo "$flannel_03" >> /etc/ansible/hosts
ansible flannel -m service -a "name=flanneld state=started daemon_reload=yes enabled=yes"
ansible flannel -m service -a "name=docker state=restarted enabled=yes"
# Alias to shorten the subnet-listing command
cat > /etc/profile.d/alias_etcdf.sh <<- EOF
alias etcdctlf='cd /etc/etcd/ssl/;etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoint=https://$flannel_01:2379,https://$flannel_02:2379,https://$flannel_03:2379 ls /coreos.com/network/subnets'
EOF
source /etc/profile.d/alias_etcdf.sh
scp /etc/profile.d/alias_etcdf.sh $flannel_02:/etc/profile.d/
scp /etc/profile.d/alias_etcdf.sh $flannel_03:/etc/profile.d/
ansible flannel -m shell -a "source /etc/profile.d/alias_etcdf.sh"
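How dockerd ends up on the flannel network: flanneld's `ExecStartPost` runs `mk-docker-opts.sh`, which writes `/run/flannel/subnet.env`, and the rebuilt docker.service loads that file via `EnvironmentFile=` so `$DOCKER_NETWORK_OPTIONS` lands on dockerd's command line. A sketch with an illustrative subnet.env (the subnet and MTU values below are made up for the example):

```shell
#!/bin/sh
# Illustrative subnet.env as mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS
# might write it (the --bip subnet and --mtu here are example values only).
env_file=$(mktemp)
cat > "$env_file" << 'EOF'
DOCKER_NETWORK_OPTIONS=" --bip=10.244.52.1/24 --ip-masq=false --mtu=1450"
EOF
# systemd's EnvironmentFile= roughly amounts to sourcing the file
# before ExecStart runs:
. "$env_file"
echo "dockerd$DOCKER_NETWORK_OPTIONS"
```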
Master node
vim master.sh
#!/bin/bash
master=$1
# Create the kubernetes CA certificates
sh CA.sh kubernetes
# Configure kube-apiserver
# The kubernetes binary package must have been copied into the current directory beforehand
dir=./
pkgname=kubernetes-server-linux-amd64
[ ! -e $dir/$pkgname.tar.gz ] && echo "no package" && exit
tar xf $dir/$pkgname.tar.gz
cp -p $dir/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/local/bin/
# Create the bootstrapping token file (format: token,user,uid,"group")
TLS=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<- EOF
$TLS,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# Create the kube-apiserver configuration file
KUBE_API_CONF() {
cat > /etc/kubernetes/apiserver <<- EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.10:2379,https://192.168.10.11:2379,https://192.168.10.12:2379 \
--bind-address=$master \
--secure-port=6443 \
--advertise-address=$master \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
}
# Create the systemd unit for kube-apiserver
KUBE_API_SERVICE() {
cat > /usr/lib/systemd/system/kube-apiserver.service <<- EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBE_API_CONF && KUBE_API_SERVICE
# Start the service
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
# Deploy kube-scheduler
# Create the kube-scheduler.conf configuration file
KUBE_SCH_CONF() {
cat > /etc/kubernetes/kube-scheduler.conf <<- EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
EOF
}
# Create the systemd unit for kube-scheduler
KUBE_SCH_SERVICE() {
cat > /usr/lib/systemd/system/kube-scheduler.service <<- EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBE_SCH_CONF && KUBE_SCH_SERVICE
# Start the service
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
# Deploy kube-controller-manager
# Create the kube-controller-manager.conf configuration file
KUBE_CM_CONF() {
cat > /etc/kubernetes/kube-controller-manager.conf <<- EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"
EOF
}
# Create the systemd unit for kube-controller-manager
KUBE_CM_SERVICE() {
cat > /usr/lib/systemd/system/kube-controller-manager.service <<- EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBE_CM_CONF && KUBE_CM_SERVICE
# Start the service
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
# Check the status of each service and the master components
kubectl get cs
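The bootstrap token that ties master.sh and node.sh together is nothing more than 16 random bytes rendered as hex: kube-apiserver matches it against `--token-auth-file`, and the kubelet later presents it from bootstrap.kubeconfig. The generation step in isolation, with a format check:

```shell
#!/bin/sh
# Same pipeline as master.sh: 16 random bytes -> hex dump -> strip spaces.
TLS=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$TLS"
echo "length: ${#TLS}"   # length: 32
```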

Deploy the node machines
vim node.sh
#!/bin/bash
master=$1
node1=$2
node2=$3
ssh $node1 "mkdir -p /etc/kubernetes/ssl"
ssh $node2 "mkdir -p /etc/kubernetes/ssl"
dir=./
[ ! -d kubernetes ] && echo "error: no kubernetes dir" && exit
# Deploy kubelet
# Create the kube-proxy certificate on the master node
kube_proxy_csr() {
cd /etc/kubernetes/ssl/
cat > /etc/kubernetes/ssl/kube-proxy-csr.json <<- EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "GuangDong", "ST": "GuangZhou", "O": "kubernetes", "OU": "System"}]
}
EOF
cfssl_linux-amd64 gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json --profile=kubernetes kube-proxy-csr.json | cfssljson_linux-amd64 -bare kube-proxy
}
kube_proxy_csr || exit
# Create the kubelet bootstrap kubeconfig files via a helper script, bs_kubeconfig.sh
cat > /tmp/bs_kubeconfig.sh <<- EOF
#!/bin/bash
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
KUBE_SSL=/etc/kubernetes/ssl
KUBE_APISERVER="https://$master:6443"
cd \$KUBE_SSL/
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=\${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Select the context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=\${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
EOF
# Run the helper script
sh /tmp/bs_kubeconfig.sh
# Check: ls /etc/kubernetes/ssl/*.kubeconfig should list the two files
# /etc/kubernetes/ssl/bootstrap.kubeconfig and /etc/kubernetes/ssl/kube-proxy.kubeconfig
# Copy the binaries and the two freshly generated .kubeconfig files to all node machines
#SHELL_FOLDER=$(cd "$(dirname "$0")";pwd)
scp /root/kubernetes/server/bin/{kubelet,kube-proxy} $node1:/usr/local/bin/
scp /root/kubernetes/server/bin/{kubelet,kube-proxy} $node2:/usr/local/bin/
scp /etc/kubernetes/ssl/*.kubeconfig $node1:/etc/kubernetes/
scp /etc/kubernetes/ssl/*.kubeconfig $node2:/etc/kubernetes/
# node: create the kubelet configuration file
KUBELET_CONF() {
cat > /etc/kubernetes/kubelet.conf <<- EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=$master \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet.yaml \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
}
# Create the kube-proxy.conf configuration file
KUBE_PROXY_CONF() {
cat > /etc/kubernetes/kube-proxy.conf <<- EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=$master \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
EOF
}
# Create the kubelet parameter template file
KUBELET_YAML() {
cat > /etc/kubernetes/kubelet.yaml <<- EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: $master
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
}
# Create the systemd unit for kubelet
KUBELET_SERVICE() {
cat > /usr/lib/systemd/system/kubelet.service <<- EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
}
# Create the systemd unit for kube-proxy
KUBE_PROXY_SERVICE() {
cat > /usr/lib/systemd/system/kube-proxy.service <<- EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
KUBELET_CONF && KUBELET_YAML && KUBELET_SERVICE && KUBE_PROXY_CONF && KUBE_PROXY_SERVICE
scp /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf} $node1:/etc/kubernetes/
scp /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf} $node2:/etc/kubernetes/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} $node1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} $node2:/usr/lib/systemd/system/
# Rewrite kubelet.conf, kubelet.yaml and kube-proxy.conf on node1
cat > /tmp/kubelet_conf1.sh <<- EOF
#!/bin/bash
sed -i "s#$master#$node1#g" /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf}
EOF
ansible $node1 -m script -a 'creates=/tmp/kubelet_conf1.sh /tmp/kubelet_conf1.sh'
# Rewrite kubelet.conf, kubelet.yaml and kube-proxy.conf on node2
cat > /tmp/kubelet_conf2.sh <<- EOF
#!/bin/bash
sed -i "s#$master#$node2#g" /etc/kubernetes/{kubelet.conf,kubelet.yaml,kube-proxy.conf}
EOF
ansible $node2 -m script -a 'creates=/tmp/kubelet_conf2.sh /tmp/kubelet_conf2.sh'
# Install ansible and record the node IPs in its inventory
[ ! -x /usr/bin/ansible ] && yum -y install ansible
echo "[node]" >> /etc/ansible/hosts
echo "$node1" >> /etc/ansible/hosts
echo "$node2" >> /etc/ansible/hosts
ansible node -m service -a "name=kubelet state=started daemon_reload=yes enabled=yes"
ansible node -m service -a "name=kube-proxy state=started daemon_reload=yes enabled=yes"
# Approve the kubelet CSR requests
kubectl certificate approve `kubectl get csr|awk 'NR>1{print $1}'`
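The final line bulk-approves every pending CSR by piping `kubectl get csr` through `awk 'NR>1{print $1}'`, which drops the header row and keeps the first column (the CSR names). The awk part alone, run against illustrative output (the CSR names below are made up):

```shell
#!/bin/sh
# Extract CSR names from 'kubectl get csr'-style output: skip the header
# row (NR>1) and print column 1. The sample text is illustrative only.
csr_output='NAME            AGE   REQUESTOR           CONDITION
node-csr-aaaa   10s   kubelet-bootstrap   Pending
node-csr-bbbb   8s    kubelet-bootstrap   Pending'
echo "$csr_output" | awk 'NR>1{print $1}'
```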