Deployment Planning
Software Versions
Application Planning
Cluster Architecture Planning
Installation notes:
This k8s cluster is deployed entirely with scripts, so pay close attention to the installation directory of each service on every node. My local install root is /opt/k8s; if you want to use a different directory, you must change every directory-related parameter in the scripts.
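If you do change the install root, a quick way to find every place the scripts reference it is a recursive grep; this is just a convenience suggestion, assuming the deployment scripts sit in your current directory:
grep -rn "/opt/k8s" *.sh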
Deployment preparation (perform on all three nodes):
1. Set the hostnames
Use the role names listed above
2. Sync the time with NTP
3. Disable SELinux
4. Disable the firewalld firewall (commands for steps 3 and 4 are sketched below)
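Steps 3 and 4 are not shown as commands in the original notes; on CentOS 7 the usual sequence looks roughly like this (a sketch, adjust to your own security policy):
setenforce 0                                                          # turn SELinux off for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep it off after reboot
systemctl stop firewalld && systemctl disable firewalld               # stop and disable the firewall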
5. Disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0
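The swapoff above only disables swap until the next reboot; to make it permanent you would typically also comment the swap entry out of /etc/fstab, for example:
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every swap line in fstab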
6. Enable IP forwarding for k8s (IPVS was introduced after k8s 1.8 as a replacement for iptables proxying; this install is version 1.7, so IPVS configuration is not required)
[root@k8s-master01 etcd-cert]# modprobe br_netfilter
[root@k8s-master01 etcd-cert]# cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master01 etcd-cert]# sysctl -p /etc/sysctl.d/k8s.conf
7. Enable IPVS support (a sketch is shown below)
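No commands are given for step 7. A common way to preload the IPVS kernel modules on CentOS 7 is sketched below; with iptables proxying on this version it is optional, and the conntrack module name varies by kernel:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
yum install -y ipset ipvsadm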
I. Generate the certificates
Since kube-apiserver and etcd are both accessed over HTTPS, each service needs its own certificates; we can issue them ourselves (self-signed).
The basic environment required above is now configured.
Next we get to the main part.
Generate the certificates (this can be done on any one of the servers; here we run it on k8s-master):
1. Use cfssl to generate the self-signed certificates. First download the cfssl tools:
mkdir /opt/cfssl && cd /opt/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
2. Create the certificate directories
mkdir -p /opt/k8s/k8s-cert
mkdir -p /opt/k8s/etcd-cert
3. Deploy with the script (note: change the etcd node IP addresses in the script to your own)
mv etcd-cert.sh /opt/k8s/etcd-cert && cd /opt/k8s/etcd-cert

# Script contents (etcd-cert.sh):
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Tianjing",
      "ST": "Tianjing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "172.16.204.133",
    "172.16.204.134",
    "172.16.204.135"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "TianJing",
      "ST": "TianJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# Run the script
sh etcd-cert.sh
# Verify that the certificates were generated
[root@k8s-master01 etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem etcd-cert.sh server.csr server-csr.json server-key.pem server.pem
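If you want to double-check what was issued (for example, that the three etcd IPs ended up in the certificate's SAN list), cfssl-certinfo can print the certificate details:
cfssl-certinfo -cert server.pem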
Deploying the etcd cluster
Download the package
mkdir /opt/k8s/soft && cd /opt/k8s/soft
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz
# Create the etcd service directories
mkdir -p /opt/k8s/etcd/{cfg,bin,ssl}
cd etcd-v3.3.10-linux-amd64
mv etcd etcdctl /opt/k8s/etcd/bin/
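An optional sanity check that the binary is in place and executable:
/opt/k8s/etcd/bin/etcd --version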
Here we deploy etcd with a script:
cat etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/k8s/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
Run the deployment script
chmod +x etcd.sh
# Generate the etcd config and systemd unit, and register the other etcd nodes in the cluster
./etcd.sh etcd01 172.16.204.133 etcd02=https://172.16.204.134:2380,etcd03=https://172.16.204.135:2380
Copy the certificates etcd needs into the ssl directory
cp /opt/k8s/etcd-cert/{ca,server-key,server}.pem /opt/k8s/etcd/ssl/
View the etcd systemd unit (this is the same configuration generated by the deployment script)
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/k8s/etcd/cfg/etcd
ExecStart=/opt/k8s/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/opt/k8s/etcd/ssl/server.pem --key-file=/opt/k8s/etcd/ssl/server-key.pem --peer-cert-file=/opt/k8s/etcd/ssl/server.pem --peer-key-file=/opt/k8s/etcd/ssl/server-key.pem --trusted-ca-file=/opt/k8s/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Start the etcd service on the k8s-master node
systemctl start etcd   # This will appear to hang: the other two nodes have not been deployed and started yet, so connecting to them times out
# Check the logs
tail -f /var/log/messages
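On a systemd host the etcd logs can also be followed directly from the journal, which is often easier to read than /var/log/messages:
journalctl -u etcd -f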
Deploy etcd to the other two nodes (copy the etcd files to the other nodes and edit the config file)
1. Create the k8s directory on k8s-node01 and k8s-node02
# Run on both k8s-node01 and k8s-node02
mkdir -p /opt/k8s
2. Copy the etcd files and certificates from k8s-master to the other two nodes
scp -r /opt/k8s/etcd 172.16.204.134:/opt/k8s
scp -r /opt/k8s/etcd 172.16.204.135:/opt/k8s
scp /usr/lib/systemd/system/etcd.service 172.16.204.134:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service 172.16.204.135:/usr/lib/systemd/system/
3. Edit the config file on k8s-node01 and start the etcd service
vim /opt/k8s/etcd/cfg/etcd
# Keep the cluster membership line (ETCD_INITIAL_CLUSTER) unchanged; change ETCD_NAME and all other IPs to k8s-node01's address.
# The ports stay the same (2380 is the peer port, 2379 is the client port).
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.204.134:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.204.134:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.204.134:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.204.134:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.204.133:2380,etcd02=https://172.16.204.134:2380,etcd03=https://172.16.204.135:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service
systemctl daemon-reload
systemctl start etcd
4. Edit the config file on k8s-node02 and start the etcd service
vim /opt/k8s/etcd/cfg/etcd
# Same as above: keep ETCD_INITIAL_CLUSTER unchanged; change ETCD_NAME and the other IPs to k8s-node02's address.
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.204.135:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.204.135:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.204.135:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.204.135:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.204.133:2380,etcd02=https://172.16.204.134:2380,etcd03=https://172.16.204.135:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service
systemctl daemon-reload
systemctl start etcd
Verify that the etcd cluster is healthy
# Run the following on k8s-master (replace the IP addresses with your own)
/opt/k8s/etcd/bin/etcdctl \
  --ca-file=/opt/k8s/etcd/ssl/ca.pem \
  --cert-file=/opt/k8s/etcd/ssl/server.pem \
  --key-file=/opt/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.204.133:2379,https://172.16.204.134:2379,https://172.16.204.135:2379" \
  cluster-health
Output:
member 6776bd806704ee4 is healthy: got healthy result from https://172.16.204.133:2379
member 2663165d9244289c is healthy: got healthy result from https://172.16.204.134:2379
member 4fdd244cdd2c8097 is healthy: got healthy result from https://172.16.204.135:2379
cluster is healthy
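Besides cluster-health, the member list can be printed the same way (same TLS flags and endpoints as above) to confirm that all three members joined:
/opt/k8s/etcd/bin/etcdctl \
  --ca-file=/opt/k8s/etcd/ssl/ca.pem \
  --cert-file=/opt/k8s/etcd/ssl/server.pem \
  --key-file=/opt/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.204.133:2379,https://172.16.204.134:2379,https://172.16.204.135:2379" \
  member list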
====== etcd cluster deployment complete ======
Install Docker on the node machines
1. Docker on CentOS 7 requires a kernel newer than 3.10; check the kernel version:
[root@localhost ~]# uname -r
2. Update the yum packages:
[root@localhost ~]# yum -y update
3. Remove any old Docker versions:
[root@localhost ~]# yum remove docker docker-common docker-selinux docker-engine
4. Install the required system tools:
# yum-utils provides yum configuration management
# device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver
[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
5. Add the stable Docker repository:
[root@localhost ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
6. Refresh the package index:
[root@localhost ~]# yum makecache fast
7. Install Docker CE:
[root@localhost ~]# yum -y install docker-ce
8. Start the Docker service:
[root@localhost ~]# service docker start
9. Check that Docker CE installed successfully:
[root@localhost ~]# docker version
(Docker installation steps adapted from 歸去來ming, https://www.jianshu.com/p/780ae3bd04fd, Jianshu.)
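The steps above start Docker but do not enable it at boot; normally you would also run:
systemctl enable docker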
Configure a Docker registry mirror on both node machines
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://4lymnb6o.mirror.aliyuncs.com"]
}
EOF
Restart Docker so the mirror takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker
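To confirm the mirror was picked up, docker info lists it under "Registry Mirrors" (a quick check; the output wording may vary slightly between Docker versions):
docker info | grep -A 1 "Registry Mirrors"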
===== Docker installation complete =====
Kubernetes network model and CNI (Container Network Interface)
Basic requirements of the Kubernetes network model:
- One IP per pod
- Each pod gets its own IP; all containers inside a pod share that network namespace (the same IP)
- All containers can communicate with all other containers
- All nodes can communicate with all containers
Flannel: provides a virtual network for containers by assigning each host its own subnet. It is based on Linux TUN/TAP, encapsulates IP packets (UDP by default; the VXLAN backend is used below) to build an overlay network, and relies on etcd to keep track of the subnet assignments.
Deploying the Flannel network (flannel itself runs on the node machines; writing the flannel configuration into etcd can be done on the master)
1. Write the subnet allocation into etcd for flanneld to use
A pitfall encountered here: the network configured in etcd must match the docker0 subnet on the node machines, i.e. "Network": "172.17.0.0/16"
# Run the following on the master; it writes the Flannel network configuration into etcd
[root@k8s-master01 ~]# /opt/k8s/etcd/bin/etcdctl \
  --ca-file=/opt/k8s/etcd/ssl/ca.pem \
  --cert-file=/opt/k8s/etcd/ssl/server.pem \
  --key-file=/opt/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.204.133:2379,https://172.16.204.134:2379,https://172.16.204.135:2379" \
  set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
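To verify what was written, the same etcdctl flags can be used to read the key back:
/opt/k8s/etcd/bin/etcdctl \
  --ca-file=/opt/k8s/etcd/ssl/ca.pem \
  --cert-file=/opt/k8s/etcd/ssl/server.pem \
  --key-file=/opt/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://172.16.204.133:2379,https://172.16.204.134:2379,https://172.16.204.135:2379" \
  get /coreos.com/network/config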
2. Download the flannel binary package
wget https://github.com/coreos/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz
3. Deploy flanneld. All of the following is done on the node machines; flannel must be deployed on every node.
Create the working directories on all node machines
mkdir -p /opt/k8s/kubernetes/{bin,ssl,cfg}
Extract the flannel tarball
tar -zxvf flannel-v0.12.0-linux-amd64.tar.gz
Move the flannel binary and its helper script into flannel's bin directory
mv flanneld mk-docker-opts.sh /opt/k8s/kubernetes/bin/
# Explanation:
mk-docker-opts.sh: a helper script that turns this host's flannel subnet into Docker network options (bridge IP, MTU)
flanneld: the flannel daemon binary
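For context: after flanneld starts, it writes its allocation to /run/flannel/subnet.env, and mk-docker-opts.sh converts that file into DOCKER_NETWORK_OPTIONS for dockerd. The values below are illustrative (they line up with the 172.17.89.0/24 subnet seen in the verification at the end); your subnet will differ:
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.89.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true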
Copy the etcd certificate files into flannel's ssl directory
cp -rp /opt/k8s/etcd/ssl/* /opt/k8s/kubernetes/ssl/
Deploy flannel with a script
Script contents (flannel.sh):
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
#ETCD_ENDPOINTS=${1}

cat <<EOF >/opt/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/k8s/etcd/ssl/ca.pem \
-etcd-certfile=/opt/k8s/etcd/ssl/server.pem \
-etcd-keyfile=/opt/k8s/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/k8s/kubernetes/cfg/flanneld
ExecStart=/opt/k8s/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker
Run the deployment script
Pass the etcd cluster endpoints as the argument to the script
./flannel.sh https://172.16.204.133:2379,https://172.16.204.134:2379,https://172.16.204.135:2379
Start the flanneld and docker services
# If Docker was already installed, restart it so that Docker uses the subnet allocated by flannel
systemctl start flanneld
systemctl start docker
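Before going through the verification list below, a few quick checks on each node help confirm that Docker really picked up the flannel subnet (flannel.1 is the VXLAN interface created by flanneld; docker0 should sit inside the same /24):
cat /run/flannel/subnet.env
ip -4 addr show flannel.1
ip -4 addr show docker0
systemctl status flanneld docker --no-pager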
Verification:
1. Check that the flanneld and docker services started correctly
2. Check that docker is using the subnet allocated by flannel
3. Flannel gives each node its own non-overlapping subnet, which guarantees container IPs are unique across the cluster
4. Start a container on node2, then ping its IP address from node1:
[root@k8s-node02 kubernetes]# docker run -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:59:02
          inet addr:172.17.89.2  Bcast:172.17.89.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:508 (508.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ #
[root@k8s-node01 ~]# ping 172.17.89.2
PING 172.17.89.2 (172.17.89.2) 56(84) bytes of data.
64 bytes from 172.17.89.2: icmp_seq=1 ttl=63 time=0.832 ms
64 bytes from 172.17.89.2: icmp_seq=2 ttl=63 time=0.661 ms
64 bytes from 172.17.89.2: icmp_seq=3 ttl=63 time=0.628 ms
64 bytes from 172.17.89.2: icmp_seq=4 ttl=63 time=0.432 ms
64 bytes from 172.17.89.2: icmp_seq=5 ttl=63 time=0.508 ms
64 bytes from 172.17.89.2: icmp_seq=6 ttl=63 time=0.417 ms
64 bytes from 172.17.89.2: icmp_seq=7 ttl=63 time=0.337 ms