Background:
Clustering is a common way for websites to handle high concurrency and large volumes of data. When a single server runs out of processing power or storage, don't try to replace it with a bigger machine: for a large site, no single server, however powerful, can keep up with continuously growing business demands. The better approach is to add another server to share the original server's request and storage load. A load-balancing server distributes incoming browser requests across any server in the application cluster; as traffic grows, you add more application servers to the cluster, so application-server load stops being the site's bottleneck.
The following example is a simple demonstration of load balancing with nginx.
Environment
192.168.11.25   nginx load-balancing server
192.168.11.200  nginx load-balancing server
192.168.11.57   web server
192.168.11.98   web server
Web deployment, method 1
The web app is the simplest possible Flask application:
```python
# app.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from flask import Flask
import socket

app = Flask(__name__)

# Determine this host's outbound IP by opening a UDP socket;
# connect() on UDP sends no packets, it only selects a source address.
ip = ([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close())
       for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])

@app.route('/')
def hello_world():
    return ip

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True, port=5000)
```
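The one-line IP trick in `app.py` is dense; unpacked, it is equivalent to something like the helper below (the loopback fallback is my addition, not in the original):

```python
import socket

def get_local_ip(probe_addr=('8.8.8.8', 53)):
    """Return the IP this host would use to reach probe_addr.

    connect() on a UDP socket transmits nothing; it only asks the
    kernel to choose a source address, so no traffic is generated.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(probe_addr)
        return s.getsockname()[0]
    except OSError:
        return '127.0.0.1'  # no usable route (e.g. offline host)
    finally:
        s.close()

print(get_local_ip())
```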
Start the app on each of the two web servers:
```shell
$ python3 app.py
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 307-743-873
```
Web deployment, method 2
The web app can also be deployed as a container, which is the most convenient option of all 😜
```dockerfile
# Dockerfile
FROM python:3.6
LABEL maintainer="web demo"
RUN pip install flask -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
ADD app.py /app.py
CMD ["python", "/app.py"]
```
Build the web demo image:
```shell
docker build -t flask_demo .
```
Start the web demo:
```shell
docker run -itd --name flask_demo -p 5000:5000 --network host flask_demo:latest
```
Note that with `--network host` the container shares the host's network stack, so the `-p 5000:5000` mapping is ignored (Docker discards published ports in host network mode); the app is reachable directly on the host's port 5000, and the flag could be dropped.
Once it is up, open it in a browser; on success, the page shows the web server's own IP.
Load balancing with nginx
Deploy nginx on 192.168.11.25 and 192.168.11.200:
```shell
docker run -itd --name nginx_for_ha -p 8000:80 nginx:latest
```
Enter the nginx container and edit nginx.conf:
```nginx
# nginx.conf
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    upstream flask_pool {
        server 192.168.11.98:5000 weight=4 max_fails=2 fail_timeout=30s;
        server 192.168.11.57:5000 weight=4 max_fails=2 fail_timeout=30s;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://flask_pool;  # forward to the flask upstream pool
        }
    }

    include /etc/nginx/conf.d/*.conf;
}
```
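With no balancing method specified, nginx distributes requests across the `upstream` servers using weighted round-robin (the "smooth" variant). A minimal Python sketch of that algorithm, using the pool above:

```python
from collections import Counter

def smooth_wrr(weights, n):
    """Pick n servers with smooth weighted round-robin: each round,
    every server's score grows by its weight; the highest scorer is
    picked and pays back the total weight."""
    current = {name: 0 for name in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for name, w in weights.items():
            current[name] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

pool = {'192.168.11.98:5000': 4, '192.168.11.57:5000': 4}
picks = smooth_wrr(pool, 8)
print(Counter(picks))  # equal weights: 4 of 8 requests to each backend
```

Since both servers carry `weight=4`, the picks simply alternate; unequal weights would interleave requests in proportion to the weights.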
After editing, restart the nginx container (it was named nginx_for_ha above):
```shell
docker restart nginx_for_ha
```
Open 192.168.11.25:8000 in a browser to verify that load balancing works; on success, requests are distributed evenly between .98 and .57.
Open 192.168.11.200 in a browser as well (being lazy, I reused an nginx already running there, on port 80 😄); on success, requests are likewise distributed evenly between .98 and .57.
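The browser check can also be scripted. A sketch, assuming the balancer at http://192.168.11.25:8000/ and the flask app above, which returns its own IP as the response body; the `fetch` callable is injectable so the counting logic can be exercised without the live servers:

```python
from collections import Counter
from urllib.request import urlopen

def tally_backends(fetch, url, n=10):
    """Send n requests through the balancer and count which backend
    (identified by the IP the flask app returns) answered each one."""
    return Counter(fetch(url) for _ in range(n))

def http_fetch(url):
    with urlopen(url, timeout=5) as resp:
        return resp.read().decode().strip()

# Offline demonstration with a fake balancer that alternates backends:
backends = ['192.168.11.98', '192.168.11.57']
calls = iter(range(1000))
fake_fetch = lambda url: backends[next(calls) % len(backends)]
print(tally_backends(fake_fetch, 'http://192.168.11.25:8000/', n=10))

# Against the real setup, one would run:
# tally_backends(http_fetch, 'http://192.168.11.25:8000/')
```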
If one backend, say .57, is stopped, subsequent requests are forwarded only to .98 (as I understand it, this is where the high availability shows).
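How quickly a dead backend is skipped is governed by the `max_fails=2 fail_timeout=30s` parameters in the upstream block: after two failed attempts the peer is taken out of rotation for 30 seconds, then retried. A simplified sketch of that bookkeeping (real nginx shares this state between workers and also uses fail_timeout as the failure-counting window):

```python
class UpstreamPeer:
    """Simplified nginx-style passive health check for one backend."""

    def __init__(self, addr, max_fails=2, fail_timeout=30.0):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0  # timestamp until which the peer is skipped

    def report_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout
            self.fails = 0

    def report_success(self):
        self.fails = 0

    def is_up(self, now):
        return now >= self.down_until

peer = UpstreamPeer('192.168.11.57:5000')  # max_fails=2, fail_timeout=30s
peer.report_failure(now=0.0)
peer.report_failure(now=1.0)     # second failure: peer marked down
print(peer.is_up(now=5.0))       # False: all traffic now goes to .98
print(peer.is_up(now=40.0))      # True: retried after fail_timeout
```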
At this point both nginx servers are working, but there is no master/backup relationship between them; they are peers of equal standing. Only after configuring keepalived do they take on master and backup roles.
Master/backup with keepalived
Deploy keepalived on the same two machines (demonstrated here with Docker):
192.168.11.25 — nginx load-balancing server + keepalived (backup)

```yaml
# docker-compose.yml for keepalived
version: '3'
services:
  keepalived:
    image: keepalived:x86
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/keepalived/check_ng.sh:/container/service/keepalived/assets/check_ng.sh
    environment:
      - KEEPALIVED_INTERFACE=eno1               # this host's NIC; find it with: ip route | awk '$2=="via" {print $5}' | head -1
      - KEEPALIVED_STATE=BACKUP                 # this node starts as the keepalived backup
      - KEEPALIVED_PRIORITY=90                  # lower than the master's priority
      - KEEPALIVED_VIRTUAL_IPS=192.168.11.58    # virtual IP
      - KEEPALIVED_UNICAST_PEERS=192.168.11.200 # the other (master) node's host IP
      - KEEPALIVED_ROUTER_ID=25                 # VRRP router ID; must match on both nodes
    privileged: true
    restart: always
    container_name: keepalived
    network_mode: host
```

192.168.11.200 — nginx load-balancing server + keepalived (master)

```yaml
# docker-compose.yml for keepalived
version: '3'
services:
  keepalived:
    image: keepalived:x86
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/keepalived/check_ng.sh:/container/service/keepalived/assets/check_ng.sh
    environment:
      - KEEPALIVED_INTERFACE=ens160             # this host's NIC; find it with: ip route | awk '$2=="via" {print $5}' | head -1
      - KEEPALIVED_STATE=MASTER                 # this node starts as the keepalived master
      - KEEPALIVED_PRIORITY=100                 # must be higher than the backup's 90
      - KEEPALIVED_VIRTUAL_IPS=192.168.11.58    # virtual IP
      - KEEPALIVED_UNICAST_PEERS=192.168.11.25  # the other (backup) node's host IP
      - KEEPALIVED_ROUTER_ID=25                 # VRRP router ID; must match on both nodes
    privileged: true
    restart: always
    container_name: keepalived
    network_mode: host
```

The two nodes must differ in state, priority, peer address and (here) interface name; the logs below show that 192.168.11.200 (on ens160) is the master and 192.168.11.25 (on eno1) is the backup.
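Under the hood this is VRRP: both nodes advertise a priority, the highest live priority holds the virtual IP, and with preemption enabled (keepalived's default) a returning higher-priority node takes it back. A toy sketch of that election, assuming master priority 100 versus backup 90:

```python
def vip_holder(nodes):
    """nodes: {ip: (priority, alive)}. The live node with the highest
    priority owns the virtual IP; None if nobody is alive."""
    live = [(prio, ip) for ip, (prio, alive) in nodes.items() if alive]
    return max(live)[1] if live else None

cluster = {
    '192.168.11.200': (100, True),  # master (priority 100 assumed)
    '192.168.11.25':  (90,  True),  # backup, priority 90
}
print(vip_holder(cluster))                # the master holds the VIP
cluster['192.168.11.200'] = (100, False)  # master's keepalived stops
print(vip_holder(cluster))                # the backup takes over
cluster['192.168.11.200'] = (100, True)   # master returns and preempts
print(vip_holder(cluster))                # VIP moves back to the master
```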
Start keepalived on each node:
```shell
docker-compose up -d   # run from the directory containing docker-compose.yml
```
Check the master node's log:
```
# key part
I'm the MASTER! Whup whup.
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: (VI_1) Sending/queueing gratuitous ARPs on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
```
Check the backup node's log:
```
Ok, i'm just a backup, great.
Mon Apr 13 15:42:09 2020: (VI_1) Backup received priority 0 advertisement
Mon Apr 13 15:42:09 2020: (VI_1) Receive advertisement timeout
Mon Apr 13 15:42:09 2020: (VI_1) Entering MASTER STATE
Mon Apr 13 15:42:09 2020: (VI_1) setting VIPs.
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: (VI_1) Sending/queueing gratuitous ARPs on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
```
Next, request the web service we deployed earlier through the virtual IP.
The web servers' logs show that the requests all come from the master node, 11.200:
```
192.168.11.200 - - [13/Apr/2020 16:17:07] "GET / HTTP/1.0" 200 -
```
Now stop keepalived on the master node and test again.
The backup node's keepalived log shows it quickly taking over as master:
```
I'm the MASTER! Whup whup.
Mon Apr 13 16:03:46 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 16:03:46 2020: (VI_1) Sending/queueing gratuitous ARPs on eno1 for 192.168.11.58
```
Meanwhile, access to the web service is completely unaffected (on my setup the backup node's nginx is mapped to port 8000).
The web servers' logs now show that the requests all come from the backup node, 11.25:
```
192.168.11.25 - - [13/Apr/2020 16:15:15] "GET / HTTP/1.0" 200 -
```
Now start the master node again.
The backup node's keepalived log shows it quickly switching back to the backup role:
```
Ok, i'm just a backup, great.
```
At this point the service can no longer be reached through the backup node; the virtual IP has returned to the master.
That's all for this note; I'll update it when I learn something new 😁