Automated Deployment of an nginx + keepalived High-Availability Load Balancer with Ansible


This article documents an Ansible-based automated deployment of a highly available nginx load balancer. The front-end proxy layer uses nginx + keepalived, and the back end uses three nginx web servers so the load-balancing effect can be observed. The overall structure is: one Ansible control node managing two keepalived + nginx proxy nodes, which sit in front of three backend web servers.

Pre-deployment preparation

Host plan

  • Ansible : 192.168.214.144
  • Keepalived-node-1 : 192.168.214.148
  • Keepalived-node-2 : 192.168.214.143
  • web1 : 192.168.214.133
  • web2 : 192.168.214.135
  • web3 : 192.168.214.139

SSH key authentication between the Ansible host and the remote hosts

#!/bin/bash
# Distribute the control node's SSH public key to all remote hosts using expect.

keypath=/root/.ssh
[ -d ${keypath} ] || mkdir -p ${keypath}
rpm -q expect &> /dev/null || yum install expect -y
# Generate a key pair only if one does not exist yet (avoids an interactive overwrite prompt on re-runs)
[ -f ${keypath}/id_rsa ] || ssh-keygen -t rsa -f ${keypath}/id_rsa -P ""
# root password of the remote hosts
password=fsz...
while read ip;do
expect <<EOF
set timeout 5
spawn ssh-copy-id $ip
expect {
"yes/no" { send "yes\n";exp_continue }
"password" { send "$password\n"  }
}
expect eof
EOF
done < /home/iplist.txt

iplist.txt

192.168.214.148
192.168.214.143
192.168.214.133
192.168.214.135
192.168.214.139
192.168.214.134

Run the script

[root@Ansible script]# ./autokey.sh

Verify

[root@Ansible script]# ssh 192.168.214.148 'date'
Address 192.168.214.148 maps to localhost, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Sat Jul 14 11:35:21 CST 2018

Configure name resolution on the Ansible host so remote hosts can conveniently be managed individually by hostname (see the example after the hosts file)

vim  /etc/hosts
#
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.214.148 node-1
192.168.214.143 node-2
192.168.214.133 web-1
192.168.214.135 web-2
192.168.214.139 web-3
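With these entries in place, a host can be reached by its short name instead of its IP, for example (an illustration only, not one of the original steps):

[root@Ansible ~]# ssh node-1 'date'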

Install and configure Ansible

#Install Ansible
[root@Ansible ~]# yum install ansible -y

#Configure the Ansible inventory
[root@Ansible ~]# vim /etc/ansible/hosts 
[all]

192.168.214.148
192.168.214.143
192.168.214.133
192.168.214.135
192.168.214.139

[node]

192.168.214.148
192.168.214.143

[web]
192.168.214.133 
192.168.214.135
192.168.214.139

#Run an Ansible ping test against all hosts
[root@Ansible ~]# ansible all -m ping

Write the web role to deploy the backend web servers

First, look at the directory structure of the web role:

[root@Ansible ~]# tree /opt/roles/web
/opt/roles/web
.
├── tasks
│   ├── install_nginx.yml
│   ├── main.yml
│   ├── start.yml
│   ├── temps.yml
│   └── user.yml
└── templates
    ├── index.html.j2
    └── nginx.conf.j2

2 directories, 7 files

Write the task files in the order in which the role executes them.

Write user.yml

- name: create group nginx
  group: name=nginx
- name: create user nginx
  user: name=nginx group=nginx system=yes shell=/sbin/nologin

Write install_nginx.yml

- name: install nginx webserver
  yum: name=nginx

Create the template for the nginx configuration file
Because this is a test setup, the backend nginx.conf stays mostly at its defaults; the only change is to set the number of worker processes according to each backend host, using the CPU count exposed by Ansible's setup module facts (you can inspect that fact first, as shown below).
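To see which value the template will pick up on each backend, the fact can be queried with the setup module (an optional check, not part of the original steps):

[root@Ansible ~]# ansible web -m setup -a 'filter=ansible_processor_vcpus'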


#Convert the configuration file into a template
[root@Ansible conf]# cp nginx.conf /opt/roles/web/templates/nginx.conf.j2
#The only modification is the following line
worker_processes {{ ansible_processor_vcpus }};

#Create a test page in the templates directory with the following content
vim index.html.j2
{{ ansible_hostname }} test page.

Write temps.yml

- name: cp nginx.conf.j2 to nginx web server rename nginx.conf
  template: src=/opt/roles/web/templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf
- name: cp index test page to nginx server
  template: src=/opt/roles/web/templates/index.html.j2 dest=/usr/share/nginx/html/index.html
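Since a role's own templates directory is on the default search path, the src paths could equivalently be written relative to it, for example:

- name: cp nginx.conf.j2 to nginx web server rename nginx.conf
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf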

Write start.yml

- name: start nginx
  service: name=nginx state=started
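If nginx should also come up after a reboot, a variant of this task could additionally enable the service (not done in the original role):

- name: start and enable nginx
  service: name=nginx state=started enabled=yes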

Write main.yml

- import_tasks: user.yml
- import_tasks: install_nginx.yml
- import_tasks: temps.yml
- import_tasks: start.yml

Write the top-level playbook web_install.yml. The playbook must not be placed inside the web role's own directory; it normally lives one level up, in the roles directory (here /opt/roles).

[root@Ansible ~]# vim /opt/roles/web_install.yml 


---
- hosts: web
  remote_user: root
  roles:
    - web

Test before installing: the -C option runs the playbook in check (dry-run) mode

[root@Ansible ~]# ansible-playbook -C /opt/roles/web_install.yml 

If there are no problems, run the actual deployment

[root@Ansible ~]# ansible-playbook /opt/roles/web_install.yml 

Test access. First flush the iptables rules on the web hosts so the test requests are not blocked:

[root@Ansible ~]# ansible web -m shell -a 'iptables -F'
192.168.214.139 | SUCCESS | rc=0 >>


192.168.214.135 | SUCCESS | rc=0 >>


192.168.214.133 | SUCCESS | rc=0 >>


[root@Ansible ~]# curl 192.168.214.133
web-1 test page.

Write the role that deploys nginx + keepalived on the proxy nodes

When deploying a high-availability cluster, pay attention to the clocks on every node, including the backend hosts, and keep the time consistent across all of them. The ad-hoc commands below install ntpdate and sync against a time source; a playbook sketch of the same steps follows.

[root@Ansible ~]# ansible all -m shell -a 'yum install ntpdate -y'

[root@Ansible ~]# ansible all -m shell -a 'ntpdate gudaoyufu.com'
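The same two steps could also be expressed as a small playbook instead of ad-hoc commands, roughly like this (a sketch using the same time source):

---
- hosts: all
  remote_user: root
  tasks:
    - name: install ntpdate
      yum: name=ntpdate state=present
    - name: sync time once
      shell: ntpdate gudaoyufu.com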

Write the ha_proxy role

Write user.yml

- name: create nginx group
  group: name=nginx
- name: create nginx user
  user: name=nginx group=nginx system=yes shell=/sbin/nologin

Write install_server.yml

- name: install nginx and keepalived
  yum: name={{ item }} state=latest
  with_items:
    - nginx
    - keepalived
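On newer Ansible versions the yum module accepts a list for name directly, so the with_items loop could be dropped, for example:

- name: install nginx and keepalived
  yum:
    name:
      - nginx
      - keepalived
    state: latest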

Write temps.yml

- name: copy nginx proxy conf and rename
  template: src=/opt/roles/ha_proxy/templates/nginx.conf.j2  dest=/etc/nginx/nginx.conf
  
- name: copy master_keepalived.conf.j2 to MASTER node
  when: ansible_hostname == "node-1"
  template: src=/opt/roles/ha_proxy/templates/master_keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf
  
- name: copy backup_keepalived.conf.j2 to BACKUP node
  when: ansible_hostname == "node-2"
  template: src=/opt/roles/ha_proxy/templates/backup_keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf

Create the template for the nginx proxy configuration file

[root@Ansible ~]# cp /opt/conf/nginx.conf /opt/roles/ha_proxy/templates/nginx.conf.j2
[root@Ansible ~]# vim /opt/roles/ha_proxy/templates/nginx.conf.j2

user nginx;
worker_processes {{ ansible_processor_vcpus }};
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.

events {
    worker_connections  1024;
}


http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;


    include /etc/nginx/conf.d/*.conf;

    upstream web {

        server 192.168.214.133:80 max_fails=3 fail_timeout=30s;
        server 192.168.214.135:80 max_fails=3 fail_timeout=30s;
        server 192.168.214.139:80 max_fails=3 fail_timeout=30s;


    }



    server {

    listen       80 default_server;
    server_name  {{ ansible_hostname }};
    root         /usr/share/nginx/html;
    index index.html index.php;

         location / {
                proxy_pass http://web;
             }

         error_page 404 /404.html;

          }


}
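As an optional refinement (not used in this deployment), the upstream members could be generated from the [web] inventory group instead of being hard-coded, using Ansible's groups variable inside the template:

    upstream web {
{% for host in groups['web'] %}
        server {{ host }}:80 max_fails=3 fail_timeout=30s;
{% endfor %}
    }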


Create the keepalived configuration file templates

[root@Ansible ~]# cp /opt/conf/keepalived.conf /opt/roles/ha_proxy/templates/master_keepalived.conf.j2

[root@Ansible templates]# vim master_keepalived.conf.j2

#

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.214.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_iptables
   vrrp_mcast_group4 224.17.17.17
}


vrrp_script chk_nginx {
                script "killall -0 nginx"
                interval 1
                weight -20
                fall 2
                rise 1
            }

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }
    virtual_ipaddress {
        192.168.214.100
    }

    track_script {
        chk_nginx
      }

   }

Similarly, take master_keepalived.conf.j2 as the base, modify it, and save it as backup_keepalived.conf.j2; only the role (state) and the priority need to change (see the snippet below). Note: make sure the weight subtracted by the check script in master_keepalived.conf.j2 is large enough that, after the demotion, the MASTER's priority ends up lower than the BACKUP's priority.
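For reference, the only lines that differ in backup_keepalived.conf.j2 would look like this (priority 90 matches the VRRP advertisements captured later; everything else stays identical to the master template):

vrrp_instance VI_1 {
    state BACKUP       # node-2 acts as the backup
    priority 90        # below the MASTER's 100, and above 100 - 20 = 80 after a failed check
}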

Write start.yml

- name: start nginx proxy server
  service: name=nginx state=started
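The role starts only nginx; keepalived is started by hand later. If you prefer the role to handle it as well, a task like the following could be appended (not part of the original role):

- name: start keepalived
  service: name=keepalived state=started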

Write main.yml

- import_tasks: user.yml
- import_tasks: install_server.yml
- import_tasks: temps.yml
- import_tasks: start.yml

Write the top-level playbook

[root@Ansible ~]# vim /opt/roles/ha_proxy_install.yml


---
- hosts: node
  remote_user: root
  roles:
    - ha_proxy

Check the role with a dry run

[root@Ansible ~]# ansible-playbook -C /opt/roles/ha_proxy_install.yml 
 

If the dry run shows no problems, run the automated deployment.

The run proceeds as follows:

[root@Ansible ~]# ansible-playbook  /opt/roles/ha_proxy_install.yml 



PLAY [node] **********************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************
ok: [192.168.214.148]
ok: [192.168.214.143]

TASK [ha_proxy : create nginx group] *********************************************************************************************
changed: [192.168.214.148]
ok: [192.168.214.143]

TASK [ha_proxy : create nginx user] **********************************************************************************************
changed: [192.168.214.148]
ok: [192.168.214.143]

TASK [ha_proxy : install nginx and keepalived] ***********************************************************************************
changed: [192.168.214.143] => (item=[u'nginx', u'keepalived'])
changed: [192.168.214.148] => (item=[u'nginx', u'keepalived'])

TASK [ha_proxy : copy nginx proxy conf and rename] *******************************************************************************
changed: [192.168.214.148]
changed: [192.168.214.143]

TASK [ha_proxy : copy master_keepalived.conf.j2 to MASTER node] ******************************************************************
skipping: [192.168.214.143]
changed: [192.168.214.148]

TASK [ha_proxy : copy backup_keepalived.conf.j2 to BACKUP node] ******************************************************************
skipping: [192.168.214.148]
changed: [192.168.214.143]

TASK [ha_proxy : start nginx proxy server] ***************************************************************************************
changed: [192.168.214.143]
changed: [192.168.214.148]

PLAY RECAP ***********************************************************************************************************************
192.168.214.143            : ok=7    changed=4    unreachable=0    failed=0   
192.168.214.148            : ok=7    changed=6    unreachable=0    failed=0   

At this point, the automated deployment of the nginx + keepalived high-availability load balancer is complete.

Finally, look at the structure of the roles directory:

[root@Ansible ~]# tree /opt/roles/
/opt/roles/
├── ha_proxy
│   ├── tasks
│   │   ├── install_server.yml
│   │   ├── main.yml
│   │   ├── start.yml
│   │   ├── temps.yml
│   │   └── user.yml
│   └── templates
│       ├── backup_keepalived.conf.j2
│       ├── master_keepalived.conf.j2
│       └── nginx.conf.j2
├── ha_proxy_install.retry
├── ha_proxy_install.yml
├── web
│   ├── tasks
│   │   ├── install_nginx.yml
│   │   ├── main.yml
│   │   ├── start.yml
│   │   ├── temps.yml
│   │   └── user.yml
│   └── templates
│       ├── index.html.j2
│       └── nginx.conf.j2
├── web_install.retry
└── web_install.yml

6 directories, 19 files

Now test the services. The keepalived service is not set to start automatically in the Ansible role, so start it on each keepalived node; an ad-hoc alternative is shown below.
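For example, keepalived could also be started from the control node with an ad-hoc command (not part of the original run):

[root@Ansible ~]# ansible node -m service -a 'name=keepalived state=started'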

Test the proxy nodes

[root@Ansible ~]# for i in {1..10};do curl 192.168.214.148;done
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.

Stop the service on node-1 (the MASTER) to test failover, and watch the state change on node-2 at the same time.

Run: nginx -s stop

Watching the VRRP advertisements shows the master/backup switchover working as expected: node-1 sends a priority 0 advertisement to announce it is giving up the VIP, and node-2 then takes over, advertising with priority 90:

[root@node-2 ~]# tcpdump -i ens33 -nn host 224.17.17.17

listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes

16:55:20.804327 IP 192.168.214.148 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 100, authtype simple, intvl 1s, length 20
16:55:25.476397 IP 192.168.214.148 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 0, authtype simple, intvl 1s, length 20
16:55:26.128474 IP 192.168.214.143 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 90, authtype simple, intvl 1s, length 20
16:55:27.133349 IP 192.168.214.143 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 90, authtype simple, intvl 1s, length 20

Test access again:

[root@Ansible ~]# for i in {1..10};do curl 192.168.214.148;done
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.

Restore node-1 as the primary node; it preempts and takes back the MASTER role

Run the nginx command on node-1 to start it again; the VIP floats back to node-1. Test access:

[root@Ansible ~]# for i in {1..10};do curl 192.168.214.148;done
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.

Other notes

The deployment above still has room for improvement. For example, many of the values in the keepalived configuration could be defined once as shared variables in the role and then simply referenced in the template files, as sketched below.
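A minimal sketch of that idea, with hypothetical variable names, e.g. in roles/ha_proxy/defaults/main.yml:

# roles/ha_proxy/defaults/main.yml (hypothetical)
vip: 192.168.214.100
vrid: 55
master_priority: 100
backup_priority: 90

# and in the keepalived templates, reference the variables instead of literals, e.g.:
#   virtual_router_id {{ vrid }}
#   priority {{ master_priority }}
#   virtual_ipaddress { {{ vip }} }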

Another point to watch: the keepalived configuration uses the killall command to check the local nginx status. When the check returns a non-zero status, the demotion defined in vrrp_script is applied. Make sure killall is actually available on the system; it is sometimes not installed. If the command is missing, failover will not happen even when nginx on the MASTER node fails.
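On CentOS/RHEL the killall command is provided by the psmisc package; one way to make sure it is present on both proxy nodes (not part of the original deployment) is:

[root@Ansible ~]# ansible node -m yum -a 'name=psmisc state=present'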

