Introduction to HAProxy
HAProxy is a reverse-proxy server that supports active/standby failover and virtual hosts. Its configuration is simple, and it has excellent backend health checking: when a backend server it proxies fails, HAProxy automatically removes it from rotation, and adds it back once it recovers. It introduces the concepts of frontend and backend: a frontend can match rules against any part of an HTTP request (headers included) and then direct the request to the appropriate backend.
Introduction to Keepalived
Keepalived is a high-availability solution for web services based on the VRRP protocol, used to eliminate single points of failure. At least two servers run Keepalived: one master (MASTER) and one backup (BACKUP), which together present a single virtual IP to the outside. The master periodically sends advertisements to the backup; when the backup stops receiving them (i.e. the master is down), it takes over the virtual IP and continues to serve, preserving availability.
Environment:
OS: CentOS release 6.6 x86_64
MySQL version: Percona-XtraDB-Cluster-5.6.22-25.8
PXC nodes (3): 192.168.79.3:3306, 192.168.79.4:3306, 192.168.79.5:3306
HAProxy nodes: 192.168.79.128, 192.168.79.5
HAProxy version: 1.5.2
Keepalived version: keepalived-1.2.13-4.el6.x86_64; VIP: 192.168.79.166
I. Setting up the PXC environment
Set up passwordless SSH between the servers
Use ssh-keygen to give the four hosts passwordless logins to one another, and make sure all four hosts can ping each other.
1) On every host, run:
# ssh-keygen -t rsa
2) Append every machine's public key (id_rsa.pub) to /root/.ssh/authorized_keys on one host, then copy authorized_keys back out to all hosts:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh 192.168.79.4 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh 192.168.79.5 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys 192.168.79.4:/root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys 192.168.79.5:/root/.ssh/authorized_keys
Test: ssh xxxxx date
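A quick loop confirms passwordless login to every host (a sketch; BatchMode makes ssh fail instead of prompting for a password):
# for h in 192.168.79.3 192.168.79.4 192.168.79.5 192.168.79.128; do ssh -o BatchMode=yes $h date; done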
Install the packages PXC needs on all nodes:
# rpm -ivh perl-DBD-MySQL-4.013-3.el6.x86_64.rpm
# rpm -ivh perl-IO-Socket-SSL-1.31-2.el6.noarch.rpm
# rpm -ivh nc-1.84-22.el6.x86_64.rpm
# rpm -ivh socat-1.7.2.4-1.el6.rf.x86_64.rpm
# rpm -ivh mysql-libs-5.1.73-3.el6_5.x86_64.rpm
# rpm -ivh zlib-1.2.3-29.el6.x86_64.rpm
# rpm -ivh zlib-devel-1.2.3-29.el6.x86_64.rpm
# rpm -ivh percona-release-0.1-3.noarch.rpm
# rpm -ivh perl-Time-HiRes-1.9721-136.el6.x86_64.rpm
# rpm -ivh percona-xtrabackup-2.2.9-5067.el6.x86_64.rpm
PS: install the EPEL repo to use a network yum source; yum localinstall xx.rpm resolves local package dependencies.
Create the mysql user and group on all nodes:
# groupadd mysql
# useradd mysql -g mysql -s /sbin/nologin -d /opt/mysql
Extract the Percona-XtraDB-Cluster binary tarball; the files live in /opt/mysql, and a symbolic link named mysql (in /usr/local) points at the extracted directory so the installation is easier to reference:
# cd /opt/mysql
# tar -zxvf Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64.tar.gz
# cd /usr/local
# ln -s /opt/mysql/Percona-XtraDB-Cluster-5.6.22-rel72.0-25.8.978.Linux.x86_64 mysql
# chown -R mysql:mysql /usr/local/mysql/
The PXC binaries expect the older libssl.so.6/libcrypto.so.6 sonames, so point them at the system's .so.10 libraries:
# ln -sf /usr/lib64/libssl.so.10 /usr/lib64/libssl.so.6
# ln -sf /usr/lib64/libcrypto.so.10 /usr/lib64/libcrypto.so.6
Create the MySQL data directories and set ownership:
# mkdir -p /data/mysql/mysql_3306/{data,logs,tmp}
# chown -R mysql:mysql /data/mysql/
# chown -R mysql:mysql /usr/local/mysql/
Add it to the PATH so the mysql command can be found:
echo 'export PATH=$PATH:/usr/local/mysql/bin' >> /etc/profile
source /etc/profile
echo 'export PATH=$PATH:/usr/local/mysql/bin' >> /etc/bashrc // when installing mhs it goes here, because Perl only reads /root/.bashrc or /etc/bashrc
Key my.cnf parameters:
###############################
# Percona XtraDB Cluster
default_storage_engine = InnoDB
#innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2
wsrep_cluster_name = pxc_taotao # name of the cluster
wsrep_cluster_address = gcomm://192.168.79.3,192.168.79.4,192.168.79.5 # IPs of all nodes in the cluster
wsrep_node_address = 192.168.79.3 # IP of the current node
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so
#wsrep_provider_options="gcache.size = 1G;debug = yes"
wsrep_provider_options="gcache.size = 1G;"
#wsrep_sst_method = rsync # for very large datasets (TB scale)
wsrep_sst_method = xtrabackup-v2 # for roughly 100-200 GB
wsrep_sst_auth = sst:taotao
#wsrep_sst_donor = # hostname of the node to sync data from
###############################
scp /etc/my.cnf 192.168.79.4:/etc/my.cnf
scp /etc/my.cnf 192.168.79.5:/etc/my.cnf
Note: two parameters must be changed on each machine (see the sketch below):
wsrep_node_address: the IP address of the current cluster node
server-id: a unique identifier per node
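For example, on 192.168.79.4 the copied file can be fixed up in place (a sketch; the server-id value here is an arbitrary choice):
# sed -i 's/^wsrep_node_address.*/wsrep_node_address = 192.168.79.4/' /etc/my.cnf
# sed -i 's/^server-id.*/server-id = 2/' /etc/my.cnf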
Initialize the database on the first node, bootstrap the cluster, and create the backup user
1) Initialize the database under the basedir on the first node:
# /usr/local/mysql/scripts/mysql_install_db --datadir=/data/mysql/mysql_3306/data --basedir=/usr/local/mysql
2) Bootstrap the cluster's first node:
# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
# /etc/init.d/mysqld bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster)Starting MySQL (Percona XtraDB Cluster)................[ OK ]
# ps -ef |grep mysqld
root 8114 1 0 13:11 pts/1 00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/data/mysql/mysql_3306/data --pid-file=/data/mysql/mysql_3306/data/node1.localdomain.pid --wsrep-new-cluster
mysql 9326 8114 28 13:11 pts/1 00:00:15 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/data/mysql/mysql_3306/data --plugin-dir=/usr/local/mysql/lib/mysql/plugin --user=mysql --wsrep-provider=/usr/local/mysql/lib/libgalera_smm.so --wsrep-new-cluster --log-error=/data/mysql/mysql_3306/logs/error.log --pid-file=/data/mysql/mysql_3306/data/node1.localdomain.pid --socket=/tmp/mysql.sock --port=3306 --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1
root 9435 6397 2 13:12 pts/1 00:00:00 grep mysqld
3) Create the backup user:
# mysql
delete from mysql.user where user != 'root' or host != 'localhost';
truncate mysql.db;
drop database test;
grant all privileges on *.* to 'sst'@'localhost' identified by 'taotao'; // this one is enough: it is used by the local innobackupex
flush privileges;
Starting the remaining cluster nodes:
The other nodes need no database initialization; their data is pulled from the first node via SST.
# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
# /etc/init.d/mysqld start
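While a new node is joining, SST progress can be followed in its error log (path per the setup above):
# tail -f /data/mysql/mysql_3306/logs/error.log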
Check cluster status:
> show global status like 'wsrep%';
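A few status variables worth checking on every node (the values shown are what a healthy three-node cluster reports):
> show global status like 'wsrep_cluster_size'; // 3
> show global status like 'wsrep_local_state_comment'; // Synced
> show global status like 'wsrep_connected'; // ON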
II. HAProxy load balancing
With PXC installed, we place HAProxy in front of it to distribute connections to the database.
Install and configure HAProxy on 192.168.79.128 and 192.168.79.5 as follows; the steps are identical on both servers.
1. Install HAProxy (if installing on a PXC MySQL node, mind the port numbers)
# tar -zxvf haproxy-1.5.2.tar.gz
# cd haproxy-1.5.2
# make TARGET=linux2628
# make install
PS: installs under /usr/local/sbin/ by default; use PREFIX to choose another install path.
You can also install via yum instead: yum install -y haproxy
2. Configure HAProxy on the HAProxy servers
1) Configure /etc/haproxy/haproxy.cfg as follows:
# mkdir /etc/haproxy
# cp examples/haproxy.cfg /etc/haproxy/
# cat /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    #ulimit-n 10240
    #chroot /usr/share/haproxy
    uid 99
    gid 99
    daemon
    #nbproc
    #pidfile /var/run/haproxy/haproxy.pid
    #stats socket /var/run/haproxy/haproxy.sock level operator
    #debug
    #quiet

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 50000
    timeout client 50000
    timeout server 50000

frontend stats-front
    bind *:8088
    mode http
    default_backend stats-back

frontend pxc-front
    bind *:3307
    mode tcp
    default_backend pxc-back

frontend pxc-onenode-front
    bind *:3308
    mode tcp
    default_backend pxc-onenode-back

backend stats-back
    mode http
    balance roundrobin
    stats uri /haproxy/stats
    stats auth admin:admin

backend pxc-back
    mode tcp
    balance leastconn
    option httpchk
    server taotao 192.168.79.3:3306 check port 9200 inter 12000 rise 3 fall 3
    server candidate 192.168.79.4:3306 check port 9200 inter 12000 rise 3 fall 3
    server slave 192.168.79.5:3306 check port 9200 inter 12000 rise 3 fall 3

backend pxc-onenode-back
    mode tcp
    balance leastconn
    option httpchk
    server taotao 192.168.79.3:3306 check port 9200 inter 12000 rise 3 fall 3
    server candidate 192.168.79.4:3306 check port 9200 inter 12000 rise 3 fall 3 backup
    server slave 192.168.79.5:3306 check port 9200 inter 12000 rise 3 fall 3 backup
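Port 3307 (pxc-back) balances connections across all three nodes, while port 3308 (pxc-onenode-back) sends everything to the first node and only fails over to the others (marked backup); funneling writes through a single node this way avoids multi-writer certification conflicts. Before starting HAProxy, the file can be syntax-checked:
# /usr/local/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg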
2) Configure HAProxy logging:
By default a fresh HAProxy install produces no log output (HAProxy avoids doing log I/O itself and only emits syslog messages); the following steps enable logging through rsyslog.
# rpm -qa |grep rsyslog
rsyslog-5.8.10-8.el6.x86_64
# rpm -ql rsyslog |grep conf$
# vim /etc/rsyslog.conf
...........
$ModLoad imudp
$UDPServerRun 514 // rsyslog must listen for UDP on port 514, so uncomment these two lines
.........
local0.* /var/log/haproxy.log // must match the log facility defined in haproxy.cfg
# service rsyslog restart
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]
# service rsyslog status
rsyslogd (pid 11437) is running...
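The local0 routing can be verified before involving haproxy at all (logger writes a test message through syslog):
# logger -p local0.info 'haproxy log test'
# tail -1 /var/log/haproxy.log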
3. Install the MySQL health-check script on every PXC MySQL node (must be run on every node)
1) Copy the scripts:
# cp /usr/local/mysql/bin/clustercheck /usr/bin/
# cp /usr/local/mysql/xinetd.d/mysqlchk /etc/xinetd.d/
PS: clustercheck and the xinetd script are used as shipped, with no modifications.
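For reference, the mysqlchk xinetd service shipped with PXC looks roughly like this; it simply runs clustercheck for each connection on port 9200:
service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}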
2) Create the MySQL user used by the health check (running this on any one node is enough, since PXC replicates it):
> grant process on *.* to 'clustercheckuser'@'localhost' identified by 'clustercheckpassword!';
> flush privileges;
PS: if you do not use clustercheck's default username and password, edit the MYSQL_USERNAME and MYSQL_PASSWORD values in the clustercheck script.
3) Add the mysqlchk service port to /etc/services:
# echo 'mysqlchk 9200/tcp # mysqlchk' >> /etc/services
4) Install the xinetd service, which runs the health-check script as a daemon:
# yum -y install xinetd
# /etc/init.d/xinetd restart
Stopping xinetd: [FAILED]
Starting xinetd: [ OK ]
# chkconfig --level 2345 xinetd on
# chkconfig --list |grep xinetd
xinetd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Test the check script:
# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.
# curl -I 192.168.79.5:9200
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 57
# cp /usr/local/mysql/bin/mysql /usr/bin/
# curl -I 192.168.79.5:9200
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
PS: the status must be 200; anything else means the check failed, usually because the mysql service is unhealthy or the environment is broken in a way that stops the check from using mysql (above, the 503 happened because the mysql client was missing from /usr/bin).
This is how haproxy detects whether each MySQL server is alive: it probes port 9200 with an HTTP check, and the response tells HAProxy the state of the PXC node.
Run the same steps on the other cluster nodes and confirm that each returns status 200, as follows:
# curl -I 192.168.79.4:9200
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
# curl -I 192.168.79.5:9200
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
4. Starting and stopping HAProxy
Start the haproxy service on the HAProxy servers:
# /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg
Stop:
# pkill haproxy
# /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg
# ps -ef |grep haproxy
nobody 5751 1 0 21:18 ? 00:00:00 /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg
root 5754 2458 0 21:19 pts/0 00:00:00 grep haproxy
# netstat -nlap |grep haproxy
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 5751/haproxy
tcp 0 0 0.0.0.0:3307 0.0.0.0:* LISTEN 5751/haproxy
tcp 0 0 0.0.0.0:3308 0.0.0.0:* LISTEN 5751/haproxy
udp 0 0 0.0.0.0:45891 0.0.0.0:* 5751/haproxy
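Instead of pkill followed by a cold start, a running haproxy can also be reloaded gracefully: -sf starts a new process and tells the old one to finish its existing connections and then exit:
# /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)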
# cp /usr/local/sbin/haproxy /usr/sbin/haproxy
# cd /opt/soft/haproxy-1.5.2/examples
# cp haproxy.init /etc/init.d/haproxy
# chmod +x /etc/init.d/haproxy
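With the binary in /usr/sbin and the init script in place, haproxy can now be managed as a normal service and enabled at boot (assuming the stock init script, which expects /etc/haproxy/haproxy.cfg):
# chkconfig --add haproxy
# chkconfig haproxy on
# service haproxy restart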
5. Testing HAProxy
Create a test account on the PXC cluster:
> grant all privileges on *.* to 'taotao'@'%' identified by 'taotao';
# for i in `seq 1 1000`; do mysql -h 192.168.79.128 -P3307 -utaotao -ptaotao -e "select @@hostname;"; done
# for i in `seq 1 1000`; do mysql -h 192.168.79.128 -P3308 -utaotao -ptaotao -e "select @@hostname;"; done
Note: in practice the account only needs to allow the HAProxy-side IPs, because users reach the MySQL cluster through the VIP, and HAProxy opens the backend MySQL connections from its own IP according to its scheduling policy.
View the HAProxy status page:
http://192.168.79.128:8088/haproxy/stats
Log in with the credentials from stats auth: admin/admin
III. HAProxy high availability with Keepalived
Next, install and configure Keepalived on 192.168.79.128 and 192.168.79.5, so that an HAProxy single point of failure cannot take down database access. The two servers heartbeat each other through Keepalived; if one machine fails, the other takes over, and the whole process is transparent to users.
Install and configure on both 192.168.79.128 and 192.168.79.5:
# yum install keepalived -y
# yum install MySQL-python -y
# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id haproxy_ha            # name of this keepalived group
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_HAPROXY {
    state MASTER                    # BACKUP on the standby machine
    #nopreempt                      # non-preemptive mode
    interface eth0
    virtual_router_id 51            # must be identical across the cluster; valid range 1-255
    priority 100                    # e.g. 90 on the standby
    advert_int 1
    authentication {
        auth_type PASS              # password auth; the password must not exceed 8 characters
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.79.166/24
    }
}
If you do not set state MASTER here (i.e. run both nodes as BACKUP with nopreempt enabled), a recovered master will not grab the MASTER role back, which avoids a blip in MySQL service.
Since LVS is not used for load balancing here, there is no need to configure a virtual_server section; the same applies below.
# vim /etc/keepalived/check_haproxy.sh
#!/bin/bash
# If haproxy is not running, try to restart it; if the restart fails,
# stop keepalived so the VIP fails over to the standby node.
A=`ps -C haproxy --no-header |wc -l`
if [ $A -eq 0 ];then
    /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg
    sleep 3
    if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
        /etc/init.d/keepalived stop
    fi
fi
# chmod 755 /etc/keepalived/check_haproxy.sh
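The script can be exercised by hand before keepalived is running; it should bring haproxy straight back up:
# pkill haproxy
# /etc/keepalived/check_haproxy.sh
# ps -C haproxy --no-header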
Enable keepalived:
# service keepalived start
# chkconfig --level 2345 keepalived on
# tail -f /var/log/messages // keepalived logs go here
PS: start keepalived first on the machine you want to serve traffic, and confirm the VIP is bound there; that keepalived instance enters the MASTER state and holds the VIP.
Keepalived has three states: 1) BACKUP 2) MASTER 3) FAULT (tried to enter MASTER and failed)
HAProxy failover test:
As the check_haproxy.sh script shows, merely stopping the haproxy service makes it restart automatically, and as long as the restart succeeds keepalived stays up and the VIP does not move to the standby. So to test failover you must stop keepalived or shut down the server. When the master goes down, the BACKUP keepalived takes over automatically; when the master comes back, the VIP drifts back to it.
Stop the keepalived service on one machine (/etc/init.d/keepalived stop); after about 1 s of missed heartbeats, the VIP switches to the other machine, which can be watched in /var/log/messages:
192.168.79.128:
# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:7c:e0:ce brd ff:ff:ff:ff:ff:ff
inet 192.168.79.128/24 brd 192.168.79.255 scope global eth0
inet 192.168.79.166/32 scope global eth0
192.168.79.5:
# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:d2:83:87 brd ff:ff:ff:ff:ff:ff
inet 192.168.79.5/24 brd 192.168.79.255 scope global eth0
192.168.79.128:
# /etc/init.d/keepalived stop
Stopping keepalived: [ OK ]
# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:7c:e0:ce brd ff:ff:ff:ff:ff:ff
inet 192.168.79.128/24 brd 192.168.79.255 scope global eth0
inet6 fe80::20c:29ff:fe7c:e0ce/64 scope link
valid_lft forever preferred_lft forever
192.168.79.5:
# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:d2:83:87 brd ff:ff:ff:ff:ff:ff
inet 192.168.79.5/24 brd 192.168.79.255 scope global eth0
inet 192.168.79.166/32 scope global eth0
inet6 fe80::20c:29ff:fed2:8387/64 scope link
valid_lft forever preferred_lft forever
192.168.79.128:
# /etc/init.d/keepalived start
Starting keepalived: [ OK ]
# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:7c:e0:ce brd ff:ff:ff:ff:ff:ff
inet 192.168.79.128/24 brd 192.168.79.255 scope global eth0
inet 192.168.79.166/32 scope global eth0
192.168.79.5:
# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:d2:83:87 brd ff:ff:ff:ff:ff:ff
inet 192.168.79.5/24 brd 192.168.79.255 scope global eth0