The high-availability design of mycat and MySQL is illustrated by the following two diagrams.


Overview: applications only need to connect to HAProxy or mycat; read/write splitting across the backend servers is controlled by mycat, and data synchronization between the backend servers is handled by MySQL master-slave replication.

Server host planning
| IP | Role | Notes |
|---|---|---|
| 192.168.0.200 | MySQL Master1 | MySQL Master1, port 3306 |
| 192.168.0.199 | mycat1, MySQL Slave1 | mycat1 port 8066, MySQL Slave1 port 3306 |
| 192.168.0.198 | mycat2, MySQL Slave2 | mycat2 port 8066, MySQL Slave2 port 3306 |
| 192.168.0.170 | MySQL Master2 | MySQL Master2, port 3306 |
| 192.168.0.169 | MySQL Slave3 | MySQL Slave3, port 3306 |
| 192.168.0.168 | MySQL Slave4 | MySQL Slave4, port 3306 |
Installing the MySQL databases
1) Install MySQL 5.7.24 with Docker. I planned three machines for the first group:
192.168.0.200 (Master1)
192.168.0.199 (Slave1)
192.168.0.198 (Slave2)
2) Configure my.cnf on the three machines.
On all three machines the configuration file is /usr/local/mysql/conf/my.cnf.
3) Set up the configuration on the three servers.
Master1 (192.168.0.200): vi /usr/local/mysql/conf/my.cnf
```ini
[mysql]
default-character-set=utf8
[mysqld]
interactive_timeout = 120
wait_timeout = 120
max_allowed_packet = 32M
log-bin=mysql-bin
server-id=200
character-set-server=utf8
log-slave-updates
auto-increment-increment = 2
auto-increment-offset = 1
default-time_zone = '+8:00'
```
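The auto-increment-increment / auto-increment-offset pair is what keeps auto-generated primary keys from colliding between the two masters once master-master replication is running: Master1 (offset 1) hands out odd ids, while Master2 (offset 2, see its configuration below) hands out even ones. A minimal sketch with a hypothetical table:

```sql
-- Hypothetical demo table; run the same statements on a master to see the effect.
CREATE TABLE inc_demo (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10));
INSERT INTO inc_demo (v) VALUES ('a'), ('b'), ('c');
SELECT id, v FROM inc_demo;
-- On Master1 (auto-increment-offset=1) the ids come out as 1, 3, 5;
-- the same inserts on Master2 (auto-increment-offset=2) would yield 2, 4, 6,
-- so keys generated on the two masters never overlap.
```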
Slave1 (192.168.0.199): vi /usr/local/mysql/conf/my.cnf
```ini
[mysql]
default-character-set=utf8
[mysqld]
interactive_timeout = 120
wait_timeout = 120
max_allowed_packet = 32M
log-bin=mysql-bin
server-id=199
character-set-server=utf8
default-time_zone = '+8:00'
```
Slave2 (192.168.0.198): vi /usr/local/mysql/conf/my.cnf
```ini
[mysql]
default-character-set=utf8
[mysqld]
interactive_timeout = 120
wait_timeout = 120
max_allowed_packet = 32M
log-bin=mysql-bin
server-id=198
character-set-server=utf8
default-time_zone = '+8:00'
```
4) Create the master and slave containers.
Run the same command on 200, 199 and 198 to start MySQL:
```bash
docker run --name mysql5_7_24 -p 3306:3306 -v /usr/local/mysql/conf:/etc/mysql/conf.d -v /usr/local/mysql/log:/var/log/mysql -v /usr/local/mysql/data:/var/lib/mysql --privileged=true -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7.24
```
Next, configure master-slave replication.
5) Log in to MySQL on the 200 master and query the master status (the original shows the output as a screenshot).

Create the replication user on the 200 master:
```sql
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
GRANT REPLICATION SLAVE ON *.* to 'backup'@'%' identified by '123456';
```
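For reference, show master status on 200 produces the binlog coordinates used in the next step; the values below are illustrative, so always read them from your own output, since they advance as the server writes binlog:

```sql
show master status;
-- Illustrative output:
--   File: mysql-bin.000003
--   Position: 441
```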
6) Log in to MySQL on the 199 and 198 slaves and set the replication parameters pointing at the master:
```sql
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
change master to master_host='192.168.0.200',master_user='backup',master_password='123456',master_log_file='mysql-bin.000003',master_log_pos=441;
```
master_host is the address of the Docker host; it must not be 127.0.0.1.
master_user is the user created on the master.
master_log_file and master_log_pos are the File and Position values from show master status; on the master.
Then start replication on 199 and 198:
```sql
start slave;
```
and check the status:
```sql
show slave status;
```
A Slave_IO_State of "Waiting for master to send event" means replication is working; "Connecting to master" usually means the slave cannot connect to the master.
From this point on, every change on the master is replicated to the slaves.
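A quick way to check both replication threads at once is the \G form of the same command; these are the fields the resync section later relies on:

```sql
show slave status\G
-- A healthy slave reports:
--   Slave_IO_State: Waiting for master to send event
--   Slave_IO_Running: Yes
--   Slave_SQL_Running: Yes
```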
Next, install Mysql Master2, Mysql Slave3 and Mysql Slave4, again MySQL 5.7.24 under Docker. I planned three machines:
192.168.0.170 (Master2)
192.168.0.169 (Slave3)
192.168.0.168 (Slave4)
The installation is identical to the above, so I only list the MySQL configuration files for 170, 169 and 168.
Configuration file on 170 (vi /usr/local/mysql/conf/my.cnf):
```ini
[mysql]
default-character-set=utf8
[mysqld]
interactive_timeout = 120
wait_timeout = 120
max_allowed_packet = 32M
log-bin=mysql-bin
server-id=170
character-set-server=utf8
log-slave-updates
auto-increment-increment = 2
auto-increment-offset = 2
default-time_zone = '+8:00'
```
Configuration file on 169 (vi /usr/local/mysql/conf/my.cnf):
```ini
[mysql]
default-character-set=utf8
[mysqld]
interactive_timeout = 120
wait_timeout = 120
max_allowed_packet = 32M
log-bin=mysql-bin
server-id=169
character-set-server=utf8
default-time_zone = '+8:00'
```
Configuration file on 168 (vi /usr/local/mysql/conf/my.cnf):
```ini
[mysql]
default-character-set=utf8
[mysqld]
interactive_timeout = 120
wait_timeout = 120
max_allowed_packet = 32M
log-bin=mysql-bin
server-id=168
character-set-server=utf8
default-time_zone = '+8:00'
```
After configuring master-slave replication for 170/169/168 the same way as for 200/199/198, set up master-master replication between 200 and 170:
First run SHOW MASTER STATUS on 170 and note the File and Position (the original shows this as a screenshot).
Then execute the corresponding CHANGE MASTER statement on the 200 instance, as sketched below.
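The statement itself appears only as a screenshot in the original; it follows the same pattern as the slave setup earlier. A sketch, with the binlog file and position as placeholders to be read from 170's SHOW MASTER STATUS:

```sql
-- Run on 200. Replace the file/position with the values reported by
-- SHOW MASTER STATUS on 170.
change master to master_host='192.168.0.170',master_user='backup',master_password='123456',master_log_file='mysql-bin.000011',master_log_pos=1382;
```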
Then run start slave; on 200 and check the state with show slave status;.
Next query SHOW MASTER STATUS on 200 in the same way, then execute the corresponding statement on the 170 instance, as sketched below.
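Again the original shows this only as a screenshot; a sketch with placeholder coordinates:

```sql
-- Run on 170. Replace the file/position with the values reported by
-- SHOW MASTER STATUS on 200.
change master to master_host='192.168.0.200',master_user='backup',master_password='123456',master_log_file='mysql-bin.000003',master_log_pos=441;
```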
Then run start slave; on 170 as well and check show slave status;. At this point 170 and 200 are replicating master-master. In my case, though, 200/199/198 were installed more than two months after 170/169/168, so 170 already held tens of thousands of rows that had to be copied to 200 manually with mysqldump and source. Whenever you find large amounts of inconsistent data, stop master-master replication on both 170 and 200 with stop slave; and follow the steps below.
The concrete steps to resolve a large 170/200 inconsistency:
First make sure stop slave; has been executed on both 170 and 200, halting master-master replication.
1. Log in to the source master, 170, and lock the tables to block writes. (You can also run flush tables with read lock; from SQLyog or a similar client; when I connected from Windows cmd to the Docker-hosted MySQL 5.7.24, the session exited by itself after about a minute.)
```
C:\Users\1111>mysql -uroot -p -h192.168.0.170
Enter password: ****
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 129
Server version: 5.7.24-log MySQL Community Server (GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
```
Lock the data read-only (you must use the target database first, otherwise the read-only lock did not take effect for me):
```sql
mysql> use novadb2;
mysql> flush tables with read lock;
```
Note: this locks the data into a read-only state; the statement is not case-sensitive.
2. Back up the data.
Dump 170's data into the file novadb2_20190212.sql:
```bash
mysqldump -uroot -p -h192.168.0.170 novadb2 -e --max_allowed_packet=1048576 --net_buffer_length=16384 >C:\nova_work_document\novaold_mysqldb_backup\novadb2_20190212.sql
```
A note here: back up your databases on a regular schedule; a shell or Python script makes this easy and keeps the data safe.
3. Stop the slave state on 200:
```sql
mysql> stop slave;
```
4. Then, on 200, import the backup with the mysql client:
```sql
mysql> use novadb2
mysql> source C:\nova_work_document\novaold_mysqldb_backup\novadb2_20190212.sql
```
5. Check the master status of 170 (the original shows it as a screenshot).

6. Configure 200 to replicate from 170. The synchronization point here is the File and Position pair reported by show master status on the 170 master:
```sql
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
change master to master_host='192.168.0.170',master_user='backup',master_password='123456',master_log_file='mysql-bin.000011',master_log_pos=1382;
```
7. Restart replication on 200:
```sql
mysql> start slave;
```
8. Check the replication status on 200:
```sql
mysql> show slave status\G
```
and confirm:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
In the same way, follow the steps above on 170 to configure replication from 200 and run start slave;. The two sides are now synchronized again.
Finally, release the read-only lock on the 170 master:
```sql
mysql> use novadb2;
mysql> unlock tables;
```
Installing mycat
```bash
cd /root
wget http://dl.mycat.io/1.6.6.1/Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz
tar -zxvf Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz
```
If the JDK is not yet installed, extract it with tar -zxvf jdk-8u131-linux-x64.gz,
then append the following to the end of /etc/profile:
```bash
JAVA_HOME=/root/jdk1.8.0_131
JRE_HOME=/root/jdk1.8.0_131/jre
MYCAT_HOME=/root/mycat
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$MYCAT_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME MYCAT_HOME PATH CLASSPATH
```
Then run source /etc/profile to make the configuration take effect.
Edit server.xml.
Enter /root/mycat/conf:
```bash
cp server.xml server2019_bak.xml
vim server.xml
```
Then modify the relevant entries (the original shows them as screenshots).


Next edit schema.xml:
```bash
cp schema.xml schema2019_bak.xml
vim schema.xml
```
and change it to the following content:
```xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="NOVADB" checkSQLschema="false" sqlMaxLimit="100">
<!-- auto sharding by id (long) -->
<table name="e_instance_step_status_" primaryKey="step_id" subTables="e_instance_step_status_$1-20" dataNode="dn1" rule="sharding-by-murmur" />
<table name="e_config_match" dataNode="dn1" />
<table name="e_instance" dataNode="dn1" />
<table name="e_task" dataNode="dn1" />
<table name="e_task_plan" dataNode="dn1" />
<table name="m_category" dataNode="dn1" />
<table name="m_component_platform" dataNode="dn1" />
<table name="m_configitem" dataNode="dn1" />
<table name="m_dish" dataNode="dn1" />
<table name="m_dish_detail" dataNode="dn1" />
<table name="m_instanceset" dataNode="dn1" />
<table name="m_instanceset_detail" dataNode="dn1" />
<table name="m_instanceset_detail_config" dataNode="dn1" />
<table name="m_instanceset_row" dataNode="dn1" />
<table name="m_odm_company" dataNode="dn1" />
<table name="m_odminfo" dataNode="dn1" />
<table name="m_permission" dataNode="dn1" />
<table name="m_platform_hwphase" dataNode="dn1" />
<table name="m_platform_n" dataNode="dn1" />
<table name="m_platform_sku" dataNode="dn1" />
<table name="m_platform_sku_detail" dataNode="dn1" />
<table name="m_role" dataNode="dn1" />
<table name="m_role_permission" dataNode="dn1" />
<table name="m_user" dataNode="dn1" />
<table name="m_user_role" dataNode="dn1" />
<table name="t_attachments" dataNode="dn1" />
<table name="t_case" dataNode="dn1" />
<table name="t_case_section" dataNode="dn1" />
<table name="t_caseorsect_plan" dataNode="dn1" />
<table name="t_config_match" dataNode="dn1" />
<table name="t_instance" dataNode="dn1" />
<table name="t_instance_struct_" primaryKey="struct_id" subTables="t_instance_struct_$1-20" dataNode="dn1" rule="sharding-by-murmur" />
<table name="t_plan" dataNode="dn1" />
<table name="t_section" dataNode="dn1" />
<table name="t_step" dataNode="dn1" />
<table name="t_tag" dataNode="dn1" />
<table name="t_tag_category" dataNode="dn1" />
<table name="t_tag_obj" dataNode="dn1" />
<table name="t_task" dataNode="dn1" />
<table name="t_task_instance_row" dataNode="dn1" />
<table name="t_task_plan" dataNode="dn1" />
<table name="t_tasksendstatus" dataNode="dn1" />
<!-- global table is auto cloned to all defined data nodes ,so can join
with any table whose sharding node is in the same data node -->
<!--<table name="company" primaryKey="ID" type="global" dataNode="dn1,dn2,dn3" />
<table name="goods" primaryKey="ID" type="global" dataNode="dn1,dn2" />-->
<!-- random sharding using mod sharind rule -->
<!--<table name="hotnews" primaryKey="ID" autoIncrement="true" dataNode="dn1,dn2,dn3"
rule="mod-long" />-->
<!-- <table name="dual" primaryKey="ID" dataNode="dnx,dnoracle2" type="global"
needAddLimit="false"/> <table name="worker" primaryKey="ID" dataNode="jdbc_dn1,jdbc_dn2,jdbc_dn3"
rule="mod-long" /> -->
<!--<table name="employee" primaryKey="ID" dataNode="dn1,dn2"
rule="sharding-by-intfile" />
<table name="customer" primaryKey="ID" dataNode="dn1,dn2"
rule="sharding-by-intfile">
<childTable name="orders" primaryKey="ID" joinKey="customer_id"
parentKey="id">
<childTable name="order_items" joinKey="order_id"
parentKey="id" />
</childTable>
<childTable name="customer_addr" primaryKey="ID" joinKey="customer_id"
parentKey="id" />
</table>-->
<!-- <table name="oc_call" primaryKey="ID" dataNode="dn1$0-743" rule="latest-month-calldate"
/> -->
</schema>
<!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743"
/> -->
<dataNode name="dn1" dataHost="localhost1" database="novadb2" />
<!--<dataNode name="dn2" dataHost="localhost1" database="db2" />
<dataNode name="dn3" dataHost="localhost1" database="db3" />-->
<!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" />
<dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
<dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
<dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> -->
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- can have multi write hosts -->
<writeHost host="Master1" url="192.168.0.200:3306" user="root"
password="root">
<!-- can have multi read hosts -->
<readHost host="Slave1" url="192.168.0.199:3306" user="root" password="root" />
<readHost host="Slave2" url="192.168.0.198:3306" user="root" password="root" />
</writeHost>
<writeHost host="Master2" url="192.168.0.170:3306" user="root"
password="root">
<!-- can have multi read hosts -->
<readHost host="Slave3" url="192.168.0.169:3306" user="root" password="root" />
<readHost host="Slave4" url="192.168.0.168:3306" user="root" password="root" />
</writeHost>
<!--<writeHost host="hostS1" url="localhost:3316" user="root"
password="123456" />-->
<!-- <writeHost host="hostM2" url="localhost:3316" user="root" password="123456"/> -->
</dataHost>
<!--
<dataHost name="sequoiadb1" maxCon="1000" minCon="1" balance="0" dbType="sequoiadb" dbDriver="jdbc">
<heartbeat> </heartbeat>
<writeHost host="hostM1" url="sequoiadb://1426587161.dbaas.sequoialab.net:11920/SAMPLE" user="jifeng" password="jifeng"></writeHost>
</dataHost>
<dataHost name="oracle1" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="oracle" dbDriver="jdbc"> <heartbeat>select 1 from dual</heartbeat>
<connectionInitSql>alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss'</connectionInitSql>
<writeHost host="hostM1" url="jdbc:oracle:thin:@127.0.0.1:1521:nange" user="base" password="123456" > </writeHost> </dataHost>
<dataHost name="jdbchost" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="mongodb" dbDriver="jdbc">
<heartbeat>select user()</heartbeat>
<writeHost host="hostM" url="mongodb://192.168.0.99/test" user="admin" password="123456" ></writeHost> </dataHost>
<dataHost name="sparksql" maxCon="1000" minCon="1" balance="0" dbType="spark" dbDriver="jdbc">
<heartbeat> </heartbeat>
<writeHost host="hostM1" url="jdbc:hive2://feng01:10000" user="jifeng" password="jifeng"></writeHost> </dataHost> -->
<!-- <dataHost name="jdbchost" maxCon="1000" minCon="10" balance="0" dbType="mysql"
dbDriver="jdbc"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1"
url="jdbc:mysql://localhost:3306" user="root" password="123456"> </writeHost>
</dataHost> -->
</mycat:schema>
```
The key parts are the dataNode and dataHost definitions (the original highlights them in screenshots):


balance="1": 全部的readHost與stand by writeHost參與select語句的負載均衡。writeType="0": 所有寫操作發送到配置的第一個writeHost,第一個掛了切到還生存的第二個 writeHost,重新啟動后以切換后的為准,切換記錄在配置文件中:dnindex.properties 。switchType="1": 1 默認值,自動切換。
Next, edit rule.xml to adjust the consistent-hash algorithm configuration:
```bash
vim rule.xml
```
Two entries need to be changed to match your sharded tables (the original shows them as screenshots): one sets the column the tables are sharded on, the other sets the total number of shard tables.

Start the mycat service and confirm that the service port (8066) and management port (9066) are listening:
```bash
mycat start
```
```
[root@localhost conf]# ss -lntup |egrep '(8066|9066)'
tcp LISTEN 0 100 :::9066 :::* users:(("java",pid=12589,fd=94))
tcp LISTEN 0 100 :::8066 :::* users:(("java",pid=12589,fd=98))
```
Verify that the mycat service is working.
Step 1: connect to mycat with the mysql client:
```
C:\Windows\system32>mysql -uroot -p123456 -h192.168.0.199 -P8066
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 26
Server version: 5.6.29-mycat-1.6.6.1-release-20181031195535 MyCat Server (OpenCloudDB)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+----------+
| DATABASE |
+----------+
| NOVADB |
+----------+
1 row in set (0.00 sec)
mysql> use NOVADB;
Database changed
mysql> select * from t_tag;
+----+-------------+-------------+---------------+---------------------+
| id | tag_name | category_id | category_name | createtime |
+----+-------------+-------------+---------------+---------------------+
| 1 | aaa | 1 | bb | 2019-01-17 16:38:30 |
| 2 | bbb | 1 | bb | 2019-01-17 16:38:48 |
| 3 | adasdqw | 1 | bb | 2019-01-18 11:23:22 |
| 5 | ww | 27 | qq | 2019-01-21 18:14:24 |
| 6 | dasdsad | 28 | dsadsa | 2019-01-23 13:57:47 |
| 7 | gfdgf | 29 | gffd | 2019-01-23 14:01:51 |
| 8 | N | 30 | automation | 2019-01-23 15:15:45 |
| 9 | ccccccc | 31 | wang | 2019-01-23 16:04:56 |
| 11 | ww | 35 | BB | 2019-01-23 17:07:30 |
| 12 | dasdsadsa | 36 | dasdasd | 2019-01-24 18:43:16 |
| 13 | 22222222 | 37 | 1111111111 | 2019-01-24 18:43:16 |
| 14 | 44444444444 | 38 | 3333333333 | 2019-01-24 18:43:16 |
+----+-------------+-------------+---------------+---------------------+
12 rows in set (0.00 sec)
```
Step 2: you can also connect to port 9066, mycat's monitoring and management port, to inspect mycat's read/write distribution:
```
C:\Windows\system32>mysql -uroot -p123456 -h192.168.0.199 -P9066
mysql> show @@datasource;
+----------+---------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
| DATANODE | NAME | TYPE | HOST | PORT | W/R | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD |
+----------+---------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
| dn1 | Master1 | mysql | 192.168.0.200 | 3306 | W | 0 | 10 | 1000 | 106 | 0 | 19 |
| dn1 | Master2 | mysql | 192.168.0.170 | 3306 | W | 0 | 1 | 1000 | 93 | 15 | 0 |
| dn1 | Slave1 | mysql | 192.168.0.199 | 3306 | R | 0 | 6 | 1000 | 100 | 16 | 0 |
| dn1 | Slave2 | mysql | 192.168.0.198 | 3306 | R | 0 | 7 | 1000 | 102 | 18 | 0 |
| dn1 | Slave3 | mysql | 192.168.0.169 | 3306 | R | 0 | 7 | 1000 | 103 | 19 | 0 |
| dn1 | Slave4 | mysql | 192.168.0.168 | 3306 | R | 0 | 7 | 1000 | 100 | 16 | 0 |
+----------+---------+-------+---------------+------+------+--------+------+------+---------+-----------+------------+
6 rows in set (0.00 sec)
```
Troubleshooting notes
On my first attempt, log-slave-updates was not configured on the MySQL Master1 and Master2 side, so Slave1 and Slave2 were missing part of the data present on Master1 (the rows Master1 had replicated from Master2).
Explanation:
With log-bin enabled, data written to Master1 directly is recorded in its binlog. But the data Master1 applies through its SQL thread, after its IO thread reads Master2's binary log in master-master replication, is not recorded in Master1's binlog. In other words, what Master1 replicates from its co-master Master2 never enters Master1's own binlog, so when Master1 also acts as the master of other slaves, it must have the log-slave-updates parameter in its configuration file. The same applies to both Master1 and Master2.
Fix: add the parameter under [mysqld] on both masters and restart:
```ini
[mysqld]
log-slave-updates
```
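After restarting the masters, a quick check that the parameter took effect (log_slave_updates is the runtime name of the option):

```sql
show variables like 'log_slave_updates';
-- Expect Value = ON on both Master1 and Master2.
```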
