QingCloud MySQL Plus (Xenon) High-Availability Setup Lab


Lab: Xenon on MySQL 5.7.30

Xenon (MySQL Plus) is an open-source project from QingCloud, billed as a financial-grade, strongly consistent high-availability solution. Project home: https://github.com/radondb/xenon

 

These lab notes were written in Microsoft OneNote; some formatting was lost in the conversion to the web.

 

Environment

 

IP             | port | role  | hostname
---------------+------+-------+---------
192.168.188.51 | 3306 | node1 | ms51
192.168.188.52 | 3306 | node2 | ms52
192.168.188.53 | 3306 | node3 | ms53
192.168.188.54 | 3306 | node4 | ms54
192.168.188.50 | 3306 | s-ip  | --


CentOS Linux release 7.6.1810

mysql-5.7.30-linux-glibc2.12-x86_64

xtrabackup version 2.4.20

 

Due to a limitation of Xenon's own architecture, high availability cannot yet be built with multiple instances on a single node. Docker is an option here; Xenon was designed around a Docker-based architecture in the first place.

 

Operating system configuration

Create the mysql user. If one was configured earlier, the account may need adjusting.

Allow the mysql user to log in:

[root@ms51 data]# sed -i '/^mysql:/ s#/sbin/nologin#/bin/bash#' /etc/passwd  &&  grep mysql /etc/passwd

mysql:x:2000:2000::/usr/local/mysql:/bin/bash
 

[root@ms51 data]#  echo mysql | passwd --stdin mysql

  

 

Also create the home directory:

[root@ms51 data]# usermod -d /home/mysql mysql && mkdir /home/mysql  && cp -f /root/.bash* /home/mysql/ && chown -R mysql:mysql /home/mysql && chmod -R 700 /home/mysql

  

 

Configure and verify passwordless SSH trust for the mysql user

[mysql@ms51 ~]$ ssh-keygen

[mysql@ms51 ~]$ cat .ssh/id_rsa.pub  >> .ssh/authorized_keys


[mysql@ms52 ~]$ scp .ssh/id_rsa.pub  192.168.188.201:/tmp/2

[mysql@ms53 ~]$ scp .ssh/id_rsa.pub  192.168.188.201:/tmp/3

 
[mysql@ms51 ~]$ cat /tmp/2  >> .ssh/authorized_keys

[mysql@ms51 ~]$ cat /tmp/3  >> .ssh/authorized_keys

[mysql@ms51 ~]$ chmod 600 .ssh/authorized_keys

 

[mysql@ms51 ~]$ scp .ssh/authorized_keys  192.168.188.202:~/.ssh/

[mysql@ms51 ~]$ scp .ssh/authorized_keys  192.168.188.203:~/.ssh/

  

 

Watch out for SELinux: if it is enabled, the SSH trust will not take effect.
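If SELinux needs to be taken out of the picture, the following sketch works on CentOS 7 (run as root); alternatively, keep SELinux enforcing and just restore the expected security labels on ~/.ssh:

```shell
# Check and relax SELinux (CentOS 7); skip silently if the tools are absent.
if command -v getenforce >/dev/null 2>&1; then
    getenforce                                        # Enforcing / Permissive / Disabled
    setenforce 0                                      # permissive for the current boot only
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
fi
# To keep SELinux on instead, restore the labels on the key files:
# restorecon -R -v ~/.ssh
```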

 

Configure sudoers

Add mysql to sudoers:

[root@ms51 ~]# visudo    # or edit /etc/sudoers directly with vi

mysql   ALL=(ALL)       NOPASSWD:/usr/sbin/ip

  

Verify:

[root@ms51 data]# su - mysql -c "sudo ip a"

  Any output means it works.

 

Software installation

xtrabackup

MySQL installation

 

MySQL-level configuration

This stage mainly confirms that replication and semi-synchronous replication work; the instances can be shut down once configuration is done. After Xenon is actually enabled, the master/slave relationship is decided by Xenon's election, so the roles configured here matter little.

Remember to set a distinct server-id on each instance.

Configure three instances.
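For reference, the replication-relevant part of each instance's my.cnf might look like the sketch below (paths and values are this lab's assumptions; server-id must be unique per node):

```ini
[mysqld]
server-id                = 51         # unique per node, e.g. 52 and 53 on the others
log-bin                  = mysql-bin
binlog_format            = ROW
gtid_mode                = ON
enforce_gtid_consistency = ON
log_slave_updates        = ON         # slaves record replicated events in their own binlog
relay_log                = relay-bin
# Note: the semi-sync enabled flags are intentionally NOT set here;
# xenon.json's master-sysvars/slave-sysvars toggle them per role.
```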

 

Start the three database instances and complete basic configuration

shell# mysqld --defaults-file=/data/mysql/mysql3306/my3306.cnf &


mysql> set global super_read_only=0;

mysql> alter user user() identified by 'mysql';

  

Configure GTID + semi-sync and build the replication topology

On node1, node2, and node3, check the following parameters so that semi-sync can be enabled:

  • gtid_mode
  • enforce_gtid_consistency
  • binlog_format=row
mysql> show global variables like '%GTID%';
+----------------------------------+----------------------------------------+
| Variable_name                    | Value                                  |
+----------------------------------+----------------------------------------+
| binlog_gtid_simple_recovery      | ON                                     |
| enforce_gtid_consistency         | ON                                     |
| gtid_executed                    | 5ea86dca-8b58-11ea-86d8-0242c0a8bc33:1 |
| gtid_executed_compression_period | 1000                                   |
| gtid_mode                        | ON                                     |
| gtid_owned                       |                                        |
| gtid_purged                      |                                        |
| session_track_gtids              | OFF                                    |
+----------------------------------+----------------------------------------+
8 rows in set (0.00 sec)

mysql> show global variables like '%binlog_format%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+
1 row in set (0.01 sec)

  

Create the replication user

Create it on node1, node2, and node3:

mysql> create user 'rep'@'192.168.188.%' identified by 'rep';
Query OK, 0 rows affected (0.17 sec)

mysql> grant replication slave on *.* to 'rep'@'192.168.188.%';
Query OK, 0 rows affected (0.14 sec)

  

Set up the replication topology

Reset the master status on node1, node2, and node3:

mysql> reset master;

  

Configure replication on node2 and node3:

mysql> change master to master_host='192.168.188.51',master_port=3306,master_user='rep',master_password='rep',master_auto_position=1;

  

 

Configure semi-synchronous replication

Load the semi-sync plugins on all three nodes (node1, node2, node3):

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.01 sec)

 

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.00 sec)

 

mysql> show plugins;
+----------------------------+----------+--------------------+--------------------+---------+
| Name                       | Status   | Type               | Library            | License |
+----------------------------+----------+--------------------+--------------------+---------+
| rpl_semi_sync_slave        | ACTIVE   | REPLICATION        | semisync_slave.so  | GPL     |
| rpl_semi_sync_master       | ACTIVE   | REPLICATION        | semisync_master.so | GPL     |
+----------------------------+----------+--------------------+--------------------+---------+

 

  

 

Enable semi-sync on node1 (the temporary master):

mysql> set global rpl_semi_sync_master_enabled=1;
Query OK, 0 rows affected (0.00 sec)

 

mysql> set global rpl_semi_sync_master_timeout=10000000;
Query OK, 0 rows affected (0.00 sec)

 

mysql> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_status                | ON    |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

  

 

Enable semi-sync on node2 and node3 (the temporary slaves):

mysql> set global rpl_semi_sync_slave_enabled=1;
Query OK, 0 rows affected (0.00 sec)

 

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

 

mysql> show global status like '%SEMI%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_slave_status                 | ON    |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

  

 

Look at node1: two semi-sync slaves are now registered.

mysql> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 2     |
| Rpl_semi_sync_master_status                | ON    |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

  

 

Semi-sync replication setup is complete.

 

Create the database user root@127.0.0.1

Xenon will use this account; it ties in with settings in the configuration file later.

Since replication is already configured, just run these on the master and let them replicate to every node.

 

root@localhost [(none)]>create user root@127.0.0.1 identified by 'mysql';
Query OK, 0 rows affected (0.04 sec)

 

root@localhost [(none)]>GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1'  WITH GRANT OPTION;
Query OK, 0 rows affected (0.04 sec)

 

root@localhost [(none)]>grant super on *.* TO 'root'@'127.0.0.1';
Query OK, 0 rows affected (0.04 sec)

  

 

 

MySQL-level configuration is complete. All the mysqld instances can now be shut down; on to Xenon.

mysql> shutdown;

  

 

 

 

 

Configure Xenon

Install dependencies

Xenon requires sshpass and Go.

Install the dependencies on all nodes.

[root@ms51 data]# curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo

[root@ms51 data]# yum install go sshpass -y

  

 

Make sure go is on the PATH:

[root@ms51 data]# go version
go version go1.14.2 linux/amd64

  

 

Fetch and deploy Xenon

You can pull the project with git, or obtain it another way (e.g. download the zip from GitHub) and upload it to every server.

For the git route, install git first:

[root@ms51 data]# yum install git -y

  

git clone

[root@ms51 data]# pwd
/data/
[root@ms51 data]# git clone https://github.com/radondb/xenon

After cloning, you can scp the tree straight to the other nodes.

 

Via zip:

After downloading the zip from GitHub, deploy it to every node:

[root@ms51 data]# unzip /ofiles/xenon-master.zip -d /data/

  

Build (all nodes)

[root@ms51 data]# pwd
/data

[root@ms51 data]# ls
mysql  xenon-master

[root@ms51 data]# cd xenon-master/
 

[root@ms51 xenon-master]# ls
conf  docs  LICENSE  makefile  README.md  src
 
[root@ms51 xenon-master]# make
..

[root@ms51 xenon-master]# ls
bin  conf  docs  LICENSE  makefile  README.md  src

[root@ms51 xenon-master]# ls bin/
xenon  xenoncli

  

 

After the build, plan the directory layout if you like (this step is optional).

You may copy or move the bin/ and conf/ directories, but do not delete or change the git project directory or the src/ directory.

 

Create a config.path file to specify the default configuration file location (all nodes)

Subsequent xenoncli commands depend on the config.path file, which records the path of the configuration file.

 

[root@ms51 xenon-master]# cp conf/xenon-sample.conf.json xenon.json

[root@ms52 xenon-master]# cp conf/xenon-sample.conf.json xenon.json

[root@ms53 xenon-master]# cp conf/xenon-sample.conf.json xenon.json

[root@ms51 xenon-master]# echo "/etc/xenon/xenon.json" > /data/xenon-master/bin/config.path && mkdir /etc/xenon && ln -sf /data/xenon-master/xenon.json  /etc/xenon/xenon.json

[root@ms52 xenon-master]# echo "/etc/xenon/xenon.json" > /data/xenon-master/bin/config.path && mkdir /etc/xenon && ln -sf /data/xenon-master/xenon.json  /etc/xenon/xenon.json

[root@ms53 xenon-master]# echo "/etc/xenon/xenon.json" > /data/xenon-master/bin/config.path && mkdir /etc/xenon && ln -sf /data/xenon-master/xenon.json  /etc/xenon/xenon.json

 

  

Give ownership of the entire xenon directory to mysql (all nodes):

[root@ms51 xenon-master]#  chown -R mysql:mysql /data/xenon-master/

[root@ms52 xenon-master]#  chown -R mysql:mysql /data/xenon-master/

[root@ms53 xenon-master]#  chown -R mysql:mysql /data/xenon-master/

  

 

Configure xenon.json (all nodes)

Notes:

server section: this host's IP.

raft section: the s-ip and the name of the NIC the s-ip will be bound to.

mysql section: this host's MySQL settings. The host and admin fields together are exactly the root@127.0.0.1 user created earlier. The two sysvars entries can carry whatever configuration actions each role needs. Since keeping the semi-sync enabled flags in my.cnf is less flexible than setting them in these actions, the semi-sync settings are placed in the actions as well.

replication section: the replication user and password.

backup section: this host's IP. Mind backupdir: it points at the MySQL instance's datadir, for xtrabackup to use. Also note that during a rebuildme, Xenon wipes this directory to rebuild the instance.

  

 

		[root@ms51 xenon]# cat xenon.json
		{
			"server":
			{
				"endpoint":"192.168.188.51:8801"
			},
		
			"raft":
			{
				"meta-datadir":"raft.meta",
				"heartbeat-timeout":1000,
				"election-timeout":3000,
				"leader-start-command":"sudo /sbin/ip a a 192.168.188.50/16 dev eth0 && arping -c 3 -A  192.168.188.50  -I eth0",
				"leader-stop-command":"sudo /sbin/ip a d 192.168.188.50/16 dev eth0 "
			},
		
			"mysql":
			{
				"admin":"root",
				"passwd":"mysql",
				"host":"127.0.0.1",
				"port":3306,
				"basedir":"/usr/local/mysql",
				"defaults-file":"/data/mysql/mysql3306/my3306.cnf",
				"ping-timeout":1000,
				"master-sysvars":"super_read_only=0;read_only=0;sync_binlog=default;innodb_flush_log_at_trx_commit=default;rpl_semi_sync_slave_enabled=0;rpl_semi_sync_master_enabled=1",
				"slave-sysvars": "super_read_only=1;read_only=1;sync_binlog=1000;innodb_flush_log_at_trx_commit=2;rpl_semi_sync_slave_enabled=1;rpl_semi_sync_master_enabled=0"
		
			},
		
			"replication":
			{
				"user":"rep",
				"passwd":"rep"
			},
		
			"backup":
			{
				"ssh-host":"192.168.188.51",
				"ssh-user":"mysql",
				"ssh-passwd":"mysql",
				"basedir":"/usr/local/mysql",
				"backupdir":"/data/mysql/mysql3306/data",
				"xtrabackup-bindir":"/usr/bin"
			},
		
			"rpc":
			{
				"request-timeout":500
			},
		
			"log":
			{
				"level":"INFO"
			}
		}
		
		For the slaves, just swap in each node's own host IP.
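Since node2's and node3's configs differ from node1's only in the local host IP, they can be generated rather than hand-edited; a sketch (the stand-in file and output name are illustrative, the real file lives at /data/xenon-master/xenon.json):

```shell
# tiny stand-in for node1's config, just for demonstration
printf '{"server":{"endpoint":"192.168.188.51:8801"}}\n' > xenon-node1-sample.json

# node2's copy: swap only the local endpoint IP; the VIP 192.168.188.50
# in the raft commands is deliberately not matched by this pattern
sed 's/192\.168\.188\.51:8801/192.168.188.52:8801/g' xenon-node1-sample.json > xenon-node2-sample.json
grep '192.168.188.52:8801' xenon-node2-sample.json
```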

  

 

 

 

Start the cluster

mysqld can be shut down at this point.

To be safe, test on every node first: switch to the mysql user and check that the json file is readable and that the leader start/stop actions can actually run.

[mysql@ms51 ~]$ cat /etc/xenon/xenon.json
..
..
[mysql@ms51 ~]$ sudo /sbin/ip a a 192.168.188.50/16 dev eth0 && arping -c 3 -A 192.168.188.50 -I eth0
	ARPING 192.168.188.50 from 192.168.188.50 eth0
	Sent 3 probes (3 broadcast(s))
Received 0 response(s)

[mysql@ms51 ~]$ ip a
	..
	36: eth0@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
	link/ether 02:42:c0:a8:bc:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
	inet 192.168.188.51/24 brd 192.168.188.255 scope global eth0
	valid_lft forever preferred_lft forever
	inet 192.168.188.50/16 scope global eth0
	valid_lft forever preferred_lft forever

[mysql@ms51 ~]$ sudo /sbin/ip a d 192.168.188.50/16 dev eth0
[mysql@ms51 ~]$ ip a
	..
	2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
	link/sit 0.0.0.0 brd 0.0.0.0
	36: eth0@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
	link/ether 02:42:c0:a8:bc:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
	inet 192.168.188.51/24 brd 192.168.188.255 scope global eth0
	valid_lft forever preferred_lft forever

  

 

Tests pass.

Time to start the cluster.

 

Start Xenon

MySQL is not running at this point.

 

[node1] Start Xenon

[mysql@ms51 ~]$ cd /data/xenon-master/

 

[mysql@ms51 xenon-master]$ ls

bin  conf  docs  LICENSE  makefile  README.md  src  xenon.json

 

[mysql@ms51 xenon-master]$ bin/xenon -c /etc/xenon/xenon.json  > ./xenon.log 2>&1 &

 

[mysql@ms51 xenon-master]$ less xenon.log

  

 

 

 

Checking the xenon log, you can see that Xenon went and started the MySQL instance itself via mysqld_safe, set it to read-only mode, and checked the slave configuration.

 

 

The ps output shows the local mysqld and mysqld_safe processes; from the PIDs you can see that mysqld was spawned by mysqld_safe.

[mysql@ms51 xenon-master]$ ps -ef|grep mysql
root       607   594  0 11:48 pts/0    00:00:00 su - mysql
mysql      608   607  0 11:48 pts/0    00:00:00 -bash
mysql      637   608  0 11:51 pts/0    00:00:01 bin/xenon -c /etc/xenon/xenon.json
mysql      648     1  0 11:51 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mysql/mysql3306/my3306.cnf
mysql     1766   648  0 11:51 pts/0    00:00:00 /usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql3306/my3306.cnf --basedir=/usr/local/mysql/ --datadir=/data/mysql/mysql3306/data --plugin-dir=/usr/local/mysql//lib/plugin --log-error=/data/mysql/mysql3306/logs/error.log --open-files-limit=65536 --pid-file=ms51.pid --socket=/data/mysql/mysql3306/tmp/mysql.sock --port=3306	

  

The Xenon process:

[mysql@ms51 xenon-master]$ ps -ef|grep xenon
mysql      637   608  0 11:51 pts/0    00:00:01 bin/xenon -c /etc/xenon/xenon.json

 

 

The MySQL instance can now be logged into locally on node1:

[mysql@ms51 xenon-master]$ mysql -S /data/mysql/mysql3306/tmp/mysql.sock -pmysql

   

After the cluster starts, new objects appear in the Xenon directory structure:

[mysql@ms51 xenon-master]$ ls
bin  conf  docs  LICENSE  makefile  raft.meta  README.md  src  xenon.json  xenon.log
[mysql@ms51 xenon-master]$ ls raft.meta/
[mysql@ms51 xenon-master]$

  

 

Once members have been added, this directory will hold a json file, peers.json, which records the membership information.

 

Check the Xenon cluster status on node1

 

[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |       Mysql        | IO/SQL_RUNNING | MyLeader |
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.51:8801 | [ViewID:0 EpochID:0]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                               |         |         | LastError:               |                    |                |          |
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
(1 rows)

  

# If the Mysql column is empty, check the Xenon log; most likely the connection user is broken and Xenon could not connect. This is exactly why root@127.0.0.1 was created earlier.

 

 

 

Add the other nodes from node1

The member list is just the endpoints configured in xenon.json; add node2 first.

[mysql@ms51 xenon-master]$ bin/xenoncli cluster add 192.168.188.51:8801,192.168.188.52:8801
 2020/05/01 12:03:23.654459       [WARNING]     cluster.prepare.to.add.nodes[192.168.188.51:8801,192.168.188.52:8801].to.leader[]
 2020/05/01 12:03:23.654522       [WARNING]     cluster.canot.found.leader.forward.to[192.168.188.51:8801]
 2020/05/01 12:03:23.655442       [WARNING]     cluster.add.nodes.to.leader[].done

  

 

 

Check the cluster status again:

[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |       Mysql        | IO/SQL_RUNNING | MyLeader |
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.51:8801 | [ViewID:0 EpochID:1]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                               |         |         | LastError:               |                    |                |          |
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.52:8801 | UNKNOW                        | UNKNOW  | UNKNOW  | UNKNOW                   | UNKNOW             | UNKNOW         | UNKNOW   |
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
(2 rows)

  

Start the Xenon node on node2

[mysql@ms52 ~]$  ps -ef|grep mysql
root       576    24  0 12:07 pts/0    00:00:00 su - mysql
mysql      577   576  0 12:07 pts/0    00:00:00 -bash
mysql      594   577  0 12:07 pts/0    00:00:00 ps -ef
mysql      595   577  0 12:07 pts/0    00:00:00 grep --color=auto mysql

[mysql@ms52 ~]$ cd /data/xenon-master/
[mysql@ms52 xenon-master]$  bin/xenon -c /etc/xenon/xenon.json  > ./xenon.log 2>&1 &

[mysql@ms52 xenon-master]$  ps -ef|grep mysql
root       576    24  0 12:07 pts/0    00:00:00 su - mysql
mysql      577   576  0 12:07 pts/0    00:00:00 -bash
mysql      596   577  0 12:07 pts/0    00:00:00 bin/xenon -c /etc/xenon/xenon.json
mysql      608     1  0 12:07 pts/0    00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/data/mysql/mysql3306/my3306.cnf
mysql     1726   608  0 12:07 pts/0    00:00:00 /usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql3306/my3306.cnf --basedir=/usr/local/mysql/ --datadir=/data/mysql/mysql3306/data --plugin-dir=/usr/local/mysql//lib/plugin --log-error=/data/mysql/mysql3306/logs/error.log --open-files-limit=65536 --pid-file=ms52.pid --socket=/data/mysql/mysql3306/tmp/mysql.sock --port=3306

  

 

Back on node1, the cluster status now shows node2's state.

But node2 has not yet added any members itself, so the cluster is not truly established. No election starts, so both nodes remain readonly and both MyLeader columns are empty.

[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+---------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
|         ID          |              Raft               | Mysqld  | Monitor |          Backup          |       Mysql        | IO/SQL_RUNNING | MyLeader |
+---------------------+---------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.51:8801 | [ViewID:34 EpochID:2]@CANDIDATE | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                                 |         |         | LastError:               |                    |                |          |
+---------------------+---------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.52:8801 | [ViewID:0 EpochID:0]@FOLLOWER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                                 |         |         | LastError:               |                    |                |          |
+---------------------+---------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
(2 rows)

  

 

On node2, the cluster status shows only itself:

[mysql@ms52 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |       Mysql        | IO/SQL_RUNNING | MyLeader |
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.52:8801 | [ViewID:0 EpochID:0]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                               |         |         | LastError:               |                    |                |          |
+---------------------+-------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
(1 rows)

  

 

 

On node2, add cluster member node1:

[mysql@ms52 xenon-master]$  bin/xenoncli cluster add 192.168.188.51:8801,192.168.188.52:8801
 2020/05/01 14:49:28.836254       [WARNING]     cluster.prepare.to.add.nodes[192.168.188.51:8801,192.168.188.52:8801].to.leader[]
 2020/05/01 14:49:28.836297       [WARNING]     cluster.canot.found.leader.forward.to[192.168.188.52:8801]
 2020/05/01 14:49:28.837039       [WARNING]     cluster.add.nodes.to.leader[].done

[mysql@ms52 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:4 EpochID:1]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:4 EpochID:1]@LEADER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
(2 rows)

  

 

 

Watching the cluster status on node1 during this process, you can see the two nodes' roles change as the election proceeds:

[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+--------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
|         ID          |              Raft              | Mysqld  | Monitor |          Backup          |       Mysql        | IO/SQL_RUNNING | MyLeader |
+---------------------+--------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.51:8801 | [ViewID:1 EpochID:1]@CANDIDATE | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                                |         |         | LastError:               |                    |                |          |
+---------------------+--------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.52:8801 | [ViewID:0 EpochID:0]@FOLLOWER  | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                                |         |         | LastError:               |                    |                |          |
+---------------------+--------------------------------+---------+---------+--------------------------+--------------------+----------------+----------+
(2 rows)
[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:4 EpochID:1]@LEADER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:4 EpochID:1]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
(2 rows)

  

 

Conclusion from this step:

When a Xenon cluster is created, at least two nodes must each have added the other as a member before an election will start.

Until then, everything stays read-only.
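Since the same member list has to be registered on every node before the election can complete, the adds are easy to script; a sketch using this lab's hosts and paths (printed as a dry run: remove the echo to execute):

```shell
# the full member list, identical on every node
NODES=192.168.188.51:8801,192.168.188.52:8801,192.168.188.53:8801

# issue the same cluster add on each host (dry run: remove echo to execute)
for h in ms51 ms52 ms53; do
    echo ssh "$h" "/data/xenon-master/bin/xenoncli cluster add $NODES"
done
```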

 

Add node3 as a member on node1 and node2, then start Xenon on node3

[mysql@ms51 xenon-master]$ bin/xenoncli cluster add 192.168.188.51:8801,192.168.188.52:8801,192.168.188.53:8801
 2020/05/01 15:00:52.517000       [WARNING]     cluster.prepare.to.add.nodes[192.168.188.51:8801,192.168.188.52:8801,192.168.188.53:8801].to.leader[192.168.188.51:8801]
 2020/05/01 15:00:52.518164       [WARNING]     cluster.add.nodes.to.leader[192.168.188.51:8801].done
[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:6 EpochID:2]@LEADER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:6 EpochID:2]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.53:8801 | UNKNOW                        | UNKNOW  | UNKNOW  | UNKNOW                   | UNKNOW              | UNKNOW         | UNKNOW              |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
(3 rows)

[mysql@ms53 xenon-master]$  bin/xenon -c /etc/xenon/xenon.json  > xenon.log  2>&1 &
[mysql@ms53 xenon-master]$  bin/xenoncli cluster status
+---------------------+-------------------------------+------------+---------+--------------------------+--------------------+----------------+----------+
|         ID          |             Raft              |   Mysqld   | Monitor |          Backup          |       Mysql        | IO/SQL_RUNNING | MyLeader |
+---------------------+-------------------------------+------------+---------+--------------------------+--------------------+----------------+----------+
| 192.168.188.53:8801 | [ViewID:0 EpochID:0]@FOLLOWER | NOTRUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY] | [true/true]    |          |
|                     |                               |            |         | LastError:               |                    |                |          |
+---------------------+-------------------------------+------------+---------+--------------------------+--------------------+----------------+----------+
(1 rows)

  

 

 

Check the cluster status via node1:

[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:6 EpochID:2]@LEADER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:6 EpochID:2]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.53:8801 | [ViewID:0 EpochID:0]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    |                     |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
(3 rows)

  

 

 

Add the cluster members on node3, then immediately check the cluster status via node1:

[mysql@ms53 xenon-master]$  bin/xenoncli cluster add 192.168.188.51:8801,192.168.188.52:8801,192.168.188.53:8801
 2020/05/01 15:04:39.640015       [WARNING]     cluster.prepare.to.add.nodes[192.168.188.51:8801,192.168.188.52:8801,192.168.188.53:8801].to.leader[]
 2020/05/01 15:04:39.640080       [WARNING]     cluster.canot.found.leader.forward.to[192.168.188.53:8801]
 2020/05/01 15:04:39.641207       [WARNING]     cluster.add.nodes.to.leader[].done

[mysql@ms51 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:6 EpochID:2]@LEADER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:6 EpochID:2]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.53:8801 | [ViewID:6 EpochID:2]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
(3 rows)


Notice that before node3 added the cluster members, node3's MyLeader column in the cluster status was empty; only after node3 added the members does its MyLeader column get a value.

Conclusion from this step: a node has actually joined the Xenon cluster's service only once its MyLeader is non-empty.
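This check can be scripted. The helper below is only a sketch that pulls the MyLeader column out of `xenoncli cluster status` output; the awk field positions assume the exact table layout shown above.

```shell
# myleader_of NODE: read `xenoncli cluster status` output on stdin and print
# the MyLeader column for the row whose ID matches NODE. Empty output means
# the node has not yet joined the cluster's service.
myleader_of() {
  awk -v id="$1" -F'|' '$2 ~ id { gsub(/ /, "", $9); print $9 }'
}
# usage: bin/xenoncli cluster status | myleader_of 192.168.188.53:8801
```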

 

 

After starting the cluster

Having been bitten by this before: always check the semi-sync status on every node.

Because the semi-sync timeout is set very large, database writes will hang if semi-sync failed to come up, which in production would be a serious incident.
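To make this check quick to repeat, a small helper can grep the semi-sync status. It only parses the output of `show global status like '%semi%'`; the mysql invocation in the usage comment assumes this lab's kk credentials.

```shell
# check_semi: read `show global status like '%semi%'` output on stdin and
# succeed only if a semi-sync master or slave status line reports ON.
check_semi() {
  grep -Eq 'Rpl_semi_sync_(master|slave)_status[[:space:]]+ON'
}
# usage on each node (credentials from this lab):
#   mysql -h 192.168.188.51 -ukk -pkk -e "show global status like '%semi%'" \
#     | check_semi && echo "semi-sync ON" || echo "WARNING: semi-sync not established"
```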

node1:

mysql> show master status;
+------------------+----------+--------------+------------------+------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+------------------+----------+--------------+------------------+------------------------------------------+
| mysql-bin.000005 |      194 |              |                  | 5ea86dca-8b58-11ea-86d8-0242c0a8bc33:1-5 |
+------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

mysql> show slave status\G
Empty set (0.00 sec)

mysql> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 2     |
| Rpl_semi_sync_master_status                | ON    |
+--------------------------------------------+-------+
15 rows in set (0.01 sec)


node2:

mysql> show master status;
+------------------+----------+--------------+------------------+------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+------------------+----------+--------------+------------------+------------------------------------------+
| mysql-bin.000006 |      194 |              |                  | 5ea86dca-8b58-11ea-86d8-0242c0a8bc33:1-5 |
+------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

mysql> show slave status \G
..
*************************** 1. row ***************************
1 row in set (0.00 sec)

mysql>
mysql> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_slave_status                 | ON    |
+--------------------------------------------+-------+
15 rows in set (0.01 sec)

   

node3:

mysql>  show master status;
+------------------+----------+--------------+------------------+------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+------------------+----------+--------------+------------------+------------------------------------------+
| mysql-bin.000005 |      194 |              |                  | 5ea86dca-8b58-11ea-86d8-0242c0a8bc33:1-5 |
+------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

mysql> show slave status \G
*************************** 1. row ***************************
..
1 row in set (0.00 sec)

mysql> show global status like '%semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_slave_status                 | ON    |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

  

 

A quick test

When the Xenon parameters were configured earlier, extra per-role sysvars were added for the master and slave roles so the semi-sync parameters can be enabled flexibly.

 

mysql> create database kk;
Query OK, 1 row affected (0.15 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kk                 |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

  

 

That completes the Xenon cluster setup.

 

 

Backing up and scaling out nodes through the Xenon cluster

Approach:

  1. Simulate a running workload: in a loop, access the database through the sip (vip) and keep inserting rows.
  2. Build a new environment: configure the OS, MySQL, and Xenon, but do not initialize a MySQL instance.
  3. Configure Xenon on the new environment.
  4. Try using Xenon's backup and rebuildme to build a new node and join it to the cluster.

 

Conclusions first:

  • Backup
    • When backing up via Xenon, the backup location must be an absolute path, and the mysql user must have write permission on it.
    • If the backup directory does not exist, Xenon creates it automatically over the ssh channel.
  • Rebuild / scale-out
    • A new node added to a Xenon cluster needs no initialized MySQL instance; it can be built directly with rebuildme on top of a xenon backup.
    • Xenon's rebuildme is based on xtrabackup, so when adding a new node via rebuildme, the mysql datadir and my.cnf must already exist and match the corresponding parameters in the mysql section of xenon.json.

 

First attempt at scaling out

Check the current cluster state

[mysql@ms52 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:3 EpochID:0]@LEADER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:3 EpochID:0]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.53:8801 | [ViewID:3 EpochID:0]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
(3 rows)

  

 

Create a user in the cluster for the client role to use

mysql>  create user kk@'192.168.188.%' identified by 'kk';
Query OK, 0 rows affected (0.02 sec)

mysql> grant super on *.* to  kk@'192.168.188.%';
Query OK, 0 rows affected (0.02 sec)

mysql> grant all privileges on *.* to kk@'192.168.188.%';
Query OK, 0 rows affected (0.01 sec)

  

 

Pick any node to act as the client role and access the Xenon cluster through the sip

Here the ms53 node is chosen as the client.

[mysql@ms53 xenon-master]$ mysql -h 192.168.188.50 -ukk -pkk
	
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kk                 |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> use kk
Database changed
mysql> show tables;
Empty set (0.00 sec)

mysql> create table k1(id int primary key auto_increment, numbers int);
Query OK, 0 rows affected (0.04 sec)

mysql> exit

  

Simulate the workload, continuously generating transactions

[mysql@ms53 xenon-master]$ while : ;do echo "insert into kk.k1(numbers) values(round(rand()*10086));" |mysql -h  192.168.188.50 -ukk -pkk ;sleep 10 ;done

  

 

In another session, check the table's state

mysql> select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
|     2779 |
+----------+
1 row in set (0.00 sec)

  Okay, let it keep inserting while we go build the new environment.

 

Create the new node

Configure the new node

  • Create the mysql user (steps omitted).
  • Set up the MySQL environment (steps omitted).
  • Configure ssh trust for the mysql user (steps omitted).
  • Install xtrabackup (steps omitted).
  • Deploy xenon (steps omitted).

 

Try starting Xenon

[mysql@ms54 xenon-master]$ bin/xenon -c /etc/xenon/xenon.json  > xenon.log 2>&1 &
[1] 18042
[mysql@ms54 xenon-master]$ bin/xenoncli cluster status
	cluster.go:227: unexpected error: get.client.error[dial tcp 192.168.188.54:8801: connect: connection refused]
	
	 2020/05/01 13:45:11.234158       [PANIC]        get.client.error[dial tcp 192.168.188.54:8801: connect: connection refused]
	panic:    [PANIC]        get.client.error[dial tcp 192.168.188.54:8801: connect: connection refused]
	
	goroutine 1 [running]:
	xbase/xlog.(*Log).Panic(0xc000184300, 0x8d111e, 0x2, 0xc000197b28, 0x1, 0x1)
	        /data/xenon-master/src/xbase/xlog/xlog.go:142 +0x153
	cli/cmd.ErrorOK(0x9796e0, 0xc000184880)
	        /data/xenon-master/src/cli/cmd/common.go:35 +0x245
	cli/cmd.clusterStatusCommandFn(0xc0001e4fc0, 0xcac370, 0x0, 0x0)
	        /data/xenon-master/src/cli/cmd/cluster.go:227 +0xaa
	vendor/github.com/spf13/cobra.(*Command).execute(0xc0001e4fc0, 0xcac370, 0x0, 0x0, 0xc0001e4fc0, 0xcac370)
	        /data/xenon-master/src/vendor/github.com/spf13/cobra/command.go:603 +0x22e
	vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc78820, 0x1, 0xc000197f78, 0x40744f)
	        /data/xenon-master/src/vendor/github.com/spf13/cobra/command.go:689 +0x2bc
	vendor/github.com/spf13/cobra.(*Command).Execute(...)
	        /data/xenon-master/src/vendor/github.com/spf13/cobra/command.go:648
	main.main()
	        /data/xenon-master/src/cli/cli.go:43 +0x31

  

 

It errors out; the log shows it cannot reach the local mysql.

 

Ignore the error and add the cluster members directly

[mysql@ms54 xenon-master]$  bin/xenoncli cluster add 192.168.188.51:8801,192.168.188.52:8801,192.168.188.53:8801,192.168.188.54:8801
 2020/05/01 13:46:11.930873       [WARNING]     cluster.prepare.to.add.nodes[192.168.188.51:8801,192.168.188.52:8801,192.168.188.53:8801,192.168.188.54:8801].to.leader[]
 2020/05/01 13:46:11.930963       [WARNING]     cluster.canot.found.leader.forward.to[192.168.188.54:8801]
 2020/05/01 13:46:11.933108       [WARNING]     cluster.add.nodes.to.leader[].done
[mysql@ms54 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+------------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              |   Mysqld   | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+------------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.54:8801 | [ViewID:0 EpochID:3]@FOLLOWER | NOTRUNNING | ON      | state:[NONE]␤            | []                  | [false/false]  |                     |
|                     |                               |            |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+------------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:3 EpochID:0]@LEADER   | RUNNING    | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |            |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+------------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:3 EpochID:0]@FOLLOWER | RUNNING    | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |            |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+------------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.53:8801 | [ViewID:3 EpochID:0]@FOLLOWER | RUNNING    | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |            |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+------------+---------+--------------------------+---------------------+----------------+---------------------+
(4 rows)


It actually succeeded; this looks promising!

 

Attempt a direct rebuild without any backup

 

[mysql@ms54 xenon-master]$ bin/xenoncli mysql rebuildme
 2020/05/01 13:47:36.300229       [WARNING]     =====prepare.to.rebuildme=====
                        IMPORTANT: Please check that the backup run completes successfully.
                                   At the end of a successful backup run innobackupex
                                   prints "completed OK!".

 2020/05/01 13:47:36.300912       [WARNING]     S1-->check.raft.leader
 2020/05/01 13:47:36.315906       [WARNING]     rebuildme.found.best.slave[192.168.188.52:8801].leader[192.168.188.51:8801]
 2020/05/01 13:47:36.315973       [WARNING]     S2-->prepare.rebuild.from[192.168.188.52:8801]....
 2020/05/01 13:47:36.317637       [WARNING]     S3-->check.bestone[192.168.188.52:8801].is.OK....
 2020/05/01 13:47:36.317689       [WARNING]     S4-->set.learner
 2020/05/01 13:47:36.319220       [WARNING]     S5-->stop.monitor
 2020/05/01 13:47:36.320562       [WARNING]     S6-->kill.mysql
 2020/05/01 13:47:36.347934       [WARNING]     S7-->check.bestone[192.168.188.52:8801].is.OK....
 2020/05/01 13:47:36.351788       [WARNING]     S8-->rm.datadir[/data/mysql/mysql3306/data]
 2020/05/01 13:47:36.351846       [WARNING]     S9-->xtrabackup.begin....
 2020/05/01 13:47:36.352273       [WARNING]     rebuildme.backup.req[&{From: BackupDir:/data/mysql/mysql3306/data SSHHost:192.168.188.54 SSHUser:mysql SSHPasswd:mysql SSHPort:22 IOPSLimits:100000 XtrabackupBinDir:/usr/bin}].from[192.168.188.52:8801]
 2020/05/01 13:47:36.862121       [PANIC]        rsp[cmd.outs.[completed OK!].found[1]!=expects[2]] != [OK]
panic:    [PANIC]        rsp[cmd.outs.[completed OK!].found[1]!=expects[2]] != [OK]

goroutine 1 [running]:
xbase/xlog.(*Log).Panic(0xc000094300, 0x8d8f06, 0xf, 0xc000191d88, 0x1, 0x1)
        /data/xenon-master/src/xbase/xlog/xlog.go:142 +0x153
cli/cmd.RspOK(...)
        /data/xenon-master/src/cli/cmd/common.go:41
cli/cmd.mysqlRebuildMeCommandFn(0xc0000e8b40, 0xcac370, 0x0, 0x0)
        /data/xenon-master/src/cli/cmd/mysql.go:268 +0x847
vendor/github.com/spf13/cobra.(*Command).execute(0xc0000e8b40, 0xcac370, 0x0, 0x0, 0xc0000e8b40, 0xcac370)
        /data/xenon-master/src/vendor/github.com/spf13/cobra/command.go:603 +0x22e
vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc78820, 0x1, 0xc000191f78, 0x40744f)
        /data/xenon-master/src/vendor/github.com/spf13/cobra/command.go:689 +0x2bc
vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /data/xenon-master/src/vendor/github.com/spf13/cobra/command.go:648
main.main()
        /data/xenon-master/src/cli/cli.go:43 +0x31

  

Unsurprisingly, it failed.

 

Take a backup and try again

Run the backup directly on the new node

[mysql@ms54 xenon-master]$ bin/xenoncli mysql backup --to=/data/backup
 2020/05/01 13:49:02.476638       [WARNING]     rebuildme.found.best.slave[192.168.188.52:8801].leader[192.168.188.51:8801]
 2020/05/01 13:49:02.476764       [WARNING]     S1-->found.the.best.backup.host[192.168.188.52:8801]....
 2020/05/01 13:49:02.483030       [WARNING]     S2-->rm.and.mkdir.backupdir[/data/backup]
 2020/05/01 13:49:02.483097       [WARNING]     S3-->xtrabackup.begin....
 2020/05/01 13:49:02.483672       [WARNING]     rebuildme.backup.req[&{From: BackupDir:/data/backup SSHHost:192.168.188.54 SSHUser:mysql SSHPasswd:mysql SSHPort:22 IOPSLimits:100000 XtrabackupBinDir:/usr/bin}].from[192.168.188.52:8801]
 2020/05/01 13:49:09.695471       [WARNING]     S3-->xtrabackup.end....
 2020/05/01 13:49:09.695495       [WARNING]     S4-->apply-log.begin....
 2020/05/01 13:49:15.592119       [WARNING]     S4-->apply-log.end....
 2020/05/01 13:49:15.592178       [WARNING]     completed OK!
 2020/05/01 13:49:15.592183       [WARNING]     backup.all.done....


Try rebuildme again

Still fails.

 

Then it hit me: xtrabackup needs a my.cnf specified when restoring, and this new environment has none, nor has the instance's data directory been created!

 

Create the directories and my3306.cnf

[mysql@ms54 xenon-master]$ mkdir /data/mysql/mysql3306/{data,logs,tmp} -p
[mysql@ms54 xenon-master]$ vi /data/mysql/mysql3306/my3306.cnf
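The original notes do not show the contents of my3306.cnf. A minimal sketch, consistent with the socket path, GTID, and binlog settings seen elsewhere in this lab (every value here is an assumption and must match the mysql section of xenon.json), might look like:

```ini
[mysqld]
port                     = 3306
server-id                = 54            # must be unique per node (e.g. last octet of the IP)
datadir                  = /data/mysql/mysql3306/data
socket                   = /data/mysql/mysql3306/tmp/mysql.sock
log-error                = /data/mysql/mysql3306/logs/error.log
log-bin                  = mysql-bin
log-slave-updates        = 1
gtid-mode                = ON
enforce-gtid-consistency = 1
```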

  

 

Finally, success

[mysql@ms54 xenon-master]$ bin/xenoncli mysql rebuildme
 2020/05/01 14:04:53.910063       [WARNING]     =====prepare.to.rebuildme=====
                        IMPORTANT: Please check that the backup run completes successfully.
                                   At the end of a successful backup run innobackupex
                                   prints "completed OK!".

 2020/05/01 14:04:53.910476       [WARNING]     S1-->check.raft.leader
 2020/05/01 14:04:53.921750       [WARNING]     rebuildme.found.best.slave[192.168.188.52:8801].leader[192.168.188.51:8801]
 2020/05/01 14:04:53.921809       [WARNING]     S2-->prepare.rebuild.from[192.168.188.52:8801]....
 2020/05/01 14:04:53.923227       [WARNING]     S3-->check.bestone[192.168.188.52:8801].is.OK....
 2020/05/01 14:04:53.923273       [WARNING]     S4-->set.learner
 2020/05/01 14:04:53.924674       [WARNING]     S5-->stop.monitor
 2020/05/01 14:04:53.926274       [WARNING]     S6-->kill.mysql
 2020/05/01 14:04:53.942920       [WARNING]     S7-->check.bestone[192.168.188.52:8801].is.OK....
 2020/05/01 14:04:53.945976       [WARNING]     S8-->rm.datadir[/data/mysql/mysql3306/data]
 2020/05/01 14:04:53.946023       [WARNING]     S9-->xtrabackup.begin....
 2020/05/01 14:04:53.946406       [WARNING]     rebuildme.backup.req[&{From: BackupDir:/data/mysql/mysql3306/data SSHHost:192.168.188.54 SSHUser:mysql SSHPasswd:mysql SSHPort:22 IOPSLimits:100000 XtrabackupBinDir:/usr/bin}].from[192.168.188.52:8801]
 2020/05/01 14:05:00.153294       [WARNING]     S9-->xtrabackup.end....
 2020/05/01 14:05:00.153352       [WARNING]     S10-->apply-log.begin....
 2020/05/01 14:05:05.228702       [WARNING]     S10-->apply-log.end....
 2020/05/01 14:05:05.228755       [WARNING]     S11-->start.mysql.begin...
 2020/05/01 14:05:05.229831       [WARNING]     S11-->start.mysql.end...
 2020/05/01 14:05:05.229890       [WARNING]     S12-->wait.mysqld.running.begin....
 2020/05/01 14:05:08.238027       [WARNING]     wait.mysqld.running...
 2020/05/01 14:05:08.247805       [WARNING]     S12-->wait.mysqld.running.end....
 2020/05/01 14:05:08.247866       [WARNING]     S13-->wait.mysql.working.begin....
 2020/05/01 14:05:11.250009       [WARNING]     wait.mysql.working...
 2020/05/01 14:05:11.250786       [WARNING]     S13-->wait.mysql.working.end....
 2020/05/01 14:05:11.250863       [WARNING]     S14-->stop.and.reset.slave.begin....
 2020/05/01 14:05:11.354249       [WARNING]     S14-->stop.and.reset.slave.end....
 2020/05/01 14:05:11.354321       [WARNING]     S15-->reset.master.begin....
 2020/05/01 14:05:11.429591       [WARNING]     S15-->reset.master.end....
 2020/05/01 14:05:11.430310       [WARNING]     S15-->set.gtid_purged[5ea86dca-8b58-11ea-86d8-0242c0a8bc33:1-97741
].begin....
 2020/05/01 14:05:11.441819       [WARNING]     S15-->set.gtid_purged.end....
 2020/05/01 14:05:11.441889       [WARNING]     S16-->enable.raft.begin...
 2020/05/01 14:05:11.443257       [WARNING]     S16-->enable.raft.done...
 2020/05/01 14:05:11.443320       [WARNING]     S17-->wait[3000 ms].change.to.master...
 2020/05/01 14:05:11.443833       [WARNING]     S18-->start.slave.begin....
 2020/05/01 14:05:11.494324       [WARNING]     S18-->start.slave.end....
 2020/05/01 14:05:11.494404       [WARNING]     completed OK!
 2020/05/01 14:05:11.494422       [WARNING]     rebuildme.all.done....


It worked!

 

Check the cluster status

[mysql@ms53 xenon-master]$ bin/xenoncli cluster status
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
|         ID          |             Raft              | Mysqld  | Monitor |          Backup          |        Mysql        | IO/SQL_RUNNING |      MyLeader       |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.51:8801 | [ViewID:5 EpochID:1]@LEADER   | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READWRITE] | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.52:8801 | [ViewID:5 EpochID:1]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.53:8801 | [ViewID:5 EpochID:1]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
| 192.168.188.54:8801 | [ViewID:5 EpochID:1]@FOLLOWER | RUNNING | ON      | state:[NONE]␤            | [ALIVE] [READONLY]  | [true/true]    | 192.168.188.51:8801 |
|                     |                               |         |         | LastError:               |                     |                |                     |
+---------------------+-------------------------------+---------+---------+--------------------------+---------------------+----------------+---------------------+
(4 rows)

  

Check the data and replication state on the master and the new node

New node:

[mysql@ms54 xenon-master]$ mysql -S /data/mysql/mysql3306/tmp/mysql.sock  -pmysql

mysql> select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
|   104875 |
+----------+
1 row in set (0.01 sec)


master:

mysql> select count(*) from kk.k1;
+----------+
| count(*) |
+----------+
|   104875 |
+----------+
1 row in set (0.01 sec)


Cluster scale-out succeeded!


Summary of building the Xenon cluster:

  1. Use an odd number of cluster nodes; otherwise elections are affected.
  2. The semi-sync status deserves close attention. Because the timeout is set very large, database writes will hang if semi-sync failed to come up (the master waits for an ACK, but the slaves, despite having applied the replicated events, will never send one). The workaround is to manually degrade the master to async replication: run set global rpl_semi_sync_master_enabled=0; directly on the master, after which the master completes the pending writes immediately; then adjust the parameters on each node to re-enable semi-sync.
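The degrade-and-restore procedure can be wrapped in a tiny helper that only emits the SQL; the host and credentials in the usage comments are this lab's, not anything Xenon requires.

```shell
# semi_sync_toggle 0|1: print the statement that degrades the master to async
# replication (0) or re-enables semi-sync (1). Pipe the output into mysql.
semi_sync_toggle() {
  echo "set global rpl_semi_sync_master_enabled=$1;"
}
# usage: semi_sync_toggle 0 | mysql -h 192.168.188.51 -ukk -pkk   # unblock hung writes
#        semi_sync_toggle 1 | mysql -h 192.168.188.51 -ukk -pkk   # restore semi-sync
```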

 Findings from exploring Xenon cluster node states:

The verification process is too long to reproduce; straight to the conclusions.

  1. Cluster bootstrap: start xenon on node1 only. In xenon's cluster status, node1 stays read-only indefinitely.
  2. On node1, add node2 and node3 as members, then start xenon on node2. Cluster status now shows node1 and node2 both stuck read-only, with MyLeader empty for both.
  3. On node2, add node1 and node3 as members. Viewed from node1, node1 and node2 complete an election within a short time; the winner becomes master and its state changes to read/write.
  4. If both slaves of a 3-node cluster suddenly die, then after 10 retries the master releases the vip (sip) and the sole surviving instance switches to read-only.
  5. While only a single node in the cluster is alive, that node stays read-only forever; with 2 or more nodes a master is elected and becomes read/write.
  6. If the xenon processes on all nodes are killed, the sip lingers on the last node that had it bound (call it the old master). If you ignore the old master's xenon and restart xenon only on the other nodes, the new master takes the sip as well, so checking ip on both nodes shows each holding the sip. Thanks to arping, however, the other machines on the network all connect to the new master.
  7. Following on from 6, running ssh sip or mysql -h sip on the old master shows it still blissfully connecting to itself. This step reveals one piece of xenon's logic: after joining a xenon cluster, the election loser is made by raft to execute leader-stop-command, releasing the sip.
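Point 6 can be verified mechanically on each node with a helper that greps `ip addr` output for the sip; the address in the usage comment is this lab's sip.

```shell
# has_sip VIP: read `ip addr` output on stdin; succeed if VIP is bound locally.
has_sip() {
  grep -qF "inet $1/"
}
# usage on each node:
#   ip addr | has_sip 192.168.188.50 && echo "this node holds the sip"
```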

  

Findings on Xenon backup/rebuild:

  • backup
    • When backing up via Xenon, the backup location must be an absolute path, and the mysql user must have write permission on it.
    • If the backup directory does not exist, Xenon creates it automatically over the ssh channel.
  • rebuildme (rebuild / scale-out)
    • A new node added to a Xenon cluster needs no initialized MySQL instance; it can be built directly with rebuildme on top of a xenon backup.
    • Xenon's rebuildme is based on xtrabackup, so when adding a new node via rebuildme, the mysql datadir and my.cnf must already exist and match the corresponding parameters in the mysql section of xenon.json.
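The rebuildme prerequisites above can be collected into a small pre-flight sketch; the directory layout and the my3306.cnf file name follow this lab's convention and must agree with xenon.json.

```shell
# prep_node BASE: create the instance layout `xenoncli mysql rebuildme` expects
# under BASE, and warn if the config file is still missing.
prep_node() {
  base="$1"
  mkdir -p "$base/data" "$base/logs" "$base/tmp"
  [ -f "$base/my3306.cnf" ] || echo "reminder: create $base/my3306.cnf before rebuildme"
}
# usage: prep_node /data/mysql/mysql3306
```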

 

 

https://github.com/jolleykong/gitnote_ms/blob/master/mysql/xenon_on_MySQL5.7.md

