Errors encountered while setting up MGR, and how to resolve them


 

Reposted from: https://cloud.tencent.com/developer/article/1533657

Failures encountered during the MGR setup process

In practice I deployed three MGR environments: a single-host multi-instance setup, a multi-host setup within one network segment, and a multi-host setup spanning different network segments. The deployment steps are largely the same, with a few points of difference. Here I list the failures I ran into along the way for reference; if any of this happens to solve a problem you hit during your own deployment, so much the better.

01

Common failure 1

[ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: bb874065-c485-11e8-8b52-000c2934472e:1 > Group transactions: 3db33b36-0e51-409f-a61d-c99756e90155:1-11'
[ERROR] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
[Note] Plugin group_replication reported: 'To force this member into the group you can use the group_replication_allow_local_disjoint_gtids_join option'

Solution:

As the message suggests, turn the option on: set global group_replication_allow_local_disjoint_gtids_join=ON;
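On the member that failed to join, the minimal sequence is then simply (assuming the rest of the group is healthy):

SET GLOBAL group_replication_allow_local_disjoint_gtids_join = ON;
START GROUP_REPLICATION;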

02

Common failure 2

[ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: bb874065-c485-11e8-8b52-000c2934472e:1 > Group transactions: 3db33b36-0e51-409f-a61d-c99756e90155:1-15'
[Warning] Plugin group_replication reported: 'The member contains transactions not present in the group. It is only allowed to join due to group_replication_allow_local_disjoint_gtids_join option'
[Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address localhost.localdomaion:3306.'

 

Solution:

Unlike failure 1, this error appears even though group_replication_allow_local_disjoint_gtids_join is already set to ON. The fix is simply to run RESET MASTER, and then re-establish the channel on the primary and secondary nodes, i.e.:

CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
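A sketch of the complete sequence on the member being re-joined, assuming group replication is not currently running there. Note that RESET MASTER discards that member's local binary logs and GTID history, so only run it on a node you are willing to re-seed from the group:

RESET MASTER;
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
START GROUP_REPLICATION;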

03

Common failure 3

The following problem came up while testing on a single machine:

[Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
 [ERROR] Slave I/O for channel 'group_replication_recovery': error connecting to master 'rpl_user@localhost.localdomaion:' - retry-time: 60  retries: 1, Error_code: 2005
 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.'
 [ERROR] Plugin group_replication reported: 'For details please check performance_schema.replication_connection_status table and error log messages of Slave I/O for channel group_replication_recovery.'
 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt /'

Solution:

This was caused by the three test hosts all having the same hostname; after changing the hostnames, the problem disappeared.
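If renaming the hosts is not convenient, another option (as the second part of this article also mentions) is to set report_host explicitly in my.cnf so each member advertises a unique, resolvable address. A sketch with hypothetical values:

# my.cnf on node 1 (use that node's real address)
[mysqld]
report_host = 192.168.9.206
report_port = 3306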

04

Common failure 4

# The following error appeared when running this on the live production environment:
mysql--root@localhost:(none) ::>>START GROUP_REPLICATION;
ERROR  (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
# Checking the log file, there was only one warning:
2019-02-20T07::30.233937Z  [Warning] Plugin group_replication reported: 'Group Replication requires slave-preserve-commit-order to be set to ON when using more than 1 applier threads.'

Solution:

mysql--root@localhost:(none) ::>>show variables like "%preserve%";
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| slave_preserve_commit_order | OFF   |
+-----------------------------+-------+
1 row in set (0.01 sec)
mysql--root@localhost:(none) ::>>set global slave_preserve_commit_order=ON;
Query OK, 0 rows affected (0.00 sec)
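To keep the setting across restarts, it can also go into my.cnf. A sketch; slave_preserve_commit_order generally goes together with a multi-threaded, LOGICAL_CLOCK-based applier, and the worker count of 4 is just an example value:

[mysqld]
slave_parallel_type         = LOGICAL_CLOCK
slave_parallel_workers      = 4
slave_preserve_commit_order = ON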

05

Common problem 5

2019-02-20T08::31.088437Z  [Warning] Plugin group_replication reported: '[GCS] Connection attempt from IP address 192.168.9.208 refused. Address is not in the IP whitelist.'
2019-02-20T08::32.088676Z  [Warning] Plugin group_replication reported: '[GCS] Connection attempt from IP address 192.168.9.208 refused. Address is not in the IP whitelist.'

Solution:

Configure the group_replication_ip_whitelist parameter in my.cnf so that it covers the addresses of the group members.
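A my.cnf sketch, assuming the members live on the 192.168.9.0/24 network shown in the log; adjust the ranges to your own topology, and note that the change takes effect the next time group replication is started on that member:

[mysqld]
group_replication_ip_whitelist = "192.168.9.0/24,127.0.0.1/8"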

06

Common problem 6

2019-02-20T08::44.087492Z  [Warning] Plugin group_replication reported: 'read failed'
2019-02-20T08::44.096171Z  [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 24801'
2019-02-20T08::14.065775Z  [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'

Solution:

Set the group_replication_group_seeds parameter in my.cnf to contain only the IP addresses and internal communication ports of the other group members, excluding the local node itself. Listing every member of the group, including the local node, produces this error. This differs somewhat from how MGR is deployed when all members share a network segment.
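A sketch for one node, with hypothetical addresses across different subnets; each node lists only the other members' group_replication_local_address endpoints in its seeds:

# my.cnf on the node whose own local address is 192.168.1.10:24801
[mysqld]
group_replication_local_address = "192.168.1.10:24801"
group_replication_group_seeds   = "192.168.2.20:24801,192.168.3.30:24801"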

07

Common problem 7

[ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to oceanbase07: on local port: .'
[ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to oceanbase08: on local port: .'
[ERROR] Plugin group_replication reported: '[GCS] Error on opening a connection to oceanbase07: on local port: .'

Solution:

The required ports had not been opened on the firewall; once the firewall was opened for them, the problem was resolved.

08

Common problem 8

[Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
 [ERROR] Slave I/O for channel 'group_replication_recovery': Master command COM_REGISTER_SLAVE failed: Access denied for user 'rpl_user'@'%' (using password: YES) (Errno: 1045), Error_code: 1597
 [ERROR] Slave I/O thread couldn't register on master
 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position      

 

Solution:

The replication user was missing on one of the nodes. To be safe, run the following on the group nodes:

CREATE USER rpl_user@'%';

GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%' IDENTIFIED BY 'rpl_pass';
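A common variant (the general MGR setup pattern, not something from the original article) wraps the user creation in SET SQL_LOG_BIN=0 ... 1 so the statements run locally on each node and are not written to the binary log:

SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;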

09

Common problem 9

 [ERROR] Failed to open the relay log './localhost-relay-bin.000011' (relay_log_pos ).
 [ERROR] Could not find target log file mentioned in relay log info in the index file './work_NAT_1-relay-bin. index' during relay log initialization.
 [ERROR] Slave: Failed to initialize the master info structure for channel ''; its record may still be present in 'mysql.slave_master_info' table, consider deleting it.
 [ERROR] Failed to open the relay log './localhost-relay-bin-group_replication_recovery.000001' (relay_log_pos      ).
 [ERROR] Could not find target log file mentioned in relay log info in the index file './work_NAT_1-relay-bin-group_replication_recovery.index' during relay log initialization.
 [ERROR] Slave: Failed to initialize the master info structure for channel 'group_replication_recovery'; its record may still be present in 'mysql.slave_master_info' table, consider deleting it.
 [ERROR] Failed to create or recover replication info repositories.
 [ERROR] Slave SQL for channel '': Slave failed to initialize relay log info structure from the repository, Error_code: 
 [ERROR] /usr/local/mysql/bin/mysqld: Slave failed to initialize relay log info structure from the repository
 [ERROR] Failed to start slave threads for channel ''

Solution:

This error means the slave node has, for some reason, lost track of its relay log position; the fix is to reset the slave and configure the channel again.
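A sketch of one way to do that (an assumption on my part, not a procedure from the original article): stop group replication on the affected node, reset the relay-log state, then configure the recovery channel again before rejoining.

STOP GROUP_REPLICATION;   -- skip if group replication is not running on this node
RESET SLAVE;              -- or RESET SLAVE ALL, which also drops the stored channel configuration
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
START GROUP_REPLICATION;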

 

 

Part 2:

 

Summary of MySQL 5.7 Group Replication errors (r11 notes, day 84)

Today I will summarize some problems encountered with Group Replication in MySQL 5.7; they are fairly routine. I won't repeat the setup steps here; yesterday's article shows a basic approach that is easy to reproduce in a test environment. Does the same apply when building across several physical machines? The answer is yes; I have tried each scenario myself.

The officially recommended configuration is single_primary mode, that is, one node takes writes and the others serve reads (read/write splitting). Multi_primary is theoretically workable as well but still has some minor issues, so the examples here use single_primary.

Problem 1:

When a read node joined the group, start group_replication threw the error below. If you hit this error, you are not far from a successful setup.

2017-02-20T07:56:30.064556Z 0 [ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: 89328c79-f730-11e6-ab63-782bcb377193:1-2 > Group transactions: 7c744904-f730-11e6-a72d-782bcb377193:1-4'
2017-02-20T07:56:30.064580Z 0 [ERROR] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
2017-02-20T07:56:30.064587Z 0 [Note] Plugin group_replication reported: 'To force this member into the group you can use the group_replication_allow_local_disjoint_gtids_join option'
The log states it plainly: a parameter needs to be set so the member is allowed to join despite the divergence. After setting group_replication_allow_local_disjoint_gtids_join, run start group_replication again.

 

Problem 2:

If you run into this error, there is no need to worry too much; the log shows it is caused by incompatible settings, for example the write primary configured as multi-primary while a read node is configured as single-primary. Make the settings consistent across all members (see the config sketch after the log below).

2017-02-21T10:20:56.324890+08:00 0 [ERROR] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: 87b9c8fe-f352-11e6-bb33-0026b935eb76:1-5,
b79d42f4-f351-11e6-9891-0026b935eb76:1,
f7c7b9f8-f352-11e6-b1de-a4badb1b524e:1 > Group transactions: 87b9c8fe-f352-11e6-bb33-0026b935eb76:1-5,
b79d42f4-f351-11e6-9891-0026b935eb76:1'
2017-02-21T10:20:56.324971+08:00 0 [ERROR] Plugin group_replication reported: 'The member configuration is not compatible with the group configuration. Variables such as single_primary_mode or enforce_update_everywhere_checks must have the same value on every server in the group. (member configuration option: [], group configuration option: [group_replication_single_primary_mode]).'
2017-02-21T10:20:56.325052+08:00 19 [Note] Plugin group_replication reported: 'Going to wait for view modification'
2017-02-21T10:20:56.325594+08:00 0 [Note] Plugin group_replication reported: 'getstart group_id 53d187f2'
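A minimal my.cnf sketch of the two options the log calls out, set identically on every member (the values below are the single-primary case used in this article):

[mysqld]
group_replication_single_primary_mode              = ON
group_replication_enforce_update_everywhere_checks = OFF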

Problem 3:

This one puzzled me for a long time. It comes down to node configuration: group_replication_group_name must not be set to each node's own uuid. For nodes 1, 2 and 3, group_replication_group_name has to be identical on every member. I used to carefully copy each node's uuid into it on every attempt, which turned out to be exactly the wrong thing to do (see the config sketch further below).

2017-02-22T14:46:35.819072Z 0 [Warning] Plugin group_replication reported: 'read failed'
2017-02-22T14:46:35.851829Z 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 24902'
2017-02-22T14:47:05.814080Z 30 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
2017-02-22T14:47:05.814183Z 30 [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2017-02-22T14:47:05.814213Z 30 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
2017-02-22T14:47:05.814567Z 30 [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
2017-02-22T14:47:05.814583Z 30 [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
2017-02-22T14:47:05.814859Z 36 [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
2017-02-22T14:47:05.815720Z 33 [Note] Plugin group_replication reported: 'The group replication applier thread was killed'

Once the group name is unified, startup is actually quick:

mysql> start group_replication;
Query OK, 0 rows affected (1.52 sec)
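For reference, a sketch of the relevant my.cnf lines: the group name is a single shared UUID (a made-up value below), identical on every member, while group_replication_local_address is each node's own GCS endpoint:

# identical on every node
group_replication_group_name    = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
# different on every node: its own host and GCS port, e.g. 24901/24902/24903
group_replication_local_address = "grtest:24901"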

 

Those are essentially the kinds of problems you hit during setup. There are also hostname-related issues, where a few small bugs remain; if a specific value is needed, you can set report_host explicitly.

 

Problem 4:

Once the environment is up, let's create an ordinary table; this is where good habits and standards matter most.

Create the table test_tab:

create table test_tab (id int,name varchar(30));

Then insert a row. This looks like the most ordinary operation imaginable, but in MGR it produces an error, because a basic requirement is that every table has a primary key.

mysql> insert into test_tab values(1,'a');
ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.

The fix is to add a primary key:

mysql> alter table test_tab add primary key(id);
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
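Cleaner still is to declare the primary key when the table is created, so the insert never fails in the first place:

create table test_tab (id int primary key, name varchar(30));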

Problem 5 (simulating a failure):

We are currently running in single-primary mode. If the primary (write) node fails, how does the group handle it? It promotes the next node, S2, to be the new write primary.

Testing this is simple: kill the node 1 service outright and watch where the primary role moves.

First, the baseline state of the group: there are currently five members, and we kill node 1 directly, the one on port 24801.

+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 52d26194-f90a-11e6-a247-782bcb377193 | grtest      |       24801 | ONLINE       |
| group_replication_applier | 5abaaf89-f90a-11e6-b4de-782bcb377193 | grtest      |       24802 | ONLINE       |
| group_replication_applier | 655248b9-f90a-11e6-86b4-782bcb377193 | grtest      |       24803 | ONLINE       |
| group_replication_applier | 6defc92c-f90a-11e6-990c-782bcb377193 | grtest      |       24804 | ONLINE       |
| group_replication_applier | 76bc07a1-f90a-11e6-ab0a-782bcb377193 | grtest      |       24805 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

Node 2 then prints the log below, which means it has officially taken over.

2017-02-22T14:59:45.157989Z 0 [Note] Plugin group_replication reported: 'getstart group_id 98e4de29'
2017-02-22T14:59:45.434062Z 0 [Note] Plugin group_replication reported: 'Unsetting super_read_only.'
2017-02-22T14:59:45.434130Z 40 [Note] Plugin group_replication reported: 'A new primary was elected, enabled conflict detection until the new primary applies all relay logs'

The group membership then looks like this; unsurprisingly, the first node has been removed.

+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5abaaf89-f90a-11e6-b4de-782bcb377193 | grtest      |       24802 | ONLINE       |
| group_replication_applier | 655248b9-f90a-11e6-86b4-782bcb377193 | grtest      |       24803 | ONLINE       |
| group_replication_applier | 6defc92c-f90a-11e6-990c-782bcb377193 | grtest      |       24804 | ONLINE       |
| group_replication_applier | 76bc07a1-f90a-11e6-ab0a-782bcb377193 | grtest      |       24805 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

From the log we can see that the second node was promoted to write primary, which raises the next question.

 

Problem 6:

How do you tell which member of a replication group is the primary? You cannot rely entirely on guessing or on digging through logs.

The following query filters it out:

mysql> select * from performance_schema.replication_group_members where member_id = (select variable_value from performance_schema.global_status WHERE VARIABLE_NAME = 'group_replication_primary_member');
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5abaaf89-f90a-11e6-b4de-782bcb377193 | grtest      |       24802 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)



免責聲明!

本站轉載的文章為個人學習借鑒使用,本站對版權不負任何法律責任。如果侵犯了您的隱私權益,請聯系本站郵箱yoyou2525@163.com刪除。



 
粵ICP備18138465號   © 2018-2025 CODEPRJ.COM