MySQL InnoDB Cluster Installation
----------------------------------------
Disable the firewall:
systemctl stop firewalld.service

Disable SELinux.
Temporary disable (permanently disabling kept breaking the system, so the temporary method is used in this walkthrough):
[root@localhost ~]# getenforce
Enforcing
[root@localhost ~]# setenforce 0
[root@localhost ~]# getenforce
Permissive
Permanent disable:
[root@localhost ~]# vi /etc/sysconfig/selinux
Change SELINUX=enforcing to SELINUX=disabled, then reboot.

Roles: nodes 1-3 get MySQL and MySQL Shell; node 4 gets MySQL Shell and MySQL Router.
node1 172.16.6.110
node2 172.16.6.117
node3 172.16.6.126
node4 172.16.6.64

Configure /etc/hosts the same way on every host:
[root@localhost ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.6.110 node1
172.16.6.117 node2
172.16.6.126 node3
172.16.6.64 node4
----------------------------------------
Install MySQL (MySQL 8, via yum)
Installation guide: https://dev.mysql.com/doc/mysql-yum-repo-quick-guide/en/

Add the yum repository (Adding the MySQL Yum Repository):
sudo rpm -Uvh mysql80-community-release-el6-n.noarch.rpm
Install:
yum -y install mysql-community-server
Start:
service mysqld start
Find the generated temporary root password:
grep 'temporary password' /var/log/mysqld.log
Log in with the temporary password and change it, then enable remote access:
mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'Pp88888888_';
use mysql;
CREATE USER 'root'@'%' IDENTIFIED BY 'Pp88888888_';
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'Pp88888888_';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
flush privileges;
----------------------------------------
Install MySQL Shell
Download MySQL Shell: https://dev.mysql.com/downloads/shell/
mv /root/mysql-shell-8.0.16-linux-glibc2.12-x86-64bit.tar.gz /usr/local/mysqlShell/
cd /usr/local/mysqlShell/
tar xvf mysql-shell-8.0.16-linux-glibc2.12-x86-64bit.tar.gz
export PATH=/usr/local/mysqlShell/mysql-shell-8.0.16-linux-glibc2.12-x86-64bit/bin/:$PATH
----------------------------------------
Install MySQL Router
Download MySQL Router: https://dev.mysql.com/downloads/router/
After extracting:
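Since the same /etc/hosts entries must be appended on all four hosts, the mapping can be generated from the node list in one pass. A minimal sketch; the node names and addresses are the ones from the table above, and the file names (nodes.txt, hosts.generated) are illustrative:

```shell
# Emit /etc/hosts entries for the four cluster nodes listed above.
cat <<'EOF' > nodes.txt
node1 172.16.6.110
node2 172.16.6.117
node3 172.16.6.126
node4 172.16.6.64
EOF

: > hosts.generated
while read -r name ip; do
    # /etc/hosts wants "IP name" order, so swap the columns.
    printf '%s %s\n' "$ip" "$name" >> hosts.generated
done < nodes.txt
cat hosts.generated
```

Then append the result on each host with something like `cat hosts.generated >> /etc/hosts` (as root).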
./mysqlrouter --help
...
# Examples
Bootstrap for use with InnoDB cluster into system-wide installation
sudo mysqlrouter --bootstrap root@clusterinstance01 --user=mysqlrouter
Start router
sudo mysqlrouter --user=mysqlrouter
Bootstrap for use with InnoDb cluster in a self-contained directory
mysqlrouter --bootstrap root@clusterinstance01 -d myrouter
Start router
myrouter/start.sh
[root@node4 bin]#
Run command 1 (bootstrap against the cluster):
[root@node4 bin]# ./mysqlrouter --bootstrap root@node1:3306 --user root
Please enter MySQL password for root:
# Bootstrapping system MySQL Router instance...
- Checking for old Router accounts
- No prior Router accounts found
- Creating mysql account mysql_router1_l7gsgfztmaop@'%' for cluster management
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /usr/local/mysqlRouter/mysql-router-8.0.16-linux-glibc2.12-x86_64/bin/.././mysqlrouter.conf
# MySQL Router configured for the InnoDB cluster 'prodCluster'
After this MySQL Router has been started with the generated configuration
$ /etc/init.d/mysqlrouter restart
or
$ systemctl start mysqlrouter
or
$ ./mysqlrouter -c /usr/local/mysqlRouter/mysql-router-8.0.16-linux-glibc2.12-x86_64/bin/.././mysqlrouter.conf
the cluster 'prodCluster' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
## MySQL X protocol
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
[root@node4 bin]#
Run command 2 (start the router):
[root@node4 bin]# ./mysqlrouter --user=root
Loading all plugins.
plugin 'logger:' loading
plugin 'metadata_cache:prodCluster' loading
plugin 'routing:prodCluster_default_ro' loading
plugin 'routing:prodCluster_default_rw' loading
plugin 'routing:prodCluster_default_x_ro' loading
plugin 'routing:prodCluster_default_x_rw' loading
Initializing all plugins.
plugin 'logger' initializing
logging facility initialized, switching logging to loggers specified in configuration
You can now connect with SQLyog (or any MySQL client) through port 6446 using root / Pp88888888_.
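The ports reported by the bootstrap output come from the generated mysqlrouter.conf: each [routing:...] section binds one port. A minimal sketch of listing those bindings; the configuration fragment below is an assumed, simplified illustration (not copied from a real bootstrap), though the section names and ports match the bootstrap output above:

```shell
# Illustrative mysqlrouter.conf fragment (assumed layout; a real
# bootstrapped file contains more keys, e.g. destinations=metadata-cache://...).
cat <<'EOF' > mysqlrouter.conf.example
[routing:prodCluster_default_rw]
bind_port=6446
routing_strategy=first-available

[routing:prodCluster_default_ro]
bind_port=6447
routing_strategy=round-robin
EOF

# Print each routing section together with the port it binds.
awk -F= '/^\[routing/ {sec=$0} /^bind_port/ {print sec, $2}' mysqlrouter.conf.example
```

Running the awk line against the real generated file (the path printed by the bootstrap step) shows at a glance which port serves read/write and which serves read-only traffic.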
----------------------------------------
Component installation is done; configuration follows.
========================================
my.cnf configuration on each MySQL node

[mysqld]
...(existing content)
Append the following:
binlog_checksum=NONE
enforce_gtid_consistency=ON
gtid_mode=ON
server_id=1       (1, 2, 3 -- a unique value per node)
report_host=node1 (node1, node2, node3 -- the node's own name)

Restart mysqld after changing the configuration.
========================================
Run the following on node1, node2 and node3; the example below is for node1.

1: mysqlsh 'root'@'node1':3306
Check whether the instance is usable:
dba.checkInstanceConfiguration('root@node1:3306')
This output means the instance can be used:
The instance 'node1:3306' is valid for InnoDB cluster usage.
{
    "status": "ok"
}
2: dba.configureLocalInstance('root@node1:3306',{clusterAdmin: 'zrClusterAdmin',clusterAdminPassword: 'Pp88888888_'});
3: Exit mysqlsh.
========================================
Create the cluster and add the nodes

Create the cluster; run the following on node4:
mysqlsh 'root'@'node1':3306
var cluster = dba.createCluster('zrCluster')

Add the nodes to the cluster:
cluster.addInstance('root@node1:3306');
cluster.addInstance('root@node2:3306');
cluster.addInstance('root@node3:3306');

Check the cluster status:
cluster.status();
{
    "clusterName": "zrCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "node1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node1:3306": {
                "address": "node1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.16"
            },
            "node2:3306": {
                "address": "node2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.16"
            },
            "node3:3306": {
                "address": "node3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.16"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "node1:3306"
}
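The per-node my.cnf additions above differ only in server_id and report_host, so the three fragments can be generated mechanically. A minimal sketch, assuming the three node names from this walkthrough; the fragment file names are illustrative:

```shell
# Generate the my.cnf additions for node1..node3 described above;
# only server_id and report_host vary per node.
i=1
for node in node1 node2 node3; do
    cat > "my.cnf.$node.fragment" <<EOF
binlog_checksum=NONE
enforce_gtid_consistency=ON
gtid_mode=ON
server_id=$i
report_host=$node
EOF
    i=$((i + 1))
done
cat my.cnf.node2.fragment
```

Append each fragment under the [mysqld] section of the matching node's my.cnf and restart mysqld there.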