It has been a long time since I last wrote a blog post. With one day of annual leave left, I want to summarize the read/write splitting we used in last year's project, along with its pitfalls, in case a future project needs it again.
Preparation
1 Development environment: Windows, IDEA, Maven, Spring Boot, MyBatis, Druid (Alibaba's database connection pool)
2 Database servers: Linux, MySQL master (192.168.203.135), MySQL slave (192.168.203.139)
3 Read/write splitting requires a working master-slave replication setup first. Replication itself is not the focus of this article; readers can find tutorials via Google or Baidu, and they all follow essentially the same workable steps. A small sketch for verifying the replication status follows the checklist below.

Pay attention to the following points:
a: When setting up replication, first make sure neither MySQL server contains any custom databases yet (otherwise anything created before the configuration cannot be synchronized; if both servers already hold exactly the same databases, synchronization should also work)
b: server_id must be configured differently on the two servers
c: The firewall must not block the MySQL port (3306 by default)
d: Make sure the two MySQL servers can reach each other
e: To reset the master/slave, run RESET MASTER; and RESET SLAVE;. To start or stop the slave, run START SLAVE; and STOP SLAVE;
f: The master and slave DB servers should run the same database version
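Before moving on it is worth confirming that replication is actually healthy. The following is a minimal sketch (my own addition, not part of the project code) that connects to the slave with plain JDBC and inspects SHOW SLAVE STATUS; the credentials are the ones used elsewhere in this article and may differ in your environment:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        // Connect directly to the slave (credentials as used in the rest of this article)
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://192.168.203.139:3306/", "root", "123456");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW SLAVE STATUS")) {
            if (rs.next()) {
                // Both values must be "Yes" for replication to be healthy
                System.out.println("Slave_IO_Running : " + rs.getString("Slave_IO_Running"));
                System.out.println("Slave_SQL_Running: " + rs.getString("Slave_SQL_Running"));
            } else {
                System.out.println("This server is not configured as a slave.");
            }
        }
    }
}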
4 Read/write splitting approaches:
4-1 Implemented inside the application code: the code routes statements according to whether they are SELECTs or INSERTs. This is currently the most widely used approach in production. Its advantage is good performance, since the routing happens in the code and no extra hardware is needed; the drawback is that developers have to implement it and operations staff cannot easily intervene.
4-2 Implemented by a proxy layer in the middle: the proxy usually sits between the application servers and the database servers. It receives the application's requests, inspects them and forwards them to the appropriate backend database. There are several representative programs of this kind (MyCat, covered later in this article, is one of them).
This article describes both approaches:
Approach 1: implementation in application-layer code (the content is shown through code; the necessary explanations live in the code comments)
1 Configure pom.xml and import the required JARs
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.lishun</groupId> <artifactId>mysql_master_salve</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>mysql_master_salve</name> <description>Demo project for Spring Boot</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.5.10.RELEASE</version> <relativePath/> <!-- lookup parent from repository --> </parent> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <java.version>1.8</java.version> </properties> <dependencies> <dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>1.3.1</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <version>RELEASE</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid</artifactId> <version>1.0.18</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-aop</artifactId> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> <plugin> <groupId>org.mybatis.generator</groupId> <artifactId>mybatis-generator-maven-plugin</artifactId> <version>1.3.2</version> <dependencies> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.43</version> </dependency> </dependencies> <configuration> <overwrite>true</overwrite> </configuration> </plugin> </plugins> </build> </project>
2 Configure application.properties
server.port=9022

# MyBatis: location of the *Mapper.xml files and the entity alias package
mybatis.mapper-locations=classpath:mapper/*.xml
mybatis.type-aliases-package=com.lishun.entity

spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.password=123456
spring.datasource.username=root
# Write node
spring.datasource.master.url=jdbc:mysql://192.168.203.135:3306/worldmap
# Two read nodes (for easier testing both point at the same server's database; don't do this in production)
spring.datasource.salve1.url=jdbc:mysql://192.168.203.139:3306/worldmap
spring.datasource.salve2.url=jdbc:mysql://192.168.203.139:3306/worldmap

# Druid connection pool settings
# Initial, minimum and maximum pool size
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.initialSize=5
spring.datasource.minIdle=5
spring.datasource.maxActive=20
# Maximum wait time when acquiring a connection
spring.datasource.maxWait=60000
# Interval between checks for idle connections that should be closed, in milliseconds
spring.datasource.timeBetweenEvictionRunsMillis=60000
# Minimum time a connection stays idle in the pool, in milliseconds
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1 FROM rscipc_sys_user
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=false
spring.datasource.testOnReturn=false
# Enable PSCache and set its size per connection
spring.datasource.poolPreparedStatements=true
spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
# Filters intercepted for monitoring statistics; without them the monitoring page cannot collect SQL, 'wall' is the SQL firewall
spring.datasource.filters=stat,wall,log4j
# Enable the mergeSql feature via connectProperties; record slow SQL
spring.datasource.connectionProperties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
spring.datasource.logSlowSql=true
# End
3 The application entry class (note: any other beans that Spring should manage (services, configuration classes, etc.) must live in sub-packages of this class's package, otherwise they will not be scanned and injection will fail)
@SpringBootApplication
@MapperScan("com.lishun.mapper") // Note: scan all mapper interfaces
public class MysqlMasterSalveApplication {
    public static void main(String[] args) {
        SpringApplication.run(MysqlMasterSalveApplication.class, args);
    }
}
4 The dynamic data source: DynamicDataSource
/**
 * @author lishun
 * @Description: dynamic data source, extends AbstractRoutingDataSource
 * @date 2017/8/9
 */
public class DynamicDataSource extends AbstractRoutingDataSource {
    public static final Logger log = LoggerFactory.getLogger(DynamicDataSource.class);
    /**
     * default data source key
     */
    public static final String DEFAULT_DS = "read_ds";
    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();

    // set the data source key for the current thread
    public static void setDB(String dbType) {
        log.info("switching to data source {}", dbType);
        contextHolder.set(dbType);
    }

    // clear the data source key
    public static void clearDB() {
        contextHolder.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return contextHolder.get();
    }
}
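The routing key lives in a ThreadLocal, so whatever sets it must also clear it on the same thread; the AOP aspect in step 7 takes care of this automatically. Purely as an illustration (this snippet is my own sketch, not part of the project code), a manual switch would look like this:

// Illustrative only: route one block of work to a read node, then clean up.
DynamicDataSource.setDB("read_ds_1");
try {
    // ... run queries here; they are served by the read_ds_1 data source ...
} finally {
    // Always clear the ThreadLocal, otherwise the key leaks to the next request
    // that reuses this thread in the servlet container's thread pool.
    DynamicDataSource.clearDB();
}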
5 Configure the Druid connection-pool data sources
@Configuration
public class DruidConfig {
private Logger logger = LoggerFactory.getLogger(DruidConfig.class);
@Value("${spring.datasource.master.url}")
private String masterUrl;
@Value("${spring.datasource.salve1.url}")
private String salve1Url;
@Value("${spring.datasource.salve2.url}")
private String salve2Url;
@Value("${spring.datasource.username}")
private String username;
@Value("${spring.datasource.password}")
private String password;
@Value("${spring.datasource.driver-class-name}")
private String driverClassName;
@Value("${spring.datasource.initialSize}")
private int initialSize;
@Value("${spring.datasource.minIdle}")
private int minIdle;
@Value("${spring.datasource.maxActive}")
private int maxActive;
@Value("${spring.datasource.maxWait}")
private int maxWait;
@Value("${spring.datasource.timeBetweenEvictionRunsMillis}")
private int timeBetweenEvictionRunsMillis;
@Value("${spring.datasource.minEvictableIdleTimeMillis}")
private int minEvictableIdleTimeMillis;
@Value("${spring.datasource.validationQuery}")
private String validationQuery;
@Value("${spring.datasource.testWhileIdle}")
private boolean testWhileIdle;
@Value("${spring.datasource.testOnBorrow}")
private boolean testOnBorrow;
@Value("${spring.datasource.testOnReturn}")
private boolean testOnReturn;
@Value("${spring.datasource.filters}")
private String filters;
@Value("${spring.datasource.logSlowSql}")
private String logSlowSql;
@Bean
public ServletRegistrationBean druidServlet() {
logger.info("init Druid Servlet Configuration ");
ServletRegistrationBean reg = new ServletRegistrationBean();
reg.setServlet(new StatViewServlet());
reg.addUrlMappings("/druid/*");
reg.addInitParameter("loginUsername", username);
reg.addInitParameter("loginPassword", password);
reg.addInitParameter("logSlowSql", logSlowSql);
return reg;
}
@Bean
public FilterRegistrationBean filterRegistrationBean() {
FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
filterRegistrationBean.setFilter(new WebStatFilter());
filterRegistrationBean.addUrlPatterns("/*");
filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
filterRegistrationBean.addInitParameter("profileEnable", "true");
return filterRegistrationBean;
}
@Bean
public DataSource druidDataSource() {
DruidDataSource datasource = new DruidDataSource();
datasource.setUrl(masterUrl);
datasource.setUsername(username);
datasource.setPassword(password);
datasource.setDriverClassName(driverClassName);
datasource.setInitialSize(initialSize);
datasource.setMinIdle(minIdle);
datasource.setMaxActive(maxActive);
datasource.setMaxWait(maxWait);
datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
datasource.setValidationQuery(validationQuery);
datasource.setTestWhileIdle(testWhileIdle);
datasource.setTestOnBorrow(testOnBorrow);
datasource.setTestOnReturn(testOnReturn);
try {
datasource.setFilters(filters);
} catch (SQLException e) {
logger.error("druid configuration initialization filter", e);
}
Map<Object, Object> dsMap = new HashMap<>();
dsMap.put("read_ds_1", druidDataSource_read1());
dsMap.put("read_ds_2", druidDataSource_read2());
dsMap.put("write_ds", datasource);
DynamicDataSource dynamicDataSource = new DynamicDataSource();
dynamicDataSource.setTargetDataSources(dsMap);
// Fallback for calls that carry no routing key (or an unknown one such as the bare
// "read_ds" default): route them to the master so writes never hit a slave.
dynamicDataSource.setDefaultTargetDataSource(datasource);
return dynamicDataSource;
}
public DataSource druidDataSource_read1() {
DruidDataSource datasource = new DruidDataSource();
datasource.setUrl(salve1Url);
datasource.setUsername(username);
datasource.setPassword(password);
datasource.setDriverClassName(driverClassName);
datasource.setInitialSize(initialSize);
datasource.setMinIdle(minIdle);
datasource.setMaxActive(maxActive);
datasource.setMaxWait(maxWait);
datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
datasource.setValidationQuery(validationQuery);
datasource.setTestWhileIdle(testWhileIdle);
datasource.setTestOnBorrow(testOnBorrow);
datasource.setTestOnReturn(testOnReturn);
try {
datasource.setFilters(filters);
} catch (SQLException e) {
logger.error("druid configuration initialization filter", e);
}
return datasource;
}
public DataSource druidDataSource_read2() {
DruidDataSource datasource = new DruidDataSource();
datasource.setUrl(salve2Url);
datasource.setUsername(username);
datasource.setPassword(password);
datasource.setDriverClassName(driverClassName);
datasource.setInitialSize(initialSize);
datasource.setMinIdle(minIdle);
datasource.setMaxActive(maxActive);
datasource.setMaxWait(maxWait);
datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
datasource.setValidationQuery(validationQuery);
datasource.setTestWhileIdle(testWhileIdle);
datasource.setTestOnBorrow(testOnBorrow);
datasource.setTestOnReturn(testOnReturn);
try {
datasource.setFilters(filters);
} catch (SQLException e) {
logger.error("druid configuration initialization filter", e);
}
return datasource;
}
}
6 Data source annotations: the service layer uses these annotations to specify which data source to use
/**
 * @author lishun
 * @Description: read data source annotation
 * @date 2017/8/9
 */
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface ReadDataSource {
    String value() default "read_ds";
}

/**
 * @author lishun
 * @Description: write data source annotation
 * @date 2017/8/9
 */
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface WriteDataSource {
    String value() default "write_ds";
}
7 A service-layer AOP aspect switches the data source
/**
 * @author lishun
 * @Description: switches the data source before service-layer methods run
 * @date 2017/8/9
 */
@Component
@Aspect
public class ServiceAspect implements PriorityOrdered {
    @Pointcut("execution(public * com.lishun.service.*.*(..))")
    public void dataSource(){};

    @Before("dataSource()")
    public void before(JoinPoint joinPoint){
        Class<?> className = joinPoint.getTarget().getClass();// class currently being invoked
        String methodName = joinPoint.getSignature().getName();// name of the invoked method
        Class[] argClass = ((MethodSignature)joinPoint.getSignature()).getParameterTypes();// parameter types of the method
        String dataSource = DynamicDataSource.DEFAULT_DS;
        try {
            Method method = className.getMethod(methodName, argClass);// the invoked Method object
            if (method.isAnnotationPresent(ReadDataSource.class)) {
                ReadDataSource annotation = method.getAnnotation(ReadDataSource.class);
                dataSource = annotation.value();
                int i = new Random().nextInt(2) + 1; /* simple load balancing over the two read nodes */
                dataSource = dataSource + "_" + i;
            } else if (method.isAnnotationPresent(WriteDataSource.class)){
                WriteDataSource annotation = method.getAnnotation(WriteDataSource.class);
                dataSource = annotation.value();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        DynamicDataSource.setDB(dataSource);// switch the data source
    }

    /* Alternative: route by method name
    @Before("execution(public * com.lishun.service.*.find*(..)) || execution(public * com.lishun.service.*.query*(..))")
    public void read(JoinPoint joinPoint){
        DynamicDataSource.setDB("read_ds");// switch to the read data source
    }
    @Before("execution(public * com.lishun.service.*.insert*(..)) || execution(public * com.lishun.service.*.add*(..))")
    public void write(JoinPoint joinPoint){
        DynamicDataSource.setDB("write_ds");// switch to the write data source
    }
    */

    @After("dataSource()")
    public void after(JoinPoint joinPoint){
        DynamicDataSource.clearDB();// clear the data source key
    }

    @AfterThrowing("dataSource()")
    public void AfterThrowing(){
        System.out.println("AfterThrowing---------------" );
    }

    @Override
    public int getOrder() {
        return 1;// the smaller the value, the earlier this aspect runs: the data source must be chosen
                 // before the transaction aspect grabs a connection (otherwise it would see a null key)
    }
}
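If you prefer a single advice that guarantees cleanup on every code path, the same routing logic can be written as an @Around advice with try/finally. This is a sketch of my own, assuming it replaces the @Before/@After pair inside ServiceAspect (it additionally needs the @Around and org.aspectj.lang.ProceedingJoinPoint imports):

@Around("dataSource()")
public Object around(ProceedingJoinPoint pjp) throws Throwable {
    MethodSignature signature = (MethodSignature) pjp.getSignature();
    Method method = pjp.getTarget().getClass()
            .getMethod(signature.getName(), signature.getParameterTypes());
    String dataSource = DynamicDataSource.DEFAULT_DS;
    if (method.isAnnotationPresent(ReadDataSource.class)) {
        // pick one of the two read nodes at random
        dataSource = method.getAnnotation(ReadDataSource.class).value() + "_" + (new Random().nextInt(2) + 1);
    } else if (method.isAnnotationPresent(WriteDataSource.class)) {
        dataSource = method.getAnnotation(WriteDataSource.class).value();
    }
    DynamicDataSource.setDB(dataSource);
    try {
        return pjp.proceed();
    } finally {
        // clearing in finally covers both normal returns and exceptions
        DynamicDataSource.clearDB();
    }
}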
8 Testing. The mapper code is not shown; the interesting parts are the service and the controller
service
@Service
@Transactional
public class WmIpInfoServiceImpl implements WmIpInfoService {
@Autowired
public WmIpInfoMapper wmIpInfoMapper;
@Override
@ReadDataSource
public WmIpInfo findOneById(String id) {
//wmIpInfoMapper.selectByPrimaryKey(id);
return wmIpInfoMapper.selectByPrimaryKey(id);
}
@Override
@WriteDataSource
public int insert(WmIpInfo wmIpInfo) {
int result = wmIpInfoMapper.insert(wmIpInfo);
return result;
}
}
controller
@RestController
public class IndexController {
@Autowired
public WmIpInfoService wmIpInfoService;
@GetMapping("/index/{id}")
public WmIpInfo index(@PathVariable(value = "id") String id){
WmIpInfo wmIpInfo = new WmIpInfo();
wmIpInfo.setId(UUID.randomUUID().toString());
wmIpInfoService.insert(wmIpInfo);
wmIpInfoService.findOneById(id);
return null;
}
}
Run the Spring Boot application and open http://localhost:9022/index/123456 in a browser.
Check the logs: the insert should log a switch to the write_ds data source and the query a switch to read_ds_1 or read_ds_2.

Approach 2: read/write splitting via middleware (MyCat: mainly installation, usage and caveats)
3-1 Download it from http://dl.mycat.io/
3-2 Unpack it and configure MYCAT_HOME;
3-3 Edit the schema file: vim conf/schema.xml
<?xml version="1.0"?> <!DOCTYPE mycat:schema SYSTEM "schema.dtd"> <mycat:schema xmlns:mycat="http://io.mycat/"> <schema name="worldmap" checkSQLschema="false" sqlMaxLimit="100" dataNode="worldmap_node"></schema> <dataNode name="worldmap_node" dataHost="worldmap_host" database="worldmap" /> <!-- database:數據庫名稱 --> <dataHost name="worldmap_host" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="2" slaveThreshold="100"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1" url="192.168.203.135:3306" user="root" password="123456"><!--讀寫分離模式,寫庫:192.168.203.135,讀庫192.168.203.139--> <readHost host="hostR1" url="192.168.203.139:3306" user="root" password="123456" /> </writeHost> <writeHost host="hostM2" url="192.168.203.135:3306" user="root" password="123456"> <!--主從切換模式,當hostM1宕機,讀寫操作在hostM2服務器數據庫執行--> </dataHost> </mycat:schema>
Configuration notes:
name: uniquely identifies the dataHost tag so that it can be referenced by higher-level tags.
maxCon: maximum number of connections
minCon: minimum number of connections
balance
1. balance=0: read/write splitting is disabled; all reads are sent to the currently available writeHost.
2. balance=1: all readHosts and standby writeHosts take part in load balancing of SELECT statements. Put simply, in a dual-master/dual-slave setup (M1->S1, M2->S2, with M1 and M2 acting as each other's standby), under normal conditions M2, S1 and S2 all take part in SELECT load balancing.
3. balance=2: all reads are distributed randomly across the readHosts and writeHosts.
writeType: how writes are distributed; there are currently three values:
1. writeType="0": all writes are sent to the first configured writeHost.
2. writeType="1": writes are sent randomly to any of the configured writeHosts.
3. writeType="2": no writes are executed.
switchType
1. switchType=-1: no automatic switching.
2. switchType=1: the default value, automatic switching.
3. switchType=2: switching is decided by the state of MySQL master-slave replication.
dbType: the database type, e.g. mysql, postgresql, mongodb, oracle, spark, etc.
heartbeat: the statement used for heartbeat checks against the backend database. For example, MySQL can use select user(), Oracle can use select 1 from dual, and so on.
The dataHost tag also has a connectionInitSql attribute: initialization SQL that must run when using an Oracle database goes here, for example: alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss'
When switchType=2, the statement used for master-slave switching must be: show slave status
writeHost, readHost: both tags describe a backend database instance for MyCat and are used to instantiate the backend connection pools. The only difference is that writeHost defines a write instance and readHost a read instance.
A dataHost may contain multiple writeHosts and readHosts, but if the database behind a writeHost goes down, all readHosts bound to that writeHost become unavailable as well.
On the other hand, MyCat detects the failed writeHost automatically and switches over to the standby writeHost.
3-4 Edit the server file: vim conf/server.xml
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
    </system>
    <user name="root">
        <property name="password">123456</property>
        <!-- must match the schema defined in schema.xml -->
        <property name="schemas">worldmap</property>
        <!-- readOnly is the permission the application has on the middleware's logical schema:
             true = read only, false = read and write (the default) -->
        <property name="readOnly">false</property>
    </user>
</mycat:server>
3-5 Start MyCat with mycat start
Check the startup log at logs/wrapper.log; after a successful start a mycat.log file is written as well, whereas a failed start produces no such log.
3-6: To developers, MyCat behaves like just another database server (default port 8066). Instead of connecting to the databases directly, the application connects to the middleware, which parses each statement and routes it to the right backend database (essentially based on the select, insert, update and delete keywords), as in the sketch below.
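From the application's point of view only the JDBC URL changes: it now points at MyCat's port 8066 and the logical schema from schema.xml. The following is a rough smoke-test sketch of my own; the host name mycat-host is a placeholder, and wm_ip_info is only my assumption for the table behind WmIpInfoMapper:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MycatSmokeTest {
    public static void main(String[] args) throws Exception {
        // "mycat-host" is a placeholder for the server running MyCat;
        // port, user, password and schema come from server.xml / schema.xml above.
        String url = "jdbc:mysql://mycat-host:8066/worldmap";
        try (Connection conn = DriverManager.getConnection(url, "root", "123456");
             Statement stmt = conn.createStatement();
             // "wm_ip_info" is an assumed table name; adjust to your schema
             ResultSet rs = stmt.executeQuery("select count(*) from wm_ip_info")) {
            while (rs.next()) {
                System.out.println("row count: " + rs.getLong(1));
            }
        }
    }
}

Which physical MySQL node actually served the statement can then be confirmed in logs/mycat.log.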
3-7: Test read/write splitting
Reads are routed to 192.168.203.139

Writes are routed to 192.168.203.135
When the master goes down, both reads and writes are served by 192.168.203.139


3-8: Caveats
Most frameworks use transactions, and statements that run inside a transaction all go to the master server, so if every operation is transactional the splitting has no effect. Be selective when configuring transactions, for example only put operations that insert, update or delete inside a transaction, as in the sketch below.
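As a hedged illustration (reusing the service and mapper types from the first approach purely for the example), transaction demarcation could be narrowed to the write methods only, leaving queries outside any transaction so the middleware is free to send them to the read node:

@Service
public class WmIpInfoServiceImpl implements WmIpInfoService {
    @Autowired
    private WmIpInfoMapper wmIpInfoMapper;

    // No @Transactional here: plain reads stay outside a transaction,
    // so MyCat can route them to the slave.
    @Override
    public WmIpInfo findOneById(String id) {
        return wmIpInfoMapper.selectByPrimaryKey(id);
    }

    // Only the write path is transactional; it always goes to the master.
    @Override
    @Transactional
    public int insert(WmIpInfo wmIpInfo) {
        return wmIpInfoMapper.insert(wmIpInfo);
    }
}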
