[Sharding] Sharding-JDBC in Practice: Getting Started with Database and Table Sharding


I. Preparation

1. Create three databases: db0, db1, and db2.

2. In each database, create two order tables: t_order_0 and t_order_1 (the script below uses t_order_x as a placeholder for both suffixes).

DROP TABLE IF EXISTS `t_order_x`;
CREATE TABLE `t_order_x` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `user_id` bigint NOT NULL,
  `order_id` bigint NOT NULL,
  `order_no` varchar(30) NOT NULL,
  `isactive` tinyint NOT NULL DEFAULT '1',
  `inserttime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `updatetime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
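If the databases do not exist yet, a minimal sketch of creating them before running the table script above with x = 0 and x = 1 in each (the charset mirrors the table definition):

CREATE DATABASE IF NOT EXISTS db0 DEFAULT CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS db1 DEFAULT CHARACTER SET utf8;
CREATE DATABASE IF NOT EXISTS db2 DEFAULT CHARACTER SET utf8;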

II. Sharding Configuration

The data sources can be backed by any connection pool; this example uses Druid.

1. Add the dependency:

Reference the Maven artifact (2.0.1 was the latest release at the time of writing):

<properties>
    <sharding-jdbc.version>2.0.1</sharding-jdbc.version>
</properties>

<dependency>
    <groupId>io.shardingjdbc</groupId>
    <artifactId>sharding-jdbc-core</artifactId>
    <version>${sharding-jdbc.version}</version>
</dependency>
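Since Druid provides the connection pool, its artifact must also be on the classpath; a sketch, with the version number an assumption:

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.10</version>
</dependency>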

2. Configure the DataSource:

    @Bean(name = "shardingDataSource", destroyMethod = "close")
    @Qualifier("shardingDataSource")
    public DataSource getShardingDataSource() {
        // Configure the actual data sources
        Map<String, DataSource> dataSourceMap = new HashMap<>(3);

        // First data source
        DruidDataSource dataSource1 = createDefaultDruidDataSource();
        dataSource1.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource1.setUrl("jdbc:mysql://localhost:3306/db0");
        dataSource1.setUsername("root");
        dataSource1.setPassword("root");
        dataSourceMap.put("db0", dataSource1);

        // Second data source
        DruidDataSource dataSource2 = createDefaultDruidDataSource();
        dataSource2.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource2.setUrl("jdbc:mysql://localhost:3306/db1");
        dataSource2.setUsername("root");
        dataSource2.setPassword("root");
        dataSource2.setName("db1-0001"); // optional: instance name shown in the Druid monitor
        dataSourceMap.put("db1", dataSource2);

        // Third data source
        DruidDataSource dataSource3 = createDefaultDruidDataSource();
        dataSource3.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource3.setUrl("jdbc:mysql://localhost:3306/db2");
        dataSource3.setUsername("root");
        dataSource3.setPassword("root");
        dataSourceMap.put("db2", dataSource3);


        // Rule configuration for the t_order table
        TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration();
        orderTableRuleConfig.setLogicTable("t_order");
        orderTableRuleConfig.setActualDataNodes("db${0..2}.t_order_${0..1}");
        // Equivalent enumerated form:
        // orderTableRuleConfig.setActualDataNodes("db0.t_order_0,db0.t_order_1,db1.t_order_0,db1.t_order_1,db2.t_order_0,db2.t_order_1");

        // Database sharding strategy (inline Groovy expression)
        orderTableRuleConfig.setDatabaseShardingStrategyConfig(new InlineShardingStrategyConfiguration("user_id", "db${user_id % 3}"));

        // Table sharding strategy (inline Groovy expression)
        orderTableRuleConfig.setTableShardingStrategyConfig(new InlineShardingStrategyConfiguration("order_id", "t_order_${order_id % 2}"));

        // Assemble the sharding rule
        ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
        shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfig);

        // Rule configuration for the order items table goes here (see section IV)...

        // Build the sharding data source
        DataSource dataSource = null;
        try {
            dataSource = ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, new ConcurrentHashMap<>(), new Properties());
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return dataSource;
    }
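The createDefaultDruidDataSource() helper is assumed to build a DruidDataSource with shared pool settings; a minimal sketch, with every value an illustrative assumption:

    private DruidDataSource createDefaultDruidDataSource() {
        DruidDataSource ds = new DruidDataSource();
        // Hypothetical pool defaults; tune for your workload
        ds.setInitialSize(5);
        ds.setMinIdle(5);
        ds.setMaxActive(20);
        ds.setValidationQuery("SELECT 1");
        ds.setTestWhileIdle(true);
        return ds;
    }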

Druid's built-in monitor can be used to watch the databases.
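To expose that monitor, enable Druid's stat filter on each data source (e.g. dataSource.setFilters("stat")) and register the StatViewServlet; a sketch, assuming a servlet-based Spring Boot application:

    @Bean
    public ServletRegistrationBean druidStatViewServlet() {
        // Serves the Druid monitoring console at /druid/*
        return new ServletRegistrationBean(new StatViewServlet(), "/druid/*");
    }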

III. Verifying with Examples

1. Inserting data

@Slf4j
@RestController
@RequestMapping("/order")
public class OrderController {
    @Autowired
    private OrderMapper orderMapper;

    @RequestMapping("/add")
    public void addOrder() {
        OrderEntity entity10 = new OrderEntity();
        entity10.setOrderId(10000L);
        entity10.setOrderNo("No1000000");
        entity10.setUserId(102333001L);
        orderMapper.insertSelective(entity10);
        OrderEntity entity11 = new OrderEntity();
        entity11.setOrderId(10001L);
        entity11.setOrderNo("No1000000");
        entity11.setUserId(102333000L);
        orderMapper.insertSelective(entity11);
    }
}

According to the configured sharding rules:

  • DB routing rule, user_id % 3:

      102333001 % 3 = 1
      102333000 % 3 = 0

  • Table routing rule, order_id % 2:

      10000 % 2 = 0
      10001 % 2 = 1

So the row with user_id=102333001 and order_id=10000 lands in db1.t_order_0,
and the row with user_id=102333000 and order_id=10001 lands in db0.t_order_1, as the sketch below reproduces.
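The placement can be reproduced with plain modulo arithmetic; a minimal sketch:

    long userId = 102333001L;
    long orderId = 10000L;
    String db = "db" + (userId % 3);            // "db1"
    String table = "t_order_" + (orderId % 2);  // "t_order_0"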

2. Queries that do not include a sharding column

    /** Broadcast: the query traverses every database and table */
    @RequestMapping("get")
    public void getOrder() {
        List<Integer> ids = new ArrayList<>();
        ids.add(4);
        List<OrderEntity> orderEntities = orderMapper.selectByPrimaryIds(ids);

        log.info(JSON.toJSONString(orderEntities));
    }

The Druid SQL monitor shows that the query is broadcast to every table in db0, db1, and db2.

3. Batch inserts are not supported

Batch inserts whose rows would route to different shards are not supported, i.e. SQL of the form: insert into t_order values(x,x,x,x),(x,x,x,x),(x,x,x,x). See the workaround sketch below.
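A simple workaround sketch: insert the rows one statement at a time so that each insert is routed on its own (orders here is a hypothetical List<OrderEntity>):

    for (OrderEntity order : orders) {
        // Each single-row insert routes independently by user_id and order_id
        orderMapper.insertSelective(order);
    }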

4. Modify sharding columns with caution

If a sharding column is modified, user_id or order_id in this example, the routing rules can leave data that exists but cannot be queried: the row stays in the shard chosen by its old values, while lookups by the new values route elsewhere.

    @RequestMapping("/upd")
    public void update() {
        OrderEntity orderWhere = new OrderEntity();
        orderWhere.setOrderId(10001L);
        orderWhere.setUserId(102333001L);
        orderWhere.setId(4L);

        OrderEntity orderSet = new OrderEntity();
        orderSet.setOrderId(10002L);
        orderSet.setOrderNo("updated order no");

        orderMapper.updateByPredicate(orderSet, orderWhere);

        /** Not found: changing orderId breaks routing, so the query by the new value misses the row */
        OrderEntity predicate = new OrderEntity();
        predicate.setOrderId(10002L);
        OrderEntity entity = orderMapper.selectSingleByPredicate(predicate);
        log.info("after update orderEntity:"+JSON.toJSONString(entity));
    }
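A hedged sketch of a safer pattern: rather than updating a sharding column in place, delete the old row and insert a copy carrying the new value so it routes to the correct shard (deleteByPredicate is a hypothetical mapper method):

    // Delete using the old routing values, then re-insert with the new ones
    OrderEntity oldKey = new OrderEntity();
    oldKey.setOrderId(10001L);
    oldKey.setUserId(102333000L);
    orderMapper.deleteByPredicate(oldKey);

    OrderEntity moved = new OrderEntity();
    moved.setOrderId(10002L);
    moved.setUserId(102333000L);
    moved.setOrderNo("updated order no");
    orderMapper.insertSelective(moved);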

IV. Creating Tables through Sharding

So far we have configured and verified a layout of 3 databases with 2 order tables each.

If the number of databases and tables is large, merely creating the tables is a tedious chore. Recall that when a query does not include a sharding column, Sharding-JDBC automatically routes it to every table in every database. That suggests an idea: if we configure the routing rule for the table to be created and execute a single CREATE statement through Sharding-JDBC, will it be routed to every database automatically, replacing the manual work of creating tables one database at a time? Let's verify this idea, using the t_order_items table as an example.

1. Configure the rules for t_order_items

Below the t_order rules configured above, add the rule configuration for t_order_items:

        // Rule configuration for the t_order_items table
        TableRuleConfiguration orderItemTableRuleConfig = new TableRuleConfiguration();
        orderItemTableRuleConfig.setLogicTable("t_order_items");
        orderItemTableRuleConfig.setActualDataNodes("db${0..2}.t_order_items_${0..1}");

        // Database sharding strategy
        orderItemTableRuleConfig.setDatabaseShardingStrategyConfig(new InlineShardingStrategyConfiguration("order_id", "db${order_id % 3}"));

        // Table sharding strategy
        orderItemTableRuleConfig.setTableShardingStrategyConfig(new InlineShardingStrategyConfiguration("order_id", "t_order_items_${order_id % 2}"));

        shardingRuleConfig.getTableRuleConfigs().add(orderItemTableRuleConfig);

2. The CREATE TABLE SQL for t_order_items

    <update id="createTItemsIfNotExistsTable">
        CREATE TABLE IF NOT EXISTS `t_order_items` (
          `id` bigint NOT NULL AUTO_INCREMENT,
          `order_id` bigint NOT NULL,
          `unique_no` varchar(32) NOT NULL,
          `quantity` int NOT NULL DEFAULT '1',
          `is_active` tinyint NOT NULL DEFAULT 1,
          `inserttime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
          `updatetime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    </update>

3. The OrderItemsMapper method

Integer createTItemsIfNotExistsTable();

4. Execute the method

orderItemsMapper.createTItemsIfNotExistsTable();
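A sketch of running it once at application startup; the TableInitializer class and the CommandLineRunner wiring are assumptions for illustration:

@Component
public class TableInitializer implements CommandLineRunner {

    @Autowired
    private OrderItemsMapper orderItemsMapper;

    @Override
    public void run(String... args) {
        // One logical CREATE TABLE; Sharding-JDBC broadcasts it to db0..db2,
        // creating t_order_items_0 and t_order_items_1 in each database
        orderItemsMapper.createTItemsIfNotExistsTable();
    }
}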

Checking db0, db1, and db2 confirms the idea: the tables were created successfully in every database.

 

Appendix

If the t_order_items rule is not configured, executing the CREATE TABLE SQL fails with:

org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: io.shardingjdbc.core.exception.ShardingJdbcException: Cannot find table rule and default data source with logic table: 't_order_items'
### The error may involve defaultParameterMap
### The error occurred while setting parameters
### SQL: CREATE TABLE IF NOT EXISTS `t_order_items` ( `id` bigint NOT NULL AUTO_INCREMENT, `order_id` bigint NOT NULL, `unique_no` varchar(32) NOT NULL, `quantity` int NOT NULL DEFAULT '1', `is_active` tinyint NOT NULL DEFAULT 1, `inserttime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP, `updatetime` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
### Cause: io.shardingjdbc.core.exception.ShardingJdbcException: Cannot find table rule and default data source with logic table: 't_order_items'

