How to optimize very large (deep-offset) pagination in MySQL
1) At the database level, which is where we usually focus (even though the gains there are limited). A query like
select * from table where age > 20 limit 1000000,10
still leaves room for optimization: it has to read 1,000,000 rows, discard almost all of them, and keep only 10, so it is naturally slow. We can rewrite it as
select * from table where id in (select id from table where age > 20 limit 1000000,10)
The inner query still walks through a million entries, but thanks to a covering index every column it needs is available in the index itself, so it is much faster. (Note that MySQL does not allow LIMIT directly inside an IN subquery; in practice the subquery is wrapped in a derived table and joined back, as shown in point 3 below.) Additionally, if the IDs are continuous (a gap-free auto-increment id), we can also write
select * from table where id > 1000000 limit 10
which performs well too. There are many possible optimizations, but the core idea is always the same: reduce the amount of data loaded.
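As a concrete variant of "reduce the amount of data loaded", here is a minimal keyset (seek) pagination sketch. It assumes the placeholder table `table` from above, with an auto-increment primary key `id` and an index on `age`, and that the client passes back the last `id` of the previous page instead of a numeric offset:

-- Keyset pagination: remember the last id returned on the previous page
-- (e.g. 1000000) and seek past it, instead of counting and discarding an offset.
-- Only the 10 rows that are actually returned need to be read.
select *
from table
where age > 20
  and id > 1000000        -- last id seen on the previous page (assumed parameter)
order by id
limit 10;

The trade-off is that the client can only move forward page by page (or along a known path), which ties in with point 2 below.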
2) Reduce such requests at the requirements level. Mainly, avoid building this kind of feature in the first place (jumping directly to some specific page millions of pages deep; instead only allow page-by-page browsing or navigation along a predefined path, which keeps access patterns predictable and cacheable), and avoid exposing continuous IDs that can be enumerated and abused in malicious crawling.
3) Deferred-join pagination optimization: join the table back onto a paged subquery that selects only ids.
select * from sicimike where name like 'c6%' order by id limit 30000, 5;
can be rewritten as
select a.* from sicimike a inner join (select id from sicimike where name like 'c6%' order by id limit 30000, 5) b on a.id = b.id;
Result comparison:
mysql> select a.* from sicimike a inner join (select id from sicimike where name like 'c6%' order by id limit 30000, 5) b on a.id = b.id;
+---------+------------+-----+---------------------+
| id      | name       | age | add_time            |
+---------+------------+-----+---------------------+
| 7466563 | c6db537243 |  59 | 2020-02-14 13:34:01 |
| 7466920 | c62dec7921 |  79 | 2020-02-14 13:34:01 |
| 7467162 | c610b89b31 |  71 | 2020-02-14 13:34:01 |
| 7467590 | c67bbd4bfd |  10 | 2020-02-14 13:34:01 |
| 7467825 | c6db24865b |  51 | 2020-02-14 13:34:01 |
+---------+------------+-----+---------------------+
5 rows in set (0.05 sec)

mysql> select * from sicimike where name like 'c6%' order by id limit 30000, 5;
+---------+------------+-----+---------------------+
| id      | name       | age | add_time            |
+---------+------------+-----+---------------------+
| 7466563 | c6db537243 |  59 | 2020-02-14 13:34:01 |
| 7466920 | c62dec7921 |  79 | 2020-02-14 13:34:01 |
| 7467162 | c610b89b31 |  71 | 2020-02-14 13:34:01 |
| 7467590 | c67bbd4bfd |  10 | 2020-02-14 13:34:01 |
| 7467825 | c6db24865b |  51 | 2020-02-14 13:34:01 |
+---------+------------+-----+---------------------+
5 rows in set (2.26 sec)
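For the deferred join to pay off, the derived table should be answerable from a secondary index. A minimal sketch, assuming `sicimike` does not already have an index on `name` (the index name `idx_name` is made up here; InnoDB secondary indexes implicitly carry the primary key, so an index on `name` also covers `id`):

-- Hypothetical supporting index: the subquery (filter on name, order by id,
-- skip 30000 entries) then works on narrow (name, id) index entries only,
-- and just the final 5 rows are looked up in the base table by primary key.
alter table sicimike add index idx_name (name);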
