Check how the connection count has changed
When CPU utilization hits 100%, the first suspicion is that active connections spiked during a business peak and the resources provisioned for the database could not keep up. We therefore need to check whether there were far more active connections than usual when the problem occurred. For RDS for PG, the history of connection counts is visible in the console's monitoring page, while the current number of active connections can be obtained by connecting to the database and running the following query:
select count( * ) from pg_stat_activity where state not like '%idle';
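To compare the current situation with the normal baseline in a bit more detail, a minimal sketch like the one below (it uses only the standard pg_stat_activity view) breaks the connections down by state:
-- Break current connections down by state (active, idle, idle in transaction, ...)
-- so the numbers can be compared against the usual baseline
select state, count(*)
from pg_stat_activity
group by state
order by count(*) desc;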
Track down the slow SQL
If the change in active connections is within the normal range, the problem is very likely caused by poorly performing SQL being executed in large volume at that time. RDS keeps a slow SQL log, so we can use it to locate the most time-consuming SQL for further analysis. However, when the problem occurs the whole system usually grinds to a halt and every SQL slows down, so a huge number of slow statements get logged and it is not easy to pick out the real culprit. Below are several ways to track down the slow SQL while the problem is still happening.
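As a side note, which statements end up in the slow SQL log is governed by the standard PostgreSQL parameter log_min_duration_statement; on RDS it is normally adjusted through the console rather than in SQL. A minimal sketch for inspecting it from a session (the ALTER SYSTEM line is only applicable on instances where you have that privilege):
-- Show the current threshold; statements running longer than this get logged
show log_min_duration_statement;
-- On a self-managed instance the threshold could be changed like this
-- (on RDS this is normally done via the console parameter settings):
-- alter system set log_min_duration_statement = '1s';
-- select pg_reload_conf();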
1. The first method is to locate the slow SQL with the pg_stat_statements extension, as follows.
1.1 If the extension has not been created yet, create it manually. We will rely on the cumulative counters kept by the extension and by the database itself (e.g. accumulated SQL execution time), and those counters include history. To make it easier to diagnose the current CPU problem, reset the counters first.
create extension pg_stat_statements;
select pg_stat_reset();
select pg_stat_statements_reset();
1.2 Wait a while (for example one minute) so that the counters can accumulate enough information.
1.3 Query for the most time-consuming SQL (usually the direct cause of the problem).
select * from pg_stat_statements order by total_time desc limit 5;
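Note that pg_stat_statements renamed its timing columns in PostgreSQL 13 (total_time became total_exec_time); on newer versions an equivalent, slightly narrower query would look like this sketch:
-- PostgreSQL 13+ column names; select only the columns needed for triage
select query, calls, total_exec_time, mean_exec_time, rows
from pg_stat_statements
order by total_exec_time desc
limit 5;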
1.4 Query for the SQL that reads the most buffers. These statements may be querying data that has no index, causing excessive buffer reads and, at the same time, heavy CPU consumption.
select * from pg_stat_statements order by shared_blks_hit+shared_blks_read desc limit 5;
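To make that output easier to interpret, you can also look at the number of blocks touched per call; a statement that touches a huge number of blocks every time it runs is typically scanning a large table. A minimal sketch:
-- Blocks touched per call: high values usually mean a sequential scan over a large table
select query, calls,
       shared_blks_hit + shared_blks_read as total_blks,
       (shared_blks_hit + shared_blks_read) / calls as blks_per_call
from pg_stat_statements
order by blks_per_call desc
limit 5;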
2. The second method is to use the pg_stat_activity view directly, with the query below, to see the SQL that has been running for a long time without finishing. Such SQL is also a prime suspect for maxing out the CPU.
select datname, usename, client_addr, application_name, state,
       backend_start, xact_start, xact_stay, query_start, query_stay,
       replace(query, chr(10), ' ') as query
from (select pgsa.datname as datname,
             pgsa.usename as usename,
             pgsa.client_addr as client_addr,
             pgsa.application_name as application_name,
             pgsa.state as state,
             pgsa.backend_start as backend_start,
             pgsa.xact_start as xact_start,
             extract(epoch from (now() - pgsa.xact_start)) as xact_stay,
             pgsa.query_start as query_start,
             extract(epoch from (now() - pgsa.query_start)) as query_stay,
             pgsa.query as query
      from pg_stat_activity as pgsa
      where pgsa.state != 'idle'
        and pgsa.state != 'idle in transaction'
        and pgsa.state != 'idle in transaction (aborted)') idleconnections
order by query_stay desc
limit 5;
 datname |   usename   |  client_addr  |     application_name     | state  |         backend_start         |          xact_start           |   xact_stay   |          query_start          |  query_stay   | query
---------+-------------+---------------+--------------------------+--------+-------------------------------+-------------------------------+---------------+-------------------------------+---------------+-------
 denali  | denaliadmin | 10.222.16.45  | pgAdmin III - Query Tool | active | 2018-02-26 22:36:05.603781+00 | 2018-02-26 22:36:13.054396+00 | 187614.245395 | 2018-02-26 22:36:13.054396+00 | 187614.245395 | select * from gen3_search_eu_17q2_20171115_epl.place_name where place_id not in (select place_id from gen3_search_eu_17q1_20170308_epl.place_name) and name not in (select name from gen3_search_eu_17q1_20170308_epl.place_name) and lang = 'ENG' limit 50
 denali  | denaliadmin | 10.222.16.45  | pgAdmin III - Query Tool | active | 2018-02-26 23:46:24.442846+00 | 2018-02-26 23:46:34.920261+00 | 183392.37953  | 2018-02-26 23:46:34.920261+00 | 183392.37953  | select * from gen3_search_eu_17q2_20171115_epl.place_name where place_id not in (select place_id from gen3_search_eu_17q1_20170308_epl.place_name) and name not in (select name from gen3_search_eu_17q1_20170308_epl.place_name) and lang = 'ENG' limit 50
 denali  | denaliadmin | 10.222.16.45  | pgAdmin III - Query Tool | active | 2018-02-27 01:19:53.83589+00  | 2018-02-27 01:20:01.519778+00 | 177785.780013 | 2018-02-27 01:20:01.519778+00 | 177785.780013 | select * from gen3_search_eu_17q2_20171115_epl.place_name where place_id not in (select place_id from gen3_search_eu_17q1_20170308_epl.place_name) and name not in (select name from gen3_search_eu_17q1_20170308_epl.place_name) limit 50
 denali  | denaliadmin | 10.222.16.45  | pgAdmin III - Query Tool | active | 2018-02-27 01:46:05.207888+00 | 2018-02-27 01:47:52.039779+00 | 176115.260012 | 2018-02-27 01:47:52.039779+00 | 176115.260012 | select a.place_id, a.metadata_dictionary_id, a.value, a.lang, b.place_id, b.metadata_dictionary_id, b.value, b.lang from gen3_search_eu_17q1_20170308_epl.place_address a inner join gen3_search_eu_17q2_20171115_epl.place_address b on a.place_id = b.place_id where a.metadata_dictionary_id = b.metadata_dictionary_id and a.lang = b.lang and a.value != b.value and b.place_id not in (select poi_id from gen3_search_eu_17q2_20171115_epl.place_embeded_ids) limit 100
 denali  | denaliadmin | 10.224.14.148 | pgAdmin III - Query Tool | active | 2018-02-27 05:05:39.903885+00 | 2018-02-27 05:05:48.827779+00 | 164238.472012 | 2018-02-27 05:05:48.827779+00 | 164238.472012 | select a.place_id, a.metadata_dictionary_id, a.value, a.lang, b.place_id, b.metadata_dictionary_id, b.value, b.lang from gen3_search_eu_17q1_20170308_epl.place_address a inner join gen3_search_eu_17q2_20171115_epl.place_address b on a.place_id = b.place_id where a.metadata_dictionary_id = b.metadata_dictionary_id and a.lang = b.lang and a.value != b.value and b.place_id not in (select poi_id from gen3_search_eu_17q2_20171115_epl.place_embeded_ids) limit 100
(5 rows)
3. The third method starts from the table scan (sequential scan) statistics, looking for tables that are missing indexes. If a table has no suitable index while most of its hot data is already in memory (for example 8 GB of memory with 6 GB of hot data), the database can only scan the whole table and has to sift through a large number of irrelevant rows that are already in memory, which burns a lot of CPU. In particular, for large tables (the query below uses a threshold of 100,000 live rows), a single table scan consumes a lot of CPU (essentially saturating one core), and with many concurrent connections (say, a hundred or more) all the CPUs get maxed out.
3.1 Use the following query to find the tables with the most table scans:
select * from pg_stat_user_tables where n_live_tup > 100000 and seq_scan > 0 order by seq_tup_read desc limit 10;
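If the full pg_stat_user_tables rows are too wide to read, a narrower sketch that keeps just the scan-related columns (all standard columns of that view) is usually enough:
-- For each large table: how many sequential scans, how many rows those scans read,
-- and how often indexes are used instead
select relname, n_live_tup, seq_scan, seq_tup_read, idx_scan
from pg_stat_user_tables
where n_live_tup > 100000 and seq_scan > 0
order by seq_tup_read desc
limit 10;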
3.2 Query the slow queries that are currently running against those tables (note the duration condition must be now() minus query_start, not the other way around):
select * from pg_stat_activity where query ilike '%<table name>%' and now() - query_start > interval '10 seconds';
3.3 You can also use the pg_stat_statements extension to locate the queries that touch these tables:
select * from pg_stat_statements where query ilike '%<table>%' order by shared_blks_hit + shared_blks_read desc limit 3;
Deal with the slow SQL
For the slow SQL found with the methods above, the first thing to do is usually to cancel or kill them so that the business can recover:
select pg_cancel_backend(pid) from pg_stat_activity where query like '%<query text>%' and pid != pg_backend_pid();
select pg_terminate_backend(pid) from pg_stat_activity where query like '%<query text>%' and pid != pg_backend_pid();
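When the offending query text is not known exactly, an alternative (a sketch; the 5-minute threshold is arbitrary and should be adapted to the workload) is to cancel everything that has been active for too long:
-- Cancel every query that has been running for more than 5 minutes,
-- excluding the current session
select pg_cancel_backend(pid)
from pg_stat_activity
where state = 'active'
  and now() - query_start > interval '5 minutes'
  and pid != pg_backend_pid();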
If these SQL statements really are required by the business, they need to be optimized. There are three standard moves:
1. Run ANALYZE <table> or VACUUM ANALYZE <table> on the tables involved in the query to refresh their statistics, so the planner can produce more accurate plans. Note: to avoid impacting the business, it is best to do this during off-peak hours.
2. Run EXPLAIN <query> or EXPLAIN (BUFFERS TRUE, ANALYZE TRUE, VERBOSE TRUE) <query> to look at the SQL's execution plan (note that the former does not actually execute the SQL, while the latter does execute it and returns detailed runtime information), and build indexes on the tables that show up as table scans (Seq Scan) in the plan.
3. Rewrite the SQL: removing unnecessary subqueries, rewriting UNION ALL, using join clauses to pin the join order, and so on are all ways to optimize the SQL further; they are not covered in depth here (a short sketch illustrating points 2 and 3 follows this list).
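As a small illustration of points 2 and 3, the sketch below takes the first long-running query from the pg_stat_activity output above: EXPLAIN (without ANALYZE, so nothing is executed) is used to inspect the plan, a possible index is added, and the NOT IN subqueries are rewritten as NOT EXISTS so the planner can use an anti-join. The index name is made up for the example, and the rewrite assumes place_id and name are never NULL (NOT IN and NOT EXISTS behave differently when the subquery returns NULLs).
-- 1) Look at the plan without executing the query
explain
select * from gen3_search_eu_17q2_20171115_epl.place_name
where place_id not in (select place_id from gen3_search_eu_17q1_20170308_epl.place_name)
  and name not in (select name from gen3_search_eu_17q1_20170308_epl.place_name)
  and lang = 'ENG'
limit 50;

-- 2) If the plan shows sequential scans filtered on lang, an index may help
--    (the index name here is illustrative)
create index place_name_lang_idx
    on gen3_search_eu_17q2_20171115_epl.place_name (lang);

-- 3) Rewrite NOT IN as NOT EXISTS (assumes place_id and name are NOT NULL)
select q2.*
from gen3_search_eu_17q2_20171115_epl.place_name q2
where not exists (select 1
                  from gen3_search_eu_17q1_20170308_epl.place_name q1
                  where q1.place_id = q2.place_id)
  and not exists (select 1
                  from gen3_search_eu_17q1_20170308_epl.place_name q1
                  where q1.name = q2.name)
  and q2.lang = 'ENG'
limit 50;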
https://www.centos.bz/2017/08/postgresql-cpu-100-slow-sql/