Original post: https://smilenicky.blog.csdn.net/article/details/94862797
My SQL tuning column: https://smilenicky.blog.csdn.net/article/category/8679315
Overall performance analysis
AWR, ASH, ADDM, AWRDD
Tools for overall analysis and tuning
- AWR: a report on the overall performance of the database
- ASH: a report showing which SQL the database's wait events correspond to
- ADDM: tuning recommendations produced by Oracle
- AWRDD: Oracle's report comparing performance across different periods
- AWRSQRPT: statistics and the execution plan for a single SQL
Matching tools to scenarios
Tools for statement-level analysis and tuning:
- explain plan for
- set autotrace on
- statistics_level=all
- fetch the plan directly by sql_id
- 10046 trace
- awrrpt.sql
Key points of the overall tools
- AWR focus: load profile, efficiency percentages, top 5 timed events, SQL statistics, segment statistics
- ASH focus: wait events tied to the SQL that produced them
- ADDM: the recommendations and their corresponding SQL
- AWRDD: comparing the load profile, wait events, and top SQL across periods
- AWRSQRPT: focus on the statistics and the execution plan
select output from table (dbms_workload_repository.awr_report_html(v_dbid,v_instance_number,v_min_snap_id,v_max_snap_id));
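The dbid, instance number, and snapshot range passed to awr_report_html can be looked up first; a minimal sketch (the values are system-specific):
select dbid from v$database;
select instance_number from v$instance;
select snap_id, end_interval_time from dba_hist_snapshot order by snap_id;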
Related views:
- v$session (what is happening right now)
- v$session_wait (what is being waited on right now)
- v$session_wait_history (each session's last 10 wait events)
- v$active_session_history (ASH samples in memory, nominally the last hour)
- wrh$_active_session_history (ASH data persisted into the AWR repository, nominally older than an hour)
- dba_hist_active_sess_history (view built on wrh$_active_session_history)
Execution plans
Ways to obtain an execution plan:
(1) explain plan for
Steps (see the sketch after this list):
- 1: explain plan for <your SQL>;
- 2: select * from table(dbms_xplan.display());
- Pros: the statement is not actually executed, so it is quick and convenient
- Cons: no runtime statistics (logical reads, recursive calls, physical reads); because nothing really ran, you cannot see how many rows were processed or how many times each table was accessed
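A minimal sketch using the scott demo schema (any SQL works in place of this query):
explain plan for select deptno, count(*) from emp where sal < 3000 group by deptno;
select * from table(dbms_xplan.display());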
(2) set autotrace on
Log in with sqlplus:
username/password@host:1521/service_name
Steps (see the sketch after this list):
- 1: set autotrace on
- 2: then run your SQL;
- Pros: shows runtime statistics (logical reads, recursive calls, physical reads)
- Cons: cannot show how many times a table was accessed, and nothing is visible until the SQL finishes
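A minimal sketch; traceonly is handy with large result sets, since it shows the plan and statistics without printing the rows:
set autotrace traceonly
select * from emp where deptno = 10;
set autotrace off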
(3) statistics_level=all
Steps (a combined sketch follows the pros and cons below):
- 1: alter session set statistics_level=all;
- 2: run your SQL here;
- 3: select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
If the SQL carries the hint /*+ gather_plan_statistics */, step 1 can be skipped: run steps 2 and 3 directly to get the plan
Reading the columns:
- Starts: how many times the step was executed
- E-Rows: the row count the optimizer estimated for the step
- A-Rows: the rows actually returned; comparing E-Rows with A-Rows shows exactly which step of the plan went wrong
- A-Time: the actual elapsed time of each step, which shows where the time was spent
- Buffers: the logical (consistent) reads each step actually performed
- Reads: physical reads
- OMem: the estimated private memory (PGA work area) needed for the operation to complete entirely in memory
- 1Mem: when the work area cannot satisfy the operation, part of the data spills to temporary disk space (if one write pass suffices it is a one-pass operation, otherwise multi-pass); this value is the estimated memory needed for a one-pass execution of the last run, derived from optimizer statistics and the previous execution's performance
- Used-Mem: the work-area memory actually used by the operation in the last execution; the value in parentheses is the number of disk passes (1 means one-pass, more than 1 means multi-pass; OPTIMAL is shown when no disk was used)
OMem and 1Mem are estimates of the memory the execution needs: OMem is the estimate for the optimal (all in memory) mode, and Used-Mem is what was actually consumed
Pros:
- Starts shows how many times each table was accessed;
- E-Rows and A-Rows give a clear view of the estimated versus the actual row counts
Cons:
- the result is only available after the statement has fully executed
- the row output cannot be suppressed; there is no equivalent of autotrace's traceonly option
- there is no dedicated statistics section, so the number of recursive calls and the exact physical-read figures are not broken out; logical reads are shown, though, and those are the main thing
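The three steps put together, as a sketch (any SQL can replace the query; alternatively embed /*+ gather_plan_statistics */ in it and skip the alter session):
alter session set statistics_level=all;
select deptno, count(*) from emp group by deptno;
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));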
(4) dbms_xplan.display_cursor
Steps
From the shared pool
-- the ${SQL_ID} parameter comes from the shared pool
select * from table(dbms_xplan.display_cursor('${SQL_ID}'));
The plan can also be fetched from the AWR views
select * from table(dbms_xplan.display_awr('${SQL_ID}'));
When a statement has multiple plans, each child cursor can be queried the same way (see the sketch below for finding the sql_id)
select * from table(dbms_xplan.display_cursor('${SQL_ID}', 0));
select * from table(dbms_xplan.display_cursor('${SQL_ID}', 1));
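If the sql_id itself is unknown, it can be found from the statement text first; a sketch (the filter is whatever fragment identifies your statement):
select sql_id, child_number, plan_hash_value, sql_text
from v$sql
where sql_text like '%some identifying fragment%';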
Pros:
- like explain plan, nothing needs to be executed; knowing the sql_id is enough
Cons:
- cannot tell how many rows were processed
- cannot tell how many times a table was accessed
- no runtime statistics (logical reads, recursive calls, physical reads)
(5) 10046 event tracing
Steps:
1: alter session set events '10046 trace name context forever, level 12'; -- enable tracing
2: run your statement
3: alter session set events '10046 trace name context off'; -- disable tracing
4: locate the trace file that was produced (see the sketch below)
5: tkprof <trace file> <output file> sys=no sort=prsela,exeela,fchela (formatting command)
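On 11g and later, step 4 is easiest with v$diag_info, which reports the current session's trace file directly; a sketch:
select value from v$diag_info where name = 'Default Trace File';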
Pros:
- shows the wait events tied to the SQL
- lists the function calls made within the SQL
- separates parse time from execution time
- can trace a whole package
- shows rows processed and logical reads produced
Cons:
- the procedure is fairly cumbersome
- cannot tell how many times a table was accessed
- the predicate section of the execution plan is not displayed clearly
(6) awrsqrpt.sql
Steps:
1: @?/rdbms/admin/awrsqrpt.sql
See my earlier post for details: https://smilenicky.blog.csdn.net/article/details/89429989
Reading classic execution plans
Plans fall into two types: standalone and combined
Combined plans are further divided into related and unrelated
[Standalone]
The standalone type is easy to read: steps execute from the farthest (most deeply indented, highest id) back to the nearest, e.g. id=3, then id=2, then id=1
Log in as scott and run the SQL; the example comes from the book《收獲,不止SQL優化》
select deptno, count(*)
from emp
where job = 'CLERK'
and sal < 3000
group by deptno
This gives the diagram for the standalone type (figure from the book):
[Combined, related]
(1) Combined related type (NL)
Here the use_nl hint is used
select /*+ ordered use_nl(dept) index(dept) */ *
from emp, dept
where emp.deptno = dept.deptno
and emp.comm is null
and dept.dname != 'SALES'
The figure comes from《收獲,不止SQL優化》: A-Rows at id=2 is 10 (the rows actually returned), and Starts at id=3 is 10, which shows the driven table is accessed once for every row the driving table emp returns; that is the hallmark of the related type
The related type does not always access the driven table once per driving-table row, though; note that the FILTER form is also a related type
(2) Combined related type (FILTER)
The NL form just shown accesses the driven table once per row returned by the driving table; in FILTER mode that no longer holds
Run the SQL, using the hint /*+ no_unnest */
select * from emp where not exists (select /*+ no_unnest */ 0 from dept
where dept.dname='SALES' and dept.deptno = emp.deptno) and not exists (select /*+ no_unnest */ 0 from bonus where bonus.ename = emp.ename)
ps: the figure is from《收獲,不止SQL優化》: A-Rows at id=2 is 8, while Starts at id=3 is 3, meaning that subquery ran 3 times, i.e. the driven table dept was accessed 3 times; this differs from the NL behaviour just described. Why?
Query the data: only 3 distinct values actually come back; the rest are duplicates
select dname, count(*) from emp, dept where emp.deptno = dept.deptno group by dname;
So it becomes clear: FILTER filters out the duplicates. In FILTER mode the driven table is accessed once per distinct value returned by the driving table's result set; FILTER can be seen as an improvement on NL
(3) Combined related type (UPDATE)
update emp e1 set sal = (select avg(sal) from emp e2 where e2.deptno = e1.deptno), comm = (select avg(comm) from emp e3)
The UPDATE form behaves like the FILTER form, so it is not repeated here
(4) Combined related type (CONNECT BY WITH FILTERING)
select /*+ connect_by_filtering */ level, ename, prior ename as manager
from emp start with mgr is null connect by prior empno = mgr
Diagram for the combined related type (figure from the book):
[Combined, unrelated]
Run this SQL
select ename from emp union all select dname from dept union all select '%' from dual
In PL/SQL Developer the plan can be viewed with the built-in tool; from a sqlplus client use the statistics_level=all method:
- 1: alter session set statistics_level=all;
- 2: run your SQL here;
- 3: select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
Diagram for the combined unrelated type (figure from the book):
[Tuning TIPS]
- If an unwanted hash join shows up, adding rownum to the subquery stops the optimizer from unnesting it, so the inner query is evaluated on its own first and no hash join is built
- An IS NULL predicate cannot use a normal B-tree index (NULL keys are not stored in it), and a leading-wildcard LIKE ('%...') cannot use an index range scan either
- WITH ... AS can cache an intermediate result set, which helps performance, e.g. (see also the sketch below):
select * from emp where deptno in (select deptno from dept where dname='SALES');
with tmp as (select deptno from dept where dname='SALES')
select * from emp where deptno in (select * from tmp);
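Whether the WITH subquery is actually materialized is the optimizer's choice; the widely used (though undocumented) materialize hint forces it, as a sketch:
with tmp as (select /*+ materialize */ deptno from dept where dname = 'SALES')
select * from emp where deptno in (select * from tmp);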
Virtual indexes
alter session set "_use_nosegment_indexes"=true;
create index index_name on table_name(col_name) nosegment;
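Since a nosegment index stores no data, it costs almost nothing to create; with the session parameter above set, explain plan will consider it, so an index idea can be tested before really building it. A sketch (index and column names are illustrative):
create index vidx_emp_sal on emp(sal) nosegment;
explain plan for select * from emp where sal < 1000;
select * from table(dbms_xplan.display());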
Materialized views
create materialized view [view name]
build immediate | deferred
refresh fast | complete | force
on demand | commit
start with [start time]
next [next time]
with primary key | rowid -- optional; the default is a primary-key materialized view
as [the defining SQL]
ok, what this syntax means:
build immediate | deferred (how the view is built):
- (1) immediate: the materialized view is populated with data at creation time;
- (2) deferred: the opposite; only the view is created and no data is generated
refresh fast | complete | force (how the view is refreshed):
- (1) fast: incremental refresh; every change since the last refresh is applied to the materialized view. Note that fast refresh requires a materialized view log
- (2) complete: full refresh, equivalent to re-running the whole defining query
- (3) force: the default; fast refresh when possible, complete refresh otherwise. It is usually better not to rely on the default
on demand | commit start with ... next ... (when the view refreshes):
- (1) demand: refreshed when the user wants, i.e. manually
- (2) commit: refreshed automatically as soon as a transaction commits
- (3) start with: the time of the first refresh, usually the current time
- (4) next: the refresh interval, usually written as "start time + interval" (see the sketch after this list)
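A concrete sketch tying the clauses together: a fast-refresh materialized view that refreshes once a day (the materialized view log is what makes fast refresh possible; the names are illustrative):
create materialized view log on emp with primary key;
create materialized view mv_emp
build immediate
refresh fast
start with sysdate next sysdate + 1
as select empno, ename, sal, deptno from emp;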
Oracle architecture
The Oracle architecture consists of an instance plus a set of database files; the instance contains the SGA (System Global Area), a shared memory region made up of the share pool (shared pool), the data buffer (buffer cache), and the log buffer
The shared pool in the SGA parses SQL and stores execution plans. When a SQL fetches data along its plan it first checks whether the data buffer already holds the blocks; only on a miss does it read from disk, and the blocks are then loaded into the data buffer so later reads come straight from memory. When a SQL modifies data, the changed buffers must eventually be written back to disk, and the log buffer exists to protect those changes in the meantime. That is the rough idea
The structural relationships are shown in the figure:
Finding SQL that does not use bind variables
create table t_bind_sql as select sql_text,module from v$sqlarea;
alter table t_bind_sql add sql_text_wo_constants varchar2(1000);
create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
  l_query long;
  l_char varchar2(10);
  l_in_quotes boolean default FALSE;
begin
  -- walk the statement character by character, collapsing every string literal to '#
  for i in 1 .. length( p_query )
  loop
    l_char := substr(p_query,i,1);
    if ( l_char = '''' and l_in_quotes )
    then
      l_in_quotes := FALSE;
    elsif ( l_char = '''' and NOT l_in_quotes )
    then
      l_in_quotes := TRUE;
      l_query := l_query || '''#';
    end if;
    if ( NOT l_in_quotes ) then
      l_query := l_query || l_char;
    end if;
  end loop;
  -- turn every digit into '@', then collapse runs of '@' and runs of spaces
  l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
  for i in 0 .. 8 loop
    l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
    l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
  end loop;
  return upper(l_query);
end;
/
update t_bind_sql set sql_text_wo_constants = remove_constants(sql_text);
commit;
select sql_text_wo_constants, module,count(*) CNT
from t_bind_sql
group by sql_text_wo_constants,module
having count(*) > 100
order by 3 desc;
SQL to report the database's load profile per snapshot interval:
select s.snap_date,
decode(s.redosize, null, '--shutdown or end--', s.currtime) "TIME",
to_char(round(s.seconds / 60, 2)) "elapse(min)",
round(t.db_time / 1000000 / 60, 2) "DB time(min)",
s.redosize redo,
round(s.redosize / s.seconds, 2) "redo/s",
s.logicalreads logical,
round(s.logicalreads / s.seconds, 2) "logical/s",
physicalreads physical,
round(s.physicalreads / s.seconds, 2) "phy/s",
s.executes execs,
round(s.executes / s.seconds, 2) "execs/s",
s.parse,
round(s.parse / s.seconds, 2) "parse/s",
s.hardparse,
round(s.hardparse / s.seconds, 2) "hardparse/s",
s.transactions trans,
round(s.transactions / s.seconds, 2) "trans/s"
from (select curr_redo - last_redo redosize,
curr_logicalreads - last_logicalreads logicalreads,
curr_physicalreads - last_physicalreads physicalreads,
curr_executes - last_executes executes,
curr_parse - last_parse parse,
curr_hardparse - last_hardparse hardparse,
curr_transactions - last_transactions transactions,
round(((currtime + 0) - (lasttime + 0)) * 3600 * 24, 0) seconds,
to_char(currtime, 'yy/mm/dd') snap_date,
to_char(currtime, 'hh24:mi') currtime,
currsnap_id endsnap_id,
to_char(startup_time, 'yyyy-mm-dd hh24:mi:ss') startup_time
from (select a.redo last_redo,
a.logicalreads last_logicalreads,
a.physicalreads last_physicalreads,
a.executes last_executes,
a.parse last_parse,
a.hardparse last_hardparse,
a.transactions last_transactions,
lead(a.redo, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_redo,
lead(a.logicalreads, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_logicalreads,
lead(a.physicalreads, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_physicalreads,
lead(a.executes, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_executes,
lead(a.parse, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_parse,
lead(a.hardparse, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_hardparse,
lead(a.transactions, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_transactions,
b.end_interval_time lasttime,
lead(b.end_interval_time, 1, null) over(partition by b.startup_time order by b.end_interval_time) currtime,
lead(b.snap_id, 1, null) over(partition by b.startup_time order by b.end_interval_time) currsnap_id,
b.startup_time
from (select snap_id,
dbid,
instance_number,
sum(decode(stat_name, 'redo size', value, 0)) redo,
sum(decode(stat_name,
'session logical reads',
value,
0)) logicalreads,
sum(decode(stat_name,
'physical reads',
value,
0)) physicalreads,
sum(decode(stat_name, 'execute count', value, 0)) executes,
sum(decode(stat_name,
'parse count (total)',
value,
0)) parse,
sum(decode(stat_name,
'parse count (hard)',
value,
0)) hardparse,
sum(decode(stat_name,
'user rollbacks',
value,
'user commits',
value,
0)) transactions
from dba_hist_sysstat
where stat_name in
('redo size',
'session logical reads',
'physical reads',
'execute count',
'user rollbacks',
'user commits',
'parse count (hard)',
'parse count (total)')
group by snap_id, dbid, instance_number) a,
dba_hist_snapshot b
where a.snap_id = b.snap_id
and a.dbid = b.dbid
and a.instance_number = b.instance_number
order by end_interval_time)) s,
(select lead(a.value, 1, null) over(partition by b.startup_time order by b.end_interval_time) - a.value db_time,
lead(b.snap_id, 1, null) over(partition by b.startup_time order by b.end_interval_time) endsnap_id
from dba_hist_sys_time_model a, dba_hist_snapshot b
where a.snap_id = b.snap_id
and a.dbid = b.dbid
and a.instance_number = b.instance_number
and a.stat_name = 'DB time') t
where s.endsnap_id = t.endsnap_id
order by s.snap_date, time desc;
The KEEP pool: pinning objects in the cache
SQL> alter system set db_keep_cache_size=100M;
System altered.
SQL> drop table t;
Table dropped.
SQL> create table t as select * from dba_objects;
Table created.
SQL> create index idx_object_id on t(object_id);
Index created.
SQL> select BUFFER_POOL from user_tables where TABLE_NAME='T';
BUFFER_
-------
DEFAULT
SQL> select BUFFER_POOL from user_indexes where INDEX_NAME='IDX_OBJECT_ID';
BUFFER_
-------
DEFAULT
SQL> alter index idx_object_id storage(buffer_pool keep);
Index altered.
SQL> --the following reads the whole index into the keep cache
SQL> select /*+index(t,idx_object_id)*/ count(*) from t where object_id is not null;
COUNT(*)
----------
111113
SQL> --the following reads all the data into the keep cache
SQL> alter table t storage(buffer_pool keep);
Table altered.
SQL> select /*+full(t)*/ count(*) from t;
COUNT(*)
----------
111113
SQL> --after the KEEP operations, BUFFER_POOL shows KEEP for both objects, confirming the KEEP succeeded
SQL> select BUFFER_POOL from user_tables where TABLE_NAME='T';
BUFFER_
-------
KEEP
SQL> select BUFFER_POOL from user_indexes where INDEX_NAME='IDX_OBJECT_ID';
BUFFER_
-------
KEEP
Find SIDs whose commit count exceeds a threshold:
select t1.sid, t1.value, t2.name
from v$sesstat t1, v$statname t2
where t2.name like '%user commits%'
and t1.STATISTIC# = t2.STATISTIC#
and value >= 10000
order by value desc;
Get the corresponding SQL_ID
select t.SID,
t.PROGRAM,
t.EVENT,
t.LOGON_TIME,
t.WAIT_TIME,
t.SECONDS_IN_WAIT,
t.SQL_ID,
t.PREV_SQL_ID
from v$session t
where sid in(132);
Get the SQL text for a given SQL_ID
select t.sql_id,
t.sql_text,
t.EXECUTIONS,
t.FIRST_LOAD_TIME,
t.LAST_LOAD_TIME
from v$sqlarea t
where sql_id in ('ccpn5c32bmfmf');
SQL to analyse the log-switch pattern by hour:
SELECT SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH:MI:SS'),1,5) Day,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'00',1,0)) H00,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'01',1,0)) H01,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'02',1,0)) H02,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'03',1,0)) H03,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'04',1,0)) H04,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'05',1,0)) H05,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'06',1,0)) H06,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'07',1,0)) H07,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'08',1,0)) H08,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'09',1,0)) H09,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'10',1,0)) H10,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'11',1,0)) H11,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'12',1,0)) H12,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'13',1,0)) H13,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'14',1,0)) H14,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'15',1,0)) H15,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'16',1,0)) H16,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'17',1,0)) H17,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'18',1,0)) H18,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'19',1,0)) H19,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'20',1,0)) H20,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'21',1,0)) H21,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'22',1,0)) H22 ,
SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'23',1,0)) H23,
COUNT(*) TOTAL
FROM v$log_history a
where first_time >= sysdate - 11
GROUP BY SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH:MI:SS'),1,5)
ORDER BY SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH:MI:SS'),1,5) DESC;
Tracking down a redo explosion
--1. Massive redo necessarily means massive block changes. Find the segments with the most block changes in the AWR views.
select * from (
SELECT to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI') snap_time,
dhsso.object_name,
SUM(db_block_changes_delta)
FROM dba_hist_seg_stat dhss,
dba_hist_seg_stat_obj dhsso,
dba_hist_snapshot dhs
WHERE dhs.snap_id = dhss. snap_id
AND dhs.instance_number = dhss. instance_number
AND dhss.obj# = dhsso. obj#
AND dhss.dataobj# = dhsso.dataobj#
AND begin_interval_time> sysdate - 60/1440
GROUP BY to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI'),
dhsso.object_name
order by 3 desc)
where rownum<=5;
--2. From the AWR views, find the SQL that touches the top objects from step 1.
SELECT to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI'),
dbms_lob.substr(sql_text, 4000, 1),
dhss.instance_number,
dhss.sql_id,
executions_delta,
rows_processed_delta
FROM dba_hist_sqlstat dhss, dba_hist_snapshot dhs, dba_hist_sqltext dhst
WHERE UPPER(dhst.sql_text) LIKE '%OBJECT_NAME_IN_UPPER_CASE%'
AND dhss.snap_id = dhs.snap_id
AND dhss.instance_Number = dhs.instance_number
AND dhss.sql_id = dhst.sql_id;
--3. From the ASH views, find the sessions, modules and machines that ran those SQL.
select * from dba_hist_active_sess_history WHERE sql_id = '';
select * from v$active_session_history where sql_Id = '';
--4. Check dba_source to see whether any stored procedure contains the SQL
--The following operations generate a great deal of redo; they can be tracked with the method above.
drop table test_redo purge;
create table test_redo as select * from dba_objects;
insert into test_redo select * from test_redo;
insert into test_redo select * from test_redo;
insert into test_redo select * from test_redo;
insert into test_redo select * from test_redo;
insert into test_redo select * from test_redo;
exec dbms_workload_repository.create_snapshot();
--After the heavy INSERTs against test_redo, trace as follows to see which table was changed and by which statements.
SQL> select * from (
  2  SELECT to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI') snap_time, dhsso.object_name, SUM(db_block_changes_delta)
  3  FROM dba_hist_seg_stat dhss, dba_hist_seg_stat_obj dhsso, dba_hist_snapshot dhs
  4  WHERE dhs.snap_id = dhss.snap_id
  5  AND dhs.instance_number = dhss.instance_number AND dhss.obj# = dhsso.obj# AND dhss.dataobj# = dhsso.dataobj#
  6  AND begin_interval_time > sysdate - 60/1440
  7  GROUP BY to_char(begin_interval_time, 'YYYY_MM_DD HH24:MI'), dhsso.object_name order by 3 desc)
  8  where rownum<=3;
SQL> SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'), dbms_lob.substr(sql_text,4000,1), dhss.sql_id, executions_delta, rows_processed_delta
  2  FROM dba_hist_sqlstat dhss, dba_hist_snapshot dhs, dba_hist_sqltext dhst
  3  WHERE UPPER(dhst.sql_text) LIKE '%TEST_REDO%' AND dhss.snap_id = dhs.snap_id
  4  AND dhss.instance_Number = dhs.instance_number AND dhss.sql_id = dhst.sql_id;
Oracle logical structure
A database (Database) is composed of tablespaces (Tablespace), a tablespace of segments (Segment), a segment of extents (Extent), and an extent of blocks (Block)
The larger the block, the more rows it holds for the same volume of data, so fewer blocks are needed and fewer logical reads (consistent gets) are incurred
ps: in practice bigger blocks are not always better: the larger the block, the more likely separately accessed rows land in the same block, which easily produces hot-block contention
Query overall tablespace usage:
SELECT A.TABLESPACE_NAME "Tablespace",
A.TOTAL_SPACE "Total(G)",
NVL(B.FREE_SPACE, 0) "Free(G)",
A.TOTAL_SPACE - NVL(B.FREE_SPACE, 0) "Used(G)",
CASE
WHEN A.TOTAL_SPACE = 0 THEN
0
ELSE
trunc(NVL(B.FREE_SPACE, 0) / A.TOTAL_SPACE * 100, 2)
END "Free%" --avoid dividing by zero
FROM (SELECT TABLESPACE_NAME,
trunc(SUM(BYTES) / 1024 / 1024 / 1024, 2) TOTAL_SPACE
FROM DBA_DATA_FILES
GROUP BY TABLESPACE_NAME) A,
(SELECT TABLESPACE_NAME,
trunc(SUM(BYTES / 1024 / 1024 / 1024), 2) FREE_SPACE
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME) B
WHERE A.TABLESPACE_NAME = B.TABLESPACE_NAME(+)
ORDER BY 5;
Oracle table design and tuning
Partition types: range, list, hash, and composite
- Range partitioning
Keyword: partition by range (a pruning sketch follows the insert script below)
create table range_part_tab (seq number,deal_date date,unit_code number,remark varchar2(100))
partition by range (deal_date)
(
partition p1 values less than (TO_DATE('2018-11-01','YYYY-MM-DD')),
partition p2 values less than (TO_DATE('2018-12-02','YYYY-MM-DD')),
partition p3 values less than (TO_DATE('2019-01-01','YYYY-MM-DD')),
partition p4 values less than (TO_DATE('2019-02-01','YYYY-MM-DD')),
partition p5 values less than (TO_DATE('2019-03-01','YYYY-MM-DD')),
partition p6 values less than (TO_DATE('2019-04-01','YYYY-MM-DD')),
partition p7 values less than (TO_DATE('2019-05-01','YYYY-MM-DD')),
partition p8 values less than (TO_DATE('2019-06-01','YYYY-MM-DD')),
partition p9 values less than (TO_DATE('2019-07-01','YYYY-MM-DD')),
partition p10 values less than (TO_DATE('2019-08-01','YYYY-MM-DD'))
);
insert into range_part_tab
(seq, deal_date, unit_code, remark)
select rownum,
to_date(to_char(sysdate-365, 'J') +
trunc(DBMS_RANDOM.value(0, 365)),'J'),
ceil(dbms_random.value(210,220)),
rpad('*', 1, '*')
from dual
connect by rownum <= 1000;
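With this table in place, partition pruning can be observed: a predicate on the partition key shows Pstart/Pstop in the plan, and a single partition can also be queried directly. A sketch:
explain plan for select count(*) from range_part_tab where deal_date < to_date('2018-11-01','YYYY-MM-DD');
select * from table(dbms_xplan.display());
select count(*) from range_part_tab partition (p1);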
- List partitioning
create table list_part_tab (seq number,deal_date date,unit_code number,remark varchar2(100))
partition by list (unit_code)
(
partition p1 values (211),
partition p2 values (212),
partition p3 values (213),
partition p4 values (214),
partition p5 values (215),
partition p6 values (216),
partition p7 values (217),
partition p8 values (218),
partition p9 values (219),
partition p10 values (220),
partition p0 values (DEFAULT)
);
insert into list_part_tab
(seq, deal_date, unit_code, remark)
select rownum,
to_date(to_char(sysdate-365, 'J') +
trunc(DBMS_RANDOM.value(0, 365)),'J'),
ceil(dbms_random.value(210,220)),
rpad('*', 1, '*')
from dual
connect by rownum <= 1000;
commit;
- Hash partitioning
In hash partitioning, the partitions clause gives the number of partitions, which should preferably be a power of two
create table hash_part_tab (seq number,deal_date date,unit_code number,remark varchar2(100))
partition by hash (deal_date)
partitions 12;
insert into hash_part_tab
(seq, deal_date, unit_code, remark)
select rownum,
to_date(to_char(sysdate-365, 'J') +
trunc(DBMS_RANDOM.value(0, 365)),'J'),
ceil(dbms_random.value(210,220)),
rpad('*', 1, '*')
from dual
connect by rownum <= 1000;
commit;
- Composite partitioning
Before Oracle 11g only range-list (RANGE-LIST) and range-hash (RANGE-HASH) composites were supported; from 11g onward RANGE-RANGE, LIST-RANGE, LIST-HASH and LIST-LIST are available as well. To avoid repeating the same subpartition definitions in every partition, a subpartition template can be used
create table range_list_part_tab (seq number,deal_date date,unit_code number,remark varchar2(100))
partition by range (deal_date)
subpartition by list (unit_code)
subpartition template
(subpartition s1 values (211),
subpartition s2 values (212),
subpartition s3 values (213),
subpartition s4 values (214),
subpartition s5 values (215),
subpartition s6 values (216),
subpartition s7 values (217),
subpartition s8 values (218),
subpartition s9 values (219),
subpartition s10 values (220),
subpartition s0 values (DEFAULT) )
(
partition p1 values less than (TO_DATE('2018-11-01','YYYY-MM-DD')),
partition p2 values less than (TO_DATE('2018-12-02','YYYY-MM-DD')),
partition p3 values less than (TO_DATE('2019-01-01','YYYY-MM-DD')),
partition p4 values less than (TO_DATE('2019-02-01','YYYY-MM-DD')),
partition p5 values less than (TO_DATE('2019-03-01','YYYY-MM-DD')),
partition p6 values less than (TO_DATE('2019-04-01','YYYY-MM-DD')),
partition p7 values less than (TO_DATE('2019-05-01','YYYY-MM-DD')),
partition p8 values less than (TO_DATE('2019-06-01','YYYY-MM-DD')),
partition p9 values less than (TO_DATE('2019-07-01','YYYY-MM-DD')),
partition p10 values less than (TO_DATE('2019-08-01','YYYY-MM-DD'))
);
insert into range_list_part_tab
(seq, deal_date, unit_code, remark)
select rownum,
to_date(to_char(sysdate-365, 'J') +
trunc(DBMS_RANDOM.value(0, 365)),'J'),
ceil(dbms_random.value(210,220)),
rpad('*', 1, '*')
from dual
connect by rownum <= 1000;
commit;
Difference between an ordinary table and a partitioned table: a partitioned table has as many segments as it has partitions
select segment_name,
partition_name,
segment_type,
bytes / 1024 / 1024 "字節數(M)",
tablespace_name
from user_segments
where segment_name IN ('RANGE_PART_TAB', 'NOR_TAB');
Partition maintenance operations
- Splitting a partition
Splitting applies to range and list partitions; note that HASH partitions cannot be split
alter table list_part_tab split partition p10 values (220) into (partition p11, partition p12);
- Adding a partition
ALTER TABLE list_part_tab ADD PARTITION P13 VALUES (250);
Adding a subpartition (meaningful on composite-partitioned tables)
ALTER TABLE list_part_tab MODIFY PARTITION P13 ADD SUBPARTITION P13SUB1 VALUES (350);
- Dropping a partition
ALTER TABLE list_part_tab DROP PARTITION P13;
Dropping a subpartition
ALTER TABLE list_part_tab DROP SUBPARTITION P13SUB1;
- Truncating a partition
TRUNCATE removes the partition's data without dropping the partition itself
ALTER TABLE list_part_tab TRUNCATE PARTITION P2;
Truncating a subpartition
ALTER TABLE list_part_tab TRUNCATE SUBPARTITION P13SUB1;
- Merging partitions
Merging combines two adjacent partitions into one; the result takes the higher partition's bound. Note that a partition cannot be merged into one with a lower bound
ALTER TABLE list_part_tab MERGE PARTITIONS P1,P2 INTO PARTITION P2;
- Coalescing partitions (coalesce)
Coalesce removes one hash partition and redistributes its data among the remaining ones; note that coalesce applies only to hash partitioning
ALTER TABLE hash_part_tab COALESCE PARTITION;
- Renaming a partition
ALTER TABLE list_part_tab RENAME PARTITION P11 TO P1;
- Exchanging a partition
Exchange swaps a partition's data with a non-partitioned table of identical structure; it is best to add update global indexes, otherwise the global indexes are invalidated
alter table list_part_tab exchange partition p1 with table nor_tab including indexes update global indexes;
Partition-related queries
- Query information on all partitioned tables in the database
select * from DBA_PART_TABLES;
- Query partitioning type, whether subpartitions exist, and the partition count
select pt.partitioning_type, pt.subpartitioning_type, pt.partition_count
from user_part_tables pt;
- Query partition details:
SELECT tab.* FROM USER_TAB_PARTITIONS tab WHERE TABLE_NAME='LIST_PART_TAB';
- Query which column the table is partitioned on
select column_name, object_type, column_position
from user_part_key_columns
where name = 'LIST_PART_TAB';
- Query the total size of a partitioned table
select sum(bytes / 1024 / 1024)
from user_segments
where segment_name = 'LIST_PART_TAB';
- Query each partition's name and size
select partition_name, segment_type, bytes
from user_segments
where segment_name = 'LIST_PART_TAB';
- Query the size of each index on the partitioned table
select segment_name, segment_type, sum(bytes) / 1024 / 1024
from user_segments
where segment_name in
(select index_name
from user_indexes
where table_name = 'LIST_PART_TAB')
group by segment_name, segment_type;
- Query the partitioned table's statistics
select table_name,
partition_name,
last_analyzed,
partition_position,
num_rows
from user_tab_statistics
where table_name = 'LIST_PART_TAB';
- Query the indexes on the partitioned table
select table_name,
index_name,
last_analyzed,
blevel,
num_rows,
leaf_blocks,
distinct_keys,
status
from user_indexes
where table_name = 'LIST_PART_TAB';
- Query which columns the indexes are built on
select index_name, column_name, column_position
from user_ind_columns
where table_name = 'LIST_PART_TAB';
- Query invalid indexes on ordinary tables
select ind.index_name,
ind.table_name,
ind.blevel,
ind.num_rows,
ind.leaf_blocks,
ind.distinct_keys
from user_indexes ind
where status = 'INVALID';
- Query unusable index partitions on partitioned tables
select a.blevel,
a.leaf_blocks,
a.index_name,
b.table_name,
a.partition_name,
a.status
from user_ind_partitions a, user_indexes b
where a.index_name = b.index_name
and a.status = 'UNUSABLE';
Partition operations and index invalidation
Operation | Command | Global index invalidated? | How to avoid (global index) | Local index invalidated? | How to avoid (local index) |
---|---|---|---|---|---|
truncate partition | alter table part_tab_trunc truncate partition p1; | Invalidated | alter table part_tab_trunc truncate partition p1 Update GLOBAL indexes; | Unaffected | N/A |
drop partition | alter table part_tab_drop drop partition p1; | Invalidated | alter table part_tab_drop drop partition p1 Update GLOBAL indexes; | Unaffected | N/A |
split partition | alter table part_tab_split SPLIT PARTITION P_MAX at(30000) into (PARTITION p3,PARTITION P_MAX); | Invalidated | alter table part_tab_split SPLIT PARTITION P_MAX at (30000) into (PARTITION p3,PARTITION P_MAX) update global indexes; | Unaffected | N/A |
add partition | alter table part_tab_add add PARTITION p6 values less than (60000); | Unaffected | N/A | Unaffected | N/A |
exchange partition | alter table part_tab_exch exchange partition p1 with table normal_tab including indexes; | Invalidated | alter table part_tab_exch exchange partition p1 with table normal_tab including indexes update global indexes; | Unaffected | N/A |
Global temporary tables: there are two kinds, session-based (on commit preserve rows) and transaction-based (on commit delete rows)
create global temporary table [table name] on commit (preserve rows)|(delete rows) as select * from [source table];
e.g.:
create global temporary table tmp on commit preserve rows as select * from dba_objects;
Characteristics of global temporary tables (see the sketch after this list):
- 1. records are deleted very efficiently;
- 2. different sessions see different data in the same temporary table
select * from v$mystat where rownum=1;
ps: a transaction-based temporary table loses its data on commit and on session exit; a session-based temporary table loses its data only when the session exits
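A quick sketch of the transaction-based behaviour (tmp2 is illustrative; where 1=2 copies only the structure, since the CTAS would otherwise commit and empty the table immediately):
create global temporary table tmp2 on commit delete rows as select * from dba_objects where 1=2;
insert into tmp2 select * from dba_objects;
select count(*) from tmp2; -- rows are visible inside the transaction
commit;
select count(*) from tmp2; -- 0 after the commit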
Index-organized tables:
Compression techniques
- Table compression
ALTER TABLE t MOVE COMPRESS;
- Index compression
create index idx2_object_union on t2 (owner , object_type , object_name );
ALTER index idx2_object_union rebuild COMPRESS ;
Cluster tables: a cluster is a group of tables that share data blocks; related rows of the tables are stored together in the same blocks, which reduces the disk reads needed to query the data. Tables created inside a cluster are called cluster tables
ps: when designing tables, store each kind of data in a column of the matching type, to avoid run-time type conversion hurting performance
Oracle index fundamentals
An index is made up of a root block (Root), branch blocks (Branch), and leaf blocks (Leaf); the leaf blocks store the indexed column values (Key Column Value) and the rowids that locate the rows, while the root and branch blocks point to the next level of the index
Index properties:
- an index is stored in sorted order
- an index stores the column values themselves
Notes:
- With equality-only predicates (no range), the column order of a composite index does not affect performance
drop table t purge;
create table t as select * from dba_objects;
update t set object_id=rownum;
commit;
create index idx_id_type on t(object_id, object_type);
create index idx_type_id on t(object_type, object_id);
set autotrace off;
alter session set statistics_level=all;
select /*+index(t idx_id_type)*/ * from t where object_id=20 and object_type='TABLE';
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
select /*+index(t idx_type_id)*/ * from t where object_id=20 and object_type='TABLE';
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
- With a range predicate, the best composite order generally puts the equality column first
select /*+index(t idx_id_type)*/ * from t where object_id>=20 and object_id<2000 and object_type='TABLE';
select /*+index(t idx_type_id)*/ * from t where object_id>=20 and object_id<2000 and object_type='TABLE';
- Oracle cannot search both ends of an index at once, so MAX and MIN cannot come from a single index scan
set autotrace on
select max(object_id), min(object_id) from t;
Cartesian-product rewrite:
set autotrace on
select max, min
from (select max(object_id) max from t) a,
(select min(object_id) min from t) b;
The newest index entries generally sit in the rightmost leaf block
Drawbacks of indexes
- Hot-block contention: the newest entries live at the right edge of the index, and recent data is accessed most, so the right edge easily becomes a hot block
- DML overhead: because an index is kept sorted, queries are fast but every insert or update must maintain that order
Index invalidation
Indexes fail either logically or physically
- Logical
Logical failure is caused by SQL constructs, for example applying a function to an indexed column when the index is not function-based
- Physical
Physical failure is real invalidation, e.g. the index was marked unusable, or careless partition maintenance invalidated it (repair sketch below)
alter index index_name unusable;
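An unusable index is repaired by rebuilding it, e.g.:
alter index index_name rebuild;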
Index taxonomy: B-tree indexes, bitmap indexes, function-based indexes, reverse key indexes, full-text indexes
Bitmap index: stores bit values
Function-based index: stores the result of a function computed over the row's column(s)
For user-defined functions, the deterministic keyword must be added
Define a function:
create or replace function f_addusl(i int) return int is
begin
return(i + 1);
end;
/
Create the function-based index
create index idx_ljb_test on t(f_addusl(id));
This raises ORA-30553: the function is not deterministic
Fix: add the deterministic keyword
create or replace function f_addusl(i int) return int deterministic is
begin
return(i + 1);
end;
/
When the function's code is changed, the function-based index must be rebuilt, otherwise the old index can no longer be used
Reverse key index: a special case of the B-tree index in which the bytes of the column value are stored reversed. Its purpose is to avoid hot-block contention: with an ascending key such as 250101, 250102, a normal B-tree appends the new entries in order at the right edge, inviting hot blocks, whereas a reverse key index stores them as 101052, 201052, values far apart, so the inserts are spread out
A reverse key index cannot be used for range scans
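Creating one only needs the reverse keyword; a sketch on the t table used above:
create index idx_object_id_rev on t(object_id) reverse;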
Full-text index: Oracle full-text indexing uses a lexer to split text into terms, which are stored in dr$-prefixed tables together with each term's positions, counts, hash values and so on. Oracle provides basic_lexer (for English), chinese_vgram_lexer (a Chinese lexer), and chinese_lexer (a newer Chinese lexer)
- basic_lexer: suits English; it separates terms on whitespace and punctuation, but Chinese has no spaces, so it is unsuitable for Chinese
- chinese_vgram_lexer: the original dedicated Chinese lexer, supporting all Chinese character sets such as ZHS16GBK. It works character by character: "索引本身是有序的" is split into the terms "索", "索引", "引本", "本身", "身是", "是有", "有序", "序的", "的"; terms like "序的" are not real Chinese words, but this lexer does not understand Chinese and has to store everything, so efficiency is clearly poor
- chinese_lexer: the newer Chinese lexer, an improvement on chinese_vgram_lexer; it recognises many Chinese words, so queries are faster and more accurate, but it only supports the UTF-8 character set
drop table t purge;
create table t as select * from dba_objects where object_name is not null;
update t set object_name ='高興' where rownum<=2;
commit;
select * from t where object_name like '%高興%';
--set up the lexer
BEGIN
ctx_ddl.create_preference ('lexer1', 'chinese_vgram_lexer');
END;
/
--grant privileges
grant ctxapp to scott;
alter user ctxsys account unlock;
alter user ctxsys identified by ctxsys;
connect ctxsys/ctxsys
grant execute on ctx_ddl to scott;
connect ljb/ljb
--drop the full-text index
drop index idx_content;
--check the data file information
select * from v$datafile;
--create the full-text index
CREATE INDEX idx_content ON t(object_name) indextype is ctxsys.context parameters('lexer lexer1');
--synchronize the index
exec ctx_ddl.sync_index('idx_content','20M');
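Once synchronized, the full-text index is used through the contains operator rather than like; a sketch:
select * from t where contains(object_name, '高興') > 0;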
Oracle table joins
Two tables can be joined by one of four methods:
- sort merge join
- nested loop join
- hash join
- cartesian (cross) join
[How the join methods differ]
(1) Number of times each table is accessed
Force a nested loop join with hints
select /*+ leading(t1) use_nl(t2) */ * from t1, t2
where t1.id = t2.t1_id;
View the plan
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  245z7n1cxaf3m, child number 0
-------------------------------------
SELECT /*+ leading(t1) use_nl(t2)*/ * FROM t1, t2 WHERE t1.id = t2.t1_id
Plan hash value: 1967407726
-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    300 |00:00:00.25 |   29747 |
|   1 |  NESTED LOOPS      |      |      1 |    300 |    300 |00:00:00.25 |   29747 |
|   2 |   TABLE ACCESS FULL| T1   |      1 |    300 |    300 |00:00:00.01 |      27 |
|*  3 |   TABLE ACCESS FULL| T2   |    300 |      1 |    300 |00:00:00.25 |   29720 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."ID"="T2"."T1_ID")
Note
-----
   - dynamic sampling used for this statement (level=2)
24 rows selected.
In a nested loop join the driving table is accessed 0 or 1 times and the driven table 0 or n times, where n is the number of rows the driving table returns
The same experiment can be repeated for hash join and merge join; for hash join use /*+ leading(t1) use_hash(t2) */
In a hash join the driving table is accessed 0 or 1 times, and so is the driven table
In a sort merge join, likewise, each table is accessed 0 or 1 times
(2) Effect of join order
The earlier query drove with t1; now swap the order:
SQL> SELECT /*+ leading(t2) use_nl(t1)*/ * FROM t1, t2 WHERE t1.id = t2.t1_id;
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  fgw5v7y16yn4m, child number 0
-------------------------------------
SELECT /*+ leading(t2) use_nl(t1)*/ * FROM t1, t2 WHERE t1.id = t2.t1_id
Plan hash value: 4016936828
-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |    300 |00:00:00.30 |   70139 |
|   1 |  NESTED LOOPS      |      |      1 |    300 |    300 |00:00:00.30 |   70139 |
|   2 |   TABLE ACCESS FULL| T2   |      1 |   9485 |  10000 |00:00:00.01 |     119 |
|*  3 |   TABLE ACCESS FULL| T1   |  10000 |      1 |    300 |00:00:00.29 |   70020 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."ID"="T2"."T1_ID")
Note
-----
   - dynamic sampling used for this statement (level=2)
24 rows selected.
So join order matters to an NL join; the same experiment shows it also matters to a hash join, while a merge join is unaffected
(3) Sorting behaviour
Running these joins with set autotrace on and watching the sorts statistic shows that only the merge join sorts; NL and hash joins do not
(4) Where each join method cannot be used
A hash join does not support join conditions using >, <, <>, or like (it handles equijoins only); a merge join does not support <> or like but does support < and >; an NL join has no restrictions. These are the distinguishing limits of the join methods (see the sketch below)
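A quick sketch of the hash-join restriction using the t1/t2 tables above: with a non-equijoin the use_hash hint is ignored and the optimizer falls back to another method:
select /*+ leading(t1) use_hash(t2) */ count(*) from t1, t2 where t1.id > t2.t1_id;
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
-- the plan shows MERGE JOIN or NESTED LOOPS instead of HASH JOIN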
Is EXISTS always faster than IN?
Is count(column) always faster than count(*)?