[Troubleshooting] Enqueue wait: enq: TX - row lock contention
1 Blog document structure diagram
2 Preface
2.1 Reading guide and notes
Dear fellow techies, after reading this post you will have the following skills, and may also pick up a few things you did not know before ~O(∩_∩)O~:
① How to resolve the enq: TX - row lock contention wait event
② A general approach to resolving wait events
③ Basic knowledge of enqueue waits
④ How to use ADDM
⑤ How to retrieve historical execution plans
⑥ How to query the actual values of bind variables
⑦ Many useful SQL statements for investigating performance
Tips:
① This post is published simultaneously on ITPub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest) and the WeChat public account (xiaomaimiaolhr).
② All code, software and related material used in this post can be downloaded from xiaomaimiao's cloud drive (http://blog.itpub.net/26736162/viewspace-1624453/).
③ If the code formatting looks garbled, try the Sogou, 360 or QQ browser, or download the PDF version of this post (http://blog.itpub.net/26736162/viewspace-1624453/); if the ITPub layout is broken, read the cnblogs copy instead.
④ In command output, the parts that deserve special attention are shown with a grey background and pink font. In the example below, the fact that the highest archived log sequence is 33 for thread 1 and 43 for thread 2 is what deserves attention. Commands themselves are usually highlighted with a yellow background and red font, and comments on code or its output are shown in blue.
List of Archived Logs in backup set 11
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 32 1621589 2015-05-29 11:09:52 1625242 2015-05-29 11:15:48
1 33 1625242 2015-05-29 11:15:48 1625293 2015-05-29 11:15:58
2 42 1613951 2015-05-29 10:41:18 1625245 2015-05-29 11:15:49
2 43 1625245 2015-05-29 11:15:49 1625253 2015-05-29 11:15:53
[ZHLHRDB1:root]:/>lsvg -o
T_XDESK_APP1_vg
rootvg
[ZHLHRDB1:root]:/>
00:27:22 SQL> alter tablespace idxtbs read write;
====》2097152*512/1024/1024/1024=1G
If there is anything wrong or incomplete in this post, please let me know, either in the ITPub comments or over QQ; your feedback is the biggest motivation for my writing.
3 Fault analysis and resolution
3.1 Environment
Item                    | source db
----------------------- | -------------------
DB type                 | RAC
DB version              | 11.2.0.4.0
DB storage              | ASM
OS version and kernel   | AIX 64-bit 7.1.0.0
3.2 Symptoms and error messages
In the morning a colleague came over and told me that a database had shown very high CPU utilization during a stress test the previous day. He had already captured the AWR report for that period and asked me to help analyze it. Let's look at the data in the AWR report:
From the header of the AWR report we can see that the database is a RAC database, version 11.2.0.4, on 64-bit AIX, with 32 CPUs and 48 GB of memory in total. The report covers a 40-minute window, yet DB Time is 15,180 minutes, roughly 15180/40 = 379 times the elapsed time, which shows the system was under an abnormally heavy load during this period.
If you care about database performance, then the first thing you want to know from an AWR report is probably how system resources were used, and CPU comes first. Broken down, "CPU" can mean:
● OS-level User%, Sys% and Idle%
● Busy%: the share of OS CPU resources consumed by the database
● DB CPU, which can be further split into CPU consumed by foreground processes and CPU consumed by background processes
Let's take a look at the host CPU figures:
From the host statistics shown above we can draw the following conclusions:
● OS-level User%, Sys%, Idle%:
%User at the OS level is 2.9, %Sys is 2.3 and %Idle is 94.8, so %Busy should be 100 - 94.8 = 5.2.
● Busy%: the share of OS CPU resources consumed by the database:
The database used 2.2% of the OS CPU, so %Busy CPU can be derived from the figures above:
%Busy CPU = %Total CPU / %Busy * 100 = 2.2 / 5.2 * 100 = 42.3, which matches the 42.2 in the report.
Next, let's look at the Load Profile section:
There are roughly 358 transactions per second, which is very high. Now let's look at the wait events:
I will not list the other items; from the wait events it is obvious that enq: TX - row lock contention is abnormal. "Top 10 Foreground Events by Total Wait Time" is one of the most important parts of an AWR report: it lists the top 10 events by wait time, and from it you can usually tell where the performance bottleneck is. Normally, in a healthy database, CPU time is listed first. Here, enq: TX - row lock contention was waited on 3,813,533 times for a total of 855,777 seconds, an average of 855777/3813533 ≈ 224 ms per wait, accounting for 94% of DB Time, and its wait class is Application.
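As a cross-check outside the AWR report itself, the growth of this wait event across the snapshots concerned can be read from DBA_HIST_SYSTEM_EVENT. A minimal sketch (1145-1148 is the snapshot range used later in this post; the counters are cumulative since instance startup, so the deltas between snapshots are what matter):
-- Sketch: growth of the wait event across the AWR snapshots of the incident.
SELECT e.snap_id,
       e.instance_number,
       e.event_name,
       e.total_waits,
       ROUND(e.time_waited_micro / 1000000) time_waited_s
  FROM dba_hist_system_event e
 WHERE e.event_name = 'enq: TX - row lock contention'
   AND e.snap_id BETWEEN 1145 AND 1148
 ORDER BY e.snap_id, e.instance_number;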
3.3 Analysis
From the AWR report we know that resolving the enq: TX - row lock contention wait event will resolve the problem, so let's first go over some background on this wait event.
===============================================================================
SELECT * FROM V$EVENT_NAME WHERE NAME = 'enq: TX - row lock contention';
SELECT * FROM V$LOCK_TYPE D WHERE D.TYPE IN ('TX');
SELECT D.EQ_NAME, D.EQ_TYPE, D.REQ_REASON, D.REQ_DESCRIPTION
FROM V$ENQUEUE_STATISTICS D
WHERE D.EQ_TYPE IN ('TX')
AND D.REQ_REASON='row lock contention';
In the wait event enq: TX - row lock contention, enq is short for enqueue. An enqueue is an internal lock that coordinates access to database resources.
Every wait event whose name begins with "enq:" means the session is waiting for an internal lock held by another session to be released. The name follows the format enq: enqueue_type - related_details; here the enqueue_type is TX and the related_details is row lock contention. The dynamic performance view v$event_name lists all wait events whose names begin with "enq:".
Oracle enqueues can be held or requested in the following modes:
Mode code | Description
--------- | -------------------
1         | Null mode
2         | Sub-Share
3         | Sub-Exclusive
4         | Share
5         | Share/Sub-Exclusive
6         | Exclusive
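Incidentally, on a live system you can see which sessions currently hold or request TX enqueues, and in which mode, straight from v$lock. A minimal sketch (LMODE is the mode held, REQUEST the mode requested, BLOCK = 1 marks a blocker):
SELECT l.sid, l.type, l.id1, l.id2, l.lmode, l.request, l.block, l.ctime
  FROM v$lock l
 WHERE l.type = 'TX'
 ORDER BY l.block DESC, l.ctime DESC;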
enq: TX - row lock contention is usually an application-level problem. It typically appears in the following three situations.
(1) The first situation is a genuine row-lock conflict in the business logic, for example one record being modified by several users at the same time. The requested lock mode here is 6 (waits for TX in mode 6: session A holds a row-level lock and session B waits for that lock to be released); different sessions update or delete the same record. (This occurs when one application is updating or deleting a row that another session is also trying to update or delete.) A minimal reproduction is sketched below.
Resolution: the session holding the lock must commit or roll back.
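The sketch uses a hypothetical scratch table t_lhr_lock_demo (not an object from the system described in this post):
-- Session 1:
CREATE TABLE t_lhr_lock_demo (id NUMBER PRIMARY KEY, val NUMBER);
INSERT INTO t_lhr_lock_demo VALUES (1, 100);
COMMIT;
UPDATE t_lhr_lock_demo SET val = val + 1 WHERE id = 1;   -- row lock held, no COMMIT yet
-- Session 2:
UPDATE t_lhr_lock_demo SET val = val + 2 WHERE id = 1;   -- hangs on enq: TX - row lock contention, request mode 6
-- Session 1:
COMMIT;                                                   -- session 2 resumes immediately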
(2) The second situation is a unique-key conflict (in mode 4, unique index), for example several records with the same primary key value being inserted at the same time. The requested lock mode here is 4, and this is also an application logic problem: the table has a unique index, session A inserts a value (uncommitted) and session B then inserts the same value; once session A commits, the enq: TX - row lock contention wait disappears. See the sketch below.
Resolution: the session holding the lock must commit or roll back.
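A sketch of this case on the same hypothetical table (id carries a primary key, i.e. a unique index):
-- Session 1:
INSERT INTO t_lhr_lock_demo (id, val) VALUES (2, 200);   -- not committed
-- Session 2:
INSERT INTO t_lhr_lock_demo (id, val) VALUES (2, 201);   -- waits on enq: TX in mode 4
-- Session 1:
ROLLBACK;                                                 -- session 2's insert now succeeds
-- Had session 1 committed instead, session 2 would receive ORA-00001: unique constraint violated.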
(3) The third situation is an update conflict on a bitmap index (in mode 4: bitmap), i.e. several sessions updating the same block of a bitmap index at the same time. It stems from the nature of bitmap indexes: one key value of a bitmap index points to many rows, so updating a single row locks every row covered by that key entry. The requested lock mode in this case is 4. Physically a bitmap index has the same B-tree structure as an ordinary index, but each entry it stores has the logical structure "key_value, start_rowid, end_rowid, bitmap".
Its content looks like this: "'8088', 00000000000, 10000034441, 1001000100001111000"
The bitmap is a binary string describing the rows from START_ROWID to END_ROWID: 1 means the ROWID holds the key value '8088', 0 means it does not.
Resolution: the session holding the lock must commit or roll back.
Once we understand the structure of a bitmap index, it is easy to see that concurrently inserting many rows into a table with a bitmap index updates the same bitmap index entries at the same time, which is effectively the same record being updated concurrently, so row-lock waits naturally appear; the higher the insert concurrency, the worse the waits. The sketch below illustrates the scenario.
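A sketch of this case, again with hypothetical object names:
CREATE TABLE t_lhr_bmp_demo (id NUMBER, status VARCHAR2(10));
CREATE BITMAP INDEX idx_lhr_bmp_status ON t_lhr_bmp_demo (status);
-- Session 1:
INSERT INTO t_lhr_bmp_demo VALUES (1, 'OPEN');            -- locks the bitmap entry covering key value 'OPEN'
-- Session 2:
INSERT INTO t_lhr_bmp_demo VALUES (2, 'OPEN');            -- may wait on enq: TX in mode 4, since both rows fall into the same bitmap entry
-- Session 1:
COMMIT;                                                    -- session 2 resumes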
(4) Other causes
It could be a primary key problem; a trigger firing attempting to insert, delete, or update a row; a problem with initrans; waiting for an index split to complete; problems with bitmap indexes;updating a row already updated by another session; or something else.
(https://forums.oracle.com/forums/thread.jspa?threadID=860488)
Whenever enq: TX - row lock contention waits appear in a database, look at views such as v$session and v$session_wait. In v$session and v$session_wait, a session whose event column shows enq: TX - row lock contention is currently in a row-lock wait. The requested lock mode can be obtained from the p1 column of v$session and v$session_wait.
-- p1 packs the two-character lock name into its two high-order bytes
-- and the requested lock mode into its low 16 bits.
select sid,
       chr(bitand(p1, -16777216) / 16777215) ||
       chr(bitand(p1, 16711680) / 65535) "Name",
       (bitand(p1, 65535)) "Mode"
  from v$session_wait
 where event like 'enq%';
This SQL decodes p1 into a readable lock name and mode.
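On a live system, the blocking session and even the exact row being waited on can also be read from v$session directly. A hedged sketch (the ROW_WAIT_* columns are only meaningful while the wait is in progress, and DBMS_ROWID.ROWID_CREATE here assumes the object ID equals the data object ID and the absolute file number equals the relative file number, which is usually but not always true):
SELECT s.sid,
       s.serial#,
       s.blocking_session,
       s.sql_id,
       s.row_wait_obj#,
       DBMS_ROWID.ROWID_CREATE(1,
                               s.row_wait_obj#,
                               s.row_wait_file#,
                               s.row_wait_block#,
                               s.row_wait_row#) waited_rowid
  FROM v$session s
 WHERE s.event = 'enq: TX - row lock contention';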
===============================================================================
With this background, we know the first step is to identify the type of lock behind the waits and then work the problem from there.
3.4 Resolution
From the AWR report, the problem window is between '2016-08-31 17:30:00' and '2016-08-31 18:15:00'.
We query the ASH history view DBA_HIST_ACTIVE_SESS_HISTORY for that window to find the lock type and the SQL statements we need.
SELECT D.SQL_ID, COUNT(1)
FROM DBA_HIST_ACTIVE_SESS_HISTORY D
WHERE D.SAMPLE_TIME BETWEEN TO_DATE('2016-08-31 17:30:00', 'YYYY-MM-DD HH24:MI:SS') AND
TO_DATE('2016-08-31 18:15:00', 'YYYY-MM-DD HH24:MI:SS')
AND D.EVENT = 'enq: TX - row lock contention'
GROUP BY D.SQL_ID;
Only one SQL statement is returned, so that must be the one. Now let's look at the lock type:
SELECT D.SQL_ID,CHR(BITAND(P1, -16777216) / 16777215) ||
CHR(BITAND(P1, 16711680) / 65535) "Lock",
BITAND(P1, 65535) "Mode", COUNT(1),COUNT(DISTINCT d.session_id )
FROM DBA_HIST_ACTIVE_SESS_HISTORY D
WHERE D.SAMPLE_TIME BETWEEN TO_DATE('2016-08-31 17:30:00', 'YYYY-MM-DD HH24:MI:SS') AND
TO_DATE('2016-08-31 18:15:00', 'YYYY-MM-DD HH24:MI:SS')
AND D.EVENT = 'enq: TX - row lock contention'
GROUP BY D.SQL_ID,(CHR(BITAND(P1, -16777216) / 16777215) ||
CHR(BITAND(P1, 16711680) / 65535)),(BITAND(P1, 65535));
Roughly 556 sessions were waiting on this lock; the lock is a TX lock and the requested mode is 6, which is exactly the first of the situations analyzed above (a genuine row-lock conflict in the business logic, such as one record being modified by several users at the same time; the requested mode is 6). Let's find out which object is involved:
SELECT D.current_obj#,
D.current_file#,
D.current_block#,
D.current_row#,D.EVENT,
D.P1TEXT,
D.P1,
D.P2TEXT,
D.P2,
CHR(BITAND(P1, -16777216) / 16777215) ||
CHR(BITAND(P1, 16711680) / 65535) "Lock",
BITAND(P1, 65535) "Mode",
D.BLOCKING_SESSION,
D.BLOCKING_SESSION_STATUS,
D.BLOCKING_SESSION_SERIAL#,
D.SQL_ID,
TO_CHAR(D.SAMPLE_TIME, 'YYYYMMDDHH24MISS') SAMPLE_TIME
FROM DBA_HIST_ACTIVE_SESS_HISTORY D
WHERE D.SAMPLE_TIME BETWEEN TO_DATE('2016-08-31 17:30:00', 'YYYY-MM-DD HH24:MI:SS') AND
TO_DATE('2016-08-31 18:15:00', 'YYYY-MM-DD HH24:MI:SS')
AND D.EVENT = 'enq: TX - row lock contention';
SELECT * FROM dba_objects D WHERE D.object_id=87620;
We can see that the object being waited on is a table.
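To pin down the exact row that was contended, the CURRENT_* columns of the same ASH view can be turned back into a ROWID. A hedged sketch (same caveats as before: it assumes object_id = data_object_id and absolute file# = relative file#):
SELECT d.sample_time,
       d.session_id,
       d.blocking_session,
       DBMS_ROWID.ROWID_CREATE(1,
                               d.current_obj#,
                               d.current_file#,
                               d.current_block#,
                               d.current_row#) contended_rowid
  FROM dba_hist_active_sess_history d
 WHERE d.sample_time BETWEEN TO_DATE('2016-08-31 17:30:00', 'YYYY-MM-DD HH24:MI:SS') AND
       TO_DATE('2016-08-31 18:15:00', 'YYYY-MM-DD HH24:MI:SS')
   AND d.event = 'enq: TX - row lock contention'
   AND d.current_obj# = 87620;
The resulting ROWID can then be queried directly against the table identified above.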
The SQL_ID 1cmnjddakrqbv appears most often; let's look at the SQL text:
SELECT A.* FROM V$SQL A WHERE A.SQL_ID IN ('1cmnjddakrqbv') ;
SELECT A.SQL_TEXT,A.EXECUTIONS,A.MODULE FROM V$SQL A WHERE A.SQL_ID IN ('1cmnjddakrqbv');
The SQL comes from JDBC, i.e. it is issued by the Java front end. Copying the statement out:
update organization o set o.quota_unused = o.quota_unused-:1,o.mod_date = systimestamp where o.quota_unused >= :2 and o.bank_id=:3 and o.pro_id=:4
It is an UPDATE statement; let's check whether its execution plan is correct and whether an index is used:
SELECT *
FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_SQL_REPORT_HTML(L_DBID => 3860591551,
L_INST_NUM => 1,
L_BID => 1145,
L_EID => 1148,
L_SQLID => '1cmnjddakrqbv'));
Copy the output into a text file, save it as .html, and you can view the report:
The SQL uses an INDEX UNIQUE SCAN, so the table is not missing an index and there is no design problem here. A simpler way to look at the historical execution plan is: SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR(SQL_ID => '1cmnjddakrqbv'));
Next, let's check what the actual values of the SQL's bind variables were:
SELECT * FROM V$SQL_BIND_CAPTURE A WHERE A.SQL_ID IN ('1cmnjddakrqbv') ;
SELECT A.SQL_ID,A.NAME,A.POSITION,A.DATATYPE_STRING,A.VALUE_STRING FROM V$SQL_BIND_CAPTURE A WHERE A.SQL_ID IN ('1cmnjddakrqbv') ;
To be more accurate we should query the view DBA_HIST_SQLBIND, as follows:
SELECT A.sql_id,A.name,A.datatype_string,A.value_string,COUNT(1)
FROM DBA_HIST_SQLBIND A
WHERE A.SQL_ID IN ('1cmnjddakrqbv')
AND A.SNAP_ID BETWEEN 1145 AND 1148
GROUP BY A.sql_id,A.name,A.datatype_string,A.value_string
;
The results are roughly the same. Having found the SQL statement that produces the waits, let's query the rows it touches:
SELECT * FROM CNSL.ORGANIZATION o WHERE O.QUOTA_UNUSED >= 1
AND O.BANK_ID IN ( '17856' , '05612' )
AND O.PRO_ID = 'HSB201602';
The query returns only 2 rows, which means that during the whole period more than 500 sessions were all updating these 2 records, so row-lock waits were inevitable.
3.4.1 ADDM recommendations
Finally ADDM came to mind, so I ran an ADDM analysis.
DECLARE
  TASK_NAME VARCHAR2(50) := 'HEALTH_CHECK_BY_LHR';
  TASK_DESC VARCHAR2(50) := 'HEALTH_CHECK_BY_LHR';
  TASK_ID   NUMBER;
BEGIN
  -- Create an ADDM task, point it at AWR snapshots 1145-1148 of instance 1, then run it.
  DBMS_ADVISOR.CREATE_TASK('ADDM', TASK_ID, TASK_NAME, TASK_DESC, NULL);
  DBMS_ADVISOR.SET_TASK_PARAMETER(TASK_NAME, 'START_SNAPSHOT', 1145);
  DBMS_ADVISOR.SET_TASK_PARAMETER(TASK_NAME, 'END_SNAPSHOT', 1148);
  DBMS_ADVISOR.SET_TASK_PARAMETER(TASK_NAME, 'INSTANCE', 1);
  DBMS_ADVISOR.SET_TASK_PARAMETER(TASK_NAME, 'DB_ID', 3860591551);
  DBMS_ADVISOR.EXECUTE_TASK(TASK_NAME);
EXCEPTION
  WHEN OTHERS THEN
    NULL;  -- swallows any error (e.g. a task with this name already exists); acceptable for a one-off check
END;
/
Once the block completes, the ADDM report has been created. We then use the function DBMS_ADVISOR.GET_TASK_REPORT to retrieve it:
SELECT DBMS_ADVISOR.GET_TASK_REPORT('HEALTH_CHECK_BY_LHR', 'TEXT', 'ALL') ADDM_RESULTS
FROM DUAL;
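To save the report to a file from SQL*Plus, a few standard settings are enough (a sketch; the spool file name is arbitrary):
SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON
SPOOL addm_health_check_by_lhr.txt
SELECT DBMS_ADVISOR.GET_TASK_REPORT('HEALTH_CHECK_BY_LHR', 'TEXT', 'ALL') FROM DUAL;
SPOOL OFF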
Here is the ADDM report; the important parts are highlighted in red:
ADDM Report for Task 'HEALTH_CHECK_BY_LHR'
------------------------------------------
Analysis Period
---------------
AWR snapshot range from 1145 to 1148.
Time period starts at 31-AUG-16 05.33.52 PM
Time period ends at 31-AUG-16 06.14.36 PM
Analysis Target
---------------
Database 'ORACNSL' with DB ID 3860591551.
Database version 11.2.0.4.0.
ADDM performed an analysis of instance oraCNSL1, numbered 1 and hosted at
ZFCNSLDB1.
Activity During the Analysis Period
-----------------------------------
Total database time was 910831 seconds.
The average number of active sessions was 372.68.
Summary of Findings
-------------------
Description Active Sessions Recommendations
Percent of Activity
------------------------- ------------------- ---------------
1 Top SQL Statements 371.83 | 99.77 1
2 Row Lock Waits 350.15 | 93.96 1
3 Buffer Busy - Hot Objects 20.62 | 5.53 1
4 Buffer Busy - Hot Block 20.6 | 5.53 2
5 "Cluster" Wait Class 17.7 | 4.75 0
6 Global Cache Messaging 5.86 | 1.57 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
----------------------------
Finding 1: Top SQL Statements
Impact is 371.83 active sessions, 99.77% of total activity.
-----------------------------------------------------------
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.
Recommendation 1: SQL Tuning
Estimated benefit is 371.83 active sessions, 99.77% of total activity.
----------------------------------------------------------------------
Action
Investigate the UPDATE statement with SQL_ID "1cmnjddakrqbv" for
possible performance improvements. You can supplement the information
given here with an ASH report for this SQL_ID.
Related Object
SQL statement with SQL_ID 1cmnjddakrqbv.
update organization o set o.quota_unused =
o.quota_unused-:1,o.mod_date = systimestamp where o.quota_unused >=
:2 and o.bank_id=:3 and o.pro_id=:4
Rationale
The SQL spent only 4% of its database time on CPU, I/O and Cluster
waits. Therefore, the SQL Tuning Advisor is not applicable in this case.
Look at performance data for the SQL to find potential improvements.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "1cmnjddakrqbv" was executed 377427 times and
had an average elapsed time of 2.4 seconds.
Rationale
Waiting for event "enq: TX - row lock contention" in wait class
"Application" accounted for 94% of the database time spent in processing
the SQL statement with SQL_ID "1cmnjddakrqbv".
Finding 2: Row Lock Waits
Impact is 350.15 active sessions, 93.96% of total activity.
-----------------------------------------------------------
SQL statements were found waiting for row lock waits.
Recommendation 1: Application Analysis
Estimated benefit is 350.15 active sessions, 93.96% of total activity.
----------------------------------------------------------------------
Action
Significant row contention was detected in the TABLE "CNSL.ORGANIZATION"
with object ID 87620. Trace the cause of row contention in the
application logic using the given blocked SQL.
Related Object
Database object with ID 87620.
Rationale
The SQL statement with SQL_ID "1cmnjddakrqbv" was blocked on row locks.
Related Object
SQL statement with SQL_ID 1cmnjddakrqbv.
update organization o set o.quota_unused =
o.quota_unused-:1,o.mod_date = systimestamp where o.quota_unused >=
:2 and o.bank_id=:3 and o.pro_id=:4
Symptoms That Led to the Finding:
---------------------------------
Wait class "Application" was consuming significant database time.
Impact is 350.15 active sessions, 93.96% of total activity.
Finding 3: Buffer Busy - Hot Objects
Impact is 20.62 active sessions, 5.53% of total activity.
---------------------------------------------------------
Read and write contention on database blocks was consuming significant
database time.
Recommendation 1: Schema Changes
Estimated benefit is 20.6 active sessions, 5.53% of total activity.
-------------------------------------------------------------------
Action
Consider rebuilding the TABLE "CNSL.ORGANIZATION" with object ID 87620
using a higher value for PCTFREE.
Related Object
Database object with ID 87620.
Symptoms That Led to the Finding:
---------------------------------
Read and write contention on database blocks was consuming significant
database time.
Impact is 20.62 active sessions, 5.53% of total activity.
Inter-instance messaging was consuming significant database time on
this instance.
Impact is 5.86 active sessions, 1.57% of total activity.
Wait class "Cluster" was consuming significant database time.
Impact is 17.7 active sessions, 4.75% of total activity.
Finding 4: Buffer Busy - Hot Block
Impact is 20.6 active sessions, 5.53% of total activity.
--------------------------------------------------------
A hot data block with concurrent read and write activity was found. The block
belongs to segment "CNSL.ORGANIZATION" and is block 171 in file 14.
Recommendation 1: Application Analysis
Estimated benefit is 20.6 active sessions, 5.53% of total activity.
-------------------------------------------------------------------
Action
Investigate application logic to find the cause of high concurrent read
and write activity to the data present in this block.
Related Object
Database block with object number 87620, file number 14 and block
number 171.
Recommendation 2: Schema Changes
Estimated benefit is 20.6 active sessions, 5.53% of total activity.
-------------------------------------------------------------------
Action
Consider rebuilding the TABLE "CNSL.ORGANIZATION" with object ID 87620
using a higher value for PCTFREE.
Related Object
Database object with ID 87620.
Symptoms That Led to the Finding:
---------------------------------
Read and write contention on database blocks was consuming significant
database time.
Impact is 20.62 active sessions, 5.53% of total activity.
Inter-instance messaging was consuming significant database time on
this instance.
Impact is 5.86 active sessions, 1.57% of total activity.
Wait class "Cluster" was consuming significant database time.
Impact is 17.7 active sessions, 4.75% of total activity.
Finding 5: "Cluster" Wait Class
Impact is 17.7 active sessions, 4.75% of total activity.
--------------------------------------------------------
Wait class "Cluster" was consuming significant database time.
No recommendations are available.
Finding 6: Global Cache Messaging
Impact is 5.86 active sessions, 1.57% of total activity.
--------------------------------------------------------
Inter-instance messaging was consuming significant database time on this
instance.
Recommendation 1: Application Analysis
Estimated benefit is 5.86 active sessions, 1.57% of total activity.
-------------------------------------------------------------------
Action
Look at the "Top SQL Statements" finding for SQL statements consuming
significant time on Cluster waits. For example, the UPDATE statement
with SQL_ID "1cmnjddakrqbv" is responsible for 100% of Cluster wait
during the analysis period.
Symptoms That Led to the Finding:
---------------------------------
Wait class "Cluster" was consuming significant database time.
Impact is 17.7 active sessions, 4.75% of total activity.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
----------------------
Miscellaneous Information
-------------------------
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Wait class "User I/O" was not consuming significant database time.
The network latency of the cluster interconnect was within acceptable limits
of 1 milliseconds.
Session connect and disconnect calls were not consuming significant database
time.
Hard parsing of SQL statements was not consuming significant database time.
ADDM clearly went straight to the heart of the problems in this system.