1. Overview
The basic unit of a table in HBase is the Region; when you operate on a table through the HBase API, the data you interact with is presented in the form of Regions. A table can have many Regions. In this post, I would like to share some of the problems around merging Regions and how to solve them.
2. Content
Before analyzing Region merging, let us first look at the Region architecture, shown in the figure below:
From the figure we can summarize the following points:
- HRegion: a Region can contain multiple Stores;
- Store: each Store contains one MemStore and a number of StoreFiles;
- StoreFile: where table data is actually stored; HFile is the format in which table data is persisted on HDFS.
HBase ships with a command for inspecting HFile contents:
```bash
hbase hfile -p -f /hbase/data/default/ip_login/d0d7d881bb802592c09d305e47ae70a5/_d/7ec738167e9f4d4386316e5e702c8d3d
```
The output looks like the figure below:
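If you do not already know an HFile path, you can first list the table's directory on HDFS. A minimal sketch, assuming the default `hbase.rootdir` of `/hbase` and the `ip_login` table from the example above:

```bash
# Recursively list the store files of the ip_login table; each column
# family directory holds the HFiles that `hbase hfile` can inspect
hdfs dfs -ls -R /hbase/data/default/ip_login
```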
2.1 Why Merge Regions?
So why do Regions need to be merged at all? The answer starts with Region splits. As data is continuously written to a Region and it reaches the split threshold (controlled by the property hbase.hregion.max.filesize, 10 GB by default), the Region is split into two new Regions. As business data keeps growing, Regions keep splitting, and the number of Regions grows with them.
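As a quick check, you can see whether a table overrides this threshold; a hedged sketch, where 'ns:tbl' is a placeholder and tables that do not set a per-table MAX_FILESIZE simply fall back to the cluster-wide hbase.hregion.max.filesize:

```bash
# Print the table descriptor and look for a per-table MAX_FILESIZE override;
# no output means the table inherits the cluster-wide default
echo "describe 'ns:tbl'" | hbase shell | grep -i 'MAX_FILESIZE'
```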
The more Regions a business table has, the heavier the load on the cluster during reads, writes, and especially Compactions of that table. From the author's own production statistics, once a business table reached 9000+ Regions, every Compaction on it noticeably increased the cluster load, which in turn affected application reads and writes. And when one table has too many Regions, the total Region count of the cluster inevitably rises as well, so after load balancing, each RegionServer ends up hosting more Regions.
In such situations it is well worth merging Regions. For example, if the current split threshold is set to 30 GB, we can merge all Regions of 10 GB or less, reducing the Region count of each business table, lowering the cluster-wide Region count, and easing the Region pressure on every RegionServer.
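For instance, raising the split threshold of a single table to 30 GB can be done from the HBase shell. A hedged sketch, where 'ns:tbl' is a placeholder and 32212254720 bytes is 30 GB:

```bash
# Raise the per-table split threshold so freshly merged Regions are not
# immediately split again; 'ns:tbl' is a placeholder table name
echo "alter 'ns:tbl', MAX_FILESIZE => '32212254720'" | hbase shell
```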
2.2 How to Merge Regions
So how do we actually merge Regions? HBase provides a shell command for merging Regions:
```
# Merge two adjacent Regions
hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME'

# Force-merge two Regions, even if they are not adjacent
hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME', true
```
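The ENCODED_REGIONNAME arguments are the 32-character hashes that end each Region's full name. Besides reading them off the table page of the Master web UI, one hedged way to list them is to scan hbase:meta ('ns:tbl' is a placeholder):

```bash
# Each info:regioninfo cell prints an "ENCODED => <hash>" field, which is
# exactly what merge_region expects; 'ns:tbl' is a placeholder table name
echo "scan 'hbase:meta', {FILTER => \"PrefixFilter('ns:tbl,')\", COLUMNS => ['info:regioninfo']}" \
  | hbase shell | grep 'ENCODED'
```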
However, this approach has a limitation: it merges only two Regions at a time. With thousands of Regions to merge, it is simply not practical.
2.2.1 Batch Merging
One way to merge in batches is to script it. The script below (merge_small_regions.rb) does exactly that:
```ruby
# Test mode (report only, no merging):
#
#   hbase org.jruby.Main merge_small_regions.rb namespace.tablename <skip_size> <batch_regions> <merge?>
#
# Non test - ie actually do the merge:
#
#   hbase org.jruby.Main merge_small_regions.rb namespace.tablename <skip_size> <batch_regions> merge
#
# Note: Please replace namespace.tablename with your namespace and table,
# eg NS1.MyTable. This value is case sensitive.

require 'digest'
require 'java'
java_import org.apache.hadoop.hbase.HBaseConfiguration
java_import org.apache.hadoop.hbase.client.HBaseAdmin
java_import org.apache.hadoop.hbase.TableName
java_import org.apache.hadoop.hbase.HRegionInfo
java_import org.apache.hadoop.hbase.client.Connection
java_import org.apache.hadoop.hbase.client.ConnectionFactory
java_import org.apache.hadoop.hbase.client.Table
java_import org.apache.hadoop.hbase.util.Bytes

# Collect the encoded names of all regions of `table` whose StoreFile
# size exceeds `low_size` MB; these are excluded from merging.
def list_bigger_regions(admin, table, low_size)
  cluster_status = admin.getClusterStatus()
  master = cluster_status.getMaster()
  biggers = []
  cluster_status.getServers.each do |s|
    cluster_status.getLoad(s).getRegionsLoad.each do |r|
      # getRegionsLoad returns an array of 2-element arrays.
      # Filter out any regions that don't match the requested table name.
      next unless r[1].get_name_as_string =~ /#{table}\,/
      if r[1].getStorefileSizeMB() > low_size
        if r[1].get_name_as_string =~ /\.([^\.]+)\.$/
          biggers.push $1
        else
          raise "Failed to get the encoded name for #{r[1].get_name_as_string}"
        end
      end
    end
  end
  biggers
end

# Handle command line parameters
table_name = ARGV[0]

# Regions whose StoreFiles exceed this size (MB) are skipped;
# overridden by the second argument when a positive value is given
low_size = 1024
low_size = ARGV[1].to_i if ARGV[1].to_i > 0

# Maximum number of regions considered per run;
# overridden by the third argument when a positive value is given
limit_batch = 1000
limit_batch = ARGV[2].to_i if ARGV[2].to_i > 0

do_merge = (ARGV[3] == 'merge')

config = HBaseConfiguration.create
connection = ConnectionFactory.createConnection(config)
admin = HBaseAdmin.new(connection)

bigger_regions = list_bigger_regions(admin, table_name, low_size)
regions = admin.getTableRegions(Bytes.toBytes(table_name))

puts "Total Table Regions: #{regions.length}"
puts "Total bigger regions: #{bigger_regions.length}"

filtered_regions = regions.reject do |r|
  bigger_regions.include?(r.get_encoded_name)
end

puts "Total regions to consider for Merge: #{filtered_regions.length}"

filtered_regions_limit = filtered_regions
if filtered_regions.length < 2
  puts "There are not enough regions to merge"
end
if filtered_regions.length > limit_batch
  filtered_regions_limit = filtered_regions[0, limit_batch]
  puts "But we will merge: #{filtered_regions_limit.length} regions because of the batch limit!"
end

r1, r2 = nil
filtered_regions_limit.each do |r|
  if r1.nil?
    r1 = r
    next
  end
  r2 = r if r2.nil?
  # Skip any region that is currently being split
  if r1.is_split()
    puts "Skip #{r1.get_encoded_name} because it is splitting!"
    r1 = r2
    r2 = nil
    next
  end
  if r2.is_split()
    puts "Skip #{r2.get_encoded_name} because it is splitting!"
    r2 = nil
    next
  end
  # Only merge regions that are adjacent
  if HRegionInfo.are_adjacent(r1, r2)
    puts "#{r1.get_encoded_name} is adjacent to #{r2.get_encoded_name}"
    if do_merge
      admin.mergeRegions(r1.getEncodedNameAsBytes, r2.getEncodedNameAsBytes, false)
      puts "Successfully merged #{r1.get_encoded_name} with #{r2.get_encoded_name}"
      sleep 2
    end
    r1, r2 = nil
  else
    puts "Regions are not adjacent; dropping the first and retrying with #{r2.get_encoded_name}"
    r1 = r2
    r2 = nil
  end
end
admin.close
```
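As the header comment indicates, the script supports a dry run: leave off the trailing merge argument and it only reports what it would do. A hedged example, reusing the ip_login table from earlier (a default-namespace table) as a stand-in:

```bash
# Dry run: report which adjacent Regions at or under 1 GB would be merged,
# in batches of up to 1000, without actually merging anything
hbase org.jruby.Main merge_small_regions.rb ip_login 1024 1000

# Real run: append 'merge' as the fourth argument
hbase org.jruby.Main merge_small_regions.rb ip_login 1024 1000 merge
```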
By default the script merges Regions no larger than 1 GB, at most 1000 per run. To merge Regions under 10 GB, up to 4000 at a time, we can drive it with the following script (merging-region.sh):
```bash
#! /bin/bash

num=$1

echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : RegionServer Start Merging..."
if [ ! -n "$num" ]; then
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : Default Merging 10 Times."
    num=10
elif [[ $num == *[!0-9]* ]]; then
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : Input [$num] Times Must Be A Number."
    exit 1
else
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : User-Defined Merging [$num] Times."
fi

for (( i=1; i<=$num; i++ ))
do
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : Merging [$i] Times, Total [$num] Times."
    hbase org.jruby.Main merge_small_regions.rb namespace.tablename 10240 4000 merge
    sleep 5
done
```
merging-region.sh adds simple parameter handling so the batch merge can be run in a loop. In practice, a single batch merge may still leave many Regions behind (new Regions may have been created in the meantime), in which case we can use merging-region.sh to repeat the batch merge several times over:
```bash
# Defaults to 10 iterations; this example runs 5
sh merging-region.sh 5
```
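Between batches it is worth confirming that the cluster stays consistent and that no Region is stuck in transition. A hedged check with the stock tooling (output format varies across versions):

```bash
# Summarize consistency and any regions in transition after a batch
hbase hbck 2>/dev/null | grep -Ei 'status|inconsistenc'
```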
2.3 What If a Permanent RIT Appears While Merging Regions?
What do we do if a Region gets stuck in a permanent RIT (Region-In-Transition) state during a merge? The author has hit this in production: during a batch merge, a Region became permanently stuck in MERGING_NEW. This does not affect the cluster's ability to serve requests, but if any node is restarted, the Regions on that RegionServer can no longer be balanced: while a Region is in RIT, HBase will not run the Region balancer, and even invoking the balancer command by hand has no effect.
If this RIT state is left unresolved and more HBase nodes are restarted over time, the Region distribution across the cluster becomes severely unbalanced, which is fatal for performance. Searching the HBase JIRA revealed that this permanent MERGING_NEW RIT is triggered by bug HBASE-17682, and the corresponding patch is needed to fix it. In essence, the HBase source code did not account for the MERGING_NEW state in one of its checks and fell straight through to the else branch. The original code is as follows:
```java
for (RegionState state : regionsInTransition.values()) {
  HRegionInfo hri = state.getRegion();
  if (assignedRegions.contains(hri)) {
    // Region is open on this region server, but in transition.
    // This region must be moving away from this server, or splitting/merging.
    // SSH will handle it, either skip assigning, or re-assign.
    LOG.info("Transitioning " + state + " will be handled by ServerCrashProcedure for " + sn);
  } else if (sn.equals(state.getServerName())) {
    // Region is in transition on this region server, and this
    // region is not open on this server. So the region must be
    // moving to this server from another one (i.e. opening or
    // pending open on this server, was open on another one.
    // Offline state is also kind of pending open if the region is in
    // transition. The region could be in failed_close state too if we have
    // tried several times to open it while this region server is not reachable)
    if (state.isPendingOpenOrOpening() || state.isFailedClose() || state.isOffline()) {
      LOG.info("Found region in " + state +
        " to be reassigned by ServerCrashProcedure for " + sn);
      rits.add(hri);
    } else if (state.isSplittingNew()) {
      regionsToCleanIfNoMetaEntry.add(state.getRegion());
    } else {
      LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
    }
  }
}
```
The patched code looks like this:
```java
for (RegionState state : regionsInTransition.values()) {
  HRegionInfo hri = state.getRegion();
  if (assignedRegions.contains(hri)) {
    // Region is open on this region server, but in transition.
    // This region must be moving away from this server, or splitting/merging.
    // SSH will handle it, either skip assigning, or re-assign.
    LOG.info("Transitioning " + state + " will be handled by ServerCrashProcedure for " + sn);
  } else if (sn.equals(state.getServerName())) {
    // Region is in transition on this region server, and this
    // region is not open on this server. So the region must be
    // moving to this server from another one (i.e. opening or
    // pending open on this server, was open on another one.
    // Offline state is also kind of pending open if the region is in
    // transition. The region could be in failed_close state too if we have
    // tried several times to open it while this region server is not reachable)
    if (state.isPendingOpenOrOpening() || state.isFailedClose() || state.isOffline()) {
      LOG.info("Found region in " + state +
        " to be reassigned by ServerCrashProcedure for " + sn);
      rits.add(hri);
    } else if (isOneOfStates(state, State.SPLITTING_NEW, State.MERGING_NEW)) {
      // MERGING_NEW is now cleaned up the same way as SPLITTING_NEW
      regionsToCleanIfNoMetaEntry.add(state.getRegion());
    } else {
      LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
    }
  }
}
```
There is a catch, though: the JIRA issue only tells us to fix the bug by applying the patch. In a real production environment, facing this RIT, we cannot take the cluster down for long and disrupt application reads and writes. So is there a temporary workaround that clears the current permanent MERGING_NEW RIT now, leaving the patch and version upgrade for later?
There is. After analyzing the merge flow, it turns out that when HBase merges Regions, it first creates the new Region in an initial MERGING_NEW state. The overall merge flow looks like this:
As the flow diagram shows, MERGING_NEW is an initial state that lives only in the active Master's memory; the standby Master has no MERGING_NEW state for this new Region in its memory. We can therefore clear the permanent RIT temporarily by performing an active/standby switchover of the HBase Master. Since HBase Masters are highly available, the switchover is transparent to user applications. So a Master failover is a workable stopgap for a permanent MERGING_NEW RIT; afterwards, we can apply the patch and upgrade HBase to fix the bug for good.
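A hedged sketch of such a switchover on a tarball-style deployment (daemon commands differ across distributions; this assumes a standby Master is already running):

```bash
# On the active Master: stop it gracefully; the running standby takes over,
# and its fresh in-memory state no longer carries the MERGING_NEW RIT entry
$HBASE_HOME/bin/hbase-daemon.sh stop master

# After the standby becomes active (verify on the Master web UI),
# bring the old node back as the new standby
$HBASE_HOME/bin/hbase-daemon.sh start master
```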
3. Summary
RIT problems are fairly common in HBase. When you run into one, analyze the cause calmly: check the Master logs, read the RIT description on the HBase web UI carefully, inspect the Regions with the hbck command, check HDFS blocks with fsck, and so on. Once the root cause is clear, treat the disease rather than the symptoms: hypothesize boldly, verify carefully.
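For example, the Region- and block-level checks mentioned above boil down to something like this (a sketch; flags vary slightly between versions):

```bash
# Region consistency report from HBase's built-in checker
hbase hbck -details

# Block-level health of HBase's root directory on HDFS
hdfs fsck /hbase -files -blocks
```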
4. Closing Remarks
That is all for this post. If you run into any problems while studying this material, feel free to discuss them in the group or send me an email; I will do my best to answer. Good luck to us all!
By the way, the author has published a book, 《Hadoop大數據挖掘從入門到進階實戰》 (Hadoop Big Data Mining: From Beginner to Advanced Practice). If you are interested, you can follow the purchase link in the announcement bar to buy it. Many thanks for your support.