HBase RegionServer abnormal shutdown


Root cause analysis:

On the production HBase cluster, at around 1 a.m., one RegionServer was found to have restarted (the RegionServer runs under a watchdog that restarts the process automatically after a crash).

1. Check the Master log:

2020-02-27 01:04:57,001 ERROR [RpcServer.FifoRWQ.default.read.handler=26,queue=10,port=16000] master.MasterRpcServices: Region server a3ster,16020,1582342923163 reported a fatal error:
ABORTING region server a3ser,16020,1582342923163: Replay of WAL required. Forcing server shutdown
Cause:
org.apache.hadoop.hbase.DroppedSnapshotException: region: T_BL,\x0A\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1572576275632.069e4d877a4ff46f9964ac8bcddb09ef.
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2509)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2186)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2148)
        at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2039)
        at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1965)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:505)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:475)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:75)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:263)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync result after 300000 ms for ringBufferSequence=101793126, WAL system stuck?
        at org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:174)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.blockOnSync(FSHLog.java:1406)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.publishSyncThenBlockOnCompletion(FSHLog.java:1400)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.sync(FSHLog.java:1512)
        at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:126)
        at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:75)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2486)
        ... 9 more

2020-02-27 01:04:57,032 ERROR [RpcServer.FifoRWQ.default.read.handler=29,queue=8,port=16000] master.MasterRpcServices: Region server a3ser,16020,1582342923163 reported a fatal error:
ABORTING region server a3serz,16020,1582342923163: Replay of WAL required. Forcing server shutdown
Cause:

2. Check the RegionServer log:

2020-02-27 01:04:56,813 WARN  [ResponseProcessor for block BP-1884348122-10.62.2.1-1545175191847:blk_1489206371_467735337] hdfs.DFSClient: Slow ReadProcessor read fields took 327586ms (threshold=30000ms); ack: seqno: 1 status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 965211 4: "\000\000", targets: [11.23.3.3:9866, 11.23.3.5:9866]
2020-02-27 01:04:56,816 FATAL [MemStoreFlusher.6] regionserver.HRegionServer: ABORTING region server a3serz,16020,1582342923163: Replay of WAL required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: T_BL,\x0A\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1572576275632.069e4d877a4ff46f9964ac8bcddb09ef.
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2509)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2186)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2148)
        at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2039)
        at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:1965)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:505)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:475)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:75)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:263)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Failed to get sync result after 300000 ms for ringBufferSequence=101793126, WAL system stuck?
        at org.apache.hadoop.hbase.regionserver.wal.SyncFuture.get(SyncFuture.java:174)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.blockOnSync(FSHLog.java:1406)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.publishSyncThenBlockOnCompletion(FSHLog.java:1400)
        at org.apache.hadoop.hbase.regionserver.wal.FSHLog.sync(FSHLog.java:1512)
        at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:126)
        at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:75)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2486)

 

Analysis:

A DroppedSnapshotException is generally thrown when something goes wrong while flushing a MemStore. The key part of the stack traces above is the TimeoutIOException ("Failed to get sync result after 300000 ms ... WAL system stuck?"): while flushing one region's MemStore, syncing the flush marker to the WAL on HDFS took longer than 300 seconds, so the RegionServer aborted itself ("Forcing server shutdown"), which is what triggered the restart.
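
Before looking at HDFS, it helps to check how often WAL syncs on this RegionServer were already slow before the abort; FSHLog logs a "Slow sync cost" message whenever a sync takes longer than expected. A minimal sketch (the RegionServer log file name follows this cluster's naming convention and is an assumption):

$ grep -c "Slow sync cost" hbase-hduser-regionserver-a3ser.log.1
$ grep "Slow sync cost" hbase-hduser-regionserver-a3ser.log.1 | tail -5

A count that climbs towards 01:04 would confirm that the WAL/HDFS write path, rather than the flush itself, was the bottleneck.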

 

The conditions that trigger a MemStore flush in HBase are as follows:

HBase triggers a flush in the situations listed below. Note that the smallest flush unit is an HRegion, not a single MemStore; if an HRegion contains many MemStores (i.e. many column families), every flush becomes expensive, so it is recommended to keep the number of ColumnFamilies as small as possible when designing tables. (A sketch for checking the related settings follows this list.)

MemStore-level limit: when any single MemStore in a region reaches hbase.hregion.memstore.flush.size (default 128 MB), a flush of that region is triggered.

Region-level limit: when the total size of all MemStores in a region reaches hbase.hregion.memstore.block.multiplier * hbase.hregion.memstore.flush.size (default 2 * 128 MB = 256 MB), a flush is triggered (and writes to the region are blocked until it completes).

RegionServer-level limit: when the total size of all MemStores on a RegionServer reaches hbase.regionserver.global.memstore.upperLimit * hbase_heapsize (default 40% of the JVM heap), some MemStores are flushed. Flushing starts with the region whose MemStore usage is largest and proceeds to the next largest, until total MemStore usage drops below hbase.regionserver.global.memstore.lowerLimit * hbase_heapsize (default 38% of the JVM heap).

HLog limit: when the number of HLogs (WAL files) on a RegionServer reaches the limit configured by hbase.regionserver.maxlogs, the region(s) covered by the oldest HLog are flushed so that the oldest log can be released.

Periodic flush: by default once an hour, to make sure a MemStore does not go unpersisted for too long. To avoid all MemStores flushing at the same time, the periodic flush adds a random jitter of roughly 20000 ms.

Manual flush: the user can run the shell command flush 'tablename' or flush 'region name' to flush a single table or a single region.
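
To see which of these limits the cluster is actually running with, the overrides can be pulled out of hbase-site.xml. A minimal sketch is below; the config path is an assumption, and any parameter not overridden there falls back to the defaults in hbase-default.xml (each match prints the <name> line plus the following <value> line in a typically formatted file):

$ grep -A1 -E 'hbase\.(hregion\.memstore\.(flush\.size|block\.multiplier)|regionserver\.(global\.memstore\.(upperLimit|lowerLimit)|maxlogs))' /etc/hbase/conf/hbase-site.xml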

 

The problem above is therefore not purely an HBase problem; it also involves HDFS.

3. Check the "Slow" messages on the DataNodes around the time HBase was writing:

$ egrep -o "Slow.*?(took|cost)" hadoop-hduser-datanode-a3ser.log.1 |sort |uniq -c
     36 Slow BlockReceiver write data to disk cost
   2743 Slow BlockReceiver write packet to mirror took
      2 Slow flushOrSync took
     35 Slow manageWriterOsCache took
     21 Slow PacketResponder send ack to upstream took

Explanation of the messages (a sketch for running the same scan on every DataNode follows this list):

Slow BlockReceiver write data to disk cost: there was a delay writing the block to the OS cache or to disk.

Slow BlockReceiver write packet to mirror took: there was a delay forwarding the block over the network to the next DataNode in the pipeline.

Slow manageWriterOsCache took: there was a delay writing the block to the OS cache or to disk.

Slow PacketResponder send ack to upstream took: there was a delay sending the ack back to the upstream node in the write pipeline (typically a network issue or a slow downstream node).

Slow flushOrSync took: there was a delay flushing or syncing the block to the OS cache or to disk.
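
Since a write pipeline spans several DataNodes (the targets 11.23.3.3 and 11.23.3.5 in the RegionServer log above), running the same count on every DataNode shows whether one node or the whole pipeline is slow. A minimal sketch, assuming passwordless ssh; the hostnames and the log path are placeholders to adjust:

$ for h in dn1 dn2 dn3; do echo "== $h =="; ssh "$h" 'egrep -o "Slow.*?(took|cost)" /var/log/hadoop/hadoop-hduser-datanode-*.log* | sort | uniq -c'; done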

 

4. Remediation

1. Tune the MemStore sizes and the HLog count limit (hbase.regionserver.maxlogs) discussed above.
2. Check HDFS health and repair any problems found (see the sketch after this list).
3. Check DataNode load and the network between the nodes.
4. Restart the affected RegionServer.
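
A minimal sketch of steps 2-4 using standard Hadoop/HBase tooling; hosts and IPs below are placeholders taken from the logs, iostat assumes the sysstat package is installed, and HBASE_HOME is assumed to be set:

$ hdfs fsck / -files -blocks -locations | tail -20          # overall HDFS health: corrupt/missing/under-replicated blocks
$ hdfs dfsadmin -report                                     # per-DataNode capacity, remaining space, last contact
$ iostat -x 5 3                                             # run on the slow DataNode: disk utilization and await
$ ping -c 5 11.23.3.3                                       # pipeline target from the RS log: latency / packet loss
$ $HBASE_HOME/bin/graceful_stop.sh --restart --reload a3ser # drain regions off the RegionServer, restart it, move regions back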

 

 

Reference: http://ddrv.cn/a/258124
