Fixing the problem of HBase crashing frequently


1. Background: a test cluster with six HDFS nodes and one HBase node.

While using HBase, the HBase process kept crashing.

2. Error symptoms:

2020-06-05 15:28:27,670 WARN  [RS_OPEN_META-bb-cc-aa:16020-0-MetaLogRoller] wal.ProtobufLogWriter: Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes xxx:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1137)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:933)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
2020-06-05 15:28:27,670 WARN  [RS_OPEN_META-xxx4:16020-0-MetaLogRoller] wal.FSHLog: Riding over failed WAL close of hdfs://xxxx:9000/hbase/WALs/xxx,16020,1591341651357/xxx%2C16020%2C1591341651357.meta.1591341967425.meta, cause="All datanodes xxx:50010 are bad. Aborting...", errors=2; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
2020-06-05 15:28:27,671 INFO  [RS_OPEN_META-xxx:16020-0-MetaLogRoller] wal.FSHLog: Rolled WAL /hbase/WALs/xxx,16020,1591341651357/xxx%2C16020%2C1591341651357.meta.1591341967425.meta with entries=61, filesize=47.94 KB; new WAL /hbase/WALs/xxx,16020,1591341651357/xxx%2C16020%2C1591341651357.meta.1591342107655.meta
2020-06-05 15:28:53,482 WARN  [ResponseProcessor for block BP-705947195-xxx-1495826397385:blk_1080155959_6415222] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-705947195-xxx.82-1495826397385:blk_1080155959_6415222
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2000)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:798)
2020-06-05 15:29:27,704 WARN  [ResponseProcessor for block BP-705947195-xxx-1495826397385:blk_1080156011_6415223] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-705947195-xxx-1495826397385:blk_1080156011_6415223
java.io.EOFException: Premature EOF: no length prefix available

 

3. Analysis

Judging from the logs, the errors occur when HBase flushes and writes data to HDFS. The data volume on this test cluster is tiny (a flush may only involve a few tens of KB), so datanode write failures really should not be happening.
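Before changing any configuration, it is worth ruling out obvious datanode-side problems (dead nodes, full disks, a datanode stalled by errors). A minimal check using standard HDFS commands; the log path below is only an example and will differ per installation:

# confirm all six datanodes are live and have disk space left
hdfs dfsadmin -report | grep -E "Live datanodes|Dead datanodes|DFS Remaining"
# scan the log of the datanode named in "All datanodes xxx:50010 are bad"
grep -iE "exception|slow|xceiver" /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log | tail -n 50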

 

4. Attempted fixes:

1. Try increasing the HDFS socket timeouts:

<property>
    <name>dfs.client.socket-timeout</name>
    <value>6000000</value>
</property>
<property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>6000000</value>
</property>
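Both values are in milliseconds, so 6000000 is 100 minutes. They go into hdfs-site.xml, and since dfs.client.socket-timeout is a client-side setting it must also be in the hdfs-site.xml that HBase reads; the affected services need a restart. A quick way to check what value a client actually sees (standard hdfs CLI, run against the same configuration directory as the service):

hdfs getconf -confKey dfs.client.socket-timeout
hdfs getconf -confKey dfs.datanode.socket.write.timeout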

2. Try tuning the RegionServer JVM:
export HBASE_REGIONSERVER_OPTS=" -XX:+UseG1GC -Xmx8g -Xms8g -XX:+UnlockExperimentalVMOptions -XX:MaxGCPauseMillis=100 -XX:-ResizePLAB -XX:+ParallelRefProcEnabled -XX:+AlwaysPreTouch -XX:ParallelGCThreads=32 -XX:ConcGCThreads=8 -XX:G1HeapWastePercent=3 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1MixedGCLiveThresholdPercent=85 -XX:MaxDirectMemorySize=12g -XX:G1NewSizePercent=1 -XX:G1MaxNewSizePercent=15 -verbose:gc -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:PrintSafepointStatisticsCount=1 -XX:PrintFLSStatistics=1 -Xloggc:${HBASE_LOG_DIR}/gc-hbase-regionserver-`hostname`.log"
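This line is meant for conf/hbase-env.sh on each RegionServer (an assumption about how the cluster is deployed). The idea is that an 8 GB G1 heap with a 100 ms pause target keeps GC pauses short, so the RegionServer neither misses ZooKeeper heartbeats nor stalls its HDFS write pipeline for long. After restarting, a rough check that the flags were picked up (<regionserver_pid> is a placeholder):

jps | grep HRegionServer                                   # find the RegionServer PID
jinfo -flags <regionserver_pid> | grep -o "MaxGCPauseMillis=[0-9]*"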

3. Try increasing the ZooKeeper session timeout, and have the RegionServer restart rather than abort when its session expires:
<property>
    <name>zookeeper.session.timeout</name>
    <value>600000</value>
</property>

<property>
    <name>hbase.regionserver.restart.on.zk.expire</name>
    <value>true</value>
</property>
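One caveat: zookeeper.session.timeout is only what the client requests; the ZooKeeper server clamps negotiated sessions to its minSessionTimeout/maxSessionTimeout range (by default 2x and 20x tickTime, i.e. roughly 4 to 40 seconds with tickTime=2000), so the 600000 ms above is silently reduced unless the server-side limit is raised as well. A sketch of the matching zoo.cfg change on every ZooKeeper server, assuming an external ensemble rather than HBase-managed ZooKeeper:

# zoo.cfg -- allow clients to negotiate sessions of up to 10 minutes
tickTime=2000
maxSessionTimeout=600000

Also note that hbase.regionserver.restart.on.zk.expire is an old option and may simply be ignored by recent HBase releases.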

4. In hdfs-site.xml, try changing the datanode replacement policy used when a node in the write pipeline fails:
<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
</property>

<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>ALWAYS</value>  <!-- other valid values: DEFAULT, NEVER -->
</property>
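With ALWAYS, the client insists on finding a replacement datanode whenever one in the write pipeline fails, and the write fails if no replacement is available, which can be counterproductive on a small six-node cluster. Newer Hadoop 2.x releases also offer a best-effort switch that lets the client keep writing with the remaining datanodes instead of failing; a sketch of the extra hdfs-site.xml property (confirm the key exists in your Hadoop version's hdfs-default.xml before relying on it):

<property>
    <!-- if no replacement datanode can be found, keep writing with the
         remaining ones instead of aborting the stream (assumes a Hadoop
         version that supports this key) -->
    <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
    <value>true</value>
</property>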

5. Increase the number of datanode data-transfer threads:
<property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>8192</value>
</property>
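dfs.datanode.max.transfer.threads (the replacement for the old dfs.datanode.max.xcievers setting) only helps if the open-file limit of the user running the datanode is high enough; otherwise the extra threads just run into "Too many open files". A quick check, assuming the datanode runs as the hdfs user (adjust the user name for your installation):

su -s /bin/bash hdfs -c 'ulimit -n'    # effective nofile limit for the hdfs user
# if it is small (e.g. 1024), raise it, for example in /etc/security/limits.conf:
#   hdfs  -  nofile  65536
# then restart the datanodes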

Now waiting to see whether the problem recurs.

 

Reference:

https://blog.csdn.net/microGP/article/details/81234065

