HBase client deadlock while reading root-region-server from ZooKeeper (zookeeper.ClientCnxn: Unable to get data of znode /hbase/root-region-server)


On 2012-11-28 a failure occurred: "Unable to get data of znode /hbase/root-region-server"

The problem was odd: of our two data centers only one was affected, where five servers failed one after another with identical error logs. The HBase client version in use was 0.94.0.

1) Analysis steps

1 Use jstack and jmap to check for deadlocks, blocked threads, or out-of-memory problems

jmap showed nothing unusual in garbage collection; memory and CPU usage were both modest.
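For reference, heap and GC state can be inspected with commands along these lines (pid is the Tomcat process id; exact options vary by JDK version):

 jmap -heap pid
 jmap -histo:live pid > histo.log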

 jstack pid > test.log
pid: Unable to open socket file: target process not responding or HotSpot VM not loaded
The -F option can be used when the target process is not responding
When this error appears, retry with the -F option:
jstack -l -F pid >jstack.log
jstack -l pid >jstack.log
At that point jstack.log contains nothing useful; the thread dump ends up in catalina.out. Note that -F should only be used after the service has been taken out of rotation, because it can easily leave Tomcat hung.
catalina.out revealed a Java-level deadlock:
Found one Java-level deadlock:
=============================
"catalina-exec-800":
  waiting to lock monitor 0x000000005f1f6530 (object 0x0000000731902200, a java.lang.Object),
  which is held by "catalina-exec-710"
"catalina-exec-710":
  waiting to lock monitor 0x00002aaab9a05bd0 (object 0x00000007321f8708, a org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation),
  which is held by "catalina-exec-29-EventThread"
"catalina-exec-29-EventThread":
  waiting to lock monitor 0x000000005f9f0af0 (object 0x0000000732a9c7e0, a org.apache.hadoop.hbase.zookeeper.RootRegionTracker),
  which is held by "catalina-exec-710"
Java stack information for the threads listed above:
===================================================
"catalina-exec-800":
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:943)
        - waiting to lock <0x0000000731902200> (a java.lang.Object)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:836)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:807)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:725)
        at org.apache.hadoop.hbase.client.ServerCallable.connect(ServerCallable.java:82)
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:162)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:685)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:366)
        at com.weibo.api.commons.hbase.CustomHBase.get(CustomHBase.java:171)
        at com.weibo.api.commons.hbase.CustomHBase.get(CustomHBase.java:160)
        at com.weibo.api.commons.hbase.CustomHBase.get(CustomHBase.java:150)

"catalina-exec-710": at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.resetZooKeeperTrackers(HConnectionManager.java:599) - waiting to lock <0x00000007321f8708> (a org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.abort(HConnectionManager.java:1660) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.getData(ZooKeeperNodeTracker.java:158) - locked <0x0000000732a9c7e0> (a org.apache.hadoop.hbase.zookeeper.RootRegionTracker) at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.getRootRegionLocation(RootRegionTracker.java:62) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:801) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:933) ...... at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:123) at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:99) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:894) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:948) - locked <0x0000000731902200> (a java.lang.Object)
"catalina-exec-29-EventThread": at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.stop(ZooKeeperNodeTracker.java:98) - waiting to lock <0x0000000732a9c7e0> (a org.apache.hadoop.hbase.zookeeper.RootRegionTracker) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.resetZooKeeperTrackers(HConnectionManager.java:604) - locked <0x00000007321f8708> (a org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.abort(HConnectionManager.java:1660) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374) at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497) Found 1 deadlock.

The Interrupted exceptions reported in warn.log are caused by the deadlock above:

2012-11-28 20:06:17 [WARN] hconnection-0x4384d0a47f41c63 Unable to get data of znode /hbase/root-region-server
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1253)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1129)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:264)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:522)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:498)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.getData(ZooKeeperNodeTracker.java:156)
        at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.getRootRegionLocation(RootRegionTracker.java:62)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821)
        ......
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:807)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1042)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:836)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:807)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:725)
        at org.apache.hadoop.hbase.client.ServerCallable.connect(ServerCallable.java:82)
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:162)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:685)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:366)
      
2012-11-28 20:06:17 [ERROR] [hbase_error]
java.io.IOException: Giving up after tries=1
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:192)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:685)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:366)
 
Caused by: java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:189)
        ... 13 more

https://issues.apache.org/jira/browse/HBASE-5060 ("HBase client is blocked forever") is somewhat similar to this problem, but it does not resolve it.

2 Start a separate thread in the same process to read root-region-server

// Health check: connect to ZooKeeper directly and read the root-region-server znode
ZooKeeper zk = null;
Watcher watch = new Watcher() {
    public void process(WatchedEvent event) {
        // no-op: a watcher instance is only needed to open the connection
    }
};
String zookeeperQuorum = wbobjectHBaseConfMap.get("hbase.zookeeper.quorum");

if (StringUtils.isNotBlank(zookeeperQuorum)) {
    try {
        zk = new ZooKeeper(zookeeperQuorum, 30000, watch);
        byte[] data = zk.getData("/hbase/root-region-server", watch, null);
        ApiLogger.info(" get root-region-server success! ip:" + Util.toStr(data));
    } catch (Exception e) {
        ApiLogger.error(" get root-region-server error!" + e.getMessage());
    } finally {
        if (zk != null) {   // guard against the constructor having failed
            try {
                zk.close();
            } catch (InterruptedException e) {
                ApiLogger.error("close zk error!");
            }
        }
    }
}
                    

This standalone thread kept working normally even while the whole process was deadlocked. Once the HBase client's ZooKeeper instance hit this deadlock it never recovered: scan operations that normally complete in milliseconds timed out at 30s+ and stayed that way.
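The probe above was run on a dedicated thread inside the same JVM. A minimal way to wire it up might look like the sketch below (checkRootRegionServer() is a hypothetical method wrapping the probe code above, and the 10-second interval is arbitrary):

// Hypothetical scaffolding: run the ZooKeeper probe on a dedicated daemon thread,
// independent of the (possibly deadlocked) HBase client threads.
Thread zkProbe = new Thread(new Runnable() {
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            checkRootRegionServer();                // the probe code shown above
            try {
                TimeUnit.SECONDS.sleep(10);         // probe interval (arbitrary)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag and exit
            }
        }
    }
}, "zk-root-region-probe");
zkProbe.setDaemon(true);
zkProbe.start();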

These probe results point to a problem in how the HBase client handles its ZooKeeper connection. The failure sequence:

Network jitter occurs or the ROOT table moves; the cached region location misses, so the client re-reads root-region-server from ZooKeeper; that read fails, and the client then resets its ZooKeeper trackers.

Studying the code involved in the deadlock:

On one thread, HConnectionImplementation detects the failure to read root-region-server while it is still holding the RootRegionTracker monitor (inside getData); to reset ZooKeeper it calls resetZooKeeperTrackers, a method synchronized on the connection, so it now also needs to lock the connection itself before it can call RootRegionTracker.stop.
Meanwhile, another thread is already inside resetZooKeeperTrackers and therefore holds the HConnectionImplementation monitor; it calls RootRegionTracker.stop(), also a synchronized method, so it needs the RootRegionTracker monitor that the first thread holds.
In HConnectionImplementation the tracker is created as:
 this.rootRegionTracker = new RootRegionTracker(this.zooKeeper, this);
and there is no timeout or cancellation around these calls, so the two monitors are acquired in opposite orders and the threads wait on each other forever: a classic deadlock.
2) Simulation

The following small program reproduces this kind of deadlock:
import java.util.concurrent.TimeUnit;

// Minimal stand-in for the abortable-connection role that
// HConnectionManager$HConnectionImplementation plays in the real client.
interface AbortAble {
    void resetZooKeeperTrackers();
}

class ZooKeeperNodeTracker {
    private boolean stopped = false;
    private AbortAble hConnectionImplementation;
    public ZooKeeperNodeTracker(AbortAble hConnectionImplementation) {
        this.hConnectionImplementation = hConnectionImplementation;
    }

    public synchronized void stop()  throws InterruptedException {
        this.stopped = true;
        System.out.println(Thread.currentThread()+"|"+Thread.currentThread().getId()+"stop zknode");
        TimeUnit.MICROSECONDS.sleep(100);
        notifyAll();
    }

    public boolean condition() {
        return stopped;
    }

    public boolean start() {
         stopped=false;
         return true;
    }
    
    public synchronized boolean getData(int i) throws InterruptedException {
        // Every 100th call, simulate a failed read of root-region-server: abort the
        // connection (resetZooKeeperTrackers) while still holding this tracker's monitor.
        if (i % 100 == 0) {
            hConnectionImplementation.resetZooKeeperTrackers();
            throw new InterruptedException("interrupted");
        }
        return true;
    }

}

public class HConnectionManagerTest {

    static class HConnectionImplementation implements AbortAble{
        public HConnectionImplementation() {
            rootRegionTracker = new ZooKeeperNodeTracker(this);
        }

        private volatile ZooKeeperNodeTracker rootRegionTracker;
        @Override
        public synchronized void resetZooKeeperTrackers() {
            try{
                if (rootRegionTracker != null) {
                    rootRegionTracker.stop();
                    rootRegionTracker = null;
                    System.out.println(Thread.currentThread()+"|"+Thread.currentThread().getId()+"resetZooKeeperTrackers");
                }
            }catch(InterruptedException e){
                System.out.println(Thread.currentThread()+"----------resetZooKeeperTrackers Interrupted-----------");
                
            }
        }


        public void testGetData(String name) {
            // Keep issuing simulated scans; on failure reset the trackers (as the HBase
            // client does), and periodically recreate the tracker like a reconnect.
            int i = 1;
            while (i > 0) {
                i++;
                try {
                    rootRegionTracker.getData(i);
                } catch (Exception e) {
                    resetZooKeeperTrackers();
                }
                if(i %100 ==0){
                    rootRegionTracker = new ZooKeeperNodeTracker(this);
                    System.out.println(name+" restart test");
                }
            }
        }
    }

    public static void main(String[] args)  {

        final HConnectionImplementation hcon = new HConnectionImplementation();
        Thread scan1 = new Thread(new Runnable() {
            public void run() {
                hcon.testGetData("[scan1]");
            }
        });
        
        Thread scan2 = new Thread(new Runnable() {
            public void run() {
                hcon.testGetData("[scan2]");
            }
        });
        try{
            scan1.start();
            scan2.start();
            TimeUnit.SECONDS.sleep(2);
        }catch (InterruptedException e){
            System.out.println("----------testgetdata -------interrupt");
        }
    }

}

The class names and constructors mimic the HBase client, with the error path in getData greatly amplified. When two scan operations run concurrently, the first scan fails to get root-region-server and goes to run ZooKeeperNodeTracker.stop(), while the second scan has already begun executing resetZooKeeperTrackers() on the HConnectionImplementation; each thread holds one of the two monitors (ZooKeeperNodeTracker and HConnectionImplementation) while waiting for the other, and the program deadlocks. In the simulation the deadlock can be removed by changing the method

public synchronized boolean getData
to
public boolean getData

and by changing resetZooKeeperTrackers so that it synchronizes on the tracker instead of on the whole connection, i.e. from

public synchronized void resetZooKeeperTrackers() {
    try {
        if (rootRegionTracker != null) {
            rootRegionTracker.stop();
            ...

to

public void resetZooKeeperTrackers() {
    try {
        if (rootRegionTracker != null) {
            synchronized (rootRegionTracker) {
                rootRegionTracker.stop();
                ...

 

Breaking the circular wait between the two monitors in this way resolves the problem.
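Putting the two changes together, the fixed pair of methods in the simulation would look roughly like this (a sketch against the toy classes above, not the real HBase code; copying the volatile field into a local variable is an extra guard, added here, against the tracker being nulled by another thread between the null check and the synchronized block):

// In ZooKeeperNodeTracker: getData no longer holds the tracker monitor while
// calling back into the connection, so it cannot be part of a lock cycle.
public boolean getData(int i) throws InterruptedException {
    if (i % 100 == 0) {
        hConnectionImplementation.resetZooKeeperTrackers();
        throw new InterruptedException("interrupted");
    }
    return true;
}

// In HConnectionImplementation: synchronize on the tracker itself rather than
// on the whole connection, so no thread ever holds both monitors at once.
public void resetZooKeeperTrackers() {
    ZooKeeperNodeTracker tracker = rootRegionTracker;
    if (tracker == null) {
        return;
    }
    synchronized (tracker) {
        try {
            tracker.stop();
            rootRegionTracker = null;
        } catch (InterruptedException e) {
            System.out.println(Thread.currentThread() + " resetZooKeeperTrackers interrupted");
        }
    }
}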

3) Final solution

At the Hadoop conference we asked HBase developer Ted Yu about this; he said 0.94.0 is unstable with many bugs and recommended upgrading to 0.94.2. Checking the release notes (after analyzing the HBase source to locate the code involved and then reading its detailed svn change history), the two official patches below already fix the problem as of 0.94.1:
1 Avoid nested retry loops so that RPC handler threads cannot get stuck: https://issues.apache.org/jira/browse/HBASE-6326
2 Avoid the deadlock by waiting until the -ROOT- region location has been set in the root region tracker: https://issues.apache.org/jira/browse/HBASE-6115

 

