Resolving the org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException thrown when Spark writes to HBase


I. Exception

  19/03/21 15:01:52 WARN scheduler.TaskSetManager: Lost task 4.0 in stage 21.0 (TID 14640, hntest07, executor 64)  org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 actions: AAA.bbb: 3 times,
  at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:258)
  at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$2000(AsyncProcess.java:238)
  at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1810)
  at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
  at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
  at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1498)
  at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1094)
  at org.com.tl.spark.main.LabelSummaryTaskEntrance$$anonfun$main$1$$anonfun$apply$mcVI$sp$1.apply(LabelSummaryTaskEntrance.scala:163)
  at org.com.tl.spark.main.LabelSummaryTaskEntrance$$anonfun$main$1$$anonfun$apply$mcVI$sp$1.apply(LabelSummaryTaskEntrance.scala:127)
  at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
  at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1888)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1888)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
  at org.apache.spark.scheduler.Task.run(Task.scala:89)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)

II. Code

 import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
 import org.apache.hadoop.hbase.client.ConnectionFactory

 // Point the client at the ZooKeeper quorum used by the HBase cluster
 val config = HBaseConfiguration.create()
 config.set("hbase.zookeeper.quorum", "hbase01,hbase02,hbase03")
 config.set("hbase.zookeeper.property.clientPort", "2181")
 val connection = ConnectionFactory.createConnection(config)
 val admin = connection.getAdmin
 // Namespace and table name in full uppercase, separated by ":"
 val table = connection.getTable(TableName.valueOf("ZHEN:TABLENAME"))

III. Solutions

  1. In the highlighted line of the code above, write the namespace and the table name entirely in uppercase, separated by ":".

  2. HBase version mismatch: the HBase running on the cluster differs from the HBase client library that the Spark job imports.

  3. An HDFS DataNode or NameNode is down.

  4. The HBase HMaster or an HRegionServer has crashed.
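As a sketch of fix 1, a small helper (hypothetical, not part of the original code) can normalize a namespace and table name into the uppercase "NAMESPACE:TABLE" form before it is passed to TableName.valueOf:

```scala
// Hypothetical helper: builds the fully qualified HBase table name
// in the uppercase "NAMESPACE:TABLE" form described in fix 1 above.
def qualifiedTableName(namespace: String, table: String): String =
  s"${namespace.toUpperCase}:${table.toUpperCase}"

// Example usage with the code from section II:
//   connection.getTable(TableName.valueOf(qualifiedTableName("zhen", "tablename")))
// qualifiedTableName("zhen", "tablename") returns "ZHEN:TABLENAME"
```

Centralizing the name construction this way keeps the uppercase convention in one place instead of scattering hard-coded strings through the job.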

