Starting HBase


start-dfs.sh
start-yarn.sh
start-hbase.sh

1. Starting HBase first: HBase ships with a built-in ZooKeeper. If no separate ZooKeeper is installed, starting HBase also launches an HQuorumPeer process.
2. Starting ZooKeeper first: if an external ZooKeeper manages HBase, start ZooKeeper first and then HBase; afterwards there is a QuorumPeerMain process.

The two process names differ:
HQuorumPeer is the ZooKeeper instance managed by HBase;
QuorumPeerMain is a standalone ZooKeeper process.
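
To check which mode you are in, run jps. A sketch of typical output on a single-node setup (PIDs and the exact daemon list vary):

jps
2305 NameNode
2486 DataNode
3012 HMaster
3190 HRegionServer
2974 HQuorumPeer

With an external ZooKeeper, the last line shows QuorumPeerMain instead of HQuorumPeer.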

If HBase reports it is still initializing and cannot be used, reformatting HDFS fixes it.
Of course, that also wipes all your data.
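
A minimal sketch of the reset, assuming a single-node cluster whose hadoop.tmp.dir is /usr/local/hadoop/tmp (substitute your own data directories):

stop-hbase.sh
stop-dfs.sh
rm -rf /usr/local/hadoop/tmp/*    # wipes NameNode and DataNode state; everything stored in HDFS is lost
hdfs namenode -format
start-dfs.sh
start-hbase.sh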

If sbt package fails like this:

bigdata@/usr/local/spark/mycode/hbase| sbt package
[info] Updated file /usr/local/spark/mycode/hbase/project/build.properties: set sbt.version to 1.3.8
[info] Loading settings for project global-plugins from build.sbt ...
[info] Loading global plugins from /home/bigdata/.sbt/1.0/plugins
[info] Loading project definition from /usr/local/spark/mycode/hbase/project
[info] Loading settings for project hbase from simple.sbt ...
[info] Set current project to Simple Project (in build file:/usr/local/spark/mycode/hbase/)
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] Compiling 1 Scala source to /usr/local/spark/mycode/hbase/target/scala-2.11/classes ...
[error] /usr/local/spark/mycode/hbase/src/main/scala/SparkOperateHbase.scala:4:8: object TableInputFormat is not a member of package org.apache.hadoop.hbase.mapreduce
[error] import org.apache.hadoop.hbase.mapreduce.TableInputFormat
[error]        ^
[error] /usr/local/spark/mycode/hbase/src/main/scala/SparkOperateHbase.scala:16:14: not found: value TableInputFormat
[error]     conf.set(TableInputFormat.INPUT_TABLE, "student")
[error]              ^
[error] /usr/local/spark/mycode/hbase/src/main/scala/SparkOperateHbase.scala:17:51: not found: type TableInputFormat
[error]     val stuRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
[error]                                                   ^
[error] three errors found
[error] (Compile / compileIncremental) Compilation failed
[error] Total time: 22 s, completed May 16, 2020 10:36:21 AM
bigdata@/usr/local/spark/mycode/hbase| 

Adding a dependency solves it. Reference: 解決找不到TableInputFormat.
In HBase 2.x, TableInputFormat lives in the separate hbase-mapreduce module, so add one line to the original simple.sbt:

name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.5"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "2.1.10"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "2.1.10"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "2.1.10"
libraryDependencies += "org.apache.hbase" % "hbase-mapreduce" % "2.1.10"

Packaging now succeeds:

bigdata@/usr/local/spark/mycode/hbase| sbt package
[info] Loading settings for project global-plugins from build.sbt ...
[info] Loading global plugins from /home/bigdata/.sbt/1.0/plugins
[info] Loading project definition from /usr/local/spark/mycode/hbase/project
[info] Loading settings for project hbase from simple.sbt ...
[info] Set current project to Simple Project (in build file:/usr/local/spark/mycode/hbase/)
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] Compiling 1 Scala source to /usr/local/spark/mycode/hbase/target/scala-2.11/classes ...
[success] Total time: 17 s, completed May 16, 2020 4:50:28 PM
bigdata@/usr/local/spark/mycode/hbase| 

NotServingRegionException

2020-05-17 21:43:49,193 INFO  [ubuntuForBigdata:16000.activeMasterManager] zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=ubuntuforbigdata,16201,1589722735655, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on ubuntuforbigdata,16201,1589723019231
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2899)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:960)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1245)
    at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22233)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:748)

2020-05-20 17:41:24,254 INFO  [ubuntuForBigdata:16000.activeMasterManager] master.MasterFileSystem: Log folder hdfs://localhost:9000/hbase/WALs/ubuntuforbigdata,16201,1589967677343 belongs to an existing region server
2020-05-20 17:41:24,354 INFO  [ubuntuForBigdata:16000.activeMasterManager] zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=ubuntuforbigdata,16201,1589945558750, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on ubuntuforbigdata,16201,1589967677343
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2899)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:960)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1245)
    at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22233)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:748)

2020-05-20 17:41:24,358 INFO  [ubuntuForBigdata:16000.activeMasterManager] master.MasterFileSystem: Log dir for server ubuntuforbigdata,16201,1589945558750 does not exist

This is caused by the hbase:meta location recorded in ZooKeeper being lost or corrupted when the HBase service was stopped. To recover:

Stop the HBase service, then stop the ZooKeeper service.

On every ZooKeeper node, delete the files under the directory that dataDir in zoo.cfg points to (here dataDir=/hadoop/zookeeper-data).

If you are using the ZooKeeper bundled with HBase, clear the directory configured in hbase-site.xml instead:

    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/download/hbase-1.2.5/tmp</value>
    </property>

Then restart ZooKeeper and restart HBase (the whole procedure is sketched below).
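
A condensed sketch for the external-ZooKeeper case, using the example dataDir above (adjust the path to match your zoo.cfg, and run the cleanup on every ZooKeeper node):

stop-hbase.sh
zkServer.sh stop
rm -rf /hadoop/zookeeper-data/version-2    # ZooKeeper's snapshot/log data; keep the myid file if your ensemble uses one
zkServer.sh start
start-hbase.sh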

Source: http://www.lizhe.name/node/78

NameNode stuck in safe mode

The NameNode web UI reports:

Security is off.

Safe mode is ON. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.

26 files and directories, 5 blocks = 31 total filesystem object(s).

Heap Memory used 46.42 MB of 166 MB Heap Memory. Max Heap Memory is 889 MB.

Non Heap Memory used 46.26 MB of 47 MB Commited Non Heap Memory. Max Non Heap Memory is -1 B. 
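
The message itself names the fix: free up disk space on the NameNode host first, then turn safe mode off manually. A sketch:

df -h                          # "Resources are low on NN" usually means the NameNode's disk is nearly full
hdfs dfsadmin -safemode get    # confirm the current state after freeing space
hdfs dfsadmin -safemode leave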

spark-submit fails when submitting the program

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
	at org.apache.spark.deploy.SparkHadoopUtil.appendS3AndSparkHadoopConfigurations(SparkHadoopUtil.scala:106)
	at org.apache.spark.deploy.SparkHadoopUtil.newConfiguration(SparkHadoopUtil.scala:116)
	at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:50)
	at org.apache.spark.deploy.SparkHadoopUtil$.hadoop$lzycompute(SparkHadoopUtil.scala:384)
	at org.apache.spark.deploy.SparkHadoopUtil$.hadoop(SparkHadoopUtil.scala:384)
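
This overload of Preconditions.checkArgument only exists in newer Guava releases, so the error means an older guava jar on the classpath is shadowing the one Hadoop expects. A common workaround is to let the newer jar win; a hedged sketch (the version numbers below are illustrative, check what your installation actually ships):

ls $SPARK_HOME/jars/ | grep guava                       # e.g. an old guava-14.0.1.jar
ls $HADOOP_HOME/share/hadoop/common/lib/ | grep guava   # e.g. a newer guava-27.0-jre.jar
mv $SPARK_HOME/jars/guava-14.0.1.jar ~/jar-backup/      # remove the old jar (keep a backup)
cp $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar $SPARK_HOME/jars/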

Running create 'student','info' in the hbase shell fails:

ERROR: java.util.concurrent.ExecutionException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/data/default/student/644db1e708712afe0920984db5a31c55/.regioninfo could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
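
Here the NameNode can see one DataNode but cannot place a block replica on it; on a single-node setup that usually means the DataNode's volume is (nearly) full or its storage state is inconsistent, for example after a NameNode reformat. A hedged first round of checks:

hdfs dfsadmin -report    # inspect each DataNode's Capacity and DFS Remaining
df -h                    # block allocation fails when the DataNode's volume is out of space
# if the state is inconsistent after an HDFS reformat, the reset sketched at the
# top of this article is the last resort (it loses all HDFS data)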

