0, HBase Overview
HBase is a subproject of Apache Hadoop: an open-source, distributed, multi-version, column-oriented, scalable NoSQL data store that relies on Hadoop's HDFS as its most basic storage layer. HBase follows a simple master/slave server architecture, consisting of a group of HRegionServers and an HMaster server. The HMaster server manages all of the HRegionServers, and all servers in HBase share distributed state and coordinate their work through ZooKeeper. The HMaster server itself stores none of HBase's data: a logical HBase table may be split into multiple Regions, which are distributed across the HRegionServer group; the HRegionServers serve user I/O requests, reading and writing data against the HDFS file system. What the HMaster server stores is the mapping from data to HRegionServers.
The figure below shows where HBase sits in the Hadoop ecosystem:

The figure above depicts the layers of the Hadoop ecosystem, with HBase in the structured-storage layer: Hadoop HDFS provides HBase with highly reliable underlying storage, Hadoop MapReduce provides high-performance computation, and ZooKeeper provides stable service and a failover mechanism. In addition, Pig and Hive offer high-level language support for HBase, making statistical processing of HBase data very simple, and Sqoop provides convenient RDBMS data import, making it easy to migrate data from traditional databases into HBase.
1, Environment Prerequisites
- Install Hadoop
- Install ZooKeeper
2, Download and Installation
- The HBase version must match the Hadoop version, or the installation will fail or not work properly. To find out which versions are compatible, consult the official HBase documentation (http://hbase.apache.org/book.html#basic.prerequisites), then download the HBase release that corresponds to your Hadoop version (http://archive.apache.org/dist/hbase/).
- Unpack HBase with tar:

```shell
cd /usr/local
tar -zxvf hbase-1.2.1-bin.tar.gz
mv hbase-1.2.1 /home/hbase
```
- Set the environment variables with vi /etc/profile
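The exact entries depend on your installation paths; assuming the paths used in this walkthrough (the ZK_HOME variable is the one hbase-env.sh relies on below), /etc/profile might gain lines like these:

```shell
# Assumed installation paths -- adjust to your environment.
export JAVA_HOME=/usr/local/jdk1.8
export HADOOP_HOME=/home/hadoop
export ZK_HOME=/home/zookeeper     # which ZooKeeper installation HBase should use
export HBASE_HOME=/home/hbase
export PATH=$PATH:$HBASE_HOME/bin
```

Run `source /etc/profile` afterwards so the current shell picks up the changes.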

3, System Parameter Configuration
The configuration work is as follows:
- Edit the runtime environment with vi /home/hbase/conf/hbase-env.sh

```shell
export JAVA_HOME=/usr/local/jdk1.8
export HBASE_PID_DIR=/home/hbase/pid   # create it first with: mkdir /home/hbase/pid
export HBASE_MANAGES_ZK=false          # do not use the bundled ZooKeeper; use the one we installed ourselves (which one is chosen via the ZK_HOME variable in /etc/profile)
```
- Configure the system parameters with vi conf/hbase-site.xml

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
    <description>Directory where HBase stores its data, here on Hadoop HDFS. The scheme, host, and port must match fs.default.name in Hadoop's core-site.xml, with your own subdirectory appended; here it is /hbase.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>Enable fully distributed mode.</description>
  </property>
  <property>
    <name>hbase.master</name>
    <value>master</value>
    <description>The master (control) node of the HBase cluster.</description>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/user/tmp/hbase</value>
    <description>Directory for HBase temporary files.</description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
    <description>Names of the ZooKeeper quorum nodes; the membership matters because ZooKeeper decides by a voting algorithm.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
    <description>Port for connecting to ZooKeeper; defaults to 2181.</description>
  </property>
</configuration>
```
- vi conf/regionservers — this file specifies which nodes the HRegionServer processes will run on:

```
master
slave1
slave2
```
- Copy the installation to the other nodes with the following commands:

```shell
scp -r /home/hbase root@slave1:/home/
scp -r /home/hbase root@slave2:/home/
```

When that finishes, set the environment variables on each node with vi /etc/profile.
4, Starting the HBase Service
Before starting HBase, make sure Hadoop and ZooKeeper are already running. Enter the $HBASE_HOME/bin directory and run the command start-hbase.sh.

Run jps to check the system processes:

On the other nodes:

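The original screenshots are not reproduced here. On a healthy cluster, jps output typically looks roughly like the following (process IDs are illustrative, and the exact Hadoop daemons shown depend on what runs on each node):

```
# master node
2487 HMaster
2156 NameNode
2010 QuorumPeerMain
2731 Jps

# slave nodes
1893 HRegionServer
1751 DataNode
1642 QuorumPeerMain
2054 Jps
```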
Startup logs are written to /home/hbase/logs/hbase-root-master-master.log and can be inspected to diagnose problems.
5, Testing
Once startup completes, run hbase shell to enter the HBase shell, and use the status command to check the state of the cluster nodes.
Database operations can be executed from this shell; see http://www.cnblogs.com/nexiyi/p/hbase_shell.html for a command reference.
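As a quick smoke test, you can create a table, write a cell, and read it back from the shell (a sketch; the table name, column family, and values here are arbitrary examples):

```
hbase(main):001:0> status
hbase(main):002:0> create 'test', 'cf'
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):004:0> scan 'test'
hbase(main):005:0> disable 'test'
hbase(main):006:0> drop 'test'
```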
You can also open http://192.168.137.122:16010/master-status directly to inspect the cluster status in the web UI, where 192.168.137.122 is the IP of the master node and 16010 is HBase's default web UI port (60010 in older versions).
6, Errors
The following errors came up during this installation and testing:
- Node clocks out of sync
```
org.apache.hadoop.hbase.ClockOutOfSyncException: org.apache.hadoop.hbase.ClockOutOfSyncException:
Server hadoopslave2,60020,1372320861420 has been rejected; Reported time is too far out of sync
with master. Time difference of 143732ms > max allowed of 30000ms
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    ...
    at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2093)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:744)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.ClockOutOfSyncException:
Server hadoopslave2,60020,1372320861420 has been rejected; Reported time is too far out of sync
with master. Time difference of 143732ms > max allowed of 30000ms
```

Add the following to the hbase-site.xml file on each node:
```xml
<property>
  <name>hbase.master.maxclockskew</name>
  <value>200000</value>
</property>
```
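Raising hbase.master.maxclockskew only papers over the problem; the durable fix is to synchronize the nodes' clocks, e.g. with NTP. The rejection itself is just a threshold comparison, which the numbers from the log above illustrate (a minimal sketch; the variable names are mine):

```shell
max_skew_ms=30000   # default hbase.master.maxclockskew
diff_ms=143732      # "Time difference of 143732ms" from the error above
if [ "$diff_ms" -gt "$max_skew_ms" ]; then
  verdict="rejected"   # the master refuses the region server's report
else
  verdict="accepted"
fi
echo "$verdict"
```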
- Directory is not empty
```
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.PathIsNotEmptyDirectoryException):
`/hbase/WALs/slave1,16000,1446046595488-splitting is non empty': Directory is not empty
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3524)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3463)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:751)
    ...
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:584)
    at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:297)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:400)
    at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.splitLogs(ServerCrashProcedure.java:388)
    at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:228)
    ...
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:479)
```
As discussed in https://issues.apache.org/jira/browse/HBASE-14729, go into the Hadoop file system and delete the directory named in the error, or the whole WALs directory (e.g. `hdfs dfs -rm -r /hbase/WALs/slave1,16000,1446046595488-splitting`).

- TableExistsException: hbase:namespace
```
zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=slave1,16020,1428456823337,
exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online
on worker05,16020,1428461295266
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2740)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:859)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1137)
    ...
    at java.lang.Thread.run(Thread.java:745)
```

HMaster dies by itself shortly after starting (or restarts abnormally), and the master's log contains "TableExistsException: hbase:namespace".
The most likely cause is that the HBase version was changed while ZooKeeper still holds the previous HBase installation's state, creating a conflict.
Deleting the stale ZooKeeper state and restarting resolves it:

```
# sh zkCli.sh -server slave1:2181
[zk: slave1:2181(CONNECTED) 0] ls /
[zk: slave1:2181(CONNECTED) 0] rmr /hbase
[zk: slave1:2181(CONNECTED) 0] quit
```
7, References
- HBase system architecture and data structures http://www.open-open.com/lib/view/open1346821084631.html
- HRegionServer in detail http://www.superwu.cn/2015/04/28/2081/
- An in-depth analysis of the HBase RegionServer http://www.tuicool.com/articles/R3UB73
- HBase: a very detailed introduction http://blog.csdn.net/frankiewang008/article/details/41965543
- Setting up ZooKeeper and HBase: process and problems encountered http://my.oschina.net/hanzhankang/blog/129335?fromerr=zuMjZe9d
- A summary of Hadoop/HBase maintenance problems http://www.tuicool.com/articles/yAr2Yf2
- A collection of problems from HBase cluster installation http://www.cnblogs.com/likehua/p/3850253.html
- Common errors in Hadoop and HBase clusters http://www.itomcn.com/hadoop-hbase-errors.html
- HBase basics and which scenarios suit HBase http://www.cnblogs.com/bhlsheji/p/5406816.html
