Installing HBase 1.4.9


Introduction to HBase

HBase (Hadoop Database) is a highly reliable, high-performance, column-oriented, scalable distributed storage system. With HBase you can build a large-scale structured storage cluster on inexpensive commodity PC servers.

HBase is an open-source implementation of Google Bigtable. Just as Bigtable uses GFS as its file storage system, HBase uses Hadoop HDFS; and just as Google runs MapReduce to process the massive amounts of data in Bigtable, HBase uses Hadoop MapReduce to process the massive amounts of data it stores.

Within the Hadoop ecosystem, HBase sits at the structured storage layer: Hadoop HDFS provides highly reliable underlying storage for HBase, Hadoop MapReduce provides high-performance computation over HBase data, and ZooKeeper provides stable coordination services and a failover mechanism.

In addition, Pig and Hive provide high-level language support for HBase, which makes statistical processing of HBase data very simple, while Sqoop provides convenient RDBMS data import, making it easy to migrate data from traditional databases into HBase.

Before installing HBase, install a standalone zookeeper-3.4.12 cluster. For details, see my earlier post: https://www.cnblogs.com/hello-wei/p/10261517.html
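
Before continuing, it is worth confirming that the standalone ZooKeeper ensemble is actually running on every node. A minimal check; the install path /home/hadoop/zookeeper-3.4.12 is an assumption, adjust it to match your layout:

[hadoop@master ~]$ /home/hadoop/zookeeper-3.4.12/bin/zkServer.sh status   # repeat on saver1 and saver2; expect "Mode: leader" on one node and "Mode: follower" on the others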

Upload the downloaded package to the server and extract it so that its contents end up under /home/hadoop/hbase (a sketch of the extract commands follows the listing below). After extraction:

[hadoop@master hbase]$ ll
total 788
drwxr-xr-x.  4 hadoop hadoop   4096 Dec  5 10:53 bin
-rw-r--r--.  1 hadoop hadoop 228302 Dec  5 10:58 CHANGES.txt
drwxr-xr-x.  2 hadoop hadoop   4096 Dec  5 10:54 conf
drwxr-xr-x. 12 hadoop hadoop   4096 Dec  5 11:53 docs
drwxr-xr-x.  7 hadoop hadoop   4096 Dec  5 11:43 hbase-webapps
-rw-r--r--.  1 hadoop hadoop    261 Dec  5 11:56 LEGAL
drwxrwxr-x.  3 hadoop hadoop   4096 Jan  8 11:10 lib
-rw-r--r--.  1 hadoop hadoop 143082 Dec  5 11:56 LICENSE.txt
-rw-r--r--.  1 hadoop hadoop 404470 Dec  5 11:56 NOTICE.txt
-rw-r--r--.  1 hadoop hadoop   1477 Dec  5 10:53 README.txt
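
The extract step itself might look like the following sketch; the tarball name hbase-1.4.9-bin.tar.gz and its upload location are assumptions:

[hadoop@master ~]$ tar -zxvf hbase-1.4.9-bin.tar.gz      # unpack the HBase 1.4.9 release tarball
[hadoop@master ~]$ mv hbase-1.4.9 /home/hadoop/hbase     # rename so that HBASE_HOME becomes /home/hadoop/hbase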

Set environment variables

[root@master master]# vi /etc/profile

export HBASE_HOME=/home/hadoop/hbase
export PATH=$PATH:$HBASE_HOME/bin

[root@master master]# source /etc/profile

[hadoop@master ~]$ hbase version
HBase 1.4.9
Source code repository git://apurtell-ltm4.internal.salesforce.com/Users/apurtell/src/hbase revision=d625b212e46d01cb17db9ac2e9e927fdb201afa1
Compiled by apurtell on Wed Dec  5 11:54:10 PST 2018
From source with checksum a7716fc1849b07ea6dd830a08291e754

Edit hbase-env.sh

# Comment out the two lines below:
# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
# Java environment
export JAVA_HOME=/usr/local/jdk1.8
# Point HBase at the Hadoop cluster via Hadoop's configuration directory
export HBASE_CLASSPATH=/home/hadoop/hadoop-2.7.3/etc/hadoop
# Do not use HBase's bundled ZooKeeper; the cluster uses the standalone ZooKeeper installed earlier
export HBASE_MANAGES_ZK=false

Edit hbase-site.xml

<configuration>
  <!-- Must match the fs.defaultFS value (hdfs://192.168.1.30:9000) in Hadoop's core-site.xml -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.1.30:9000/hbase</value>
  </property>
  <!-- Enable distributed cluster mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Default HMaster HTTP port -->
  <property>
    <name>hbase.master.info.port</name>
    <value>16010</value>
  </property>
  <!-- Default HRegionServer HTTP port -->
  <property>
    <name>hbase.regionserver.info.port</name>
    <value>16030</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master:2181,saver1:2181,saver2:2181</value>
  </property>
  <property>
    <name>hbase.coprocessor.abortonerror</name>
    <value>false</value>
  </property>
</configuration>

Edit regionservers

[hadoop@master conf]$ vi regionservers 
saver1
saver2

Copy the HBase directory to the other nodes:

scp -r hbase hadoop@192.168.1.40:/home/hadoop
scp -r hbase hadoop@192.168.1.50:/home/hadoop

Start HBase:

start-hbase.sh

[hadoop@master hadoop]$ start-hbase.sh
running master, logging to /home/hadoop/hbase/logs/hbase-hadoop-master-master.out
saver2: running regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-saver2.out
saver1: running regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-saver1.out
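
Once the daemons are up, the HMaster should have created the HBase root directory in HDFS. A quick way to confirm, assuming the hdfs command from the Hadoop install is on the PATH:

[hadoop@master ~]$ hdfs dfs -ls /hbase     # expect subdirectories such as data and WALs plus the hbase.id file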

Enter the HBase shell

[hadoop@master hadoop]$  hbase shell
2019-01-15 22:31:31,100 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 1.4.9, rd625b212e46d01cb17db9ac2e9e927fdb201afa1, Wed Dec  5 11:54:10 PST 2018

hbase(main):001:0> list
TABLE                                                                                                                                                                                        
0 row(s) in 0.2370 seconds

=> []
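
As a further sanity check, you can create a small table, write one cell, and scan it back; the table name and column family below are just examples:

hbase(main):002:0> create 'test', 'cf'                     # table 'test' with a single column family 'cf'
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):004:0> scan 'test'                             # should return the single row written above
hbase(main):005:0> disable 'test'
hbase(main):006:0> drop 'test'                             # clean up the test table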

Background processes on master:

[hadoop@master hadoop]$ jps
5235 Jps
3380 NodeManager
2917 DataNode
4886 HMaster
3062 SecondaryNameNode
3705 QuorumPeerMain
3274 ResourceManager
2814 NameNode
[hadoop@master hadoop]$

Background processes on a region server (the output below is from saver2):

[hadoop@saver2 ~]$ jps
3320 Jps
3164 HRegionServer
2668 QuorumPeerMain
2396 DataNode
2510 NodeManager
[hadoop@saver2 ~]$
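
The web UIs configured in hbase-site.xml offer one more quick check; this assumes the hostnames resolve on the machine you run it from and that curl is installed:

[hadoop@master ~]$ curl -I http://master:16010/      # HMaster web UI (hbase.master.info.port)
[hadoop@master ~]$ curl -I http://saver1:16030/      # HRegionServer web UI (hbase.regionserver.info.port)

Both requests should return an HTTP response rather than a connection error.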

Done.

