With ZooKeeper and HDFS already installed in the earlier posts of this series, this article walks through the HBase installation in detail and closes with a summary of the problems encountered along the way.

Now on to the HBase installation.
HBase server plan

```
192.168.67.101  c6701   -- Master + RegionServer
192.168.67.102  c6702   -- Master (standby) + RegionServer
192.168.67.103  c6703   -- RegionServer
```
--- Install HBase on c6701

1. Create the hbase user and the required directories

```shell
su - root
useradd hbase
echo "hbase:hbase" | chpasswd
mkdir -p /data/zookeeper
mkdir -p /data/hbase/tmp
mkdir -p /data/hbase/logs
chown -R hbase:hbase /data/hbase
chmod -R a+rx /home/hdfs   # <<< give others read/execute on the hadoop directory -- important, HBase needs access
```
2. Unpack the software

```shell
su - hbase
cd /tmp/software
tar -zxvf hbase-1.1.3.tar.gz -C /home/hbase/
```
3. Configure hbase-site.xml

```
[hbase@c6701 conf]$ more hbase-site.xml
<configuration>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/data/hbase/tmp</value>
  </property>
  <property>
    <!-- Note: "ns" is the HDFS nameservice name. HBase can reach many HDFS
         clusters; naming the nameservice here is what pins it to this
         namenode. On HDFS you will see the /hbase directory itself, not "ns". -->
    <name>hbase.rootdir</name>
    <value>hdfs://ns/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>c6701,c6702,c6703</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data/zookeeper</value>
  </property>
</configuration>
```
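Before distributing this file to the other nodes, it is worth a quick sanity check that the values are what you expect. A minimal sketch with sed (the helper name is illustrative, and it assumes the one-tag-per-line layout shown above):

```shell
# Sketch: extract a property's value from hbase-site.xml, assuming
# <name> and <value> sit on consecutive lines as in the file above.
get_prop() {
  local file=$1 prop=$2
  sed -n "/<name>$prop<\/name>/{n;s:.*<value>\(.*\)</value>.*:\1:p;}" "$file"
}
# Example:
# get_prop /home/hbase/hbase-1.1.3/conf/hbase-site.xml hbase.rootdir
# should print hdfs://ns/hbase given the file above
```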
4. Configure hbase-env.sh

```
[hbase@c6701 conf]$ cat hbase-env.sh | grep -v "^#"
export JAVA_HOME=/usr/local/jdk1.8.0_144
export HBASE_CLASSPATH=$HADOOP_HOME/etc/hadoop/
export HBASE_HEAPSIZE=500M
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_REGIONSERVER_OPTS="-Xmx1g -Xms400m -Xmn128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx1g -Xms400m -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_LOG_DIR=/data/hbase/logs
export HBASE_PID_DIR=/data/hbase/hadoopPidDir
export HBASE_MANAGES_ZK=true
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HBASE_HOME/lib/:/usr/lib64/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/:/usr/lib64/
```
5. Watch the memory settings. This is a test environment, and the values below were originally set too large, so there was not enough memory and the daemons failed to start:

```shell
export HBASE_REGIONSERVER_OPTS="-Xmx1g -Xms400m -Xmn128m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx1g -Xms400m -XX:PermSize=128m -XX:MaxPermSize=128m"
```
The out-of-memory error:

```
[hbase@c6701 bin]$ ./hbase-daemon.sh start master
starting master, logging to /data/hbase/logs/hbase-hbase-master-c6701.python279.org.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006c5330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/hbase/hbase-1.1.3/bin/hs_err_pid7507.log
```
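This failure can be predicted before starting anything by comparing available memory with the combined heap settings. A minimal sketch, with illustrative names and values (on a live host, feed it the output of `free -m`; the awk column in the example assumes a recent procps `free`):

```shell
# Sketch: refuse to start daemons whose combined -Xmx exceeds what the
# host actually has free (all values in MB; helper name is illustrative).
check_heap_fits() {
  local avail_mb=$1 master_xmx_mb=$2 rs_xmx_mb=$3
  local need_mb=$((master_xmx_mb + rs_xmx_mb))
  if [ "$need_mb" -gt "$avail_mb" ]; then
    echo "insufficient: need ${need_mb}MB, have ${avail_mb}MB"
    return 1
  fi
  echo "ok"
}
# On a real host:
# check_heap_fits "$(free -m | awk '/^Mem:/{print $7}')" 1024 1024
```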
6. Add the Hadoop variables to /etc/profile; HBase needs HADOOP_HOME when it runs

```shell
export JAVA_HOME=/usr/local/jdk1.8.0_144
export JRE_HOME=/usr/local/jdk1.8.0_144/jre
export PATH=$JAVA_HOME/bin:$PATH:/home/hbase/hbase-1.1.3/bin
export HADOOP_HOME=/home/hdfs/hadoop-2.6.0-EDH-0u2
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
```
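Once the profile is in place, it is worth confirming the wiring before touching HBase. A small sketch (the function name is illustrative):

```shell
# Sketch: verify HADOOP_HOME is set and the hadoop launcher is
# executable before starting any HBase daemon.
check_hadoop_env() {
  if [ -z "$HADOOP_HOME" ]; then
    echo "HADOOP_HOME unset"; return 1
  fi
  if [ ! -x "$HADOOP_HOME/bin/hadoop" ]; then
    echo "hadoop not executable at $HADOOP_HOME/bin/hadoop"; return 1
  fi
  echo "ok"
}
# Example: source /etc/profile && check_hadoop_env
```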
--- Install HBase on c6702

7. Create the hbase user

```shell
ssh c6702 'useradd hbase; echo "hbase:hbase" | chpasswd'
```
8. Set up passwordless ssh for the hbase user

```shell
ssh-copy-id hbase@c6702
```
9. Copy the software, create the directories, and unpack

```shell
scp -r /tmp/software/hbase-1.1.3.tar.gz root@c6702:/tmp/software/.
ssh c6702 "chmod 777 /tmp/software/*; mkdir -p /data/zookeeper; mkdir -p /data/hbase/tmp; mkdir -p /data/hbase/logs; chown -R hbase:hbase /data/hbase"
ssh c6702 "chmod -R a+rx /home/hdfs"
ssh hbase@c6702 "tar -zxvf /tmp/software/hbase-1.1.3.tar.gz -C /home/hbase"
```
10. Copy the configuration files

```shell
scp -r /etc/profile root@c6702:/etc/profile
scp -r /home/hbase/hbase-1.1.3/conf/hbase-site.xml hbase@c6702:/home/hbase/hbase-1.1.3/conf/.
scp -r /home/hbase/hbase-1.1.3/conf/hbase-env.sh hbase@c6702:/home/hbase/hbase-1.1.3/conf/.
```
--- Install HBase on c6703

11. Create the hbase user

```shell
ssh c6703 'useradd hbase; echo "hbase:hbase" | chpasswd'
```
12. Set up passwordless ssh for the hbase user

```shell
ssh-copy-id hbase@c6703
```
13. Copy the software, create the directories, and unpack

```shell
scp -r /tmp/software/hbase-1.1.3.tar.gz root@c6703:/tmp/software/.
ssh c6703 "chmod 777 /tmp/software/*; mkdir -p /data/zookeeper; mkdir -p /data/hbase/tmp; mkdir -p /data/hbase/logs; chown -R hbase:hbase /data/hbase"
ssh c6703 "chmod -R a+rx /home/hdfs"
ssh hbase@c6703 "tar -zxvf /tmp/software/hbase-1.1.3.tar.gz -C /home/hbase"
```
14. Copy the configuration files

```shell
scp -r /etc/profile root@c6703:/etc/profile
scp -r /home/hbase/hbase-1.1.3/conf/hbase-site.xml hbase@c6703:/home/hbase/hbase-1.1.3/conf/.
scp -r /home/hbase/hbase-1.1.3/conf/hbase-env.sh hbase@c6703:/home/hbase/hbase-1.1.3/conf/.
```
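The per-node steps above repeat the same commands for c6702 and c6703; they can be collapsed into one loop. A sketch in dry-run form (it only echoes the scp commands; remove the `echo` to actually execute; hostnames and paths are the ones used above):

```shell
# Sketch: distribute the HBase config files to every other node in one
# pass. Dry-run: commands are printed, not executed.
distribute_conf() {
  local conf_dir=$1; shift
  local node f
  for node in "$@"; do
    for f in hbase-site.xml hbase-env.sh; do
      echo scp "$conf_dir/$f" "hbase@$node:$conf_dir/"
    done
  done
}
distribute_conf /home/hbase/hbase-1.1.3/conf c6702 c6703
```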
15. Start the HBase master (on c6701 and c6702)

```shell
hbase-daemon.sh start master
ssh -t -q c6702 sudo su -l hbase -c "/home/hbase/hbase-1.1.3/bin/hbase-daemon.sh\ start\ master"
```
16. Start the HBase regionservers (on c6701, c6702, and c6703)

```shell
hbase-daemon.sh start regionserver
ssh -t -q c6702 sudo su -l hbase -c "/home/hbase/hbase-1.1.3/bin/hbase-daemon.sh\ start\ regionserver"
ssh -t -q c6703 sudo su -l hbase -c "/home/hbase/hbase-1.1.3/bin/hbase-daemon.sh\ start\ regionserver"
```
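Once the daemons are up, `jps` on each node should show HMaster and/or HRegionServer processes. A tiny helper to count a role in jps output, piped in so it also works over ssh (the helper name is illustrative):

```shell
# Sketch: count occurrences of a daemon class name in jps output
# read from stdin.
count_role() {
  grep -c "$1"
}
# Examples:
# jps | count_role HMaster                  # on c6701 or c6702
# ssh c6703 jps | count_role HRegionServer
```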
Summary of problems hit during the HBase installation

1. First, an hbase-site.xml mistake. `ns` is the HDFS nameservice (namenode) name: HBase can reach many HDFS clusters, and naming the nameservice here is what pins it to this one. On HDFS you will see the /hbase directory itself, not `ns`.

```xml
<name>hbase.rootdir</name>
<value>hdfs://ns/hbase</value>  <!-- note the nameservice here -->
```
2. Accessing HDFS through HBase hit a problem: HDFS operations still worked, but produced a warning.

```
[root@c6701 home]# su - hbase
$ hdfs dfs -mkdir /hbase
$ hdfs dfs -ls /
Found 1 items
drwxrwx---   - hdfs hadoop          0 2017-10-25 10:18 /hbase
```

The ownership is wrong: hbase must own /hbase before it can use the path normally.

```
$ hadoop fs -chown hbase:hbase /hbase
$ hdfs dfs -ls /
Found 1 items
drwxrwx---   - hbase hbase          0 2017-10-25 10:18 /hbase
```

With ownership fixed, the warning still appeared:

```
[hbase@c6701 ~]$ hdfs dfs -ls /hbase/test
17/09/27 07:45:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
```
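The ownership check can be scripted rather than eyeballed. A sketch that parses `hdfs dfs -ls` output (the awk column assumes the standard ls layout shown above; the helper name is illustrative):

```shell
# Sketch: print the owner column of 'hdfs dfs -ls' output for one path.
owner_of() {
  awk -v p="$1" '$NF == p { print $3 }'
}
# Example:
# hdfs dfs -ls / | owner_of /hbase   # expect "hbase" after the chown
```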
The warning was finally resolved by commenting out the CLASSPATH line in /etc/profile:

```shell
export JAVA_HOME=/usr/local/jdk1.8.0_144
export JRE_HOME=/usr/local/jdk1.8.0_144/jre
#export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH:/home/hbase/hbase-1.1.3/bin
```
3. Starting HBase also hit a permission problem: Hadoop could not be found. First check, as the hbase user, whether Hadoop is accessible:

```shell
su - hbase
/home/hdfs/hadoop-2.6.0-EDH-0u2/bin/hadoop version
```
If that fails with a permission error, grant the hbase user execute permission on the Hadoop files (as in step 1).
4. Errors during startup were also caused by some jars shipped under /home/hbase/hbase-1.1.3/lib/ whose versions differ from Hadoop's. Deleting them resolves it; HBase then picks up the jars from Hadoop instead.

```
[hbase@c6702 ~]$ hbase-daemon.sh start regionserver
starting regionserver, logging to /data/hbase/logs/hbase-hbase-regionserver-c6702.python279.org.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hbase/hbase-1.1.3/lib/kylin-jdbc-1.5.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hbase/hbase-1.1.3/lib/kylin-job-1.5.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hbase/hbase-1.1.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdfs/hadoop-2.6.0-EDH-0u2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
```
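The offending jars can be located with a short helper rather than by hand, and moving them aside is safer than deleting outright. A sketch (the jar name patterns come from the log above; paths and helper name are illustrative):

```shell
# Sketch: list jars in HBase's lib dir that carry their own SLF4J
# binding and so conflict with the one shipped in Hadoop.
find_dup_bindings() {
  ls "$1" | grep -E 'slf4j-log4j12|kylin-(jdbc|job)'
}
# Example: quarantine them instead of deleting:
# mkdir -p /home/hbase/lib_backup
# find_dup_bindings /home/hbase/hbase-1.1.3/lib | \
#   while read -r j; do mv "/home/hbase/hbase-1.1.3/lib/$j" /home/hbase/lib_backup/; done
```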
This completes the full ZooKeeper + HDFS + HBase installation. The earlier stages went fairly smoothly, but the HBase step hit several problems at the seam with Hadoop that took some time to analyze and resolve.
A later post will cover testing the Hadoop upgrade process.
http://blog.51cto.com/hsbxxl/1971501