1. VM Installation and Cloning
2. Network Configuration
# change the hostname
hostname hadoop1
hostnamectl set-hostname hadoop1
# the most reliable way: edit the config file
vim /etc/sysconfig/network
Enable the NIC:
After installation, CentOS keeps the NIC disabled by default. Go to /etc/sysconfig/network-scripts/ and edit ifcfg-ens33, changing ONBOOT to yes. ONBOOT determines whether the NIC is activated at system startup; only an activated NIC can connect to the network and communicate.
After the change, restart the network with service network restart and check connectivity with ping baidu.com.
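A minimal sketch of the same change from the command line, assuming the interface file is ifcfg-ens33 (use ifcfg-eth0 if that is what your system has):
# flip ONBOOT to yes so the NIC comes up at boot
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-ens33
service network restart
ping -c 3 baidu.com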
Restarting the network on a cloned VM fails after changing ONBOOT
- Error:
Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization.
- Fix: delete /etc/udev/rules.d/70-persistent-net.rules, then remove the UUID and HWADDR lines from the NIC config file /etc/sysconfig/network-scripts/ifcfg-eth0 and reboot (see the sketch after the note below).
- Reference: https://jingyan.baidu.com/article/e75aca85006645142edac6df.html
Note: some systems do not have ifcfg-ens33, only ifcfg-eth0; in that case edit ifcfg-eth0 instead.
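A minimal sketch of the clone fix above, assuming the cloned VM uses ifcfg-eth0:
# remove the cached udev rule that pins the old MAC address
rm -f /etc/udev/rules.d/70-persistent-net.rules
# drop the UUID and HWADDR lines inherited from the original VM
sed -i '/^UUID/d;/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
reboot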
2.1 Static IP and Host Mapping
Set a static IP
The following must be configured on each of hadoop1/2/3:
1. vim /etc/sysconfig/network-scripts/ifcfg-ens33 (on some systems the file is ifcfg-eth0 instead):
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static # use a static address
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=1c71c1dc-e4f8-4594-b77e-5e04f6906a31
DEVICE=ens33
ONBOOT=yes # must be yes
IPADDR=192.168.131.137 # static IP
GATEWAY=192.168.2.2 # gateway (set this to your VMware NAT gateway)
NETMASK=255.255.255.0 # subnet mask
DNS1=8.8.8.8
DNS2=114.114.114.114
2. Configure the IP and hostname mapping on every VM:
# vim /etc/hosts
192.168.131.137 hadoop1
192.168.131.138 hadoop2
192.168.131.139 hadoop3
3. Disable the firewall:
# stop the firewall
service iptables stop
# keep the firewall from starting at boot
chkconfig iptables off
# disable SELinux: set the SELINUX parameter to disabled
vim /etc/selinux/config
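A minimal sketch of the SELinux change, assuming the stock /etc/selinux/config layout (it takes effect after the reboot in step 4):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config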
4. Restart the network and verify:
# restart the network
service network restart
# ping between the three servers hadoop1/2/3 to check they can reach each other
ping hadoop1
# reboot
reboot
2.2 Add a hadoop user and grant it sudo privileges
useradd -m hadoop
passwd hadoop
# Grant sudo: below the line root ALL=(ALL) ALL, add hadoop ALL=(ALL) ALL, save and exit, then switch back to the hadoop user
visudo
# the resulting configuration
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
hadoop ALL=(ALL) ALL
# switch to the hadoop user
su hadoop
IP / username / password for the three servers
# each has two users: root (110139) and hadoop (hadoop)
192.168.131.137 hadoop1
192.168.131.138 hadoop2
192.168.131.139 hadoop3
2.3 Passwordless SSH login between the three servers
1. Install the ssh service:
yum install -y openssl openssh-server
yum -y install openssh-clients # be sure to install the client too, otherwise the ssh command is unavailable
# open /etc/ssh/sshd_config with vim
# uncomment the PermitRootLogin, RSAAuthentication and PubkeyAuthentication settings
# common commands
service sshd restart # restart the SSH service
service sshd start # start the service
service sshd stop # stop the service
netstat -antp | grep sshd # check whether port 22 is listening
chkconfig sshd on # start at boot
chkconfig sshd off # do not start at boot
An error appears when running yum install:
[root@hadoop3 .ssh]# yum install -y openssl openssh-server
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination/
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot find a valid baseurl for repo: base
Fix:
sed -i "s|enabled=1|enabled=0|g" /etc/yum/pluginconf.d/fastestmirror.conf
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://www.xmpan.com/Centos-6-Vault-Aliyun.repo
yum clean all
yum makecache
yum repolist # list the currently enabled repos
Reference: "YumRepo Error: All mirror URLs are not using ftp, http[s] or file." (CentOS 6 yum failure)
2. Generate the key pair (be sure to run this as the hadoop user!!! Otherwise start-dfs.sh will later fail with permission errors)
# just press Enter at every prompt; keys are written to /home/hadoop/.ssh/id_rsa and id_rsa.pub
ssh-keygen -t rsa
# on the hadoop1 server
cd /home/hadoop/.ssh/
cat id_rsa.pub > authorized_keys
# Then append the contents of hadoop2's and hadoop3's id_rsa.pub to hadoop1's authorized_keys, and finally copy authorized_keys to /home/hadoop/.ssh/ on hadoop2 and hadoop3
scp authorized_keys root@hadoop2:/home/hadoop/.ssh/
scp authorized_keys root@hadoop3:/home/hadoop/.ssh/
# on each of the three nodes, fix the permissions of authorized_keys and .ssh
chmod 600 authorized_keys
chmod 700 .ssh
# test: ssh <hostname>
ssh hadoop2
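A quick loop to confirm passwordless login from hadoop1 to all three nodes (a minimal sketch; run it as the hadoop user):
for h in hadoop1 hadoop2 hadoop3; do ssh $h hostname; done
# each hostname should print without a password prompt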
# HBase cluster reference
https://www.cnblogs.com/qingyunzong/p/8668880.html
# Hadoop cluster reference
https://www.cnblogs.com/qingyunzong/p/8496127.html
3. JDK Installation
1. If you install as root, edit /etc/profile (system-wide variables).
2. If you install as a normal user, edit ~/.bashrc (per-user variables).
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific aliases and functions
export JAVA_HOME=/home/hadoop/apps/jdk1.8.0_261
export PATH=$JAVA_HOME/bin:$PATH
Check:
[hadoop@hadoop3 apps]$ source ~/.bashrc
[hadoop@hadoop3 apps]$ echo $JAVA_HOME
/home/hadoop/apps/jdk1.8.0_261
[hadoop@hadoop3 apps]$ java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
4. Hadoop Installation
Cluster plan:
Host | IP | User | HDFS | YARN |
---|---|---|---|---|
hadoop1 | 192.168.131.137 | hadoop | NameNode,DataNode | NodeManager,ResourceManager |
hadoop2 | 192.168.131.138 | hadoop | DataNode | NodeManager |
hadoop3 | 192.168.131.139 | hadoop | DataNode | NodeManager |
Service | Master | Workers |
---|---|---|
HDFS | NameNode | DataNode |
YARN | ResourceManager | NodeManager |
Note: it is best to chown apps/, data/, hadoop-2.7.5/, etc. to the hadoop user, e.g. sudo chown hadoop:hadoop apps/ (a sketch follows).
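A minimal sketch of preparing those directories on every node, assuming the /home/hadoop/apps and /home/hadoop/data layout used throughout this post:
mkdir -p /home/hadoop/apps /home/hadoop/data
sudo chown -R hadoop:hadoop /home/hadoop/apps /home/hadoop/data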
1. Unpack:
[hadoop@hadoop1 apps]$ tar -zxvf hadoop-2.7.5.tar.gz
2. hadoop-env.sh:
Configuration file directory: /home/hadoop/apps/hadoop-2.7.5/etc/hadoop
cd /home/hadoop/apps/hadoop-2.7.5/etc/hadoop
vim hadoop-env.sh
# only the JDK path needs to change (use the JDK installed earlier)
export JAVA_HOME=/home/hadoop/apps/jdk1.8.0_261
3. core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/data/hadoopdata</value>
</property>
</configuration>
4. hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/data/hadoopdata/name</value>
<description>to keep the metadata safe, several different directories are usually configured</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/data/hadoopdata/data</value>
<description>datanode data storage directory</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>number of replicas for HDFS data blocks; the default is 3</description>
</property>
<property>
<name>dfs.secondary.http.address</name>
<value>hadoop3:50090</value>
<description>node that runs the secondarynamenode; use a different node from the namenode</description>
</property>
</configuration>
5. mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
6. yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value> <!-- must match the resourcemanager host; here it is hadoop1 -->
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that the YARN cluster provides to MapReduce programs</description>
</property>
</configuration>
7. slaves:
vim slaves
hadoop1
hadoop2
hadoop3
8. Distribute hadoop-2.7.5/ to the other nodes:
scp -r hadoop-2.7.5/ hadoop2:/home/hadoop/apps
scp -r hadoop-2.7.5/ hadoop3:/home/hadoop/apps
9. Add environment variables (on every server):
[hadoop@hadoop1 apps]$ vim ~/.bashrc
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
[hadoop@hadoop1 apps]$ source ~/.bashrc
[hadoop@hadoop1 apps]$ hadoop version
Hadoop 2.7.5
Subversion https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075
Compiled by kshvachk on 2017-12-16T01:06Z
Compiled with protoc 2.5.0
From source with checksum 9f118f95f47043332d51891e37f736e9
This command was run using /home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/hadoop-common-2.7.5.jar
10. Format the NameNode:
[hadoop@hadoop1 bin]$ hadoop namenode -format
21/06/13 14:34:26 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/06/13 14:34:26 INFO util.GSet: VM type = 64-bit
21/06/13 14:34:26 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
21/06/13 14:34:26 INFO util.GSet: capacity = 2^15 = 32768 entries
21/06/13 14:34:27 INFO namenode.FSImage: Allocated new BlockPoolId: BP-57205935-192.168.131.137-1623566067094
21/06/13 14:34:27 INFO common.Storage: Storage directory /home/hadoop/data/hadoopdata/name has been successfully formatted.
21/06/13 14:34:27 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/data/hadoopdata/name/current/fsimage.ckpt_0000000000000000000 using no compression
21/06/13 14:34:27 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/data/hadoopdata/name/current/fsimage.ckpt_0000000000000000000 of size 322 bytes saved in 0 seconds.
21/06/13 14:34:28 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/06/13 14:34:28 INFO util.ExitUtil: Exiting with status 0
21/06/13 14:34:28 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.131.137
************************************************************/
Cannot create the directory when formatting the NameNode
java.io.IOException: Cannot create directory /home/hadoop/data/hadoopdata/na
# fix
https://blog.csdn.net/qq_40414738/article/details/99544777
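The usual cause is that the hadoop user cannot create the configured dfs.namenode.name.dir; a minimal sketch of the fix, based on the directory layout used in this post rather than the linked article:
mkdir -p /home/hadoop/data/hadoopdata
sudo chown -R hadoop:hadoop /home/hadoop/data
hadoop namenode -format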
No permission when running start-dfs.sh
[hadoop@hadoop1 sbin]$ sudo start-dfs.sh
[sudo] password for hadoop:
sudo: start-dfs.sh: command not found
[hadoop@hadoop1 sbin]$ start-dfs.sh
Starting namenodes on [hadoop1]
hadoop@hadoop1's password:
hadoop1: namenode running as process 4090. Stop it first.
hadoop@hadoop2's password: hadoop@hadoop1's password: hadoop@hadoop3's password:
hadoop2: mkdir: cannot create directory `/home/hadoop/apps/hadoop-2.7.5/logs': Permission denied
hadoop2: chown: cannot access `/home/hadoop/apps/hadoop-2.7.5/logs': No such file or directory
hadoop2: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop2: /home/hadoop/apps/hadoop-2.7.5/sbin/hadoop-daemon.sh: line 159: /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out: No such file or directory
hadoop2: head: cannot open `/home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out' for reading: No such file or directory
hadoop2: /home/hadoop/apps/hadoop-2.7.5/sbin/hadoop-daemon.sh: line 177: /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out: No such file or directory
hadoop2: /home/hadoop/apps/hadoop-2.7.5/sbin/hadoop-daemon.sh: line 178: /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out: No such file or directory
Fix: https://www.cnblogs.com/zknublx/p/8066693.html
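The "Permission denied" on the logs directory means the hadoop user does not own the hadoop-2.7.5/ tree on that node; a minimal sketch of the fix (consistent with the chown advice above, not the linked article verbatim), run on each affected node:
sudo chown -R hadoop:hadoop /home/hadoop/apps/hadoop-2.7.5
start-dfs.sh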
Start HDFS
[hadoop@hadoop1 sbin]$ start-dfs.sh
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop3: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop3.out
hadoop2: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop1: datanode running as process 4822. Stop it first.
Starting secondary namenodes [hadoop3]
hadoop3: starting secondarynamenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-secondarynamenode-hadoop3.out
[hadoop@hadoop1 sbin]$ jps
4822 DataNode
5465 NameNode
5724 Jps
Start YARN
[hadoop@hadoop1 sbin]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-resourcemanager-hadoop1.out
hadoop3: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-hadoop3.out
hadoop2: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-hadoop2.out
hadoop1: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-hadoop1.out
Check the jps processes on all cluster nodes
[hadoop@hadoop1 sbin]$ jps
5891 Jps
4822 DataNode
5465 NameNode
5770 ResourceManager
5870 NodeManager
[hadoop@hadoop2 ~]$ jps
2643 NodeManager
2532 DataNode
2717 Jps
[hadoop@hadoop3 ~]$ jps
2816 DataNode
2994 NodeManager
3091 Jps
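To confirm YARN and MapReduce work end to end, a small smoke test using the examples jar that ships with Hadoop 2.7.5 (the path below assumes the install location used in this post):
hadoop jar /home/hadoop/apps/hadoop-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 2 10
# the job should run on YARN and print an estimated value of Pi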
Web management UI: by default the HDFS NameNode UI is at http://hadoop1:50070 and the YARN ResourceManager UI at http://hadoop1:8088.
Reference links:
https://www.cnblogs.com/qingyunzong/p/8496127.html
https://www.cnblogs.com/qingyunzong/p/8668880.html
https://blog.csdn.net/Beans___Lee/article/details/105265244
https://blog.csdn.net/qq_43106863/article/details/101530233
https://blog.csdn.net/ASN_forever/article/details/80701003
https://blog.csdn.net/weixin_41552767/article/details/107221454
https://segmentfault.com/a/1190000038229319
VMware fully distributed Hadoop cluster setup
5. ZooKeeper Cluster Installation
Three servers are planned here: hadoop1, hadoop2, hadoop3
Download: http://mirrors.hust.edu.cn/apache/ZooKeeper/
Version 3.4.10 is used here.
1. Unpack:
[hadoop@hadoop1 ~]$ cd /home/hadoop/apps
[hadoop@hadoop1 apps]$ tar -zxvf zookeeper-3.4.10.tar.gz
2. Edit the configuration file:
[hadoop@hadoop1 zookeeper-3.4.10]$ cd conf/
[hadoop@hadoop1 conf]$ mv zoo_sample.cfg zoo.cfg
[hadoop@hadoop1 conf]$ vim zoo.cfg
# dataDir: where the in-memory database snapshots are stored
dataDir=/home/hadoop/data/zkdata/
# append the following at the end of the file
dataLogDir=/home/hadoop/log/zklog # transaction logs
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
# server.<id>=host:port1:port2
# server: literal keyword
# id: this server's id (must be between 1 and 255 and unique per machine)
# host: hostname
# port1: port for follower-leader communication (heartbeats)
# port2: port used for leader election
3. Distribute it to the other machines in the cluster:
[hadoop@hadoop1 bin]$ scp -r zookeeper-3.4.10/ hadoop2:/home/hadoop/apps
[hadoop@hadoop1 bin]$ scp -r zookeeper-3.4.10/ hadoop3:/home/hadoop/apps
Note: steps 4, 5 and 6 below must be done on every server!
4. Create the log directory and set the server ID:
[hadoop@hadoop1 bin]$ mkdir -p /home/hadoop/log
[hadoop@hadoop1 bin]$ sudo chown hadoop:hadoop /home/hadoop/log # make sure the hadoop user owns it
[hadoop@hadoop1 bin]$ mkdir -p /home/hadoop/data/zkdata
[hadoop@hadoop1 ~]$ cd /home/hadoop/data/zkdata
[hadoop@hadoop1 ~]$ echo 1 > myid # myid holds this server's id, i.e. the 1 in server.1=hadoop1:2888:3888
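The id must match the server.N entry for that host, so on the other two nodes (a minimal sketch):
# on hadoop2
echo 2 > /home/hadoop/data/zkdata/myid
# on hadoop3
echo 3 > /home/hadoop/data/zkdata/myid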
5. Configure environment variables:
vim ~/.bashrc
export ZOOKEEPER_HOME=/home/hadoop/apps/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source ~/.bashrc
6. Start the zkServer service and check it:
Start: zkServer.sh start
Stop: zkServer.sh stop
Status: zkServer.sh status
ps -aux | grep 'zookeeper'
[hadoop@hadoop1 bin]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop1 bin]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
(expected on the first node: status only reports a mode once a second node is up and a quorum has formed)
[hadoop@hadoop1 bin]$ jps
2769 Jps
2723 QuorumPeerMain
[hadoop@hadoop2 bin]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop2 bin]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[hadoop@hadoop2 bin]$ jps
[hadoop@hadoop3 bin]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop3 bin]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop3 bin]$ jps
1898 QuorumPeerMain
1951 Jps
6. HBase Cluster Setup
[hadoop@hadoop1 apps]$ tar -zxvf hbase-1.2.6-bin.tar.gz
[hadoop@hadoop1 apps]$ cd hbase-1.2.6/conf/ # configuration directory
[hadoop@hadoop1 conf]$ vim hbase-env.sh
# change the JDK path and the ZooKeeper setting
export JAVA_HOME=/home/hadoop/apps/jdk1.8.0_261
# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false
1. Edit hbase-site.xml:
<configuration>
<property>
<!-- path where HBase stores its data on HDFS -->
<name>hbase.rootdir</name>
<value>hdfs://hadoop1:9000/hbase</value>
</property>
<property>
<!-- run HBase in distributed mode -->
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<!-- zk quorum addresses, separated by commas -->
<name>hbase.zookeeper.quorum</name>
<value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
</property>
<property>
<name>hbase.master.maxclockskew</name>
<value>180000</value>
<description>Time difference of regionserver from master</description>
</property>
</configuration>
2. Edit regionservers:
vim regionservers
hadoop1
hadoop2
hadoop3
3. Create backup-masters (this file does not exist by default; create it yourself):
# backup master
vim backup-masters
hadoop3
4. Copy hadoop's hdfs-site.xml and core-site.xml into hbase-1.2.6/conf (a sketch follows)
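A minimal sketch of step 4, assuming the paths used throughout this post:
cp /home/hadoop/apps/hadoop-2.7.5/etc/hadoop/core-site.xml \
   /home/hadoop/apps/hadoop-2.7.5/etc/hadoop/hdfs-site.xml \
   /home/hadoop/apps/hbase-1.2.6/conf/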
5. Delete docs (remove the docs folder inside the HBase directory):
rm -rf docs
# distribute to the other nodes
scp -r hbase-1.2.6/ hadoop2:/home/hadoop/apps/
scp -r hbase-1.2.6/ hadoop3:/home/hadoop/apps/
6. Time synchronization:
HBase clusters are stricter about time synchronization than HDFS, so remember to synchronize the clocks before starting the cluster; the difference must not exceed 30 s:
Option 1:
# raise the allowed clock skew between the three servers in hbase-site.xml (the value below is 180000 ms)
<property>
<name>hbase.master.maxclockskew</name>
<value>180000</value>
<description>Time difference of regionserver from master</description>
</property>
Option 2:
- sync the worker nodes' clocks to the master node, or
- sync every node to an Internet time server
[hadoop@hadoop2 hbase-1.2.6]$ sudo yum install -y ntp
# sync with the time.nist.gov time server
[hadoop@hadoop2 hbase-1.2.6]$ sudo /usr/sbin/ntpdate time.nist.gov
15 Jun 20:54:27 ntpdate[1780]: step time server 132.163.97.3 offset -28785.531070 sec
# set up a cron job
[hadoop@hadoop2 hbase-1.2.6]$ crontab -l
# hourly server time sync
0 */1 * * * /usr/sbin/ntpdate time.nist.gov
Reference: http://www.hnbian.cn/posts/f2bc4737.html
7. Environment variables:
# needed on every node
[hadoop@hadoop1 bin]$ vim ~/.bashrc
export HBASE_HOME=/home/hadoop/apps/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin
source ~/.bashrc
8. Start the HBase cluster:
[hadoop@hadoop1 bin]$ pwd
/home/hadoop/apps/hbase-1.2.6/bin
[hadoop@hadoop1 bin]$ start-hbase.sh
starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop1.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop3: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop3.out
hadoop2: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop2.out
hadoop1: starting regionserver, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-regionserver-hadoop1.out
hadoop3: starting master, logging to /home/hadoop/apps/hbase-1.2.6/logs/hbase-hadoop-master-hadoop3.out
Note: whichever node you run start-hbase.sh on becomes the active Master. Always start zookeeper and hadoop before hbase!!! A startup-order sketch follows.
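A minimal sketch of the full startup order used in this post:
# on each of hadoop1/2/3
zkServer.sh start
# on hadoop1
start-dfs.sh
start-yarn.sh
start-hbase.sh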
9. Check:
[hadoop@hadoop1 bin]$ jps
1668 NameNode
2164 NodeManager
4054 Jps
3590 HMaster
1495 QuorumPeerMain
3719 HRegionServer
1768 DataNode
2061 ResourceManager
[hadoop@hadoop2 hbase-1.2.6]$ jps
1440 QuorumPeerMain
2305 Jps
2149 HRegionServer
1622 NodeManager
1511 DataNode
[hadoop@hadoop3 bin]$ jps
2722 Jps
1682 NodeManager
1436 QuorumPeerMain
1501 DataNode
2654 HRegionServer
If the HRegionServer process did not come up on some node, start it manually:
# start the master process
hbase-daemon.sh start master
# start the HRegionServer process
hbase-daemon.sh start regionserver
10. Web UI (the HBase Master UI listens on port 16010 by default in HBase 1.x, e.g. http://hadoop1:16010):
11. Test:
[hadoop@hadoop1 bin]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017
hbase(main):001:0> list
TABLE
0 row(s) in 0.6420 seconds
=> []
hbase(main):002:0> helo 'list'
NoMethodError: undefined method `helo' for #<Object:0x3ab35b9c>
hbase(main):004:0> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
0 row(s) in 1.6840 seconds
=> Hbase::Table - t1
hbase(main):005:0> list
TABLE
t1
1 row(s) in 0.0290 seconds
=> ["t1"]
hbase(main):006:0> desc 't1'
Table t1 is ENABLED
t1
COLUMN FAMILIES DESCRIPTION
{NAME => 'f1', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOC
K_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE
=> '65536', REPLICATION_SCOPE => '0'}
{NAME => 'f2', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOC
K_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE
=> '65536', REPLICATION_SCOPE => '0'}
{NAME => 'f3', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOC
K_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE
=> '65536', REPLICATION_SCOPE => '0'}
3 row(s) in 0.3510 seconds
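A short non-interactive follow-up to the shell session above (a minimal sketch; t1 is the table created earlier):
echo -e "put 't1','row1','f1:c1','value1'\nscan 't1'" | hbase shell
# scan should show the single row that was just written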
References: