Deploying Hadoop + Hive with Docker
Hadoop and Hive have version-compatibility constraints, so before installing, check the compatibility matrix on the official site:
http://hive.apache.org/downloads.html
The versions used in this walkthrough:
  • Docker 19.03.8
  • JDK 1.8
  • Hadoop 3.2.0
  • Hive 3.1.2
  • MySQL 8.0.18
  • mysql-connector-java-5.1.49.jar
  • hive_jdbc_2.5.15.1040

Hadoop section:

1. Pull the image

docker pull registry.cn-hangzhou.aliyuncs.com/hadoop_test/hadoop_base

2. Run the containers

Inside the container, the workers file lists three machines: Master, Slave1, and Slave2.
To locate the workers file, check the environment variables configured in /etc/profile, which point to the Hadoop install directory.
# start a throwaway container to inspect the configuration
docker run -it --name hadoop-test registry.cn-hangzhou.aliyuncs.com/hadoop_test/hadoop_base
# check the system variable paths
vim /etc/profile
# check the workers list
vim /usr/local/hadoop/etc/hadoop/workers
Create an internal network for Hadoop, pinning a fixed IP range:
docker network create --driver=bridge --subnet=172.19.0.0/16 hadoop
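To confirm the subnet was applied, you can inspect the network (an optional check):
docker network inspect hadoop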
Create the Master container with port mappings. Port 10000 is the HiveServer2 port, which a local Beeline client will use later; if you plan to install other components, map their ports now too, since adding ports to a running container is a hassle.
docker run -itd --network hadoop -h Master --name Master -p 9870:9870 -p 8088:8088 -p 10000:10000 registry.cn-hangzhou.aliyuncs.com/hadoop_test/hadoop_base bash
Create the Slave1 container:
docker run -itd --network hadoop -h Slave1 --name Slave1 registry.cn-hangzhou.aliyuncs.com/hadoop_test/hadoop_base bash
Create the Slave2 container:
docker run -itd --network hadoop -h Slave2 --name Slave2 registry.cn-hangzhou.aliyuncs.com/hadoop_test/hadoop_base bash
On all three machines, edit the hosts file:
vim /etc/hosts
172.19.0.2 Master
172.19.0.3 Slave1
172.19.0.4 Slave2
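Instead of editing each container by hand, a minimal sketch run from the Docker host can append the same mappings to all three containers (note: Docker regenerates /etc/hosts when a container restarts, so re-run this after restarts):
for c in Master Slave1 Slave2; do
  docker exec "$c" bash -c 'printf "%s\n" "172.19.0.2 Master" "172.19.0.3 Slave1" "172.19.0.4 Slave2" >> /etc/hosts'
done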

3. Start Hadoop

Although the Hadoop paths are already set in the system variables inside the container, you must run source /etc/profile each time you enter before they take effect.
  • Enter Master, start Hadoop, and format HDFS first
# enter the Master container
docker exec -it Master bash
# once inside, format HDFS
root@Master:/# hadoop namenode -format
  • Start everything, both HDFS and YARN
root@Master:/usr/local/hadoop/sbin# ./start-all.sh
The services should now be up; the monitoring UIs are reachable on the host machine's IP at ports 8088 and 9870.
Starting namenodes on [Master]
Master: Warning: Permanently added 'master,172.19.0.4' (ECDSA) to the list of known hosts.
Starting datanodes
Slave1: Warning: Permanently added 'slave1,172.19.0.3' (ECDSA) to the list of known hosts.
Slave2: Warning: Permanently added 'slave2,172.19.0.2' (ECDSA) to the list of known hosts.
Slave1: WARNING: /usr/local/hadoop/logs does not exist. Creating.
Slave2: WARNING: /usr/local/hadoop/logs does not exist. Creating.
Starting secondary namenodes [Master]
Starting resourcemanager
Starting nodemanagers
Check the status of the distributed file system:
root@Master:/usr/local/hadoop/sbin# hdfs dfsadmin -report

4. Run the built-in WordCount example

Use the license file as the text to count:
root@Master:/usr/local/hadoop# cat LICENSE.txt > file1.txt
root@Master:/usr/local/hadoop# ls
LICENSE.txt NOTICE.txt README.txt bin etc file1.txt include lib libexec logs sbin share
Create an input directory in HDFS:
root@Master:/usr/local/hadoop# hadoop fs -mkdir /input
Upload file1.txt to HDFS:
root@Master:/usr/local/hadoop# hadoop fs -put file1.txt /input
2020-09-14 11:02:01,183 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
List the contents of the input directory in HDFS:
root@Master:/usr/local/hadoop# hadoop fs -ls /input
Found 1 items
-rw-r--r-- 2 root supergroup 150569 2020-09-14 11:02 /input/file1.txt
Run the wordcount example program:
root@Master:/usr/local/hadoop# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar wordcount /input /output
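The examples jar must match the Hadoop version actually installed in the image (note the jar above says 3.2.1 even though the version list says 3.2.0); if the command fails with a missing file, list the directory to get the exact name:
ls /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar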
List the contents of the /output directory in HDFS:
root@Master:/usr/local/hadoop# hadoop fs -ls /output
Found 2 items
-rw-r--r-- 2 root supergroup 0 2020-09-14 11:09 /output/_SUCCESS
-rw-r--r-- 2 root supergroup 35324 2020-09-14 11:09 /output/part-r-00000
View the contents of part-r-00000, which holds the results:
root@Master:/usr/local/hadoop# hadoop fs -cat /output/part-r-00000
That wraps up the Hadoop section.

 

Hive section:

Download Hive, then upload it into the container.
Download link: https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz

1. Unpack the installation archive

# copy the archive into the Master container
docker cp apache-hive-3.1.2-bin.tar.gz Master:/usr/local
# enter the container
docker exec -it Master bash
cd /usr/local/
# unpack the archive
tar xvf apache-hive-3.1.2-bin.tar.gz

2. Edit the configuration file

root@Master:/usr/local/apache-hive-3.1.2-bin/conf# cp hive-default.xml.template hive-site.xml
root@Master:/usr/local/apache-hive-3.1.2-bin/conf# vim hive-site.xml
Add the following properties at the very top (inside <configuration>); the stock template references ${system:java.io.tmpdir} and ${system:user.name}, which Hive does not define by default, so declaring them here avoids startup errors:
<property>
  <name>system:java.io.tmpdir</name>
  <value>/tmp/hive/java</value>
</property>
<property>
  <name>system:user.name</name>
  <value>${user.name}</value>
</property>

3. Configure Hive environment variables

vim /etc/profile
 
# append at the end of the file
export HIVE_HOME="/usr/local/apache-hive-3.1.2-bin"
export PATH=$PATH:$HIVE_HOME/bin
Run source /etc/profile afterwards for the changes to take effect:
source /etc/profile
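A quick sanity check that the new variables are visible (sketch):
which hive          # should print /usr/local/apache-hive-3.1.2-bin/bin/hive
echo $HIVE_HOME     # should print /usr/local/apache-hive-3.1.2-bin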

4. Configure MySQL as the metastore database

1. Pull the MySQL image and create the container:
# pull the image
docker pull mysql:8.0.18
# create the container
docker run --name mysql_hive -p 4306:3306 --net hadoop --ip 172.19.0.5 -v /root/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=abc123456 -d mysql:8.0.18
# enter the container
docker exec -it mysql_hive bash
# enter mysql
mysql -uroot -p
# the password was set to abc123456 when the container was created above
# create the hive database
create database hive;
# allow remote connections
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'abc123456';
Note: container IPs are assigned dynamically, much like Windows DHCP; without a fixed IP, a container gets a new address after a restart, so it is best to pin a fixed IP on database containers.
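As an optional check from the Docker host, confirm the hive database exists and root can log in (a sketch; passing the password on the command line triggers a harmless warning):
docker exec mysql_hive mysql -uroot -pabc123456 -e "show databases;"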
2. Back in the Master container, edit the database-connection settings:
docker exec -it Master bash
vim /usr/local/apache-hive-3.1.2-bin/conf/hive-site.xml
Search for the relevant keys and set the database URL, driver, and username; the URL matches the address chosen when the MySQL container was created above.
# note: in hive-site.xml the & in the URL must be escaped as &amp;; recent MySQL versions require SSL by default, which we disable here
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>abc123456</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://172.19.0.5:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
3. Copy the MySQL driver into Hive's lib directory:
# it was uploaded to /usr/local in the container along with the Hive archive
root@Master:/usr/local# cp mysql-connector-java-5.1.49.jar /usr/local/apache-hive-3.1.2-bin/lib
4. Fix conflicting jars
Some files under Hive's lib directory must be adjusted, or initializing the metastore will fail.
# only one slf4j binding may exist between hadoop and hive; delete Hive's copy
root@Master:/usr/local/apache-hive-3.1.2-bin/lib# rm log4j-slf4j-impl-2.10.0.jar
 
# for guava, keep the higher of the two versions: delete Hive's and copy Hadoop's over
root@Master:/usr/local/hadoop/share/hadoop/common/lib# cp guava-27.0-jre.jar /usr/local/apache-hive-3.1.2-bin/lib
root@Master:/usr/local/hadoop/share/hadoop/common/lib# rm /usr/local/apache-hive-3.1.2-bin/lib/guava-19.0.jar
 
# delete the invalid special character on line 3225 of hive-site.xml
root@Master: vim /usr/local/apache-hive-3.1.2-bin/conf/hive-site.xml
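A sketch of doing the same edit non-interactively, assuming the stray character is the &#8; entity present in the stock 3.1.2 template's description text:
sed -i 's/&#8;//g' /usr/local/apache-hive-3.1.2-bin/conf/hive-site.xml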

5. Initialize the metastore

root@Master:/usr/local/apache-hive-3.1.2-bin/bin# schematool -initSchema -dbType mysql
On success you should see:
Metastore connection URL: jdbc:mysql://172.19.0.5:3306/hive?createDatabaseIfNotExist=true&useSSL=false
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: root
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql
Initialization script completed
schemaTool completed
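As an optional check, the metastore tables should now exist in MySQL (sketch, run from the Docker host):
docker exec mysql_hive mysql -uroot -pabc123456 -e "show tables from hive;"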

6. Verify

  • First, create a data file under /usr/local
cd /usr/local
vim test.txt
1,jack
2,hel
3,nack
  • Enter the Hive interactive shell
root@Master:/usr/local# hive
Hive Session ID = 7bec2ab6-e06d-4dff-8d53-a64611875aeb
 
Logging initialized using configuration in jar:file:/usr/local/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = 5cdee915-5c95-4834-bd3b-bec6f3d90e5b
hive> create table test(
> id int
> ,name string
> )
> row format delimited
> fields terminated by ',';
OK
Time taken: 1.453 seconds
hive> load data local inpath '/usr/local/test.txt' into table test;
Loading data to table default.test
OK
Time taken: 0.63 seconds
hive> select * from test;
OK
1 jack
2 hel
3 nack
Time taken: 1.611 seconds, Fetched: 3 row(s)
Hive installation complete.

Starting HiveServer2

1. Adjust Hadoop's proxy-user permissions

root@Master:/usr/local# vim /usr/local/hadoop/etc/hadoop/core-site.xml
Add the following configuration; HiveServer2 submits work on behalf of connected users, so root needs proxy-user rights in Hadoop:
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
Restart HDFS:
root@Master:/usr/local/hadoop/sbin# ./stop-dfs.sh
Stopping namenodes on [Master]
Stopping datanodes
Stopping secondary namenodes [Master]
root@Master:/usr/local/hadoop/sbin# ./start-dfs.sh
Starting namenodes on [Master]
Starting datanodes
Starting secondary namenodes [Master]

2. Increase the HiveServer2 JVM heap

The default heapsize is 256 MB; here it is raised to 15360 MB (15 GB). Adjust this to your machine's capacity, or you may hit out-of-memory errors.
root@Master: vi /usr/local/apache-hive-3.1.2-bin/bin/hive-config.sh
# Default to use 256MB
export HADOOP_HEAPSIZE=${HADOOP_HEAPSIZE:-15360}
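Because the script uses the ${HADOOP_HEAPSIZE:-...} default expansion, a value already present in the environment wins, so as an alternative sketch you can size the heap per launch without editing the script:
export HADOOP_HEAPSIZE=4096   # in MB; pick a value that fits your machine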

3. Start HiveServer2 in the background

root@Master:/usr/local/hadoop/sbin# nohup hiveserver2 >/dev/null 2>/dev/null &
[2] 7713
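Since stdout and stderr are discarded here, check the service log instead; a sketch assuming the default log4j2 settings, which write to ${java.io.tmpdir}/${user.name}/hive.log:
tail -f /tmp/root/hive.log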

4. Verify

Connect and query through Beeline.
Check that port 10000 is listening, start beeline, connect with !connect, and confirm queries return correctly.
root@Master:/usr/local/hadoop/sbin# netstat -ntulp |grep 10000
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 7388/java
root@Master:/usr/local/hadoop/sbin# beeline
Beeline version 3.1.2 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/default
Connecting to jdbc:hive2://localhost:10000/default
Enter username for jdbc:hive2://localhost:10000/default: root
Enter password for jdbc:hive2://localhost:10000/default: *********
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/default> select * from test;
INFO : Compiling command(queryId=root_20200915075948_0672fa16-435e-449c-9fcd-f71fcdf6841c): select * from test
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:test.id, type:int, comment:null), FieldSchema(name:test.name, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=root_20200915075948_0672fa16-435e-449c-9fcd-f71fcdf6841c); Time taken: 2.584 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=root_20200915075948_0672fa16-435e-449c-9fcd-f71fcdf6841c): select * from test
INFO : Completed executing command(queryId=root_20200915075948_0672fa16-435e-449c-9fcd-f71fcdf6841c); Time taken: 0.004 seconds
INFO : OK
INFO : Concurrency mode is disabled, not creating a lock manager
+----------+------------+
| test.id | test.name |
+----------+------------+
| 1 | jack |
| 2 | hel |
| 3 | nack |
+----------+------------+
3 rows selected (3.194 seconds)
0: jdbc:hive2://localhost:10000/default>
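For scripted use, Beeline can also take the URL and user directly instead of the interactive !connect (a sketch using Beeline's standard -u/-n flags):
beeline -u jdbc:hive2://localhost:10000/default -n root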
HiveServer2 configuration complete.

