I. Node2 Configuration
II. Master Configuration
III. Node1 Configuration
IV. Start Hive and Test
Download the Hive package: https://cloud.189.cn/t/zqaieevYNrau (access code: c10p)
Download the mysql connector jar: https://cloud.189.cn/t/2IzYzuARVzQ3 (access code: nc8j)
Download the result.json file: https://cloud.189.cn/t/FjmUJ3NbiMza (access code: 3ev9)
Download the moivescsv.csv file: https://cloud.189.cn/t/UvUBFzb2q6ba (access code: 8pk4)
I. Node2 Configuration
Run the following on node2.
First, transfer mysql-connector-java-5.1.5-bin.jar from Windows to node2.
1. Install the MariaDB (MySQL) server (an offline repository is recommended)
[root@node2 ~]# yum -y install mariadb-server
2. Start the MariaDB service and enable it to start at boot
[root@node2 ~]# systemctl start mariadb
[root@node2 ~]# systemctl enable mariadb
3. Initialize the database, set the root password, and test logging in
[root@node2 ~]# mysql_secure_installation
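Note that mysql_secure_installation only secures local logins; later in this guide node1 connects to this database remotely as root with password 000000 (the credentials in node1's hive-site.xml). A hedged sketch of the extra grant that remote access usually requires, assuming that password:

```sql
-- Run inside the MariaDB shell on node2 (e.g. mysql -uroot -p):
-- allow remote logins as root with the password later used in hive-site.xml
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '000000';
FLUSH PRIVILEGES;
```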
4. Send the jar from node2 to node1
[root@node2 ~]# scp mysql-connector-java-5.1.5-bin.jar node1:/root

II. Master Configuration
Run the following on the master node.
1. Transfer the tar package to the master node
Use SecureFX for the transfer.
2. Create a directory on master, extract the Hive tarball into it, and send the tarball to node1
[root@master ~]# mkdir -p /usr/hive
[root@master ~]# tar -zxf apache-hive-2.3.7-bin.tar.gz -C /usr/hive
[root@master ~]# scp apache-hive-2.3.7-bin.tar.gz node1:/root

3. Edit the environment variables on master and verify
[root@master ~]# vi /etc/profile

[root@master ~]# source /etc/profile
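The exact lines added to /etc/profile are not shown above; based on the install path from step 2, they would typically be:

```shell
# Assumed /etc/profile entries (path taken from step 2 above)
export HIVE_HOME=/usr/hive/apache-hive-2.3.7-bin
export PATH=$PATH:$HIVE_HOME/bin
```

After `source /etc/profile`, `hive --version` can be used to verify the setup.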
Master is configured as the Hive client.
4. Replace the jline jar (copy Hive's jline-2.12.jar into Hadoop's lib so both use the same version)
[root@master ~]# cp /usr/hive/apache-hive-2.3.7-bin/lib/jline-2.12.jar /opt/bigdata/hadoop-3.0.0/lib

5. Add hive-site.xml
[root@master ~]# cd /usr/hive/apache-hive-2.3.7-bin/conf
[root@master conf]# vi hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive_remote/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://node1:9083</value>
  </property>
</configuration>
III. Node1 Configuration
Run the following on node1.
1. Copy the jar into Hive's lib directory on node1
[root@node1 ~]# cp mysql-connector-java-5.1.5-bin.jar /usr/hive/apache-hive-2.3.7-bin/lib

2. Copy the configuration file template on node1
[root@node1 ~]# cd /usr/hive/apache-hive-2.3.7-bin/conf
[root@node1 conf]# cp hive-env.sh.template hive-env.sh

3. Add environment variables on node1
Set the Hadoop home directory in hive-env.sh:
[root@node1 conf]# vi hive-env.sh
Add at the top of the file:
HADOOP_HOME=/opt/bigdata/hadoop-3.0.0

4. Create the hive-site.xml file on node1
[root@node1 ~]# cd /usr/hive/apache-hive-2.3.7-bin/conf
[root@node1 conf]# vi hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive_remote/warehouse</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://node2:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>000000</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>
</configuration>
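Master's hive-site.xml points its client at thrift://node1:9083, so node1 must have the metastore service running before master can connect; this guide only starts the local CLI on node1. A hedged sketch using the standard schematool and hive service commands (with datanucleus.schema.autoCreateAll=true the tables can also be auto-created, but schematool -initSchema is the explicit way):

```shell
# On node1: create the metastore tables in MySQL once, then start the
# metastore service that master's client connects to on port 9083
cd /usr/hive/apache-hive-2.3.7-bin
bin/schematool -dbType mysql -initSchema
nohup bin/hive --service metastore > metastore.log 2>&1 &
```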
IV. Start Hive and Load JSON/CSV Files for Testing
Start Hive:
[root@node1 ~]# cd /usr/hive/apache-hive-2.3.7-bin/
[root@node1 apache-hive-2.3.7-bin]# bin/hive

Create the result table and load the data:
hive> create table result(json string);
hive> load data local inpath '/root/result.json' into table result;
hive> select * from result;
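Since each row of result holds a whole JSON record in one string column, individual fields can be pulled out with Hive's built-in get_json_object. A sketch, where `$.title` is a hypothetical JSON path; substitute a key that actually exists in result.json:

```sql
-- '$.title' is a hypothetical path; adjust to the real keys in result.json
hive> select get_json_object(json, '$.title') from result limit 10;
```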

Create the movies table, load the data, and query it:
hive> create table movies(a string,b string,c string,d string,e int)
> row format serde
> 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
> with
> SERDEPROPERTIES
> ("separatorChar"=",")
> STORED AS TEXTFILE;
hive> load data local inpath '/root/moviescsv.csv' into table movies;
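One caveat with OpenCSVSerde: it exposes every column as a string at read time, even column e declared as int, so numeric work needs an explicit cast. A hedged example:

```sql
-- OpenCSVSerde yields strings; cast e before numeric comparisons
hive> select a, cast(e as int) from movies where cast(e as int) > 0 limit 10;
```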

