1. Basic Concepts
The Apache Hive™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage and queried using SQL syntax.
Hive is used for statistical analysis over massive volumes of structured logs.
Hive is a data warehouse tool built on top of Hadoop. In essence, it translates HQL (Hive's query language) into MapReduce programs.
- The data Hive processes is stored in HDFS.
- The default underlying implementation of Hive's analysis is MapReduce.
- The resulting jobs run on YARN.
Pros and cons of Hive
Pros:
- Data analysis can be done quickly, with no need to write MapReduce programs.
- MapReduce is well suited to processing big data (though not small data).
Cons:
- HQL has limited expressiveness: iterative algorithms cannot be expressed, processing granularity is coarse, and tuning is difficult.
Categories of user-defined functions (a sketch follows the list):
- UDF (one row in, one value out)
- UDAF (many rows aggregated into one value)
- UDTF (one row in, multiple rows out)
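As an illustration of the first category, here is a minimal UDF sketch. The class name LowerUDF and its behavior are hypothetical examples, not part of these notes; it assumes the hive-exec dependency shown in the pom in section 4.

```java
// Hypothetical example: a UDF that lower-cases a string.
// Assumes the hive-exec dependency (see the pom in section 4).
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class LowerUDF extends UDF {
    // Hive resolves evaluate() by reflection: one input value in, one value out.
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        return new Text(input.toString().toLowerCase());
    }
}
```

After compiling it into a jar and running add jar, it could be registered with create temporary function my_lower as 'LowerUDF'; and then used like any built-in function.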
Architecture
Execution order: parser → compiler → optimizer → executor.
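The plan these stages produce can be inspected directly: EXPLAIN prints the stage plan Hive compiles for a query. For example, using the student table created in section 3 below:

```sql
-- Show the stage plan the compiler and optimizer generate for this query.
EXPLAIN select * from student;
```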
Hive vs. databases
Compared with a database, Hive is read-heavy and write-light and has no indexes, so it must scan all of the data by brute force; even with the MapReduce mechanism it is not suited to real-time queries. Its scalability matches Hadoop's, which is to say it scales very well.
2. Installation and Startup
Start Hadoop's HDFS and YARN first.
Configure conf/hive-env.sh:
export HADOOP_HOME=/usr/local/hadoop   # change to your Hadoop home path
export HIVE_CONF_DIR=/usr/local/hive/conf
Start Hive:
bin/hive
3. Hive Statements
Show databases:
show databases;
Run in local mode:
hive> SET mapreduce.framework.name=local;
Create a table, insert a record, and query records:
use default;
#### Create a table
create table student(id int,name string);
#### Insert a record
insert into table student values(1,'fonxian');
#### Query records
select * from student;
View the record on Hadoop (HDFS), as sketched below.
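For example, assuming the default warehouse location (hive.metastore.warehouse.dir defaults to /user/hive/warehouse; the path below follows from that default and the default database), the inserted row can be read straight out of HDFS:

```sh
hadoop fs -ls /user/hive/warehouse/student
hadoop fs -cat /user/hive/warehouse/student/*
```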
Load data from the file system
Create a data file student.txt:
3,kafka
4,flume
5,hbase
6,zookeeper
Create a table, defining the field delimiter:
create table stu1(id int,name string) row format delimited fields terminated by ',';
Load the data:
load data local inpath '/usr/local/hive/data/student.txt' into table stu1;
Check the result after loading the data.
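A quick verification sketch; the expected rows follow from the student.txt contents above:

```sql
select * from stu1;
-- expected output:
-- 3    kafka
-- 4    flume
-- 5    hbase
-- 6    zookeeper
```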
四、Hive Hook使用
添加依賴
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>hive-hook-example</groupId>
    <artifactId>Hive-hook-example</artifactId>
    <version>1.0</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>1.1.0</version>
        </dependency>
    </dependencies>
</project>
```
Create HiveExampleHook:
```java
import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
import org.apache.hadoop.hive.ql.hooks.HookContext;

public class HiveExampleHook implements ExecuteWithHookContext {
    // Invoked by Hive before execution when registered in hive.exec.pre.hooks.
    public void run(HookContext hookContext) throws Exception {
        System.out.println("operation name :" + hookContext.getQueryPlan().getOperationName());
        System.out.println(hookContext.getQueryPlan().getQueryPlan());
        System.out.println("Hello from the hook !!");
    }
}
```
Compile it to obtain Hive-hook-example-1.0.jar, then register and trigger the hook:
hive> add jar Hive-hook-example-1.0.jar;
hive> set hive.exec.pre.hooks=HiveExampleHook;
hive> select * from student;
operation name :QUERY
Query(queryId:fangzhijie_20191221231550_0e949bbf-f8f7-45a8-8726-c1cdd679cef9, queryType:null, queryAttributes:{queryString=select * from student}, queryCounters:null, stageGraph:Graph(nodeType:STAGE, roots:null, adjacencyList:null), stageList:null, done:false, started:true)
Hello from the hook !!
OK
Time taken: 1.718 seconds
Time taken: 1.68 seconds
5. Using MySQL to Store the Metastore
Install MySQL locally, put the MySQL JDBC driver jar (mysql-connector-java) into Hive's lib directory, and create hive-site.xml:
```xml
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://127.0.0.1:3306/metastore?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
        <description>password to use against metastore database</description>
    </property>
</configuration>
```
Run bin/hive, then look at the MySQL database: the metastore tables have been created.
Execute create table aaa(id int); in Hive: the corresponding directory is created in HDFS, and the TBLS table in the metastore now has a record for it.
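A quick way to confirm this from the MySQL side (a sketch; it assumes the metastore database is named metastore, as in the JDBC URL above):

```sql
-- Run in MySQL: list the tables Hive has registered in the metastore.
SELECT TBL_ID, TBL_NAME, TBL_TYPE FROM metastore.TBLS;
```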
6. Beeline
HiveServer2 (introduced in Hive 0.11) has its own CLI called Beeline. HiveCLI is now deprecated in favor of Beeline, as it lacks the multi-user, security, and other capabilities of HiveServer2.
Connect to HiveServer2 with Beeline:
beeline -u "jdbc:hive2://host:port/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" -n username -p password
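If ZooKeeper service discovery is not configured, a direct connection should work as well (a sketch: 10000 is HiveServer2's default port; host and credentials are placeholders):

```sh
beeline -u "jdbc:hive2://localhost:10000" -n username -p password
```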
7. Troubleshooting
Configuration changes do not take effect
This may be a config-path problem. Checking hive-env.sh revealed that the Hive config path was mistyped.
The wrong path configuration meant the config directory could not be found at all:
export HIVE_CONF_DIR=/ur/local/hive/conf
The correct configuration:
export HIVE_CONF_DIR=/usr/local/hive/conf
Inserting data fails
hive> insert into table student values(1,'fonxian');
Query ID = fangzhijie_20191205061055_6c8c233e-2d46-470a-972d-38f36bb8068c
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1575495654045_0004, Tracking URL = http://localhost:8088/proxy/application_1575495654045_0004/
Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1575495654045_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2019-12-05 06:10:58,803 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1575495654045_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Solution: run the following command, which executes the query locally instead of submitting it to the cluster:
hive> SET mapreduce.framework.name=local;
Analysis:
From the official documentation:
Hive compiler generates map-reduce jobs for most queries. These jobs are then submitted to the Map-Reduce cluster indicated by the variable: mapred.job.tracker
Hive fully supports local mode execution. To enable this, the user can enable the following option:
hive> SET mapreduce.framework.name=local;
References
- Hive Getting Started
- 尚硅谷大數據課程之Hive (Shangguigu Big Data Course: Hive)
- hive-hook-example
- Beeline official documentation