Hive: Environment Setup



The Hadoop and MySQL environments are assumed to be set up already; my blog has separate posts covering those installations.

 

I. Download Hive and extract it to the target directory (this walkthrough uses hive-1.1.0-cdh5.7.0; download from http://archive.cloudera.com/cdh5/cdh/5/)

tar zxvf ./hive-1.1.0-cdh5.7.0.tar.gz -C ~/app/

II. Hive configuration. Reference: the official guide at https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-InstallationandConfiguration

1. Configure environment variables

1) vi .bash_profile

    export HIVE_HOME=/home/hadoop/app/hive-1.1.0-cdh5.7.0
    export PATH=$HIVE_HOME/bin:$PATH

2) source .bash_profile

source .bash_profile
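
To confirm the variables took effect in the current shell, an optional sanity check:

    echo $HIVE_HOME      # should print /home/hadoop/app/hive-1.1.0-cdh5.7.0
    which hive           # should resolve to $HIVE_HOME/bin/hive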

 

2. hive-1.1.0-cdh5.7.0/conf/hive-env.sh

1) cp hive-env.sh.template hive-env.sh

cp hive-env.sh.template hive-env.sh

2) vi hive-env.sh and add HADOOP_HOME

    HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0

3. hive-1.1.0-cdh5.7.0/conf/hive-site.xml (create this file yourself)

(The MySQL JDBC driver jar must be copied into hive-1.1.0-cdh5.7.0/lib manually, for example as sketched below.)
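
A minimal sketch of that copy step, assuming the jar is mysql-connector-java-5.1.34-bin.jar and sits in the current directory (use whichever connector version you actually downloaded):

    # copy the MySQL JDBC driver into Hive's lib directory
    cp ./mysql-connector-java-5.1.34-bin.jar ~/app/hive-1.1.0-cdh5.7.0/lib/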

    <configuration>
        <!-- JDBC connection string -->
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <!-- Database name: rdb_hive -->
            <!-- createDatabaseIfNotExist=true: creates the database automatically if it does not already exist -->
            <value>jdbc:mysql://localhost:3306/rdb_hive?createDatabaseIfNotExist=true</value>
        </property>
        <!-- MySQL driver class -->
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <!-- Username -->
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>root</value>
        </property>
        <!-- Password -->
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>root</value>
        </property>
    </configuration>

 

III. Start Hive

hive-1.1.0-cdh5.7.0/bin/hive

Startup log:

[hadoop@hadoop01 bin]$ ./hive
which: no hbase in (/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/jdk1.8.0_131/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive>

After startup, Hive automatically creates the metastore database and its tables in MySQL:

mysql> show tables;
+--------------------+
| Tables_in_rdb_hive |
+--------------------+
| CDS                |
| DATABASE_PARAMS    |
| DBS                |
| FUNCS              |
| FUNC_RU            |
| GLOBAL_PRIVS       |
| PARTITIONS         |
| PART_COL_STATS     |
| ROLES              |
| SDS                |
| SEQUENCE_TABLE     |
| SERDES             |
| SKEWED_STRING_LIST |
| TAB_COL_STATS      |
| TBLS               |
| VERSION            |
+--------------------+
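
To sanity-check the metastore, you can query the VERSION table; the column names below follow the standard Hive metastore schema (treat this as a sketch):

    mysql> use rdb_hive;
    mysql> select SCHEMA_VERSION, VERSION_COMMENT from VERSION;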

IV. A quick Hive primer

We will implement word count using Hive.

1. Create a table: create table hive_wordcount(context string);

 

hive> create table hive_wordcount(context string);
OK
Time taken: 1.203 seconds
hive> show tables;
OK
hive_wordcount
Time taken: 0.19 seconds, Fetched: 1 row(s)
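
You can also inspect the table's schema with plain HiveQL (nothing specific to this setup):

    hive> desc hive_wordcount;   -- should list the single column: context (string)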

2. Load the data: load data local inpath '/home/hadoop/data/hello.txt' into table hive_wordcount;

hive> load data local inpath '/home/hadoop/data/hello.txt' into table hive_wordcount;
Loading data to table default.hive_wordcount
Table default.hive_wordcount stats: [numFiles=1, totalSize=44]
OK
Time taken: 2.294 seconds
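
Because LOAD DATA LOCAL copies the file into the table's directory under the Hive warehouse on HDFS, you can verify it from the Hive CLI (assuming the default warehouse path /user/hive/warehouse; adjust if hive.metastore.warehouse.dir is set differently):

    hive> dfs -ls /user/hive/warehouse/hive_wordcount;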

3. Query the table to verify the load: select * from hive_wordcount;

Contents of hello.txt:

Deer Bear River
Car Car River
Deer Car Bear
hive> select * from hive_wordcount;
OK
Deer Bear River
Car Car River
Deer Car Bear
Time taken: 0.588 seconds, Fetched: 3 row(s)

4. Implement word count in SQL: select word,count(1) from hive_wordcount lateral view explode(split(context,' ')) wc as word group by word;

hive> select word,count(1) from hive_wordcount lateral view explode(split(context,' ')) wc as word group by word;
Query ID = hadoop_20180904070404_b23d8c2e-161b-4e65-a2cc-206ce343d9e8
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1536010835653_0002, 
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1536010835653_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-09-04 07:05:49,279 Stage-1 map = 0%,  reduce = 0%
2018-09-04 07:06:01,893 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.95 sec
2018-09-04 07:06:10,804 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.44 sec
MapReduce Total cumulative CPU time: 3 seconds 440 msec
Ended Job = job_1536010835653_0002
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.44 sec   HDFS Read: 8797 HDFS Write: 28 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 440 msec
OK
Bear    2
Car     3
Deer    2
River   2
Time taken: 37.441 seconds, Fetched: 4 row(s)

The result:

Bear    2
Car     3
Deer    2
River   2
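
How the query works: split(context, ' ') turns each line into an array of words, the lateral view with explode flattens that array into one row per word, and the outer group by counts occurrences of each word. To see the intermediate one-row-per-word output, you can run the explode on its own (same table as above):

    hive> select word from hive_wordcount lateral view explode(split(context,' ')) wc as word;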

Note: I ran into an error while creating the table:

Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct MetaStore DB connections, we don't support retries at the client level.)

On its face this looks like a problem connecting to MySQL. Searching online turns up roughly two fixes:

1. Swap in a different MySQL JDBC driver jar, e.g. mysql-connector-java-5.1.34-bin.jar. I tried this, and it did not resolve the issue for me.

2. Change the character set of the metastore database in MySQL to latin1. Tested; this fixed it for me (a sketch of the command follows).
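
A minimal sketch of that fix, run in the mysql client and assuming the metastore database name rdb_hive from the configuration above:

    mysql> alter database rdb_hive character set latin1;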

 

