1. Installation modes
The Hive website describes three installation modes, each suited to a different scenario:
a. Embedded mode (metadata is stored in the embedded Derby database; only a single session connection is allowed, and attempting additional concurrent sessions raises an error)
b. Local mode (MySQL is installed locally to replace Derby as the metadata store)
c. Remote mode (MySQL is installed on a remote host to replace Derby as the metadata store)
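In practice the mode is determined by the metastore connection settings in hive-site.xml. A rough sketch of the key property (the Derby value is Hive's default; the MariaDB value mirrors the setup later in this guide):

```xml
<!-- Embedded mode (default): Derby, single session only -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
</property>

<!-- Local/remote mode: point the same property at a MySQL/MariaDB server -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mariadb://192.168.48.129:3306/hive</value>
</property>
```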
2. Preparation
Prerequisites: Java JDK 1.7 or later is installed, Hadoop is working, and MySQL is available.
Download:
http://mirror.bit.edu.cn/apache/hive/hive-1.2.2/
Extract:
[hadoop@hadoop-master Hive]$ pwd
/usr/hadoop/Hive
[hadoop@hadoop-master Hive]$ tar -xvf apache-hive-1.2.2-bin.tar.gz
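Before extracting a large archive it can be useful to list its contents first with `tar -tf`. A minimal sketch of this extract-and-verify pattern, demonstrated on a throwaway tarball in a scratch directory (all names here are hypothetical, not the real Hive archive):

```shell
# Build a throwaway tarball in a scratch directory
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p apache-hive-demo-bin/bin
echo 'demo' > apache-hive-demo-bin/bin/hive
tar -czf apache-hive-demo-bin.tar.gz apache-hive-demo-bin
rm -rf apache-hive-demo-bin

# -tf lists the archive contents without extracting; -xf extracts
tar -tf apache-hive-demo-bin.tar.gz
tar -xf apache-hive-demo-bin.tar.gz
ls apache-hive-demo-bin/bin
```

The same two commands apply unchanged to apache-hive-1.2.2-bin.tar.gz.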
3. Create the MySQL user, grant privileges, and create the database
[hadoop@hadoop-master]$ mysql -uroot -p
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

MariaDB [(none)]> create user hive identified by 'hive';
Query OK, 0 rows affected (0.11 sec)

MariaDB [(none)]> select user from mysql.user;
+--------------+
| user         |
+--------------+
| hive         |
| replicate    |
| root         |
| sample       |
| sample_col   |
| sample_table |
| root         |
| replicate    |
| root         |
|              |
| root         |
|              |
| root         |
+--------------+
13 rows in set (0.00 sec)

MariaDB [(none)]> grant all privileges on *.* to hive@'%' identified by 'hive';
Query OK, 0 rows affected (0.03 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.01 sec)
Then log in as the new user and create the database:
mysql -uhive -phive
MariaDB [(none)]> create database hive;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> use hive
Database changed
MariaDB [hive]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
4. Add the environment variables as root on the host
su - root
[root@hadoop-master bin]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HIVE_HOME=/usr/hadoop/Hive/apache-hive-1.2.2-bin
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.5
export PATH=$HIVE_HOME/bin:/usr/hadoop/Hbase/hbase-1.3.1/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
[root@hadoop-master bin]# source /etc/profile
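After sourcing /etc/profile it is worth confirming that PATH actually picked up the Hive bin directory. A minimal self-contained check (the exports mirror the profile entries above; adjust the paths to your install):

```shell
# Re-create the exports from /etc/profile (paths are the ones assumed in this guide)
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HIVE_HOME=/usr/hadoop/Hive/apache-hive-1.2.2-bin
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.5
export PATH=$HIVE_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

# PATH should now contain $HIVE_HOME/bin, so the hive launcher resolves first
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive bin dir is on PATH" ;;
  *)                    echo "hive bin dir is NOT on PATH" ;;
esac
```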
5. Configure Hive
Prerequisite: copy the JDBC driver into the lib directory: /usr/hadoop/Hive/apache-hive-1.2.2-bin/lib/mariadb-java-client-1.7.4.jar. Verify beforehand that this JDBC driver can connect to the database.
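A low-tech way to pre-check the account and network path (this uses the mysql client rather than the JDBC jar itself, so it validates credentials and connectivity but not the driver; host and credentials are the ones assumed in this guide, and the server must be up):

```shell
# Should print a single row containing 1 if the hive user can reach the hive database
mysql -h 192.168.48.129 -P 3306 -uhive -phive hive -e 'select 1;'
```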
Configure hive-site.xml:
[hadoop@hadoop-master conf]$ su - hadoop
[hadoop@hadoop-master conf]$ pwd
/usr/hadoop/Hive/apache-hive-1.2.2-bin/conf
[hadoop@hadoop-master conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@hadoop-master conf]$ echo '' > hive-site.xml
[hadoop@hadoop-master conf]$ vi hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mariadb://192.168.48.129:3306/hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.mariadb.jdbc.Driver</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
</configuration>
Configure hive-env.sh:
[hadoop@hadoop-master conf]$ cp hive-env.sh.template hive-env.sh
[hadoop@hadoop-master conf]$ vi hive-env.sh
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.5
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/usr/hadoop/Hive/apache-hive-1.2.2-bin/conf
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH=
6. Create the /tmp and /user/hive/warehouse directories on HDFS and grant group write permission.
hadoop dfs -mkdir /tmp
hadoop dfs -mkdir -p /user/hive/warehouse
hadoop dfs -chmod g+w /tmp
hadoop dfs -chmod g+w /user/hive/warehouse
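Note that `hadoop dfs` is deprecated in Hadoop 2.x (the cluster prints a DEPRECATED warning, as seen in the listings below). The equivalent commands with the current `hdfs dfs` entry point are (same paths as above; requires a running cluster):

```shell
hdfs dfs -mkdir -p /tmp
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /tmp
hdfs dfs -chmod g+w /user/hive/warehouse
```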
7. Start Hive
[hadoop@hadoop-master ~]$ hive
18/10/22 19:10:42 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/usr/hadoop/Hive/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar!/hive-log4j.properties
hive>
Hive is installed successfully.
8. Check MySQL: the hive database now contains many metastore tables
[hadoop@hadoop-master ~]$ mysql -uhive -phive
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 373
Server version: 5.5.41-MariaDB-log MariaDB Server

Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.01 sec)

MariaDB [(none)]> use hive
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [hive]> show tables;
+---------------------------+
| Tables_in_hive            |
+---------------------------+
| BUCKETING_COLS            |
| CDS                       |
| COLUMNS_V2                |
| DATABASE_PARAMS           |
| DBS                       |
| FUNCS                     |
| FUNC_RU                   |
| GLOBAL_PRIVS              |
| PARTITIONS                |
| PARTITION_KEYS            |
| PARTITION_KEY_VALS        |
| PARTITION_PARAMS          |
| PART_COL_STATS            |
| ROLES                     |
| SDS                       |
| SD_PARAMS                 |
| SEQUENCE_TABLE            |
| SERDES                    |
| SERDE_PARAMS              |
| SKEWED_COL_NAMES          |
| SKEWED_COL_VALUE_LOC_MAP  |
| SKEWED_STRING_LIST        |
| SKEWED_STRING_LIST_VALUES |
| SKEWED_VALUES             |
| SORT_COLS                 |
| TABLE_PARAMS              |
| TAB_COL_STATS             |
| TBLS                      |
| VERSION                   |
+---------------------------+
29 rows in set (0.00 sec)
9. Check the changes on HDFS
[hadoop@hadoop-master ~]$ hadoop dfs -lsr /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - hadoop supergroup      0 2018-03-19 19:54 /system
drwx-w----   - hadoop supergroup      0 2018-10-22 19:10 /tmp
drwx------   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn
drwx------   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging
drwx------   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging/hadoop
drwx------   - hadoop supergroup      0 2018-02-22 23:41 /tmp/hadoop-yarn/staging/hadoop/.staging
drwxr-xr-x   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging/history
drwxrwxrwt   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging/history/done_intermediate
drwxrwx---   - hadoop supergroup      0 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop
-rwxrwx---   3 hadoop supergroup  60426 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop/job_1519369589927_0001-1519371280230-hadoop-word+count-1519371687036-5-1-SUCCEEDED-default-1519371298933.jhist
-rwxrwx---   3 hadoop supergroup    353 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop/job_1519369589927_0001.summary
-rwxrwx---   3 hadoop supergroup 120116 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop/job_1519369589927_0001_conf.xml
drwx-wx-wx   - hadoop supergroup      0 2018-10-22 19:10 /tmp/hive
drwx------   - hadoop supergroup      0 2018-10-22 19:10 /tmp/hive/hadoop
drwx------   - hadoop supergroup      0 2018-10-22 19:10 /tmp/hive/hadoop/f0cbabab-0d18-4b1a-8d77-f7efb35ca986
drwx------   - hadoop supergroup      0 2018-10-22 19:10 /tmp/hive/hadoop/f0cbabab-0d18-4b1a-8d77-f7efb35ca986/_tmp_space.db
-rw-r--r--   3 hadoop supergroup     18 2018-02-23 22:39 /upload
drwxr-xr-x   - hadoop supergroup      0 2018-10-22 03:11 /user
drwxr-xr-x   - hadoop supergroup      0 2018-02-23 22:38 /user/hadoop
drwxr-xr-x   - hadoop supergroup      0 2018-02-22 23:41 /user/hadoop/output
-rw-r--r--   3 hadoop supergroup      0 2018-02-22 23:41 /user/hadoop/output/_SUCCESS
-rw-r--r--   3 hadoop supergroup   6757 2018-02-22 23:41 /user/hadoop/output/part-r-00000
drwxr-xr-x   - hadoop supergroup      0 2018-02-23 22:45 /user/hadoop/upload
-rw-r--r--   3 hadoop supergroup     18 2018-02-23 22:45 /user/hadoop/upload/my-local.txt
drwxr-xr-x   - hadoop supergroup      0 2018-10-22 03:11 /user/hive
drwxrwxr-x   - hadoop supergroup      0 2018-10-22 03:11 /user/hive/warehouse
drwxr-xr-x   - hadoop supergroup      0 2018-03-16 01:32 /user/test22
drwxr-xr-x   - hadoop supergroup      0 2018-03-16 02:02 /user/test22/input
-rw-r--r--   3 hadoop supergroup   4277 2018-03-16 02:02 /user/test22/input/hadoop-env.sh
-rw-r--r--   3 hadoop supergroup   1449 2018-03-16 02:02 /user/test22/input/httpfs-env.sh
-rw-r--r--   3 hadoop supergroup   1527 2018-03-16 02:02 /user/test22/input/kms-env.sh
-rw-r--r--   3 hadoop supergroup   1383 2018-03-16 02:02 /user/test22/input/mapred-env.sh
-rw-r--r--   3 hadoop supergroup   4567 2018-03-16 02:02 /user/test22/input/yarn-env.sh
drwxr-xr-x   - hadoop supergroup      0 2018-03-16 01:32 /user/test22/output
drwxr-xr-x   - hadoop supergroup      0 2018-03-16 01:32 /user/test22/output/count
-rw-r--r--   3 hadoop supergroup      0 2018-03-16 01:32 /user/test22/output/count/_SUCCESS
-rw-r--r--   3 hadoop supergroup   6757 2018-03-16 01:32 /user/test22/output/count/part-r-00000
10. Test
Create a database and a table in Hive.
Enter the hive CLI and create a test database and a test table:
[hadoop@hadoop-master ~]$ hive
18/10/22 19:10:42 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/usr/hadoop/Hive/apache-hive-1.2.2-bin/lib/hive-common-1.2.2.jar!/hive-log4j.properties
hive> create database hive_test;
OK
Time taken: 2.558 seconds
hive> show databases;
OK
default
hive_test
Time taken: 0.774 seconds, Fetched: 2 row(s)
Prepare the data:
[hadoop@hadoop-master pendingLoadData]$ pwd
/usr/hadoop/Hive/pendingLoadData
[hadoop@hadoop-master pendingLoadData]$ cat pepole.txt
1001	aaaa
1002	bbbb
1003	cccc
1004	dddd
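The columns in pepole.txt must be separated by a real tab character, because the table below is declared with fields terminated by '\t'; fields separated by spaces would load as NULLs. A reproducible way to generate such a file (run in any scratch directory):

```shell
# printf repeats its format for each pair of arguments, emitting a real tab between columns
printf '%s\t%s\n' \
  1001 aaaa \
  1002 bbbb \
  1003 cccc \
  1004 dddd > pepole.txt

# 'cat -A' makes tabs visible as ^I, confirming the delimiter is correct
cat -A pepole.txt
```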
Create the test table and load the data:
[hadoop@hadoop-master pendingLoadData]$ hive
hive> use hive_test;
OK
Time taken: 0.263 seconds
hive> create table dep(id int,name string) row format delimited fields terminated by '\t';
OK
Time taken: 0.843 seconds
hive> load data local inpath '/usr/hadoop/Hive/pendingLoadData/pepole.txt' into table hive_test.dep;
Loading data to table hive_test.dep
Table hive_test.dep stats: [numFiles=1, totalSize=3901]
OK
Time taken: 2.898 seconds
hive> load data local inpath '/usr/hadoop/Hive/pendingLoadData/pepole.txt' into table hive_test.dep;
Loading data to table hive_test.dep
Table hive_test.dep stats: [numFiles=1, numRows=0, totalSize=40, rawDataSize=0]
OK
Time taken: 0.355 seconds
hive> select * from dep;
OK
1001	aaaa
1002	bbbb
1003	cccc
1004	dddd
Time taken: 0.119 seconds, Fetched: 4 row(s)
Check MySQL:
[hadoop@hadoop-master ~]$ mysql -uhive -phive
MariaDB [(none)]> use hive
MariaDB [hive]> select * from DBS;
+-------+-----------------------+------------------------------------------------------------+-----------+------------+------------+
| DB_ID | DESC                  | DB_LOCATION_URI                                            | NAME      | OWNER_NAME | OWNER_TYPE |
+-------+-----------------------+------------------------------------------------------------+-----------+------------+------------+
|     1 | Default Hive database | hdfs://hadoop-master:9000/user/hive/warehouse              | default   | public     | ROLE       |
|     2 | NULL                  | hdfs://hadoop-master:9000/user/hive/warehouse/hive_test.db | hive_test | hadoop     | USER       |
+-------+-----------------------+------------------------------------------------------------+-----------+------------+------------+
2 rows in set (0.04 sec)

MariaDB [hive]> select * from TBLS;
+--------+-------------+-------+------------------+--------+-----------+-------+----------+---------------+--------------------+--------------------+
| TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER  | RETENTION | SD_ID | TBL_NAME | TBL_TYPE      | VIEW_EXPANDED_TEXT | VIEW_ORIGINAL_TEXT |
+--------+-------------+-------+------------------+--------+-----------+-------+----------+---------------+--------------------+--------------------+
|      1 |  1540262810 |     2 |                0 | hadoop |         0 |     1 | dep      | MANAGED_TABLE | NULL               | NULL               |
+--------+-------------+-------+------------------+--------+-----------+-------+----------+---------------+--------------------+--------------------+
1 row in set (0.00 sec)
Check HDFS:
[hadoop@hadoop-master pendingLoadData]$ hadoop dfs -lsr /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
lsr: DEPRECATED: Please use 'ls -R' instead.
drwxr-xr-x   - hadoop supergroup      0 2018-03-19 19:54 /system
drwx-w----   - hadoop supergroup      0 2018-10-22 19:10 /tmp
drwx------   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn
drwx------   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging
drwx------   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging/hadoop
drwx------   - hadoop supergroup      0 2018-02-22 23:41 /tmp/hadoop-yarn/staging/hadoop/.staging
drwxr-xr-x   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging/history
drwxrwxrwt   - hadoop supergroup      0 2018-02-22 23:34 /tmp/hadoop-yarn/staging/history/done_intermediate
drwxrwx---   - hadoop supergroup      0 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop
-rwxrwx---   3 hadoop supergroup  60426 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop/job_1519369589927_0001-1519371280230-hadoop-word+count-1519371687036-5-1-SUCCEEDED-default-1519371298933.jhist
-rwxrwx---   3 hadoop supergroup    353 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop/job_1519369589927_0001.summary
-rwxrwx---   3 hadoop supergroup 120116 2018-02-22 23:41 /tmp/hadoop-yarn/staging/history/done_intermediate/hadoop/job_1519369589927_0001_conf.xml
drwx-wx-wx   - hadoop supergroup      0 2018-10-22 19:10 /tmp/hive
drwx------   - hadoop supergroup      0 2018-10-22 19:10 /tmp/hive/hadoop
drwx------   - hadoop supergroup      0 2018-10-22 20:03 /tmp/hive/hadoop/f0cbabab-0d18-4b1a-8d77-f7efb35ca986
drwx------   - hadoop supergroup      0 2018-10-22 19:10 /tmp/hive/hadoop/f0cbabab-0d18-4b1a-8d77-f7efb35ca986/_tmp_space.db
-rw-r--r--   3 hadoop supergroup     18 2018-02-23 22:39 /upload
drwxr-xr-x   - hadoop supergroup      0 2018-10-22 03:11 /user
drwxr-xr-x   - hadoop supergroup      0 2018-02-23 22:38 /user/hadoop
drwxr-xr-x   - hadoop supergroup      0 2018-02-22 23:41 /user/hadoop/output
-rw-r--r--   3 hadoop supergroup      0 2018-02-22 23:41 /user/hadoop/output/_SUCCESS
-rw-r--r--   3 hadoop supergroup   6757 2018-02-22 23:41 /user/hadoop/output/part-r-00000
drwxr-xr-x   - hadoop supergroup      0 2018-02-23 22:45 /user/hadoop/upload
-rw-r--r--   3 hadoop supergroup     18 2018-02-23 22:45 /user/hadoop/upload/my-local.txt
drwxr-xr-x   - hadoop supergroup      0 2018-10-22 03:11 /user/hive
drwxrwxr-x   - hadoop supergroup      0 2018-10-22 19:30 /user/hive/warehouse
drwxrwxr-x   - hadoop supergroup      0 2018-10-22 20:03 /user/hive/warehouse/hive_test.db
drwxrwxr-x   - hadoop supergroup      0 2018-10-22 20:03 /user/hive/warehouse/hive_test.db/dep
-rwxrwxr-x   3 hadoop supergroup     40 2018-10-22 20:03 /user/hive/warehouse/hive_test.db/dep/pepole.txt
drwxr-xr-x   - hadoop supergroup      0 2018-03-16 01:32 /user/test22
You can also view this through the Hadoop web UI: http://xx:xx:xx:xx:50070

Problems and solutions
Problem 1.
java.sql.SQLException: Unable to open a test connection to the given database. JDBC url = jdbc:mariadb://192.168.48.129:3306/hive, username = hive. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLSyntaxErrorException: Access denied for user 'hive'@'%' to database 'hive'
Solution: run mysql -uhive -phive and check whether the hive user can access the hive database. (If it cannot, the hive database may have been created under a different user; drop it, log in as the hive user, and recreate it.)
Problem 2.
Caused by: javax.jdo.JDOException: Couldnt obtain a new sequence (unique id) : (conn=362) Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
Solution:
mysql> SET GLOBAL binlog_format=ROW;
Query OK, 0 rows affected (0.00 sec)
Explanation: MySQL's default binlog_format is STATEMENT. Since MySQL 5.1.12, replication can run in three modes: statement-based replication (SBR), row-based replication (RBR), and mixed-based replication (MBR); correspondingly, the binlog format has three values: STATEMENT, ROW, and MIXED.
With the default REPEATABLE-READ isolation level, binlog_format=ROW is recommended. At the READ-COMMITTED isolation level, binlog_format=MIXED and binlog_format=ROW behave the same: the binlog records ROW events, which is a safe setting for master-slave replication.
Note, however, that SET GLOBAL is a one-off change that is lost after a restart. Option 2, permanent: edit my.ini (my.cnf on Linux) and set:
# binary logging format - ROW
binlog_format=ROW
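For the permanent fix, the relevant fragment of the server configuration file looks like the following (the option must live under the [mysqld] section; the file is my.ini on Windows and typically /etc/my.cnf on Linux):

```ini
[mysqld]
# binary logging format - ROW
binlog_format=ROW
```

Restart the MySQL/MariaDB service after editing the file for the setting to take effect.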
