2. Configuration Steps
2.1. Extract the archive and set the environment variables
- Extract the archive
$ tar -xzvf hive-x.y.z.tar.gz
- Set the environment variables (run from inside the extracted Hive directory, since pwd is used)
export HIVE_HOME=$(pwd)
export PATH=$HIVE_HOME/bin:$PATH
- Copy the MySQL JDBC driver into Hive's lib directory. Note that it must be the .jar file itself; a combined sketch of these steps follows below.
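A combined sketch of the three steps above, assuming the install path used later in this post and a hypothetical connector jar name (adjust both to your environment):

$ cd /opt/modules/hive-3.0.0-bin
$ echo "export HIVE_HOME=$(pwd)" >> ~/.bashrc        # expands to the absolute install path
$ echo 'export PATH=$HIVE_HOME/bin:$PATH' >> ~/.bashrc
$ source ~/.bashrc
$ cp ~/mysql-connector-java-5.1.47.jar $HIVE_HOME/lib/   # jar name/version is an assumption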
2.2. Configure hive-env.sh
HADOOP_HOME=/opt/modules/hadoop-3.0.3
export HIVE_CONF_DIR=/opt/modules/hive-3.0.0-bin
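hive-env.sh does not exist in a fresh unpack; it is created from the template that ships in the conf directory, and the two lines above are then appended to it (paths assumed as above):

$ cd /opt/modules/hive-3.0.0-bin/conf
$ cp hive-env.sh.template hive-env.sh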
2.3. Configure hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.exec.mode.local.auto</name>
    <value>false</value>
    <description>Let Hive determine whether to run in local mode automatically</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://spark.bigdata.com:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://spark.bigdata.com:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivemysql</value>
  </property>
  <!-- Print column headers in query results -->
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
  </property>
  <!-- Show the current database name in the prompt -->
  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
  </property>
</configuration>
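The JDBC properties above assume a MySQL account hive with password hivemysql. This post never shows creating it; a minimal sketch (the '%' host pattern is an assumption, tighten it to your metastore host if you prefer):

mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hivemysql';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
mysql> FLUSH PRIVILEGES;

The hive database itself can be left to createDatabaseIfNotExist=true in the connection URL.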
2.4. Initialize Hive
- Initialize with schematool
schematool -dbType mysql -initSchema
Here you hit the first pitfall: edits made directly to hive-default.xml do not take effect. As the log below shows, I had already changed the user in hive-default.xml from hiveuser to hive and made the corresponding changes to the URL and Driver, yet running schematool -dbType mysql -initSchema still used the default URL, Driver, and user APP. The file therefore has to be renamed from hive-default.xml to hive-site.xml; after that, initialization succeeds (a sketch of the fix follows the log). The Hive website does not state this explicitly (or perhaps I just missed it).
[hadoop@spark conf]$ schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/modules/hive-3.0.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/modules/hadoop-3.0.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :    org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:       APP
Starting metastore schema initialization to 3.0.0
Initialization script hive-schema-3.0.0.mysql.sql
Error: Syntax error: Encountered "<EOF>" at line 1, column 64. (state=42X01,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***
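A sketch of the fix described above: schematool picks up overrides only from a file named hive-site.xml, so the edited file must carry that name (paths match this post's layout):

$ cd /opt/modules/hive-3.0.0-bin/conf
$ mv hive-default.xml hive-site.xml
$ schematool -dbType mysql -initSchema
# The "Metastore connection URL" line should now show the MySQL URL instead of Derby.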
- (Optional) Initialize the metastore schema in MySQL
I am not sure this step is strictly required; you can skip it for now and run it later if problems show up.
mysql> source /opt/modules/hive-3.0.0-bin/scripts/metastore/upgrade/mysql/hive-schema-3.0.0.mysql.sql
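Either way, you can verify in MySQL that the schema actually landed; after a successful init the hive database should contain metastore tables such as DBS, TBLS, and VERSION:

mysql> USE hive;
mysql> SHOW TABLES;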
- Start the Hive metastore service
This step is not mentioned anywhere on the official site, but without it every statement fails with FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.
[hadoop@spark bin]$ ./hive --service metastore &
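A quick sanity check that the metastore is actually listening, assuming port 9083 from hive.metastore.uris above:

$ netstat -tlnp | grep 9083    # should show a LISTEN entry for the metastore JVM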
- Finally, enter hive and run show databases; to verify the setup
hive (default)> show databases;
OK
database_name
default
Time taken: 3.157 seconds, Fetched: 1 row(s)