Either of the two methods below works; method one is recommended.
If anything here is wrong, see the blog post.
Method one:
Step 1: yum -y install mysql-server
Step 2: service mysqld start
Step 3: mysql -u root -p
Enter password: (the default password is empty; just press Enter)
mysql > CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
mysql > GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
mysql > flush privileges;
mysql > exit;
Step 4: copy mysql-connector-java-5.1.21.jar into the lib directory under the Hive installation directory.
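The copy in step 4 can be sketched as below. The paths are illustrative (a mktemp sandbox stands in for the real locations); on a real system the source is wherever you downloaded the connector, and the target is $HIVE_HOME/lib, e.g. /home/hadoop/app/hive-1.2.1/lib:

```shell
# Sketch of step 4: put the MySQL JDBC connector on Hive's classpath.
# Demo directories stand in for the real download location and $HIVE_HOME/lib.
demo=$(mktemp -d)
mkdir -p "$demo/hive-1.2.1/lib"
touch "$demo/mysql-connector-java-5.1.21.jar"   # stand-in for the downloaded jar
cp "$demo/mysql-connector-java-5.1.21.jar" "$demo/hive-1.2.1/lib/"
ls "$demo/hive-1.2.1/lib"
```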
Step 5: in the conf directory under the Hive installation directory, configure hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against metastore database</description>
</property>
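As a quick sanity check, the four metastore properties can be grepped out of hive-site.xml. The sketch below writes a sample fragment to a temp file so it is self-contained; point the grep at your real conf/hive-site.xml instead:

```shell
# Sanity-check that the four JDBC metastore properties are present.
# The demo runs against a sample fragment; replace the path with your
# real conf/hive-site.xml.
conf=/tmp/hive-site-sample.xml
cat > "$conf" <<'EOF'
<property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
<property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value></property>
<property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
<property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
EOF
for p in ConnectionDriverName ConnectionURL ConnectionUserName ConnectionPassword; do
  grep -q "javax.jdo.option.$p" "$conf" && echo "$p: OK"
done
```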
Step 6: switch to the root user, edit /etc/profile, and source it to take effect.
Step 7: done!
When programming against Hive you can now use your own IP, e.g. "jdbc:hive2://192.168.80.128:10000/hivebase" (for HiveServer2 on port 10000; the older HiveServer used jdbc:hive://).
Method two:
Step 1:
[root@HadoopSlave2 app]# yum -y install mysql-server
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: mirror.bit.edu.cn
* extras: mirror.bit.edu.cn
* updates: mirror.bit.edu.cn
base | 3.7 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 3.1 MB 00:04
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package mysql-server.x86_64 0:5.1.73-7.el6 will be installed
--> Processing Dependency: mysql = 5.1.73-7.el6 for package: mysql-server-5.1.73-7.el6.x86_64
--> Processing Dependency: perl-DBI for package: mysql-server-5.1.73-7.el6.x86_64
--> Processing Dependency: perl-DBD-MySQL for package: mysql-server-5.1.73-7.el6.x86_64
--> Processing Dependency: perl(DBI) for package: mysql-server-5.1.73-7.el6.x86_64
--> Running transaction check
---> Package mysql.x86_64 0:5.1.73-7.el6 will be installed
--> Processing Dependency: mysql-libs = 5.1.73-7.el6 for package: mysql-5.1.73-7.el6.x86_64
---> Package perl-DBD-MySQL.x86_64 0:4.013-3.el6 will be installed
---> Package perl-DBI.x86_64 0:1.609-4.el6 will be installed
--> Running transaction check
---> Package mysql-libs.x86_64 0:5.1.71-1.el6 will be updated
---> Package mysql-libs.x86_64 0:5.1.73-7.el6 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================
Installing:
mysql-server x86_64 5.1.73-7.el6 base 8.6 M
Installing for dependencies:
mysql x86_64 5.1.73-7.el6 base 894 k
perl-DBD-MySQL x86_64 4.013-3.el6 base 134 k
perl-DBI x86_64 1.609-4.el6 base 705 k
Updating for dependencies:
mysql-libs x86_64 5.1.73-7.el6 base 1.2 M
Transaction Summary
=======================================================================================================================================================================
Install 4 Package(s)
Upgrade 1 Package(s)
Total download size: 12 M
Downloading Packages:
(1/5): mysql-5.1.73-7.el6.x86_64.rpm | 894 kB 00:01
(2/5): mysql-libs-5.1.73-7.el6.x86_64.rpm | 1.2 MB 00:02
(3/5): mysql-server-5.1.73-7.el6.x86_64.rpm | 8.6 MB 00:15
(4/5): perl-DBD-MySQL-4.013-3.el6.x86_64.rpm | 134 kB 00:00
(5/5): perl-DBI-1.609-4.el6.x86_64.rpm | 705 kB 00:01
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 548 kB/s | 12 MB 00:21
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
1:libreoffice-core-4.0.4.2-9.el6.x86_64 has missing requires of libjawt.so()(64bit)
1:libreoffice-core-4.0.4.2-9.el6.x86_64 has missing requires of libjawt.so(SUNWprivate_1.1)(64bit)
1:libreoffice-ure-4.0.4.2-9.el6.x86_64 has missing requires of jre >= ('0', '1.5.0', None)
Updating : mysql-libs-5.1.73-7.el6.x86_64 1/6
Installing : perl-DBI-1.609-4.el6.x86_64 2/6
Installing : perl-DBD-MySQL-4.013-3.el6.x86_64 3/6
Installing : mysql-5.1.73-7.el6.x86_64 4/6
Installing : mysql-server-5.1.73-7.el6.x86_64 5/6
Cleanup : mysql-libs-5.1.71-1.el6.x86_64 6/6
Verifying : mysql-5.1.73-7.el6.x86_64 1/6
Verifying : mysql-libs-5.1.73-7.el6.x86_64 2/6
Verifying : perl-DBD-MySQL-4.013-3.el6.x86_64 3/6
Verifying : mysql-server-5.1.73-7.el6.x86_64 4/6
Verifying : perl-DBI-1.609-4.el6.x86_64 5/6
Verifying : mysql-libs-5.1.71-1.el6.x86_64 6/6
Installed:
mysql-server.x86_64 0:5.1.73-7.el6
Dependency Installed:
mysql.x86_64 0:5.1.73-7.el6 perl-DBD-MySQL.x86_64 0:4.013-3.el6 perl-DBI.x86_64 0:1.609-4.el6
Dependency Updated:
mysql-libs.x86_64 0:5.1.73-7.el6
Complete!
[root@HadoopSlave2 app]#
Step 2:
[root@HadoopSlave2 app]# service mysqld start
Initializing MySQL database: Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h HadoopSlave2 password 'new-password'
Alternatively you can run:
/usr/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &
You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl
Please report any problems with the /usr/bin/mysqlbug script!
[ OK ]
Starting mysqld: [ OK ]
[root@HadoopSlave2 app]#
Step 3:
[root@HadoopSlave2 app]# mysql -u root -p
Enter password: (the default password is empty; just press Enter)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> set password for root@localhost=password('rootroot');
Query OK, 0 rows affected (0.10 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> exit;
Bye
[root@HadoopSlave2 app]#
Step 4:
[root@HadoopSlave2 app]# mysql -uroot -prootroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create user 'hive'@'%' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on *.* to 'hive'@'HadoopSlave2' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> set password for hive@HadoopSlave2=password('hive');
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> exit;
Bye
[root@HadoopSlave2 app]#
Step 5:
[root@HadoopSlave2 app]# mysql -uhive -phive
ERROR 1045 (28000): Access denied for user 'hive'@'localhost' (using password: YES)
[root@HadoopSlave2 app]# mysql -uroot -prootroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant all on *.* to 'hive'@'localhost' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
mysql> set password for hive@localhost=password('hive');
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> exit;
Bye
[root@HadoopSlave2 app]#
Step 6:
[root@HadoopSlave2 app]# mysql -uhive -phive
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE DATABASE hive;
Query OK, 1 row affected (0.00 sec)
mysql> exit;
Bye
[root@HadoopSlave2 app]#
Step 7:
[root@HadoopSlave2 app]# su hadoop
[hadoop@HadoopSlave2 app]$ cd hive-1.2.1/
[hadoop@HadoopSlave2 hive-1.2.1]$ cd conf/
[hadoop@HadoopSlave2 conf]$ ll
total 188
-rw-rw-r--. 1 hadoop hadoop 1139 Apr 30 2015 beeline-log4j.properties.template
-rw-rw-r--. 1 hadoop hadoop 168431 Jun 19 2015 hive-default.xml.template
-rw-rw-r--. 1 hadoop hadoop 2378 Apr 30 2015 hive-env.sh.template
-rw-rw-r--. 1 hadoop hadoop 2662 Apr 30 2015 hive-exec-log4j.properties.template
-rw-rw-r--. 1 hadoop hadoop 3050 Apr 30 2015 hive-log4j.properties.template
-rw-rw-r--. 1 hadoop hadoop 1593 Apr 30 2015 ivysettings.xml
[hadoop@HadoopSlave2 conf]$ cp hive-env.sh.template hive-env.sh
[hadoop@HadoopSlave2 conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@HadoopSlave2 conf]$ cp hive-exec-log4j.properties.template hive-exec-log4j.properties
[hadoop@HadoopSlave2 conf]$ cp hive-log4j.properties.template hive-log4j.properties
[hadoop@HadoopSlave2 conf]$ vim hive-site.xml
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://HadoopSlave2:3306/hive?characterEncoding=UTF-8</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against metastore database</description>
</property>
Step 8: copy mysql-connector-java-5.1.21.jar into hive-1.2.1/lib (the same connector copy as step 4 of method one).
Step 9:
[hadoop@HadoopSlave2 lib]$ cd ..
[hadoop@HadoopSlave2 hive-1.2.1]$ su root
Password:
[root@HadoopSlave2 hive-1.2.1]# vim /etc/profile
export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.6
export HIVE_HOME=/home/hadoop/app/hive-1.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin
[root@HadoopSlave2 hive-1.2.1]# source /etc/profile
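How the profile entries take effect can be sketched as follows: once $HIVE_HOME/bin is on PATH, the hive launcher resolves by name. The block uses a stub script in a temp directory rather than the real /home/hadoop/app/hive-1.2.1:

```shell
# Sketch of the /etc/profile mechanism: append $HIVE_HOME/bin to PATH so
# "hive" resolves by name. A stub script stands in for the real launcher.
demo=$(mktemp -d)
mkdir -p "$demo/hive-1.2.1/bin"
printf '#!/bin/sh\necho hive-stub\n' > "$demo/hive-1.2.1/bin/hive"
chmod +x "$demo/hive-1.2.1/bin/hive"
export HIVE_HOME="$demo/hive-1.2.1"
export PATH="$PATH:$HIVE_HOME/bin"
command -v hive
```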
If Hive hits the following error at startup:
Exception in thread "main" java.lang.RuntimeException:
java.lang.IllegalArgumentException: java.net.URISyntaxException:
Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
then create a temporary IO directory iotmp under the Hive installation directory and point the following properties at it:
<property>
<name>hive.querylog.location</name>
<value>/home/hadoop/app/hive-1.2.1/iotmp</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/home/hadoop/app/hive-1.2.1/iotmp</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/hadoop/app/hive-1.2.1/iotmp</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
Alternatively, use /usr/local/data/hive/iotmp.
I tested on both the master and the slave nodes; this problem appeared on both.
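The directory itself must exist before Hive starts. A minimal sketch (a mktemp sandbox stands in for the real Hive home; on a real install run mkdir against /home/hadoop/app/hive-1.2.1/iotmp, matching the properties above):

```shell
# Create the iotmp scratch directory before starting Hive.
hive_home=$(mktemp -d)          # stands in for /home/hadoop/app/hive-1.2.1
mkdir -p "$hive_home/iotmp"
chmod 755 "$hive_home/iotmp"
ls -ld "$hive_home/iotmp"
```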
[hadoop@HadoopSlave2 hive-1.2.1]$ su root
Password:
[root@HadoopSlave2 hive-1.2.1]# cd ..
[root@HadoopSlave2 app]# clear
[root@HadoopSlave2 app]# service mysqld start
Starting mysqld: [ OK ]
[root@HadoopSlave2 app]# su hadoop
[hadoop@HadoopSlave2 app]$ cd hive-1.2.1/
[hadoop@HadoopSlave2 hive-1.2.1]$ bin/hive
Logging initialized using configuration in file:/home/hadoop/app/hive-1.2.1/conf/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 14 more
Caused by: MetaException(message:Got exception: java.io.IOException No FileSystem for scheme: hfds)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1213)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:106)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:600)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 19 more
At this point I did not know where the error was.
Guess one: is it a problem caused by Hive 1.x versus Hive 0.x versions?
Guess two: is it the capitalization in HadoopSlave2? Most people use lowercase names like master, slave1, slave2.
My own testing shows it is neither the version nor the capitalization.
(Still, lowercase hostnames are the safer convention: for a three-node cluster use master, slave1, slave2; for a five-node cluster, master, slave1, slave2, slave3, slave4; single-node setups should also use lowercase.)
The real cause: hdfs had been written as hfds.
Reference: http://www.cnblogs.com/braveym/p/6685045.html
For more advanced Hive configuration, see the separate post on Hive installation and configuration.
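A grep over the config files catches this class of misspelling before startup. The sketch is self-contained (it writes a sample file containing the typo); run the grep against your real conf/hive-site.xml instead:

```shell
# Scan a hive-site.xml for the "hfds" misspelling of "hdfs".
# A sample file containing the typo is created here for illustration.
conf=/tmp/hive-site-typo-demo.xml
cat > "$conf" <<'EOF'
<value>hfds://HadoopMaster:9000/user/hadoop/hive/warehouse</value>
EOF
if grep -n 'hfds' "$conf"; then
  echo "typo found: change hfds to hdfs"
fi
```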
[root@HadoopSlave1 hadoop]# cd ..
[root@HadoopSlave1 etc]# cd ..
[root@HadoopSlave1 hadoop-2.6.0]# cd ..
[root@HadoopSlave1 app]# service mysqld start
Starting mysqld: [ OK ]
[root@HadoopSlave1 app]# su hadoop
[hadoop@HadoopSlave1 app]$ cd hive-1.2.1/
[hadoop@HadoopSlave1 hive-1.2.1]$ bin/hive
Logging initialized using configuration in file:/home/hadoop/app/hive-1.2.1/conf/hive-log4j.properties
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.TerminalFactory.create(TerminalFactory.java:101)
at jline.TerminalFactory.get(TerminalFactory.java:158)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
[hadoop@HadoopSlave1 hive-1.2.1]$
Reference: http://blog.csdn.net/jdplus/article/details/46493553
Solution (from Hive on Spark: Getting Started):
1. Delete jline from the Hadoop lib directory (it's only pulled in transitively from ZooKeeper).
2. export HADOOP_USER_CLASSPATH_FIRST=true
Then run it again.
References: http://blog.csdn.net/zhumin726/article/details/8027802
http://blog.csdn.net/xgjianstart/article/details/52192879
Fix: the jline versions are inconsistent. Make the jline-*.jar under HADOOP_HOME/share/hadoop/yarn/lib and HIVE_HOME/lib the same version by copying the higher-version jar over the lower one.
That is, keep the higher version in both places.
OK, it starts successfully!
For more detail, see the post on Hive installation, configuration, and troubleshooting.
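The jar swap can be sketched as below. The jar names (jline-0.9.94.jar in Hadoop 2.6's yarn/lib, jline-2.12.jar in Hive 1.2.1's lib) are the usual versions but should be confirmed with ls on your system; temp directories stand in for $HADOOP_HOME/share/hadoop/yarn/lib and $HIVE_HOME/lib:

```shell
# Sketch of the jline fix: keep the same (newer) jline jar in both
# Hadoop's yarn/lib and Hive's lib. Demo directories and jar names
# are stand-ins for the real paths.
demo=$(mktemp -d)
mkdir -p "$demo/yarn-lib" "$demo/hive-lib"
touch "$demo/yarn-lib/jline-0.9.94.jar"   # older jar shipped with Hadoop
touch "$demo/hive-lib/jline-2.12.jar"     # newer jar shipped with Hive 1.2.1
rm -f "$demo/yarn-lib/jline-0.9.94.jar"               # drop the old version
cp "$demo/hive-lib/jline-2.12.jar" "$demo/yarn-lib/"  # keep the higher one
ls "$demo/yarn-lib"
```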
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/home/hadoop/app/hive-1.2.1/conf
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/home/hadoop/app/hive-1.2.1/lib
Then save, and apply it with source ./hive-env.sh.
Before changing the configuration, create the corresponding directories so they match the paths in the configuration file; otherwise Hive will report errors at runtime.
mkdir -p /home/hadoop/data/hive-1.2.1/warehouse
mkdir -p /home/hadoop/data/hive-1.2.1/tmp
mkdir -p /home/hadoop/data/hive-1.2.1/log
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/home/hadoop/data/hive-1.2.1/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/home/hadoop/data/hive-1.2.1/tmp</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/home/hadoop/data/hive-1.2.1/log</value>
</property>
This completes the changes to hive-site.xml.
Then, in the conf directory: cp hive-log4j.properties.template hive-log4j.properties
Open hive-log4j.properties with sudo gedit hive-log4j.properties (on Ubuntu) or vim hive-log4j.properties (on CentOS).
Find hive.log.dir=
This sets where Hive stores its log files at runtime.
(Mine: hive.log.dir=/home/hadoop/data/hive-1.2.1/log/${user.name})
hive.log.file=hive.log
This sets the name of the Hive log file.
The default is fine, as long as you can recognize it as the log.
Only one setting really needs changing, or a warning appears:
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
Without this change you will see:
WARNING: org.apache.hadoop.metrics.EventCounter is deprecated.
please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
(Just modify it as the warning suggests.)
This completes hive-log4j.properties.
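The EventCounter change can be applied with a one-line sed. The sketch below runs against a demo copy of the file; substitute your real conf/hive-log4j.properties:

```shell
# Apply the EventCounter deprecation fix with sed. A demo copy of the
# file is created here; run the sed against your real hive-log4j.properties.
f=/tmp/hive-log4j-demo.properties
cat > "$f" <<'EOF'
log4j.appender.EventCounter=org.apache.hadoop.metrics.EventCounter
EOF
sed -i 's|=org\.apache\.hadoop\.metrics\.EventCounter|=org.apache.hadoop.log.metrics.EventCounter|' "$f"
grep 'EventCounter' "$f"
```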
Next comes the main event!
What follows is the configuration of Hive with MySQL as the metadata store. In this mode Hive's metadata is kept in MySQL; if the MySQL environment supports bidirectional replication and clustering, the Hive metadata is backed up on at least two database servers. Use Hadoop to create the corresponding directory paths
and set their permissions:
bin/hadoop fs -mkdir -p /user/hadoop/hive/warehouse
bin/hadoop fs -mkdir -p /user/hadoop/hive/tmp
bin/hadoop fs -mkdir -p /user/hadoop/hive/log
bin/hadoop fs -chmod g+w /user/hadoop/hive/warehouse
bin/hadoop fs -chmod g+w /user/hadoop/hive/tmp
bin/hadoop fs -chmod g+w /user/hadoop/hive/log
Continue configuring hive-site.xml:
[1]
<property>
<name>hive.metastore.warehouse.dir</name>
<value>hdfs://localhost:9000/user/hadoop/hive/warehouse</value>
</property>
(The value above is for a single node. On my three-node cluster I installed Hive on HadoopSlave1, and pointing this at HadoopMaster caused errors. On a single node it does not matter, but on a multi-node cluster it is best to install Hive on the HadoopMaster machine itself. A hard-won lesson!)
(This corresponds to the earlier hadoop fs -mkdir -p /user/hadoop/hive/warehouse.)
(Alternatively, with Hive on HadoopSlave1, use /home/hadoop/data/hive-1.2.1/warehouse.)
Here localhost refers to the hostname of the NameNode.
[2]
<property>
<name>hive.exec.scratchdir</name>
<value>hdfs://localhost:9000/user/hadoop/hive/scratchdir</value>
</property>
[3]
(This one is unchanged from the Derby configuration.)
<property>
<name>hive.querylog.location</name>
<value>/usr/hive/log</value>
</property>
-------------------------------------
[4]
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
javax.jdo.option.ConnectionURL sets the metadata connection string.
Note that this property already exists in hive-site.xml; you do not need to add it yourself.
My own mistake: I failed to find the property in the file and added a new one, which made Hive fail to start with the error below. Only after locating and editing the existing property did it start successfully.
Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=/usr/local/hive121/metastore_db;create=true, username = hive. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to create database '/usr/local/hive121/metastore_db', see the next exception for details.
at org.apache.derby.impl.jdbc.SQLE
-------------------------------------
[5]
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
javax.jdo.option.ConnectionDriverName
When Java code in Hive interacts with MySQL, a MySQL connector is required; it translates the database operations expressed in Java into statements MySQL can understand.
The connector is a jar written in Java and can be downloaded from the official site; in practice it works even when the connector and MySQL version numbers do not match exactly.
Copy the connector into /usr/local/hive1.2.1/lib.
[6]
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
javax.jdo.option.ConnectionUserName
sets the user name for the database (here, MySQL) in which Hive stores its metadata.
[7]
--------------------------------------
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
</property>
javax.jdo.option.ConnectionPassword sets the password
used to log in to that database.
[8]
<property>
<name>hive.aux.jars.path</name>
<value>file:///usr/local/hive/lib/hive-hbase-handler-0.13.1.jar,file:///usr/local/hive/lib/protobuf-java-2.5.0.jar,file:///usr/local/hive/lib/hbase-client-0.96.0-hadoop2.jar,file:///usr/local/hive/lib/hbase-common-0.96.0-hadoop2.jar,file:///usr/local/hive/lib/zookeeper-3.4.5.jar,file:///usr/local/hive/lib/guava-11.0.2.jar</value>
</property>
(The corresponding jars must be copied from HBase's lib directory into Hive's lib directory.)
[9]
<property>
<name>hive.metastore.uris</name>
<value>thrift://localhost:9083</value>
</property>
---------------------------------------- This concludes the explanation.
Use Hadoop to create the corresponding file paths
and set their permissions:
bin/hadoop fs -mkdir -p /usr/hive/warehouse
bin/hadoop fs -mkdir -p /usr/hive/tmp
bin/hadoop fs -mkdir -p /usr/hive/log
bin/hadoop fs -chmod g+w /usr/hive/warehouse
bin/hadoop fs -chmod g+w /usr/hive/tmp
bin/hadoop fs -chmod g+w /usr/hive/log