The HiveServer2 connection fails with the following error: Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
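"Connection refused" usually means nothing is listening on port 10000 of hadoop01, so a quick first check is whether the port is open at all (a sketch, assuming net-tools is installed):
[root@hadoop01 ~]# netstat -ntlp | grep 10000
If nothing is listed, work through the steps below.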
1. Check whether the HiveServer2 and HiveMetaStore services are running
[root@hadoop01 ~]# ps -ef |grep -i metastore
root 20607 1 0 Mar06 ? 00:01:19 /root/servers/jdk1.8.0/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/servers/hadoop-2.8.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/servers/hadoop-2.8.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/servers/hadoop-2.8.5/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dproc_metastore -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/root/servers/hive-apache-2.3.6/conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /root/servers/hive-apache-2.3.6/lib/hive-metastore-2.3.6.jar org.apache.hadoop.hive.metastore.HiveMetaStore
root 111660 111580 0 05:40 pts/2 00:00:00 grep --color=auto -i metastore
[root@hadoop01 ~]# ps -ef |grep -i hiveserver2
root 20729 1 0 Mar06 ? 00:06:36 /root/servers/jdk1.8.0/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/servers/hadoop-2.8.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/servers/hadoop-2.8.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/servers/hadoop-2.8.5/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dproc_hiveserver2 -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/root/servers/hive-apache-2.3.6/conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /root/servers/hive-apache-2.3.6/lib/hive-service-2.3.6.jar org.apache.hive.service.server.HiveServer2
root 111762 111580 0 05:40 pts/2 00:00:00 grep --color=auto -i hiveserver2
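If either process is missing, start it and check again with ps (a minimal sketch; the nohup log paths are just examples):
[root@hadoop01 ~]# nohup hive --service metastore > /tmp/metastore.log 2>&1 &
[root@hadoop01 ~]# nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &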
2. Check whether Hadoop safe mode is off
[root@hadoop01 ~]# hdfs dfsadmin -safemode get
Safe mode is OFF # this is the normal state
If it shows Safe mode is ON, see https://www.cnblogs.com/-xiaoyu-/p/11399287.html for how to handle it.
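If you only need to leave safe mode quickly (and have already ruled out missing or under-replicated blocks), you can turn it off manually:
[root@hadoop01 ~]# hdfs dfsadmin -safemode leave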
3. Open http://hadoop01:50070/ in a browser and check whether the Hadoop cluster started normally
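You can also confirm from the shell that the HDFS daemons are running (which processes appear on which host depends on your cluster layout):
[root@hadoop01 ~]# jps
The NameNode and DataNode processes should appear in the output.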
4. Check whether the MySQL service is running
[root@hadoop01 ~]# service mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL 8.0 database server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago
Process: 5463 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/SUCCESS)
Process: 5381 ExecStartPre=/usr/libexec/mysql-prepare-db-dir mysqld.service (code=exited, status=0/SUCCESS)
Process: 5357 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/SUCCESS)
Main PID: 5418 (mysqld)
Status: "Server is operational"
Tasks: 46 (limit: 17813)
Memory: 512.5M
CGroup: /system.slice/mysqld.service
└─5418 /usr/libexec/mysqld --basedir=/usr
Jan 05 23:29:55 hadoop01 systemd[1]: Starting MySQL 8.0 database server...
Jan 05 23:30:18 hadoop01 systemd[1]: Started MySQL 8.0 database server.
Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago means MySQL started normally.
If it is not running, start it with: service mysqld start
IMPORTANT:
Be sure to connect to the MySQL server with a local MySQL client tool and confirm that the connection actually works (this is only a check).
If you cannot connect, do the following:
Configure MySQL so that the root user plus password can log in from any host.
1. Enter MySQL
[root@hadoop102 mysql-libs]# mysql -uroot -p000000
2. Show the databases
mysql>show databases;
3. Switch to the mysql database
mysql>use mysql;
4. Show all tables in the mysql database
mysql>show tables;
5. Show the structure of the user table
mysql>desc user;
6. Query the user table (on MySQL 5.7 and later, including the 8.0 server shown above, the Password column has been replaced by authentication_string)
mysql>select User, Host, authentication_string from user;
7. Update the user table so that root's Host is %, which means root can connect from any host; otherwise only the local machine can connect
mysql>update user set host='%' where host='localhost' and user='root';
8. Delete the root user's other host entries
mysql>delete from user where User='root' and Host='hadoop102';
mysql>delete from user where User='root' and Host='127.0.0.1';
mysql>delete from user where User='root' and Host='::1';
9. Flush privileges
mysql>flush privileges; # be sure to flush privileges, otherwise the changes may not take effect
10. Exit
mysql>quit;
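After that, verify from another machine that the remote connection now works (a sketch; the mysql client must be installed there, and you will be prompted for your own root password):
[root@hadoop02 ~]# mysql -h hadoop01 -uroot -p -e 'show databases;'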
Also check that the MySQL JDBC driver JAR (extracted from mysql-connector-java-5.1.27.tar.gz) has already been placed under /root/servers/hive-apache-2.3.6/lib
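A quick way to verify (the exact file name depends on the driver version you installed):
[root@hadoop01 ~]# ls /root/servers/hive-apache-2.3.6/lib | grep mysql-connector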
The metastore JDBC URL configured in hive-site.xml is:
<value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
# Check whether the database hive specified above exists in MySQL; if it does not, see step 7
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| hive |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.01 sec)
The hive after 3306 is the metastore database; you can name it whatever you like, for example:
<value>jdbc:mysql://hadoop01:3306/metastore?createDatabaseIfNotExist=true</value>
5. Check whether the following configuration has been added to Hadoop's core-site.xml
<property>
<name>hadoop.proxyuser.root.hosts</name> <!-- "root" here is the current Linux user; mine is root -->
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
If your Linux user has a different name, e.g. xiaoyu, then configure it as follows:
<property>
<name>hadoop.proxyuser.xiaoyu.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.xiaoyu.groups</name>
<value>*</value>
</property>
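After changing core-site.xml, restart HDFS and YARN so that the proxy-user settings take effect. Alternatively (a sketch; the commands act on the NameNode and ResourceManager configured for your cluster) refresh them without a full restart:
[root@hadoop01 ~]# hdfs dfsadmin -refreshSuperUserGroupsConfiguration
[root@hadoop01 ~]# yarn rmadmin -refreshSuperUserGroupsConfiguration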
6. Other issues
# HDFS file permission problem: add the following to hdfs-site.xml
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
7. If you see org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version., initialize the metastore schema:
schematool -dbType mysql -initSchema
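Afterwards you can confirm that the schema exists and its version matches the Hive release (just a verification step):
[root@hadoop01 ~]# schematool -dbType mysql -info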
8. One last thing: make sure you download the right package.
Apache Hive 2.3.6 download address:
http://mirror.bit.edu.cn/apache/hive/hive-2.3.6/
apache-hive-2.3.6-bin.tar.gz  221M  (download this one)
apache-hive-2.3.6-src.tar.gz   20M  (source code only)
9. Important
If you have checked everything and it still fails:
Use jps on every machine to see what is running, stop all of those processes, reboot the machines, and then (see the command sketch after this list):
start ZooKeeper (if you use it)
start the Hadoop cluster
start the MySQL service
start HiveServer2
connect with beeline
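A minimal sketch of that sequence (host names and paths follow the setup shown above; run each command on the appropriate node, and skip ZooKeeper if you do not use it):
[root@hadoop01 ~]# zkServer.sh start
[root@hadoop01 ~]# start-dfs.sh
[root@hadoop02 ~]# start-yarn.sh
[root@hadoop01 ~]# service mysqld start
[root@hadoop01 ~]# nohup hive --service metastore > /tmp/metastore.log 2>&1 &
[root@hadoop01 ~]# nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &
[root@hadoop01 ~]# beeline -u jdbc:hive2://hadoop01:10000 -n root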
The configuration files are listed below for reference only; your own configuration takes precedence.
hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>12345678</value>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>hadoop01</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
</property>
<!--
<property>
<name>hive.metastore.uris</name>
<value>thrift://node03.hadoop.com:9083</value>
</property>
-->
</configuration>
core-site.xml
<configuration>
<!-- Address of the HDFS NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<!-- Storage directory for files Hadoop generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/root/servers/hadoop-2.8.5/data/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<!-- Secondary NameNode host (third node) -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop03:50090</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server address (third node) -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop03:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop03:19888</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<!-- How reducers fetch data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- YARN ResourceManager address (second node) -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop02</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Retain logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>