1. Starting hiveserver2 on the server and connecting from a remote client with beeline
The error message was as follows:
root@master:~# beeline -u jdbc:hive2//master:10000
ls: cannot access /data1/hadoop/hive/lib/hive-jdbc-*-standalone.jar: No such file or directory
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
scan complete in 1ms
19/06/17 06:29:13 [main]: ERROR beeline.ClassNameCompleter: Fail to parse the class name from the Jar file due to the exception:java.io.FileNotFoundException: org/ehcache/sizeof/impl/sizeof-agent.jar (No such file or directory)
scan complete in 762ms    # here it complains that a file or directory cannot be found
No known driver to handle "jdbc:hive2//master:10000"
Beeline version 2.1.0 by Apache Hive
beeline>
This error is actually caused by a malformed JDBC URL: the ":" after hive2 is missing.
Change it to the following form and it will work:
# beeline -u jdbc:hive2://master:10000 (typed directly on the command line)
Alternatively, start beeline first,
then enter:
!connect jdbc:hive2://master:10000
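For reference, the general shape of a HiveServer2 JDBC URL is jdbc:hive2://<host>:<port>/<database>, where the database and login options are optional. A minimal sketch, assuming the default database and the root user:
# beeline -u jdbc:hive2://master:10000/default -n root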
2. User is not allowed to impersonate
# beeline -u jdbc:hive2://master:10000 -n root
ls: cannot access /data1/hadoop/hive/lib/hive-jdbc-*-standalone.jar: No such file or directory
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://master:10000
Error: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=,code=0)
Beeline version 2.1.0 by Apache Hive
beeline>
(1) Edit core-site.xml and add the following properties:
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <!-- a value of * means the proxy user root may access the HDFS cluster from any host -->
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <!-- the groups that the users being proxied may belong to -->
  <value>*</value>
</property>
The root after proxyuser above is exactly the user named after User: in the error message, here: User: root is not allowed to ...
If the error instead reads User: yjt is not allowed to ..., change the property names accordingly:
hadoop.proxyuser.yjt.hosts
hadoop.proxyuser.yjt.groups
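Note that * is the most permissive setting. Both properties also accept comma-separated lists, so in production you may want to restrict which hosts the proxy user may connect from and which groups it may impersonate. A sketch, where the host list and group name are placeholders for your own environment:
<property>
  <name>hadoop.proxyuser.yjt.hosts</name>
  <value>master,slave1</value>
</property>
<property>
  <name>hadoop.proxyuser.yjt.groups</name>
  <value>hadoop</value>
</property>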
Why this is the fix:
Hadoop has an impersonation (proxy user) safeguard: an upper-layer system is not allowed to pass the real end user straight down to the Hadoop layer. Instead, the real user is handed to a trusted super agent, and that agent performs the operation on Hadoop on the user's behalf, which prevents arbitrary clients from manipulating Hadoop at will. In the diagram this text originally referenced (not reproduced here), the super agent is "Oozie"; in your own setup the super agent is the "xxx" you put after proxyuser above.
Internally, Hadoop still reuses the corresponding Linux users and permissions: whichever Linux user you start Hadoop as becomes the matching Hadoop-internal user.
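As a side note: HiveServer2 only tries to impersonate the connecting user because hive.server2.enable.doAs defaults to true. If you do not need queries to run as the end user, an alternative to the proxyuser configuration (not what this post does) is to turn impersonation off in hive-site.xml, so all queries run as the user who started HiveServer2:
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>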
(2) Add the following to hdfs-site.xml:
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
With the above configured, restart the Hadoop cluster (in fact, restarting HDFS alone is enough).
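A full restart can often be avoided: Hadoop can reload the proxy-user settings at runtime. A sketch, run as the HDFS superuser (the second command only matters if YARN also needs the new mapping):
# hdfs dfsadmin -refreshSuperUserGroupsConfiguration
# yarn rmadmin -refreshSuperUserGroupsConfiguration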
Then run the command again:
/# beeline -u jdbc:hive2://master:10000 -n root
ls: cannot access /data1/hadoop/hive/lib/hive-jdbc-*-standalone.jar: No such file or directory
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://master:10000
Error: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=,code=0)
Beeline version 2.1.0 by Apache Hive
beeline>
What??? Still the same error???
If the configuration above is correct and the cluster has been restarted but the error persists, check whether the machine running hiveserver2 also hosts a NameNode.
(1) If no NameNode runs on that machine, check whether the configured user (root here) actually has permission on the relevant HDFS directories; permission checks can also trigger this error. (A sketch of the commands for both checks appears after this list.)
(2) If a NameNode does run on that machine, use hdfs haadmin -getAllServiceState to check whether its NameNode is in the standby state. If it is, kill the currently active NameNode so that this machine's NameNode becomes active.
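A sketch of the checks for the two cases above. Here /user/hive/warehouse is the default hive.metastore.warehouse.dir (yours may differ), and nn1/nn2 are placeholder service IDs from dfs.ha.namenodes.<nameservice> in your hdfs-site.xml:
# jps | grep -i namenode                        (is a NameNode process running locally?)
# hdfs dfs -ls /user/hive/warehouse             (case 1: who owns the warehouse directory?)
# hdfs dfs -chown -R root /user/hive/warehouse  (case 1: hand it to your service user if needed)
# hdfs haadmin -getAllServiceState              (case 2: which NameNode is active?)
# hdfs haadmin -failover nn2 nn1                (case 2: with manual HA, a gentler alternative to kill)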
The problem I hit was case (2) above: this node's NameNode was standby.
After the kill, try again:
root@master:/# hdfs haadmin -getAllServiceState
master:8020    standby
slave1:8020    active
root@master:/# hdfs haadmin -getAllServiceState
master:8020    active
19/06/17 08:49:08 INFO ipc.Client: Retrying connect to server: slave1/172.17.0.3:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
slave1:8020    Failed to connect: Call From master/172.17.0.2 to slave1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
root@master:/# beeline -u jdbc:hive2://master:10000 -n root
ls: cannot access /data1/hadoop/hive/lib/hive-jdbc-*-standalone.jar: No such file or directory
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://master:10000
Connected to: Apache Hive (version 2.1.0)
Driver: Hive JDBC (version 2.1.0)
19/06/17 08:49:16 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.1.0 by Apache Hive
0: jdbc:hive2://master:10000> show databases;
OK
+----------------+--+
| database_name |
+----------------+--+
| default |
| myhive |
+----------------+--+
The connection now works normally.
Reference: https://blog.csdn.net/qq_16633405/article/details/82190440