1. When logging in to Cloudera Manager (http://192.168.201.128:7180/cmf/login), the web page cannot be accessed
There are quite a few solutions to this problem online (e.g. https://www.cnblogs.com/zlslch/p/7078119.html); if none of them solve it for you, try the approach below.
Log in to the MySQL database (directly or via Navicat). There is a database named mysql containing a user table; delete the two records whose User column is 'root':
use mysql;
select * from user;
delete from user where User='root';
flush privileges;
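After deleting the rows, it may also help to restart the Cloudera Manager server and confirm it is listening again before retrying the login (a sketch; cloudera-scm-server and port 7180 are the standard CM defaults):

service cloudera-scm-server restart
netstat -ntpl | grep 7180    # the CM web server should be listening here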


Log in to http://192.168.201.128:7180/cmf/login again, and this time the login succeeds!

2. When connecting to MySQL with Navicat, an error appears: Can't connect to MySQL server on 'xxxxx' (10038)

Solution:
Check which ports are listening with netstat -ntpl; in the normal state mysqld is listening on port 3306 (if it is not, carry out the steps below). If netstat is not installed, on CentOS 7 it can be installed with yum -y install net-tools.

Check the firewall and you will find that traffic to port 3306 is being dropped:
iptables -vnL
Clear the rules from the firewall's chains:
iptables -F
Connect to MySQL again, and the connection succeeds!
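Note that iptables -F flushes every rule and is lost on reboot. A narrower fix, if the firewall should stay up, is to allow port 3306 explicitly (a sketch; the first two lines are the CentOS 6 style, the last two the CentOS 7 firewalld equivalent):

iptables -I INPUT -p tcp --dport 3306 -j ACCEPT    # accept MySQL traffic
service iptables save                              # persist the rule (CentOS 6)
firewall-cmd --permanent --add-port=3306/tcp       # CentOS 7 equivalent
firewall-cmd --reload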

3. The NameNode cannot start; inspecting the log reveals the following error...
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/hadoop/.staging/job_1490689337938_0001. Name node is in safe mode.
The reported blocks 48 needs additional 5 blocks to reach the threshold 0.9990 of total blocks 53. The number of live datanodes 2 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3713)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:953)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:611)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
What is safe mode?
Safe mode is a special state of HDFS in which the file system accepts read requests but rejects change requests such as deletes and modifications. When the NameNode starts, HDFS first enters safe mode; as the DataNodes start, they report their available blocks and other state to the NameNode, and once the system as a whole reaches the safety threshold, HDFS leaves safe mode automatically. While HDFS is in safe mode, no block replication takes place, so whether the minimum replication requirement is met is judged purely from the state the DataNodes report at startup; no extra copying is done at that point to reach the minimum. (Original post: https://blog.csdn.net/bingduanlbd/article/details/51900512)
1. Manually enter safe mode (e.g. for a cluster upgrade or maintenance):
hadoop dfsadmin -safemode enter
2. Leave safe mode:
hadoop dfsadmin -safemode leave
3. Report whether safe mode is currently on:
hadoop dfsadmin -safemode get
Therefore, when the NameNode is found to be stuck in safe mode and will not start, you can run hadoop dfsadmin -safemode leave to exit safe mode, then restart the NameNode to resolve the problem!
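Before forcing the NameNode out of safe mode, it is worth checking whether blocks are genuinely missing, since leaving safe mode does not recover them (a quick sanity check; hdfs dfsadmin is the newer spelling of the deprecated hadoop dfsadmin):

hdfs dfsadmin -safemode get      # confirm: "Safe mode is ON"
hdfs fsck /                      # look for missing or corrupt blocks first
hdfs dfsadmin -safemode leave    # then force the NameNode out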
4. INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException: No route to host
16/07/27 01:29:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1537)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1313)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1266)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
16/07/27 01:29:26 INFO hdfs.DFSClient: Abandoning BP-555863411-172.16.95.100-1469590594354:blk_1073741825_1001
16/07/27 01:29:26 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[172.16.95.101:50010,DS-ee00e1f8-5143-4f06-9ef8-b0f862fce649,DISK]
16/07/27 01:29:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.NoRouteToHostException: No route to host
    ... (stack trace identical to the one above)
16/07/27 01:29:26 INFO hdfs.DFSClient: Abandoning BP-555863411-172.16.95.100-1469590594354:blk_1073741826_1002
16/07/27 01:29:26 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[172.16.95.102:50010,DS-eea51eda-0a07-4583-9eee-acd7fc645859,DISK]
16/07/27 01:29:26 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /wc/mytemp/123._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1459)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1255)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
put: File /wc/mytemp/123._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
[hadoop@master bin]$ service firewall
The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
- Check whether the firewall is off; if not, turn it off on every node: on CentOS 6 run service iptables stop, on CentOS 7 run systemctl stop firewalld.
- On every host, open /etc/selinux/config and set SELINUX=disabled.
- Retry the operation above and the problem should be gone. To keep the firewall from coming back after a reboot, also run systemctl disable firewalld.service (a combined sketch follows this list).
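A minimal sketch of the whole sequence on CentOS 7, to be run on every node (setenforce 0 only lasts until the next reboot; the sed edit makes the SELinux change persistent):

systemctl stop firewalld                                         # stop the firewall now
systemctl disable firewalld.service                              # keep it off across reboots
setenforce 0                                                     # put SELinux in permissive mode immediately
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config     # persist the SELinux change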
5. A Hadoop run fails, and the cause turns out to be inconsistent clusterIDs
- Go into /dfs/nn/current and run cat VERSION to check whether the clusterID is identical on every node.
- If not, copy the master node's clusterID into the VERSION file of every slave node whose value differs, save, and the problem is solved (see the sketch below)!
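A sketch of the check and the fix, assuming Cloudera Manager's default data directories (/dfs/nn on the NameNode and /dfs/dn on the DataNodes; adjust to your dfs.namenode.name.dir and dfs.datanode.data.dir settings). CID-xxxx stands for the real value copied from the NameNode's VERSION file:

grep clusterID /dfs/nn/current/VERSION    # on the NameNode: note the value
grep clusterID /dfs/dn/current/VERSION    # on each DataNode: compare
sed -i 's/^clusterID=.*/clusterID=CID-xxxx/' /dfs/dn/current/VERSION    # paste in the NameNode's value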
6. Fixing slow logins to Linux from SecureCRT and similar clients (sometimes accompanied by "The semaphore timeout period has expired")
Edit the sshd_config file: vi /etc/ssh/sshd_config
Add the lines below to /etc/ssh/sshd_config, save and exit, then restart the service with service sshd restart (or simply reboot).
The two symptoms are similar, so both settings are applied together here to keep either problem from recurring.
UseDNS no
ClientAliveInterval 60
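To confirm the settings took effect after the restart, OpenSSH can print its effective configuration (run as root):

sshd -T | grep -iE 'usedns|clientaliveinterval'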

7. ERROR 1045 (28000): Access denied for user 'root'@'localhost' when logging in to MySQL 5.7
- Open the MySQL configuration file with vi /etc/my.cnf, add the line skip-grant-tables under the [mysqld] section, then save and exit.
- Restart the MySQL service with service mysql restart (on some installations the service is named mysqld instead).
- Back in the terminal, run mysql -u root -p and just press Enter at the password prompt; with the grant tables skipped you are logged in without a password. Then reset the root password:
update mysql.user set authentication_string=password('123456') where user='root';
flush privileges;
- Go back into the configuration file, delete the skip-grant-tables line, and restart MySQL.
- If this still does not work, see https://www.cnblogs.com/yanqr/p/9753445.html. A consolidated sketch of the whole procedure follows.
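For reference, the whole reset in one sketch (authentication_string and password() are the MySQL 5.7 forms; '123456' is just the example password used above):

# 1) temporarily bypass authentication
vi /etc/my.cnf                   # add skip-grant-tables under [mysqld]
service mysql restart
# 2) log in without a password and reset it
mysql -u root -p                 # press Enter at the password prompt
#    in the mysql shell:
#      update mysql.user set authentication_string=password('123456') where user='root';
#      flush privileges;
#      exit
# 3) re-enable authentication
vi /etc/my.cnf                   # remove skip-grant-tables
service mysql restart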

