Hadoop Learning Notes: Exceptions


This post records the problems I ran into while learning Hadoop, along with how I solved them.

1.

copyFromLocal: java.io.IOException: File /user/hadoop/slaves could only be replicated to 0 nodes, instead of 1
14/06/09 13:45:00 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/slaves : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/slaves could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

This error came up while I was uploading a file to HDFS in pseudo-distributed mode. I first checked the replication value in my hdfs-site.xml and confirmed it was not misconfigured. The message itself means the NameNode could not find a single live DataNode on which to place the block.

Solution: reformat the file system:

hadoop namenode -format
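
In my case a fresh format was enough. If the error persists, a common cause is a DataNode that refuses to start because its storage directory still holds the namespaceID of a previous format. A minimal recovery sequence for a pseudo-distributed Hadoop 1.x setup might look like the following sketch (the data directory path is an assumption based on the default hadoop.tmp.dir for user "hadoop"; check dfs.data.dir in your own config):

# stop all daemons before touching storage
stop-all.sh

# clear stale DataNode storage so it can re-register after the format
# (default path is an assumption -- adjust to your dfs.data.dir)
rm -rf /tmp/hadoop-hadoop/dfs/data

# reformat the NameNode and bring everything back up
hadoop namenode -format
start-all.sh

# confirm at least one live DataNode before retrying the upload
hadoop dfsadmin -report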

 

2.

[hadoop@localhost logs]$ hadoop fs -ls
ls: Cannot access .: No such file or directory.

This error appears when listing the file system. With no path argument, hadoop fs -ls lists the current user's home directory (/user/hadoop), and since nothing has been written to HDFS yet, that directory does not exist.

Solution: create a new directory or upload a file, as in the sketch below.
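
For example (the file and directory names are just illustrations):

# create the home directory that -ls looks for
hadoop fs -mkdir /user/hadoop

# or upload a local file, which also creates the directory if needed
hadoop fs -put slaves /user/hadoop/

# listing now succeeds
hadoop fs -ls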

3.

Exception in thread "main" org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=dvqfq6prcjdsh4p\hadoop, access=WRITE, inode="hadoop":hadoop:supergroup:rwxr-xr-x
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2710)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:492)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:195)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:384)
    at com.hadoop.hdfs.test.FileCopyWithProgess.main(FileCopyWithProgess.java:27)

This is a permission error that appears when writing a file to HDFS from a local Eclipse project: the client runs as the local Windows user (dvqfq6prcjdsh4p\hadoop), which does not match the owner of the target HDFS directory (hadoop:supergroup, mode rwxr-xr-x), so the WRITE check fails.

Solution: add the following to hdfs-site.xml to turn off permission checking (acceptable for a local development setup, but not for anything shared):

<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>
    If "true", enable permission checking in HDFS.
    If "false", permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>
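
For context, the stack trace names the client class com.hadoop.hdfs.test.FileCopyWithProgess. The original source is not shown here, so the Java sketch below is my reconstruction of that kind of program, i.e. a plain upload using FileSystem.create(Path, Progressable), which is the call that fails at FileCopyWithProgess.java:27 in the trace:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileCopyWithProgess {
    public static void main(String[] args) throws Exception {
        String localSrc = args[0];   // local file to upload
        String dst = args[1];        // HDFS destination, e.g. hdfs://localhost:9000/user/hadoop/slaves

        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);

        // This create() call throws AccessControlException when the
        // local user has no WRITE access to the target HDFS directory.
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print(".");   // invoked periodically as bytes are written
            }
        });

        IOUtils.copyBytes(in, out, 4096, true);
    }
}

A less drastic alternative to disabling permissions is to grant the client user access to the target directory, e.g. hadoop fs -chmod 777 /user/hadoop, or to run the client under the same user name that owns the directory in HDFS.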

 

 

