- `hdfs dfs -put` throws an exception when uploading a local file to HDFS

Error log from the DataNode co-located on the same machine as the NameNode:
2015-12-03 09:54:03,083 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:727ms (threshold=300ms)
2015-12-03 09:54:03,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting CheckDiskError Thread
2015-12-03 09:54:03,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:613)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:745)
2015-12-03 09:54:04,050 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Block BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023 unfinalized and removed.
2015-12-03 09:54:04,054 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023 received exception java.io.IOException: No space left on device
2015-12-03 09:54:04,054 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hd1:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.165.114.138:57315 dst: /10.172.153.46:50010
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:613)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:745)
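Before blaming HDFS itself, it is worth confirming at the OS level that the DataNode's volume really is full. A minimal sketch (the `DATA_DIR` default of `/` is an assumption for demonstration; in practice point it at the directory configured as `dfs.datanode.data.dir` in hdfs-site.xml):

```shell
# Show free space on the DataNode's storage volume.
# DATA_DIR is hypothetical here; set it to your dfs.datanode.data.dir.
DATA_DIR=${DATA_DIR:-/}
df -Ph "$DATA_DIR"

# Extract the "Use%" column from POSIX-format df output and strip the '%'.
USED=$(df -P "$DATA_DIR" | awk 'NR==2 {gsub("%","",$5); print $5}')
echo "used: ${USED}%"
```

A cluster-wide view of remaining DFS capacity is also available via `hdfs dfsadmin -report`, which lists per-DataNode used and remaining space.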
Error log from the other DataNode machine:
2015-12-03 17:54:04,111 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.172.218.18, datanodeUuid=7c882efa-f159-4477-a322-30cf55c84598, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-183048c9-89b2-44b4-a224-21f04d2a8065;nsid=275180848;c=0):Failed to transfer BP-254367353-10.172.153.46-1448878000030:blk_1073741850_1026 to 10.172.153.46:50010 got
java.net.SocketException: Original Exception : java.io.IOException: Connection reset by peer
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:433)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:565)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2017)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Connection reset by peer
... 8 more
2015-12-03 17:54:04,146 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting CheckDiskError Thread
2015-12-03 17:57:39,288 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-254367353-10.172.153.46-1448878000030:blk_1073741850_1026
The logs show that the device has run out of space (`java.io.IOException: No space left on device`). The server's disk is small, so the only remedy was to delete some junk data to free up space.
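To decide what junk data to delete first, it helps to list the largest files on the full volume. A minimal sketch using GNU `find` (the `TARGET_DIR` default of `/tmp` is an assumption for demonstration; in practice point it at the DataNode's log or data volume):

```shell
# List the 10 largest files under TARGET_DIR, biggest first.
# '%s %p\n' prints each file's size in bytes followed by its path.
TARGET_DIR=${TARGET_DIR:-/tmp}
find "$TARGET_DIR" -type f -printf '%s %p\n' 2>/dev/null \
  | sort -rn | head -n 10
```

If the junk lives inside HDFS rather than on the local filesystem, remember that `hdfs dfs -rm` only moves files to the trash by default; run `hdfs dfs -expunge` (or delete with `-skipTrash`) to actually reclaim the blocks.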
