While writing files to HDFS recently I ran into a problem: writing via create works, but writing via append fails, and it reproduces every time. Example code:
FileSystem fs = FileSystem.get(conf);
OutputStream out = fs.create(file);
IOUtils.copyBytes(in, out, 4096, true);  // works

out = fs.append(file);
IOUtils.copyBytes(in, out, 4096, true);  // fails
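For reference, a self-contained version of the repro (a minimal sketch; the path and payload are placeholders, and a fresh input stream is used for each copy because copyBytes with close=true closes both streams):

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class AppendRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/append-test");  // hypothetical test path

        // First write: create succeeds.
        InputStream in = new ByteArrayInputStream("hello\n".getBytes("UTF-8"));
        OutputStream out = fs.create(file);
        IOUtils.copyBytes(in, out, 4096, true);  // closes both streams

        // Second write: append fails on the under-replicated file.
        in = new ByteArrayInputStream("world\n".getBytes("UTF-8"));
        out = fs.append(file);
        IOUtils.copyBytes(in, out, 4096, true);  // throws IOException here
    }
}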
Running hdfs fsck against the problematic file showed it has only one replica. Could that be the cause?
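Besides hdfs fsck, the same information can be pulled from the client API. A minimal sketch (printReplication is a hypothetical helper name): getReplication() returns only the target replication factor stored in the file's metadata, while getFileBlockLocations() reveals how many DataNodes actually hold each block:

import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Print the target replication factor and the actual replica count per block.
static void printReplication(FileSystem fs, Path file) throws IOException {
    FileStatus st = fs.getFileStatus(file);
    System.out.println("target replication = " + st.getReplication());
    for (BlockLocation b : fs.getFileBlockLocations(st, 0, st.getLen())) {
        // getHosts() lists the DataNodes that actually hold this block
        System.out.println(b + " -> " + b.getHosts().length + " replica(s)");
    }
}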
Let's trace how FileSystem.append executes:
org.apache.hadoop.fs.FileSystem
public abstract FSDataOutputStream append(Path f, int bufferSize, Progressable progress) throws IOException;
The implementation lives here:
org.apache.hadoop.hdfs.DistributedFileSystem
public FSDataOutputStream append(Path f, final int bufferSize,
    final Progressable progress) throws IOException {
  this.statistics.incrementWriteOps(1);
  Path absF = this.fixRelativePart(f);
  return (FSDataOutputStream)(new FileSystemLinkResolver<FSDataOutputStream>() {
    public FSDataOutputStream doCall(Path p)
        throws IOException, UnresolvedLinkException {
      return DistributedFileSystem.this.dfs.append(
          DistributedFileSystem.this.getPathName(p), bufferSize, progress,
          DistributedFileSystem.this.statistics);
    }
    public FSDataOutputStream next(FileSystem fs, Path p) throws IOException {
      return fs.append(p, bufferSize);
    }
  }).resolve(this, absF);
}
This in turn calls DFSClient.append:
org.apache.hadoop.hdfs.DFSClient
private DFSOutputStream append(String src, int buffersize, Progressable progress)
    throws IOException {
  this.checkOpen();
  DFSOutputStream result = this.callAppend(src, buffersize, progress);
  this.beginFileLease(result.getFileId(), result);
  return result;
}

private DFSOutputStream callAppend(String src, int buffersize, Progressable progress)
    throws IOException {
  LocatedBlock lastBlock = null;
  try {
    lastBlock = this.namenode.append(src, this.clientName);
  } catch (RemoteException var6) {
    throw var6.unwrapRemoteException(new Class[]{
        AccessControlException.class, FileNotFoundException.class,
        SafeModeException.class, DSQuotaExceededException.class,
        UnsupportedOperationException.class, UnresolvedPathException.class,
        SnapshotAccessControlException.class});
  }
  HdfsFileStatus newStat = this.getFileInfo(src);
  return DFSOutputStream.newStreamForAppend(this, src, buffersize, progress,
      lastBlock, newStat, this.dfsClientConf.createChecksum());
}
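A side note on error propagation: an exception thrown on the NameNode comes back through this RPC layer as a RemoteException and surfaces to the caller of fs.append as an IOException. A minimal sketch of the client-side view (the exact message will appear below, thrown in FSNamesystem.appendFileInternal):

import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// A sketch of what the client sees when the NameNode rejects the append.
static void tryAppend(FileSystem fs, Path file) {
    try {
        OutputStream out = fs.append(file);
        out.close();
    } catch (IOException e) {
        // In our failing case the message ends with
        // "... is not sufficiently replicated yet."
        System.err.println("append rejected: " + e.getMessage());
    }
}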
DFSClient.append ultimately reaches the NameNode over RPC, landing in NameNodeRpcServer.append:
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer
public LocatedBlock append(String src, String clientName) throws IOException {
  this.checkNNStartup();
  String clientMachine = getClientMachine();
  if (stateChangeLog.isDebugEnabled()) {
    stateChangeLog.debug("*DIR* NameNode.append: file " + src
        + " for " + clientName + " at " + clientMachine);
  }
  this.namesystem.checkOperation(OperationCategory.WRITE);
  LocatedBlock info = this.namesystem.appendFile(src, clientName, clientMachine);
  this.metrics.incrFilesAppended();
  return info;
}
This calls into FSNamesystem.appendFile:
org.apache.hadoop.hdfs.server.namenode.FSNamesystem
LocatedBlock appendFile(String src, String holder, String clientMachine)
    throws AccessControlException, SafeModeException, ... {
  ...
  lb = this.appendFileInt(src, holder, clientMachine, cacheEntry != null);
  ...
}

private LocatedBlock appendFileInt(String srcArg, String holder,
    String clientMachine, boolean logRetryCache) throws ... {
  ...
  lb = this.appendFileInternal(pc, src, holder, clientMachine, logRetryCache);
  ...
}

private LocatedBlock appendFileInternal(FSPermissionChecker pc, String src,
    String holder, String clientMachine, boolean logRetryCache)
    throws AccessControlException, UnresolvedLinkException,
        FileNotFoundException, IOException {
  assert this.hasWriteLock();
  INodesInPath iip = this.dir.getINodesInPath4Write(src);
  INode inode = iip.getLastINode();
  if (inode != null && inode.isDirectory()) {
    throw new FileAlreadyExistsException("Cannot append to directory " + src
        + "; already exists as a directory.");
  } else {
    if (this.isPermissionEnabled) {
      this.checkPathAccess(pc, src, FsAction.WRITE);
    }
    try {
      if (inode == null) {
        throw new FileNotFoundException("failed to append to non-existent file "
            + src + " for client " + clientMachine);
      } else {
        INodeFile myFile = INodeFile.valueOf(inode, src, true);
        BlockStoragePolicy lpPolicy = this.blockManager.getStoragePolicy("LAZY_PERSIST");
        if (lpPolicy != null && lpPolicy.getId() == myFile.getStoragePolicyID()) {
          throw new UnsupportedOperationException(
              "Cannot append to lazy persist file " + src);
        } else {
          this.recoverLeaseInternal(myFile, src, holder, clientMachine, false);
          myFile = INodeFile.valueOf(this.dir.getINode(src), src, true);
          BlockInfo lastBlock = myFile.getLastBlock();
          // The check that fails in our case: the file's last block must be
          // sufficiently replicated before an append is allowed.
          if (lastBlock != null && lastBlock.isComplete()
              && !this.getBlockManager().isSufficientlyReplicated(lastBlock)) {
            throw new IOException("append: lastBlock=" + lastBlock + " of src="
                + src + " is not sufficiently replicated yet.");
          } else {
            return this.prepareFileForWrite(src, iip, holder, clientMachine,
                true, logRetryCache);
          }
        }
      }
    } catch (IOException var11) {
      NameNode.stateChangeLog.warn("DIR* NameSystem.append: " + var11.getMessage());
      throw var11;
    }
  }
}

// The check itself lives in the BlockManager (reached above via getBlockManager()):
public boolean isSufficientlyReplicated(BlockInfo b) {
  int replication = Math.min(this.minReplication,
      this.getDatanodeManager().getNumLiveDataNodes());
  return this.countNodes(b).liveReplicas() >= replication;
}
So on append, the NameNode first fetches the file's last block and checks whether that block meets the minimum replication requirement; if it does not, the append is rejected with an exception, otherwise the file is prepared for writing.
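Plugging in numbers makes the check concrete. The values below are hypothetical, chosen to mirror the failing case; minReplication comes from dfs.namenode.replication.min (default 1):

// Hypothetical values mirroring the failing case:
int minReplication = 2;  // dfs.namenode.replication.min (assumed raised above the default 1)
int liveDataNodes  = 5;  // DatanodeManager.getNumLiveDataNodes()
int liveReplicas   = 1;  // countNodes(lastBlock).liveReplicas() -- what fsck showed

int required = Math.min(minReplication, liveDataNodes);  // 2
boolean sufficient = liveReplicas >= required;           // false -> append throws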
So the root cause is confirmed: the file has only one replica, and that makes append fail. But why did a newly created file end up with only one replica? That turned out to be a rack awareness misconfiguration; details at https://www.cnblogs.com/barneywill/p/10114504.html
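For completeness, the client API for changing a file's target replication factor is FileSystem.setReplication (a sketch only; re-replication happens asynchronously, and in this incident the real fix was correcting the rack awareness configuration rather than this call):

// Raise the target replication of the file; the NameNode schedules the
// extra replicas asynchronously, so an append retry may still need to wait.
boolean accepted = fs.setReplication(file, (short) 3);
System.out.println("replication change accepted: " + accepted);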