Handling an incident: the HDFS standby NameNode fails to start (Hadoop)


The standby NameNode fails to start

Symptom: our production cluster runs Hadoop 2.5.0-cdh5.3.2 with NameNode HA enabled, using the QJM (Quorum Journal Manager) approach. The standby NameNode went down, and restarting it kept failing.

1. Symptoms when the standby NameNode failed

**Symptom 1:**
The standby NameNode went down and could not be restarted. hadoop-hdfs-namenode-hostname.log showed the following exception: `org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation ReassignLeaseOp`

```
2016-12-02 04:00:50,567 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.10.110.110:50010 is added to blk_1189739479_116038994{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5830f95b-cb53-4de4-977c-96b3eef06603:NORMAL|FINALIZED], ReplicaUnderConstruction[[DISK]DS-cf11c8ea-95b7-43f7-979f-a4a78c612b61:NORMAL|FINALIZED], ReplicaUnderConstruction[[DISK]DS-9e9b856f-6364-44b7-bc5a-b277423a6eda:NORMAL|FINALIZED], ReplicaUnderConstruction[[DISK]DS-dafd3788-bf7b-4512-8839-154a718a4297:NORMAL|FINALIZED]]} size 0
2016-12-02 04:00:50,568 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation ReassignLeaseOp [leaseHolder=DFSClient_NONMAPREDUCE_-1432667135_1, path=/home/baoxin/hive/AccessToken/Token/20161202/2016120202, newHolder=HDFS_NameNode, opCode=OP_REASSIGN_LEASE, txid=746502271]
java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:648)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:228)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:820)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:801)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:232)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:331)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-12-02 04:00:50,607 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lei.chen (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2016-12-02 04:00:50,607 INFO org.apache.hadoop.ipc.Server: IPC Server handler 23 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getEZForPath from 10.10.110.152:40258 Call#0 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2016-12-02 04:00:50,722 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lei.chen (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category WRITE is not supported in state standby
2016-12-02 04:00:50,723 INFO org.apache.hadoop.ipc.Server: IPC Server handler 55 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 10.10.110.152:40258 Call#1 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category WRITE is not supported in state standby
2016-12-02 04:00:51,050 FATAL org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unknown error encountered while tailing edits. Shutting down standby NN.
java.io.IOException: java.lang.IllegalStateException
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:820)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:801)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:232)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:331)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
Caused by: java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:648)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:228)
    ... 9 more
2016-12-02 04:00:51,054 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-12-02 04:00:51,057 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at db-nn1.inc.mtime.com/1xx.1xx.11x.10x
************************************************************/

```

The log above shows that the standby NameNode failed to restart because it hit an exception while replaying the edit-log operation for /home/baoxin/hive/AccessToken/Token/20161202/2016120202.
We downloaded that file from HDFS and saw that its last line was a truncated Hive record, meaning the HDFS file itself was corrupted.
So we deleted the file with the hadoop CLI and restarted the standby NameNode, but the problem persisted: the exception still occurred while loading the edit-log entry for /home/baoxin/hive/AccessToken/Token/20161202/2016120202. In addition, `hadoop fsck -blocks` now reported an abnormal block named "null".
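A minimal sketch of those checks, assuming a client with HDFS access (the local target path is arbitrary):

```bash
# Fetch the suspect file and inspect its tail for the truncated Hive record
hadoop fs -get /home/baoxin/hive/AccessToken/Token/20161202/2016120202 /tmp/2016120202
tail -n 2 /tmp/2016120202

# Scan HDFS for block-level problems
hadoop fsck / -blocks | grep -iE "corrupt|missing|null"
```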

**Symptom 2:**
Hive SQL jobs failed with `org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): java.lang.ArrayIndexOutOfBoundsException`.

Both Hive jobs and HBase read/write jobs threw this ArrayIndexOutOfBoundsException, and Hive's MapReduce jobs could not even be submitted to the cluster, while plain MapReduce programs such as wordcount (submitted with `hadoop jar`) still ran fine.

```
hive> select count(*) from ECommerce.GoodsSubOrder; 
Total jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks determined at compile time: 1 
In order to change the average load for a reducer (in bytes): 
set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
set mapreduce.job.reduces=<number> 
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): java.lang.ArrayIndexOutOfBoundsException 

at org.apache.hadoop.ipc.Client.call(Client.java:1411) 
at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
at com.sun.proxy.$Proxy15.delete(Unknown Source) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:513) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:483) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
at com.sun.proxy.$Proxy16.delete(Unknown Source) 
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1853) 
at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:599) 
at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:595) 
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:595) 
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:539) 
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295) 
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292) 
at java.security.AccessController.doPrivileged(Native Method) 

```

2. Handling the failure

1. Delete the offending file from HDFS (after backing it up first). The standby NN still failed to start, with the same error.

```
hadoop fs -rm /home/baoxin/hive/AccessToken/Token/20161202/2016120202
```
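For the backup, copying inside HDFS (or pulling a local copy with `-get`) is enough; a sketch with an arbitrary backup directory:

```bash
# Keep a copy of the corrupted file before removing it
hadoop fs -mkdir -p /backup/20161202
hadoop fs -cp /home/baoxin/hive/AccessToken/Token/20161202/2016120202 /backup/20161202/
```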

2. Suspect a problem with the standby NameNode's edit log

Thinking the problem might be limited to the standby's local fsimage or edits files, we ran the following on the standby:

```
hdfs namenode -bootstrapStandby  ## fetches the latest fsimage from the active NameNode and
                                 ## downloads and replays the newer edits from the JournalNodes
```

Replaying the edits hit exactly the same error as before.

Because the NameNode replays the edit log on every startup, we tried to skip that step altogether: put the active NameNode into safe mode, run saveNamespace to dump a fresh fsimage (with the latest txid), and copy the dump over to the standby NN, so that a restart would no longer need to replay the edit log.

```
hadoop dfsadmin -safemode enter
hadoop dfsadmin -saveNamespace   ## dump the NameNode's namespace to a new fsimage
hadoop dfsadmin -safemode leave
```

But the NameNode refused to enter safe mode, failing with a connection error (it could not reach the standby service):

```
safemode: Call From db-nn1.inc.mtime.com/10.10.110.100 to db-nn1.inc.mtime.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
```

So we checked the dfs.namenode.name.dir property in etc/hadoop/hdfs-site.xml to locate the NN metadata directory:

```xml
 <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/apache-hadoop/hadoop/var/dfs/nn</value>
 </property>
```

We copied the fsimage and edit-log files from the active NN's "${dfs.namenode.name.dir}/current" directory straight to the standby and started it: same error again.
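A sketch of that copy, assuming passwordless SSH and the standby NameNode stopped first (the hostname `standby-nn` is a placeholder):

```bash
# On the active NN: push the metadata directory to the standby
# (path taken from dfs.namenode.name.dir above)
scp -r /home/hadoop/apache-hadoop/hadoop/var/dfs/nn/current \
    standby-nn:/home/hadoop/apache-hadoop/hadoop/var/dfs/nn/
```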

3. Repair the edit log by hand, following the approach in http://heylinux.com/archives/3398.html.

Before repairing the edit log, put the active NameNode into safe mode, or stop it entirely, so that no new NameNode operations update the edit-log files while you work.
Enter safe mode:

```
hadoop dfsadmin -safemode enter  ## put the active NameNode into safe mode, or simply stop it
#hadoop dfsadmin -safemode leave
```

Convert the edit-log segment into XML:

```
hdfs oev -i edits_0000000000075549590-0000000000075551044 -o edits_0000000000075549590-0000000000075551044.xml
```

**Open edits_0000000000075549590-0000000000075551044.xml in vi** and search the XML for the record corresponding to the failing operation:

```
[leaseHolder=DFSClient_NONMAPREDUCE_-1432667135_1, path=/home/baoxin/hive/AccessToken/Token/20161202/2016120202, newHolder=HDFS_NameNode, opCode=OP_REASSIGN_LEASE, txid=746502271]
```

and delete that record.
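Locating the record is quicker with grep than paging through the XML by hand; a sketch:

```bash
# Find the failing transaction and any lease-reassignment records in the dump
grep -n "746502271" edits_0000000000075549590-0000000000075551044.xml
grep -n "OP_REASSIGN_LEASE" edits_0000000000075549590-0000000000075551044.xml
```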

Convert the XML back into the binary edit-log format:

```
hdfs oev -i edits_0000000000075549590-0000000000075551044.xml -o edits_0000000000075549590-0000000000075551044_new -p binary
```
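Distributing the repaired segment to the JournalNodes (the next step below) is a plain file copy; a sketch, assuming three JournalNodes and a journal directory of /data/journal/mycluster/current, both placeholders (check dfs.journalnode.edits.dir for the real path):

```bash
# Replace the bad segment on every JournalNode, keeping the original as a backup
for jn in jn1 jn2 jn3; do
  ssh "$jn" "cp /data/journal/mycluster/current/edits_0000000000075549590-0000000000075551044{,.bak}"
  scp edits_0000000000075549590-0000000000075551044_new \
      "$jn:/data/journal/mycluster/current/edits_0000000000075549590-0000000000075551044"
done
```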

We then synced the newly generated binary file to the JournalNodes and restarted the standby NameNode, which now failed with a different error:

```
2016-12-02 15:39:20,933 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: There appears to be a gap in the edit log.  We expected txid 746515168, but got txid 746515169.
    at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:209)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:820)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:678)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1006)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:736)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:531)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:587)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
2016-12-02 15:39:20,937 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@db-nn1.inc.mtime.com:50070
2016-12-02 15:39:21,037 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...

```

Edit-log txids must be contiguous. By deleting the bad operation outright we introduced a gap in the txid sequence, so when the NameNode replayed the edit log at startup it aborted on the missing txid.
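A quick way to check that a rewritten XML dump still has a contiguous txid sequence:

```bash
# Print any gap in the <TXID> sequence of the XML dump
grep -o "<TXID>[0-9]\+</TXID>" edits_0000000000075549590-0000000000075551044.xml \
  | grep -o "[0-9]\+" \
  | awk 'NR > 1 && $1 != prev + 1 { print "gap before txid", $1 } { prev = $1 }'
```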

So the bad operation must not simply be deleted. Instead, replace it with an innocuous one, for example a permission change on an existing file (opCode=OP_SET_PERMISSIONS), keeping the original txid (746502271 here) so that the edits file stays complete and contiguous.
After rewriting the record, we converted the XML back to binary, synced it to the JournalNodes, and restarted the standby NameNode. The previously failing record no longer caused an error, but the next edit-log record failed with the same kind of exception:

```
2016-12-02 15:21:35,184 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation ReassignLeaseOp [leaseHolder=HDFS_NameNode, path=/home/baoxin/hive/AccessToken/Token/20161202/2016120202, newHolder=HDFS_NameNode, opCode=OP_REASSIGN_LEASE, txid=746544405]
```

We repeated the procedure and found that many more operation records failed the same way, even though the HDFS files they referred to were intact. So we wrote a small tool that rewrites every operation record after the point of failure as an opCode=OP_SET_PERMISSIONS operation, keeping each record's txid.
The edit-log rewrite tool (it takes the input XML dump and the output XML path as its two arguments):

```java
package temp;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.Map;

/**
 * Rewrites every <RECORD> in an edit-log XML dump (produced by `hdfs oev`)
 * as a harmless OP_SET_PERMISSIONS operation, keeping each record's original
 * <TXID> line so that the transaction-id sequence stays contiguous.
 *
 * args[0] : input edit-log XML file
 * args[1] : output edit-log XML file
 */
public class ReplaceXML {

    public static void main(String[] args) throws Exception {
        String recordTemplate =
                "<RECORD>\n" +
                "    <OPCODE>OP_SET_PERMISSIONS</OPCODE>\n" +
                "    <DATA>\n" +
                "      #TXID#\n" +
                "      <SRC>/home/baoxin/hive/AccessToken/Token/20161202/2016120202</SRC>\n" +
                "      <MODE>504</MODE>\n" +
                "    </DATA>\n" +
                "</RECORD>";

        String header = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
                "<EDITS>\n" +
                "  <EDITS_VERSION>-59</EDITS_VERSION>\n";
        String tail = "</EDITS>";

        BufferedReader reader = null;
        PrintWriter writer = null;
        try {
            reader = new BufferedReader(new FileReader(args[0]));
            writer = new PrintWriter(new BufferedWriter(
                    new OutputStreamWriter(new FileOutputStream(args[1]), "UTF-8")));
            writer.write(header);
            // Read one <RECORD>...</RECORD> element at a time; null means end of file.
            Map<String, String> info;
            while ((info = readRecord(reader)) != null) {
                // Keep the record's original <TXID> line, replace everything else.
                writer.write(recordTemplate.replace("#TXID#", info.get("TXID").trim()) + "\n");
            }
            writer.write(tail);
        } finally {
            if (reader != null) {
                try { reader.close(); } catch (IOException ignored) { }
            }
            if (writer != null) {
                writer.close();
            }
        }
    }

    /**
     * Reads up to the next closing </RECORD> tag and returns a map holding the
     * record's <TXID> line; returns null when the end of the file is reached.
     */
    public static Map<String, String> readRecord(BufferedReader reader) throws IOException {
        Map<String, String> map = new HashMap<String, String>();
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.trim().equals("</RECORD>")) {
                return map;
            } else if (line.trim().contains("<TXID>")) {
                map.put("TXID", line);
            }
        }
        return null;
    }
}
```

After making these changes, we restarted the NameNode and it came up successfully.
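A sketch of how the tool fits into the workflow (standard javac/java invocation; file names from this incident):

```bash
# Compile, rewrite the XML dump, then convert it back to binary
javac temp/ReplaceXML.java
java temp.ReplaceXML \
    edits_0000000000075549590-0000000000075551044.xml \
    edits_0000000000075549590-0000000000075551044_fixed.xml
hdfs oev -i edits_0000000000075549590-0000000000075551044_fixed.xml \
    -o edits_0000000000075549590-0000000000075551044_new -p binary
```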

Afterwards, we manually left safe mode and deleted the corrupt blocks:

```
hadoop dfsadmin -safemode leave
hadoop fsck / -delete    ## remove the corrupt blocks reported earlier
```

Because the DataNodes had not been restarted after the NameNode came back up, submitting Hive jobs failed; restarting the DataNodes fixed it. The error was:

```
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1380)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1302)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
Job Submission failed with exception 'java.io.EOFException(Premature EOF: no length prefix available)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
```
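A sketch of the DataNode restart, assuming CDH 5.x package init scripts and a file listing the worker hosts (both assumptions):

```bash
# Restart the DataNode service on every worker node
while read -r host; do
  ssh "$host" "sudo service hadoop-hdfs-datanode restart"
done < datanodes.txt
```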

At this point the standby NameNode startup failure was resolved and the cluster was back to normal. Our root-cause analysis: the cluster layout was fairly simple and the load on it was heavy, which caused frequent restarts across the HDFS cluster; those restarts corrupted HDFS files and, with them, the NameNode's edit log.

### 3. Other possible fixes

*1. Under normal circumstances the saveNamespace operation is available: dump the namespace, then replace the standby NameNode's metadata with the dumped files.*
```
hadoop dfsadmin -saveNamespace  ## dump the NameNode's fsimage
```
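The full sequence would look roughly like this (the copy uses the dfs.namenode.name.dir path from above; `standby-nn` is a placeholder):

```bash
hadoop dfsadmin -safemode enter    # saveNamespace requires safe mode
hadoop dfsadmin -saveNamespace
scp -r /home/hadoop/apache-hadoop/hadoop/var/dfs/nn/current standby-nn:/home/hadoop/apache-hadoop/hadoop/var/dfs/nn/
hadoop dfsadmin -safemode leave    # then restart the standby NameNode
```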

*2. Use HDFS recovery mode*

HDFS recovery mode is essentially a special way of starting the NameNode, selected by a command-line flag. Running `hdfs namenode -help` lists the available flags:
```
$ hdfs namenode -help
Usage: java NameNode [-backup] | 
    [-checkpoint] | 
    ...
    [-bootstrapStandby] | 
    [-recover [ -force] ] | // starts the NameNode in recovery mode; with -force, every prompt automatically takes the first choice
    [-metadataVersion ]  ]
```

Unlike starting the NameNode through the usual scripts, this mode runs it in the foreground, so you can watch the startup and recovery process directly.
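A usage sketch (stop the NameNode process first and run as the HDFS superuser):

```bash
# Interactive recovery: prompts at each problem found in the metadata
hdfs namenode -recover

# Non-interactive: automatically take the first choice at every prompt
hdfs namenode -recover -force
```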




### References

[HDFS High Availability Using the Quorum Journal Manager (official QJM documentation)](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html)

[[HADOOP] Standby NN fails to start](https://www.thinksaas.cn/group/topic/399536/)

[Hadoop ops notes: NameNode cannot start after an abnormal stop](http://ju.outofmemory.cn/entry/98888)

[HDFS recovery mode](http://blog.csdn.net/androidlushangderen/article/details/52679606)

