Fix for "Automatic failover is enabled for NameNode at bigdata-pro02.kfk.com/192.168.80.152:8020 / Refusing to manually manage HA state" when running bin/hdfs haadmin -transitionToActive nn1 (illustrated walkthrough)


 

 

  Without further ado, straight to the point!

 

 

  First, your gut reaction is probably to reach for the fix described in this earlier post:

The most detailed fix on the web: both NameNodes are standby after starting a Hadoop HA cluster (illustrated walkthrough)

    I won't rehash nn1 here; it is simple enough, and you can read that post yourself.

    In short, "nn" refers to the NameNodes: my bigdata-pro01.kfk.com is nn1, and my bigdata-pro02.kfk.com is nn2.

     The error comes from the HA configuration: with automatic failover enabled, the manual transition is refused.

[kfk@bigdata-pro02 hadoop-2.6.0]$ bin/hdfs haadmin -transitionToActive nn1
Automatic failover is enabled for NameNode at bigdata-pro02.kfk.com/192.168.80.152:8020
Refusing to manually manage HA state, since it may cause
a split-brain scenario or other incorrect state. If you are very sure you know what you are doing, please specify the forcemanual flag.
[kfk@bigdata-pro02 hadoop-2.6.0]$ 
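
    Before changing anything, it is worth checking which NameNode ZKFC currently considers active. The -getServiceState subcommand (listed in the haadmin help further below) does this; the two states shown here are only illustrative and depend on your cluster:

[kfk@bigdata-pro02 hadoop-2.6.0]$ bin/hdfs haadmin -getServiceState nn1
standby
[kfk@bigdata-pro02 hadoop-2.6.0]$ bin/hdfs haadmin -getServiceState nn2
active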

Solution:

    The refusal happens because ZKFC automatic failover is enabled: ZKFC elects the active NameNode on its own, so the NameNodes can no longer be switched manually.
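
    For reference, automatic failover is controlled by configuration like the following. This is a minimal sketch, assuming the nameservice is called ns and a three-node ZooKeeper quorum on the kfk hosts; adjust the nameservice ID, hosts, and ports to your cluster:

<!-- hdfs-site.xml (nameservice name "ns" is assumed) -->
<property>
  <name>dfs.ha.namenodes.ns</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

<!-- core-site.xml (quorum hosts are assumed) -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>bigdata-pro01.kfk.com:2181,bigdata-pro02.kfk.com:2181,bigdata-pro03.kfk.com:2181</value>
</property>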

  So, restart and test:

  1. First, stop the Hadoop and ZooKeeper processes, as sketched below.
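
    A sketch of the stop step, assuming Hadoop is installed in /opt/modules/hadoop-2.6.0 (as in the logs below); zkServer.sh stop must be run on every ZooKeeper node:

[kfk@bigdata-pro01 hadoop-2.6.0]$ sbin/stop-all.sh
# on each ZooKeeper node; the zookeeper-3.4.5 path is assumed:
[kfk@bigdata-pro01 zookeeper-3.4.5]$ bin/zkServer.sh stop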

 

  2. Start the ZooKeeper processes.
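
    Again on every ZooKeeper node (install path assumed), for example:

# the zookeeper-3.4.5 path is assumed
[kfk@bigdata-pro01 zookeeper-3.4.5]$ bin/zkServer.sh start
[kfk@bigdata-pro01 zookeeper-3.4.5]$ bin/zkServer.sh status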

 

  3. Start the ZKFC process:

[kfk@bigdata-pro01 hadoop-2.6.0]$ pwd
/opt/modules/hadoop-2.6.0
[kfk@bigdata-pro01 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start zkfc 
starting zkfc, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-kfk-zkfc-bigdata-pro01.kfk.com.out
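
    If zkfc came up correctly, jps should list a DFSZKFailoverController process alongside ZooKeeper's QuorumPeerMain; the process IDs below are of course illustrative:

[kfk@bigdata-pro01 hadoop-2.6.0]$ jps
3298 QuorumPeerMain
3412 DFSZKFailoverController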

 

 

 

  4. Go into the sbin directory under the Hadoop installation; the start-dfs.sh script there starts the NameNodes. Note that it must be run on the node configured as the NameNode master.

     Alternatively, simply run sbin/start-all.sh.
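
    For example, on the NameNode master. Once HDFS is up, ZKFC elects one NameNode active on its own, which you can confirm with -getServiceState (which of nn1/nn2 wins the election can vary):

[kfk@bigdata-pro01 hadoop-2.6.0]$ sbin/start-dfs.sh
[kfk@bigdata-pro01 hadoop-2.6.0]$ bin/hdfs haadmin -getServiceState nn1
active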

For reference, here is the built-in help for hdfs:
[kfk@bigdata-pro02 hadoop-2.6.0]$ bin/hdfs -help
Usage: hdfs [--config confdir] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  mover                run a utility to move block replicas across
                       storage types
  oiv                  apply the offline fsimage viewer to an fsimage
  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                        Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache
  crypto               configure HDFS encryption zones
  storagepolicies      get all the existing block storage policies
  version              print the version

Most commands print help when invoked w/o parameters.

And the help for hdfs haadmin:
[kfk@bigdata-pro02 hadoop-2.6.0]$ 
[kfk@bigdata-pro02 hadoop-2.6.0]$ bin/hdfs haadmin -help
Usage: DFSHAAdmin [-ns <nameserviceId>]
    [-transitionToActive <serviceId> [--forceactive]]
    [-transitionToStandby <serviceId>]
    [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
    [-getServiceState <serviceId>]
    [-checkHealth <serviceId>]
    [-help <command>]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

[kfk@bigdata-pro02 hadoop-2.6.0]$ 

 

     Note that the built-in commands already cover the awkward cases: what to run when both NameNodes are standby, and what to run when both are active. I won't belabor that here.
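
     For instance, with automatic failover enabled, a manual switch is normally done as a coordinated failover through ZKFC, using the -failover subcommand shown in the help above; to hand the active role from nn1 to nn2, that would look like this (a sketch, not cluster-specific output):

bin/hdfs haadmin -failover nn1 nn2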
  If the cluster still is not in the right state after that, retry the manual transition:

bin/hdfs haadmin -transitionToActive nn1
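
  Note that the error message itself points at a forcemanual flag. If you are certain you want to override the safeguard and force a manual transition despite automatic failover, accepting the split-brain risk the message warns about, the invocation looks like this; use it only as a last resort:

bin/hdfs haadmin -transitionToActive --forcemanual nn1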
