Pitfalls when starting Hadoop: start-all.sh errors


1. If the Linux distribution you are using is CentOS, here is a pitfall:

  It will complain that JAVA_HOME cannot be found. Fix it by editing the configuration file:

Edit the Hadoop configuration file and set the JAVA_HOME environment variable by hand:
    [${hadoop_home}/etc/hadoop/hadoop-env.sh]
    ...
    export JAVA_HOME=/soft/jdk
    ...

  This is a well-known CentOS pitfall: JAVA_HOME must be set explicitly here.
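The edit above can also be scripted. This is a sketch assuming the paths used in this post (/soft/jdk for the JDK, /soft/hadoop for Hadoop); adjust both to your environment:

```shell
# Hypothetical paths from this post; change them to match your install.
HADOOP_ENV="/soft/hadoop/etc/hadoop/hadoop-env.sh"

# Replace the dynamic ${JAVA_HOME} lookup with a hard-coded absolute
# path, which is what CentOS requires here.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/soft/jdk|' "$HADOOP_ENV"

# Confirm the change took effect.
grep '^export JAVA_HOME=' "$HADOOP_ENV"
```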

 

2. No NameNode process after startup

If start-all.sh appears to finish normally, but jps then shows no NameNode among the running processes, don't panic.

  Solution here: https://www.cnblogs.com/dongxiucai/p/9636177.html
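Before following the link, a quick check confirms the symptom. This is only a diagnostic sketch (the actual fix depends on what the NameNode log says; the path assumes $HADOOP_HOME is set and the default log directory):

```shell
# Check whether a NameNode process is running at all.
jps | grep -q NameNode || echo "NameNode is not running"

# The first place to look is the NameNode log on the master node;
# the failure reason is almost always printed there.
tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-namenode-*.log
```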

3. Be sure to set up passwordless SSH login

Configure SSH
        1) Check that the SSH packages are installed (openssh-server + openssh-clients + openssh)
            $>yum list installed | grep ssh

        2) Check that the sshd daemon is running
            $>ps -Af | grep sshd

        3) Generate a public/private key pair on the client side.
            $>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

        4) This creates the ~/.ssh directory, containing id_rsa (private key) + id_rsa.pub (public key)

        5) Append the public key to the ~/.ssh/authorized_keys file (the file name and location are fixed)
            $>cd ~/.ssh
            $>cat id_rsa.pub >> authorized_keys

        6) Change the permissions of authorized_keys to 644.
            $>chmod 644 authorized_keys

        7) Test
            $>ssh localhost
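Steps 3) through 6) above can be run as one short script. This is a sketch that assumes no key pair exists yet under ~/.ssh:

```shell
# Generate an RSA key pair with an empty passphrase (no prompt on login).
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# Append the public key to authorized_keys; the file name and its
# location under ~/.ssh are fixed by sshd.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# sshd rejects keys if authorized_keys is group- or world-writable.
chmod 644 ~/.ssh/authorized_keys

# Verify: this should log in without asking for a password.
ssh localhost true
```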

4. The following error is reported:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/01/23 20:23:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) Client VM warning: You have loaded library /hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
hd-m1]
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'

  This happens when the bitness of your operating system, JDK, and Hadoop do not match: some are 32-bit and some are 64-bit.

  How to check the bitness: https://www.cnblogs.com/dongxiucai/p/9637403.html
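The bitness of each component can also be checked directly with `file` and `uname` (the library path below assumes $HADOOP_HOME is set and the default native-library layout):

```shell
# Bitness of the OS: x86_64 means 64-bit, i686/i386 means 32-bit.
uname -m

# Bitness of the bundled native Hadoop library: the output contains
# "ELF 32-bit" or "ELF 64-bit".
file "$HADOOP_HOME"/lib/native/libhadoop.so.1.0.0

# Bitness of the JDK: a 64-bit JVM prints "64-Bit Server VM" here.
java -version
```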

  The usual fix is:

Set the environment variables correctly:
    add the following to /etc/profile

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"

then reload it with   source /etc/profile

and append the same two lines to the end of the hadoop-env.sh file.

  This solves the problem in most cases.

  If it still fails, you will need to install versions whose bitness actually matches.
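Appending those same two lines to hadoop-env.sh can be done in one step with a heredoc (a sketch, assuming $HADOOP_HOME points at your install):

```shell
# Quoting 'EOF' keeps $HADOOP_HOME literal in the file, so it is
# expanded when Hadoop runs, not when this snippet runs.
cat >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh" <<'EOF'
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
EOF
```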

5. The following error is reported:

  mkdir: cannot create directory ‘/soft/hadoop-2.7.3/logs’: Permission denied

  The logs directory cannot be created for lack of permission: the /soft directory is owned by root, and ownership needs to be transferred to the hadoop user.

  Note: here hadoop is the user name and /soft is the installation directory; adjust both to your own setup.

  Solution:

1. First switch to the root user
    su root
2. Change the ownership of the /soft directory, remembering to do it recursively
    chown -R hadoop:hadoop /soft   # -R applies the change recursively
3. Verify the result
    drwxr-xr-x. 3 hadoop hadoop 4096 Aug 11 06:13 hadoop
    drwxr-xr-x. 3 hadoop hadoop 4096 Aug 11 06:20 jdk
    Ownership changed successfully.

6. After formatting the NameNode, starting HDFS reports:

 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
py_1: starting namenode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-namenode-hjt-virtual-machine.out
py_3: starting namenode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-namenode-ubuntu.out
py_2: starting namenode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-namenode-cyrg.out
py_1: starting datanode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-datanode-hjt-virtual-machine.out
py_3: starting datanode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-datanode-ubuntu.out
py_2: starting datanode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-datanode-cyrg.out

  Notice that a NameNode was started on all three machines, while "Starting namenodes on []" lists no host at all. Repeatedly checking every configuration file turned up no mistakes.

  The real culprit is visible in the machine names: py_1, py_2, py_3. The hostnames contain underscores, which is not allowed; keep that in mind. Renaming them to py01, py02, py03 solved it completely.

  If this helped, please recommend the post so more people can see it.
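Underscores are not valid in DNS hostnames (RFC 952 / RFC 1123), which is why Hadoop's startup scripts choke on them. A quick check over your node names (py_1 etc. are the example names from this post; substitute your own):

```shell
# Flag any node name containing an underscore.
for h in py_1 py_2 py_3; do
    case "$h" in
        *_*) echo "invalid hostname: $h (contains an underscore)" ;;
        *)   echo "ok: $h" ;;
    esac
done
```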

