Installing and Configuring the CDH Version of Hadoop (Pseudo-Distributed Mode): MapReduce and YARN Configuration


Installing Hadoop requires a JDK; JDK 8 is used here.

JDK version: jdk1.8.0_151

Hadoop version: hadoop-2.5.0-cdh5.3.6

Hadoop download: https://pan.baidu.com/s/1qZNeVFm (access code: ciln)

JDK download: https://pan.baidu.com/s/1qZLddl6 (access code: c9w3)

Once everything is in place, start the installation.

1. Upload the Hadoop and JDK packages to the designated directory on the Linux system: /opt/softwares/cdh

2. Extract the Hadoop and JDK packages to the target directory: /opt/modules/cdh/

Extraction commands: tar -zxvf hadoop-2.5.0-cdh5.3.6.tar.gz -C /opt/modules/cdh/

     tar -zxvf jdk-8u151-linux-x64.tar.gz -C /opt/modules/cdh/

 

3. Configure the JDK environment variables

Add the following to /etc/profile:

3.1 sudo vi /etc/profile

==========================================================================

#JAVA_HOME#
export JAVA_HOME=/opt/modules/cdh/jdk1.8.0_151
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

==========================================================================

3.2 source /etc/profile
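
  A quick sanity check that the new variables are visible in the current shell (plain shell commands, nothing Hadoop-specific):

  $ echo $JAVA_HOME    # should print the JDK install path configured above
  $ echo $PATH         # should now include the $JAVA_HOME/bin entry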

 4. Verify that Java was installed successfully

4.1 java -version
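
  If the new JDK is picked up, the output should look roughly like the following (the build numbers are indicative, not taken from the original environment):

  java version "1.8.0_151"
  Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
  Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)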

5. Hadoop configuration

  5.1 Delete the hadoop/share/doc directory

  5.2 Modify the configuration files

    Three *-env.sh files (hadoop-env.sh, mapred-env.sh, yarn-env.sh), setting JAVA_HOME in each:

      export JAVA_HOME=/opt/modules/cdh/jdk1.8.0_151

    Four *-site.xml files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml):

      core-site.xml       

        <property>
          <name>fs.defaultFS</name>
          <value>hdfs://hadoop01.xningge.com:8020</value>
        </property>
        <property>
          <name>hadoop.tmp.dir</name>
          <value>/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp</value>
        </property>

      hdfs-site.xml

        <property>
          <name>dfs.replication</name>
          <value>1</value>
        </property>
        <property>
          <name>dfs.namenode.secondary.http-address</name>
          <value>hadoop01.xningge.com:50090</value>
        </property>
        <property>
          <name>dfs.permissions.enabled</name>
          <value>false</value>
        </property>

      mapred-site.xml

        <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
        </property>
        <property>
          <name>mapreduce.jobhistory.address</name>
          <value>hadoop01.xningge.com:10020</value>
        </property>
        <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>hadoop01.xningge.com:19888</value>
        </property>

      yarn-site.xml

        <property>
          <name>yarn.resourcemanager.hostname</name>
          <value>hadoop01.xningge.com</value>
        </property>
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
        </property>
        <property>
          <name>yarn.log-aggregation-enable</name>
          <value>true</value>
        </property>
        <property>
          <name>yarn.log-aggregation.retain-seconds</name>
          <value>86400</value>
        </property>

    One slaves file:

        hadoop01.xningge.com
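
All of the files above are edited under etc/hadoop/ inside the Hadoop install directory. Depending on the tarball, mapred-site.xml may only ship as mapred-site.xml.template (an assumption worth checking against your extracted tree); if so, copy the template before editing it:

  $ cd /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6
  $ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml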

6. Format the NameNode

  $ bin/hdfs namenode -format
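
  One way to confirm the format worked: the NameNode metadata directory should now exist under the hadoop.tmp.dir configured above (dfs/name is the Hadoop default for dfs.namenode.name.dir when it is not set explicitly):

  $ ls /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name/current
  # expect a VERSION file and an fsimage file here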

7. Start the services

  $ sbin/hadoop-daemon.sh start namenode
  $ sbin/hadoop-daemon.sh start datanode
  $ sbin/hadoop-daemon.sh start secondarynamenode
  $ sbin/mr-jobhistory-daemon.sh start historyserver
  $ sbin/yarn-daemon.sh start resourcemanager
  $ sbin/yarn-daemon.sh start nodemanager
  With passwordless SSH configured (see the sketch after these commands), you can use instead:
  $ sbin/start-dfs.sh
  $ sbin/start-yarn.sh
  $ sbin/start-all.sh
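
  Passwordless SSH for the start-*.sh scripts can be set up roughly like this (the hostname comes from the configuration above; adjust for your user and machine), and jps is a quick way to confirm which daemons are up:

  # generate an RSA key pair and authorize it for login to this node
  $ ssh-keygen -t rsa
  $ ssh-copy-id hadoop01.xningge.com

  # after starting everything, list the running JVM daemons
  $ jps
  # expect (each preceded by its PID): NameNode, DataNode, SecondaryNameNode,
  # ResourceManager, NodeManager, JobHistoryServer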

 

8. Basic tests

  $ bin/hdfs dfs -mkdir -p /user/xningge/mapreduce/input
  $ bin/hdfs dfs -put /opt/datas/wc.input  /user/xningge/mapreduce/input 
  $ bin/hdfs dfs -get /user/xningge/mapreduce/input/wc.input  /
  $ bin/hdfs dfs -cat /user/xningge/mapreduce/input/wc.input
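
  The -put above assumes /opt/datas/wc.input already exists on the local filesystem; if it does not, a small sample file can be created first (the words are arbitrary, only the path needs to match):

  $ mkdir -p /opt/datas
  $ printf "hadoop mapreduce yarn\nhadoop hdfs yarn\nhadoop hadoop\n" > /opt/datas/wc.input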

9. Run a simple job

  $ bin/yarn jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.5.0-cdh5.3.6.jar wordcount  /user/xningge/mapreduce/input  /user/xningge/mapreduce/output
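
  To check the result, list the output directory and print the part file (part-r-00000 is the usual name for a single-reducer wordcount run; adjust if yours differs):

  $ bin/hdfs dfs -ls /user/xningge/mapreduce/output
  $ bin/hdfs dfs -cat /user/xningge/mapreduce/output/part-r-00000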

 

