Hadoop 2.7.7 Distributed Cluster Installation and Configuration


Environment preparation

Four servers (hostnames match the /etc/hosts entries used later in this article):

OS          Role    Hostname           IP address
CentOS 7.4  Master  hadoop-master-001  10.0.15.100
CentOS 7.4  Slave   hadoop-data-001    10.0.15.99
CentOS 7.4  Slave   hadoop-data-002    10.0.15.98
CentOS 7.4  Slave   hadoop-data-003    10.0.15.97

Steps common to all four nodes

1. Create the operating user:

groupadd hduser
useradd hduser -g hduser

2. Switch to that user and configure the Java environment variables (JDK 1.8 is used here):

JAVA_HOME=~/jdk1.8.0_151
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export PATH

3. Configure /etc/hosts:

10.0.15.100 hadoop-master-001
10.0.15.99 hadoop-data-001
10.0.15.98 hadoop-data-002
10.0.15.97 hadoop-data-003

4. Set up passwordless SSH between the nodes. Plenty of guides cover this, so the details are not repeated here.
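The passwordless-SSH step that the article skips can be sketched as follows. This is a dry-run-safe sketch: `KEY_DIR` defaults to a scratch directory here, whereas on the real cluster you would use `$HOME/.ssh` for the hduser account, and the commented-out `ssh-copy-id` loop would actually push the key to each node.

```shell
# Generate an RSA key pair without a passphrase for the hduser account.
# KEY_DIR defaults to a scratch directory for a safe dry run; on the
# cluster, drop the override so keys land in ~/.ssh as usual.
KEY_DIR="${KEY_DIR:-$(mktemp -d)}"
ssh-keygen -t rsa -N "" -f "$KEY_DIR/id_rsa" -q

# Push the public key to every node (uncomment on the real cluster):
# for h in hadoop-master-001 hadoop-data-001 hadoop-data-002 hadoop-data-003; do
#     ssh-copy-id -i "$KEY_DIR/id_rsa.pub" "hduser@$h"
# done

echo "key pair created in $KEY_DIR"
```

After copying, `ssh hduser@hadoop-data-001` from the master should log in without a password prompt.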

Installation steps (all nodes, master and slaves)

Download and unpack Hadoop

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz
tar -zxvf hadoop-2.7.7.tar.gz

Move it into place and fix ownership

mv hadoop-2.7.7 /usr/local/hadoop
chown -R hduser:hduser /usr/local/hadoop

Switch user and configure environment variables

su - hduser
vim ~/.bashrc

# variables to add
export JAVA_HOME=/home/hduser/jdk1.8.0_151
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
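A quick sanity check that the variables took effect after re-logging in (or running `source ~/.bashrc`). The paths are the ones used in this article; the `${VAR:-default}` fallbacks are only there so the sketch runs standalone.

```shell
# Verify the environment set up in ~/.bashrc. Defaults mirror the article's
# paths and only apply if the variables are not already exported.
export JAVA_HOME="${JAVA_HOME:-/home/hduser/jdk1.8.0_151}"
export HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"

echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_HOME=$HADOOP_HOME"
# On a configured node, `hadoop version` should now report Hadoop 2.7.7.
```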

Edit the master configuration files (under $HADOOP_HOME/etc/hadoop)

vim hadoop-env.sh /**/
# set the Java path
export JAVA_HOME=/home/hduser/jdk1.8.0_151
/**/
vim core-site.xml /**/
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop-master-001:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop_data/hadoop_tmp</value>
    </property>
</configuration>
/**/
vim hdfs-site.xml /**/
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/hadoop_data/hdfs/namenode</value>        <!-- create this directory on disk; it stores NameNode metadata -->
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hadoop_data/hdfs/datanode</value>        <!-- create this directory on disk; it stores DataNode blocks -->
    </property>
</configuration>
/**/
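The directories referenced in core-site.xml and hdfs-site.xml must exist before the cluster starts. A sketch of creating them, safe to dry-run: `DATA_ROOT` defaults to a scratch location here, while on the real nodes you would set `DATA_ROOT=/data/hadoop_data`, run as root, and then hand ownership to hduser.

```shell
# Create the hadoop.tmp.dir, NameNode, and DataNode directories.
# DATA_ROOT defaults to a scratch dir; set DATA_ROOT=/data/hadoop_data
# on the actual servers.
DATA_ROOT="${DATA_ROOT:-$(mktemp -d)/hadoop_data}"
mkdir -p "$DATA_ROOT/hadoop_tmp" \
         "$DATA_ROOT/hdfs/namenode" \
         "$DATA_ROOT/hdfs/datanode"
# chown -R hduser:hduser "$DATA_ROOT"    # uncomment when running as root
ls "$DATA_ROOT/hdfs"
```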
vim mapred-site.xml /**/
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
/**/
vim yarn-site.xml /**/
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-master-001</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-master-001:8050</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-master-001:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-master-001:8025</value>
    </property>
    <!-- without the two properties below, running PySpark on YARN fails with memory-check errors -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
/**/

Edit the slave configuration files

vim hadoop-env.sh /**/
# set the Java path
export JAVA_HOME=/home/hduser/jdk1.8.0_151
/**/
vim core-site.xml /**/
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop-master-001:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop_data/hadoop_tmp</value>
    </property>
</configuration>
/**/
vim hdfs-site.xml /**/
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hadoop_data/hdfs/datanode</value>        <!-- same DataNode path as on the master -->
    </property>
</configuration>
/**/
vim mapred-site.xml /**/
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
/**/
vim yarn-site.xml /**/
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-master-001:8050</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-master-001:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-master-001:8025</value>
    </property>
    <!-- without the two properties below, running PySpark on YARN fails with memory-check errors -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
/**/

Other steps (all nodes, master and slaves)

# Fix for the NativeCodeLoader WARNING printed by hadoop commands
vim log4j.properties (under $HADOOP_HOME/etc/hadoop) and add the following line:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
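The same edit can be scripted idempotently, which is handy when applying it to all four nodes. `LOG4J` defaults to a scratch file here so the sketch is safe to dry-run; on the cluster it would point at $HADOOP_HOME/etc/hadoop/log4j.properties.

```shell
# Append the NativeCodeLoader override only if it is not already present.
LOG4J="${LOG4J:-$(mktemp)}"
LINE='log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR'
grep -qxF "$LINE" "$LOG4J" || echo "$LINE" >> "$LOG4J"
grep -c NativeCodeLoader "$LOG4J"
```

Running it twice leaves exactly one copy of the line in the file.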

Startup

After installation and configuration are finished on every node, return to the master and format the NameNode:

cd /data/hadoop_data/hdfs/namenode
hadoop namenode -format

Then start or stop the whole cluster from the master:

start-all.sh         // start
stop-all.sh          // stop

(start-all.sh and stop-all.sh are deprecated in Hadoop 2.x; start-dfs.sh followed by start-yarn.sh, and their stop counterparts, are the recommended equivalents.)
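Formatting a NameNode that has already been formatted changes its clusterID and strands the DataNodes, so a guard is worth having. `needs_format` is a small helper written for this sketch, not a Hadoop tool; `NN_DIR` defaults to a scratch directory for a dry run, whereas on the real master it would be /data/hadoop_data/hdfs/namenode.

```shell
# Decide whether a NameNode directory is safe to format: a formatted
# directory contains a current/VERSION file holding the clusterID.
needs_format() {
    if [ -e "$1/current/VERSION" ]; then
        echo "skip"      # already formatted; do NOT format again
    else
        echo "format"    # safe to run: hadoop namenode -format
    fi
}

NN_DIR="${NN_DIR:-$(mktemp -d)}"
ACTION=$(needs_format "$NN_DIR")
echo "$ACTION"
```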

Troubleshooting

When listing HDFS files from a data node you may see:

ls: No Route to Host from hadoop-data-002/10.0.15.98 to hadoop-master-001:9000 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost

To diagnose, telnet from the data node to port 9000 of the NameNode. The usual causes are a hostname/IP mismatch in /etc/hosts, or the port being blocked by a firewall.
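The telnet check above can be scripted with bash's built-in /dev/tcp, which avoids needing telnet installed. `check_port` is a helper written for this sketch; the closed-port probe against 127.0.0.1:1 just demonstrates the failure branch.

```shell
# Report whether host:port accepts TCP connections within 3 seconds.
check_port() {
    if timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "open"
    else
        echo "closed"    # check /etc/hosts entries and firewall rules
    fi
}

# On a data node you would run:  check_port hadoop-master-001 9000
check_port 127.0.0.1 1    # port 1 is normally closed
```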

Result screenshots

(The screenshots from the original post are not reproduced in this text version.)

Further reading

Installing a Spark cluster and integrating it with this Hadoop cluster


免責聲明!

本站轉載的文章為個人學習借鑒使用,本站對版權不負任何法律責任。如果侵犯了您的隱私權益,請聯系本站郵箱yoyou2525@163.com刪除。



 
粵ICP備18138465號   © 2018-2025 CODEPRJ.COM