Hadoop Production Environment Setup (with HA and Federation)


Hadoop Production Environment Setup

1. Copy the hadoop-2.x.x.tar.gz package into a directory of your choice and unpack it.
2. Edit the configuration files under etc/hadoop in the unpacked directory (create any file that does not exist yet),
    namely hadoop-env.sh, mapred-site.xml, core-site.xml, hdfs-site.xml, and yarn-site.xml; see the sketch below.
3. Format and start HDFS.
4. Start YARN.
The overall procedure matches the single-node Hadoop test setup; what differs is the content of the configuration files in step 2 and the details of step 3.
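
As a minimal sketch for step 2, hadoop-env.sh usually only needs the JDK location set explicitly (the path below is an assumption; adjust it to your machines):
    # etc/hadoop/hadoop-env.sh
    export JAVA_HOME=/usr/java/jdk1.7.0_79   # assumed JDK install path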

HA configuration for HDFS 2.0 (active/standby NameNodes)
Notes:
    1) There are several ways to configure the active/standby NameNode pair; here we use the JournalNode (quorum journal) approach. Prepare at least three nodes to serve as JournalNodes.
    2) The two NameNodes should run on separate, dedicated machines. (HDFS 2.0 needs no Secondary NameNode; the standby NameNode takes over its duties.)
    3) Failover between the NameNodes can be manual or automatic. Automatic failover relies on ZooKeeper and therefore needs a separately deployed ZooKeeper ensemble, normally an odd number of nodes and at least three; see the sketch after these notes.
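
This guide uses manual failover (dfs.ha.automatic-failover.enabled stays false below). If you later switch to automatic failover, the usual extra steps are, as a sketch assuming a running ZooKeeper ensemble and ha.zookeeper.quorum already set in core-site.xml:
    bin/hdfs zkfc -formatZK           # one-time: create the HA state znode in ZooKeeper (run on one NameNode)
    sbin/hadoop-daemon.sh start zkfc  # on each NameNode machine: start the ZKFailoverController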

==================================================================================
HDFS HA deployment architecture and procedure

HDFS HA deployment architecture
    three JournalNodes
    two NameNodes
    N DataNodes

HDFS HA deployment procedure: hdfs-site.xml settings
dfs.nameservices                       list of nameservices in the cluster (user-defined)
dfs.ha.namenodes.${ns}                 logical names of the NameNodes in nameservice ${ns} (user-defined)
dfs.namenode.rpc-address.${ns}.${nn}   RPC address for logical NameNode ${nn} in nameservice ${ns}
dfs.namenode.http-address.${ns}.${nn}  HTTP address for logical NameNode ${nn} in nameservice ${ns}
dfs.namenode.name.dir                  directory where the NameNode keeps its fsimage
dfs.namenode.shared.edits.dir          shared storage through which the active and standby NameNodes exchange edits
dfs.journalnode.edits.dir              directory where each JournalNode stores its data

HDFS HA deployment procedure: hdfs-site.xml configuration example
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-rokid</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.hadoop-rokid</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hadoop-rokid.nn1</name>
        <value>nn1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hadoop-rokid.nn2</name>
        <value>nn2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hadoop-rokid.nn1</name>
        <value>nn1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hadoop-rokid.nn2</name>
        <value>nn2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/zhangzhenghai/cluster/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://jnode1:8485;jnode2:8485;jnode3:8485/hadoop-rokid</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/zhangzhenghai/cluster/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>false</value>
    </property>
    <!-- Added so that clients can resolve the logical URI hdfs://hadoop-rokid
         and follow a failover between nn1 and nn2 (standard HA client setting) -->
    <property>
        <name>dfs.client.failover.proxy.provider.hadoop-rokid</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/zhangzhenghai/cluster/hadoop/dfs/journal</value>
    </property>
</configuration>
HDFS HA deployment procedure: core-site.xml configuration example
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <!-- fs.default.name is deprecated; fs.defaultFS pointed at the logical
             nameservice lets clients survive a failover transparently -->
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-rokid</value>
    </property>
</configuration>
HDFS HA deployment procedure: slaves configuration example
List the hostnames of all slave (DataNode) machines in the cluster, one per line.
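For example, with three DataNode machines (the hostnames are illustrative assumptions):
    # etc/hadoop/slaves: one hostname per line
    dn1
    dn2
    dn3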

Startup order (hands-on with Hadoop 2.x: multi-node HDFS HA + YARN):
    Note: run all commands from the Hadoop deployment directory.
    Starting the Hadoop cluster:
    step1:
    On each JournalNode machine, start the JournalNode service:
    sbin/hadoop-daemon.sh start journalnode

    step2:
    On [nn1], format the NameNode and start it:
    bin/hdfs namenode -format
    sbin/hadoop-daemon.sh start namenode

    step3:
    On [nn2], sync nn1's metadata:
    bin/hdfs namenode -bootstrapStandby

    step4:
    Start [nn2]:
    sbin/hadoop-daemon.sh start namenode

    After these four steps, nn1 and nn2 are both in standby state.

    step5:
    Switch [nn1] to active:
    bin/hdfs haadmin -transitionToActive nn1

    step6:
    From [nn1], start all DataNodes:
    sbin/hadoop-daemons.sh start datanode
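
    To confirm the cluster came up as intended (a quick sanity check; nn1 and nn2 are the logical names configured above):
    bin/hdfs haadmin -getServiceState nn1   # expect: active
    bin/hdfs haadmin -getServiceState nn2   # expect: standby
    bin/hdfs dfsadmin -report               # lists the DataNodes that registered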

==================================================================================
Hadoop HA + Federation deployment architecture and procedure

HDFS HA + Federation deployment architecture
    three JournalNodes
    four NameNodes (two active/standby pairs)
    N DataNodes

HDFS HA + Federation deployment procedure: hdfs-site.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-rokid1,hadoop-rokid2</value>
    </property>
    <!-- nameservice hadoop-rokid1 -->
    <property>
        <name>dfs.ha.namenodes.hadoop-rokid1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hadoop-rokid1.nn1</name>
        <value>nn1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hadoop-rokid1.nn2</name>
        <value>nn2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hadoop-rokid1.nn1</name>
        <value>nn1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hadoop-rokid1.nn2</name>
        <value>nn2:50070</value>
    </property>
    <!-- nameservice hadoop-rokid2 -->
    <property>
        <name>dfs.ha.namenodes.hadoop-rokid2</name>
        <value>nn3,nn4</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hadoop-rokid2.nn3</name>
        <value>nn3:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.hadoop-rokid2.nn4</name>
        <value>nn4:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hadoop-rokid2.nn3</name>
        <value>nn3:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.hadoop-rokid2.nn4</name>
        <value>nn4:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/zhangzhenghai/cluster/hadoop/dfs/name</value>
    </property>
    <!-- JournalNode URI for hadoop-rokid1. The two dfs.namenode.shared.edits.dir
         entries below differ only in the journal ID; since they share one property
         name, each machine's hdfs-site.xml must keep only the entry for its own
         nameservice -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://jnode1:8485;jnode2:8485;jnode3:8485/hadoop-rokid1</value>
    </property>
    <!-- JournalNode URI for hadoop-rokid2 (keep this entry only on hadoop-rokid2's NameNodes) -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://jnode1:8485;jnode2:8485;jnode3:8485/hadoop-rokid2</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/zhangzhenghai/cluster/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/zhangzhenghai/cluster/hadoop/dfs/journal</value>
    </property>
</configuration>
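
Each nameservice above is an independent namespace sharing the same DataNodes. A client can address a namespace explicitly through its active NameNode (a quick check, assuming nn1 and nn3 end up active as in the steps below):
    bin/hdfs dfs -ls hdfs://nn1:8020/   # namespace hadoop-rokid1
    bin/hdfs dfs -ls hdfs://nn3:8020/   # namespace hadoop-rokid2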

Startup order:
Note: every NameNode in one federation must be formatted with the SAME cluster ID,
otherwise the shared DataNodes cannot register with both nameservices.
On nodes nn1 and nn2:
    Step 1: on each JournalNode machine, start the JournalNode service:
    sbin/hadoop-daemon.sh start journalnode
    Step 2: on [nn1], format the NameNode and start it:
    bin/hdfs namenode -format -clusterId hadoop-rokid
    sbin/hadoop-daemon.sh start namenode
    Step 3: on [nn2], sync nn1's metadata:
    bin/hdfs namenode -bootstrapStandby
    Step 4: on [nn2], start the NameNode:
    sbin/hadoop-daemon.sh start namenode
    (after these four steps, nn1 and nn2 are both in standby state)
    Step 5: on [nn1], switch the NameNode to active:
    bin/hdfs haadmin -ns hadoop-rokid1 -transitionToActive nn1
On nodes nn3 and nn4:
    Step 1: on each JournalNode machine, start the JournalNode service:
    sbin/hadoop-daemon.sh start journalnode
    Step 2: on [nn3], format the NameNode with the same cluster ID as above:
    bin/hdfs namenode -format -clusterId hadoop-rokid
    sbin/hadoop-daemon.sh start namenode
    Step 3: on [nn4], sync nn3's metadata:
    bin/hdfs namenode -bootstrapStandby
    Step 4: on [nn4], start the NameNode:
    sbin/hadoop-daemon.sh start namenode
    (after these four steps, nn3 and nn4 are both in standby state)
    Step 5: on [nn3], switch the NameNode to active:
    bin/hdfs haadmin -ns hadoop-rokid2 -transitionToActive nn3
Finally, from [nn1], start all DataNodes:
    sbin/hadoop-daemons.sh start datanode
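
As in the single-pair HA setup, verify that each nameservice reached the intended state (-ns selects the nameservice, as in step 5):
    bin/hdfs haadmin -ns hadoop-rokid1 -getServiceState nn1   # expect: active
    bin/hdfs haadmin -ns hadoop-rokid2 -getServiceState nn3   # expect: active
    jps   # each slave runs a single DataNode process that serves both namespaces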
==================================================================================
YARN deployment architecture
    one ResourceManager
    N NodeManagers

yarn-site.xml configuration example
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>YARN001</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address</name>
        <value>${yarn.resourcemanager.hostname}:8090</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.scheduler.fair.allocation.file</name>
        <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/home/zhangzhenghai/cluster/hadoop/yarn/local</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>30720</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>12</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

fairscheduler.xml configuration example
<?xml version="1.0" encoding="UTF-8"?>
<allocations>
    <queue name="basic">
        <minResources>102400 mb, 50 vcores</minResources>
        <maxResources>153600 mb, 100 vcores</maxResources>
        <maxRunningApps>200</maxRunningApps>
        <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
        <weight>1.0</weight>
        <aclSubmitApps>root,yarn,search,hdfs</aclSubmitApps>
    </queue>
    <queue name="queue1">
        <minResources>102400 mb, 50 vcores</minResources>
        <maxResources>153600 mb, 100 vcores</maxResources>
    </queue>
    <queue name="queue2">
        <minResources>102400 mb, 50 vcores</minResources>
        <maxResources>153600 mb, 100 vcores</maxResources>
    </queue>
</allocations>
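
With the FairScheduler enabled above, a job can be steered into one of these queues through mapreduce.job.queuename. A smoke-test sketch using the bundled examples jar (the jar's version suffix varies by release):
    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi \
        -Dmapreduce.job.queuename=queue1 2 100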

mapred-site.xml configuration example
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
    </property>
    <!-- jobhistory properties -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>jobhistory:10020</value>
        <description>MapReduce JobHistory Server IPC host:port</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>jobhistory:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port</description>
    </property>
</configuration>

Starting/stopping YARN
Run the following commands on YARN001.
Start YARN:
    sbin/start-yarn.sh
Stop YARN:
    sbin/stop-yarn.sh
Start the MR JobHistory server:
    sbin/mr-jobhistory-daemon.sh start historyserver
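
To verify (a quick check; hostnames follow the examples above):
    jps                  # YARN001 shows ResourceManager; slaves show NodeManager; the jobhistory host shows JobHistoryServer
    bin/yarn node -list  # NodeManagers registered with the ResourceManager
The web UIs are at YARN001:8088 (ResourceManager) and jobhistory:19888 (JobHistory server).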

#############################OVER#####################################################################

 

