How to Deploy Hadoop and Spark Together


2. Hadoop Deployment

2.1 Installing Hadoop (can be done on all three machines in parallel)

  1. Download Hadoop 2.7.7 (hadoop-2.7.7.tar.gz).
  2. Extract it with tar -zxvf hadoop-2.7.7.tar.gz, then create tmp, dfs, dfs/name, dfs/node, and dfs/data under the Hadoop home directory (a one-command equivalent is sketched below):

cd /opt/hadoop-2.7.7
mkdir tmp
mkdir dfs
mkdir dfs/name
mkdir dfs/node
mkdir dfs/data
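
The same directories can be created in one step (a minimal sketch, assuming a bash shell, since it relies on brace expansion; -p also creates missing parents):

mkdir -p /opt/hadoop-2.7.7/{tmp,dfs/name,dfs/node,dfs/data}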

2.2 Hadoop Configuration

All of the following steps take place under hadoop-2.7.7/etc/hadoop.

2.2.1 Edit hadoop-env.sh and set the JAVA_HOME entry to the JDK installation directory

export JAVA_HOME=/opt/jdk 
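
To confirm the path points at a working JDK, a quick sanity check (assuming the JDK is installed at /opt/jdk as above):

/opt/jdk/bin/java -version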

2.2.2 Edit core-site.xml and add the following

Here master is the hostname, and /opt/hadoop-2.7.7/tmp is the directory created manually above.

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop-2.7.7/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.groups</name>
    <value>*</value>
  </property>
</configuration>

2.2.3 Edit hdfs-site.xml and add the following

Here master is the hostname; file:/opt/hadoop-2.7.7/dfs/name and file:/opt/hadoop-2.7.7/dfs/data are the directories created manually above.

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop-2.7.7/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop-2.7.7/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

Copy mapred-site.xml.template and rename it to mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml 

2.2.4 Edit mapred-site.xml and add the following

Here master is the hostname.

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

2.2.5 Edit yarn-site.xml and add the following

Here master is the hostname.

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>

2.2.6 Edit the slaves file and add the cluster nodes (one entry per machine)

Add the following:

master
slave1
slave2
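
The start scripts log in to every node listed here over SSH, so passwordless SSH from master to each node must already be set up. A quick check (a minimal sketch, assuming the hostnames above resolve on master):

for h in master slave1 slave2; do ssh $h hostname; done

Each iteration should print the node's hostname without prompting for a password.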

2.2.7 Hadoop Cluster Setup

To configure the rest of the cluster, sync the contents of etc/hadoop to the other machines, so that steps 2.2.1-2.2.6 do not have to be repeated on each one.

cd /opt/hadoop-2.7.7/etc
scp -r hadoop root@<other-host>:/opt/hadoop-2.7.7/etc
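
With the node names from 2.2.6, the copy can be looped over the remaining machines (a sketch, assuming root SSH access to slave1 and slave2):

cd /opt/hadoop-2.7.7/etc
for h in slave1 slave2; do scp -r hadoop root@$h:/opt/hadoop-2.7.7/etc; done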

2.3 Starting Hadoop

1. Format a new filesystem. In hadoop-2.7.7/bin, run:

./hadoop namenode -format
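
In Hadoop 2.x this command still works but prints a deprecation warning; the current form is:

./hdfs namenode -format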

2. Start Hadoop. In hadoop-2.7.7/sbin, run:

./start-all.sh

Output like the following indicates a successful start:

root@master:/opt/hadoop-2.7.7/sbin# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-namenode-master.out
slave2: starting datanode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-datanode-slave2.out
master: starting datanode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-datanode-master.out
slave1: starting datanode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-nodemanager-slave2.out
slave1: starting nodemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-nodemanager-slave1.out
master: starting nodemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-nodemanager-master.out
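
Another quick check is the JDK's jps tool, run on each node:

jps

On master this should list NameNode, SecondaryNameNode, ResourceManager, DataNode, and NodeManager (master also runs a datanode and nodemanager here, since it appears in the slaves file); on slave1 and slave2, DataNode and NodeManager.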

2.4 Verifying the Hadoop Cluster

Method 1: check the cluster from hadoop-2.7.7/bin by running:

./hdfs dfsadmin -report

Check the number of live datanodes; for example, Live datanodes (3) means all three machines started successfully.

root@master:/opt/hadoop-2.7.7/bin# ./hdfs dfsadmin -report
Configured Capacity: 621051420672 (578.40 GB)
Present Capacity: 577317355520 (537.67 GB)
DFS Remaining: 577317281792 (537.67 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3):

Method 2: visit the YARN ResourceManager web UI on port 8088: http://192.168.241.132:8088/cluster/nodes

Method 3: visit the HDFS NameNode web UI on port 50070: http://192.168.241.132:50070/
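
Method 4 (optional): a small round trip through HDFS confirms reads and writes end to end (a sketch, run from hadoop-2.7.7/bin; the /tmp/smoke path is arbitrary):

./hdfs dfs -mkdir -p /tmp/smoke
echo hello | ./hdfs dfs -put - /tmp/smoke/hello.txt
./hdfs dfs -cat /tmp/smoke/hello.txt
./hdfs dfs -rm -r /tmp/smoke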

3. Spark Deployment

3.1 Installing Spark (can be done on all three machines in parallel)

  1. Download spark-2.1.0-bin-hadoop2.7.tgz and extract it under /opt.
  2. Add the Spark environment variables to /etc/profile (then reload the profile, as sketched below):

export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
export PATH=$JAVA_HOME/bin:$SPARK_HOME/bin:$PATH
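
The new variables take effect only in a fresh login shell unless the profile is re-read; a quick way to apply and verify them (spark-submit --version prints the Spark build):

source /etc/profile
spark-submit --version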

3.2 Spark Configuration

1. In spark-2.1.0-bin-hadoop2.7/conf, copy spark-env.sh.template and rename it to spark-env.sh:

cp spark-env.sh.template spark-env.sh 

Edit spark-env.sh and add the following:

export JAVA_HOME=/opt/jdk
export SPARK_MASTER_IP=192.168.241.132        # IP of the master node
export SPARK_WORKER_MEMORY=8g                 # memory available to each worker
export SPARK_WORKER_CORES=4                   # cores available to each worker
export SPARK_EXECUTOR_MEMORY=4g               # default memory per executor
export HADOOP_HOME=/opt/hadoop-2.7.7/
export HADOOP_CONF_DIR=/opt/hadoop-2.7.7/etc/hadoop   # lets Spark find the HDFS/YARN configuration
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/jdk/jre/lib/amd64

2. Copy slaves.template to slaves:

cp slaves.template slaves 

Edit the slaves file and add the following (one entry per machine):

master
slave1
slave2

3.3 Configuring the Spark Cluster

The contents of spark-2.1.0-bin-hadoop2.7/conf can be synced to the other machines, so that step 3.2 does not have to be repeated on each one.

cd /opt/spark-2.1.0-bin-hadoop2.7
scp -r conf root@<other-host>:/opt/spark-2.1.0-bin-hadoop2.7
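
As with the Hadoop configuration in 2.2.7, this can be looped over the slave nodes (a sketch, assuming root SSH access):

cd /opt/spark-2.1.0-bin-hadoop2.7
for h in slave1 slave2; do scp -r conf root@$h:/opt/spark-2.1.0-bin-hadoop2.7; done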

3.4 Starting Spark

Start Spark from spark-2.1.0-bin-hadoop2.7/sbin by running:

./start-all.sh
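
After this, jps should additionally show a Master process on the master node and a Worker process on every node listed in the slaves file (master, slave1, and slave2 here):

jps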

3.5 Verifying the Spark Cluster

Visit the Spark master web UI on port 8080: http://192.168.241.132:8080/ (the master IP set in spark-env.sh)

Note: for the Spark cluster to work correctly, the Spark directory on the worker nodes must match the one on the master.

With that, both Hadoop and Spark are set up.
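
Because spark-env.sh sets HADOOP_CONF_DIR, this same Spark installation can also submit jobs to the YARN cluster configured above, which is the point of the unified deployment. A minimal end-to-end test (a sketch; the SparkPi example jar ships with Spark 2.1.0, and the path assumes the layout used in this guide):

spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode cluster \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar 10

The completed application should then show up in the YARN web UI at http://192.168.241.132:8088/.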

