I. Modify the hosts file
On the master node (that is, the first host), at the command line:
vim /etc/hosts
My setup consists of three cloud hosts.
Append the following to the existing file:
ip1 master worker0 namenode
ip2 worker1 datanode1
ip3 worker2 datanode2
Here ipN stands for a usable cluster IP address: ip1 is the master node, while ip2 and ip3 are the slave nodes.
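Before going further, it is worth a quick check that the names resolve; a minimal sketch, run on the master after saving /etc/hosts:
ping -c 1 worker1 #should resolve to ip2 and get a reply
ping -c 1 worker2 #should resolve to ip3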
II. SSH mutual trust (passwordless login)
Note that I am configuring the root user here, so the home directory below is /root.
If you are configuring a user xxxx instead, the home directory would be /home/xxxx/.
#Run the following command on the master node:
ssh-keygen -t rsa -P '' #press Enter at every prompt until the key pair is generated
scp /root/.ssh/id_rsa.pub root@worker1:/root/.ssh/id_rsa.pub.master #copy id_rsa.pub from the master node to the worker host and rename it id_rsa.pub.master
scp /root/.ssh/id_rsa.pub root@worker2:/root/.ssh/id_rsa.pub.master #same as above; from here on workerN stands for worker1 and worker2
scp /etc/hosts root@workerN:/etc/hosts #unify the hosts file so that the hosts can identify each other by hostname
#Run the following on the corresponding host:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys #on the master host
cat /root/.ssh/id_rsa.pub.master >> /root/.ssh/authorized_keys #on the workerN hosts
With this in place, the master host can log in to the other hosts without a password, so neither the startup scripts run on the master nor the scp commands will prompt for one.
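To confirm that passwordless login really works, a quick check from the master might look like this (a sketch; hostnames as defined in /etc/hosts above):
ssh worker1 hostname #should print worker1's hostname without asking for a password
ssh worker2 hostname #same for worker2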
III. Install the base environment (Java and Scala)
1. Setting up the Java 1.8 environment:
Configure the Java environment on the master
#Download the JDK 1.8 rpm package
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.rpm
rpm -ivh jdk-8u112-linux-x64.rpm
#Add JAVA_HOME
vim /etc/profile
#Add the following lines:
#Java home
export JAVA_HOME=/usr/java/jdk1.8.0_112/
#Reload the configuration:
source /etc/profile #a reboot works too, of course
Configure the Java environment on the workerN hosts
#Copy the package over with scp
scp jdk-8u112-linux-x64.rpm root@workerN:/root
#The remaining steps are the same as on the master node
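An optional sanity check once the profile has been reloaded, on the master and on each workerN (a sketch):
java -version #should report version 1.8.0_112
echo $JAVA_HOME #should print /usr/java/jdk1.8.0_112/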
2. Setting up the Scala 2.12.2 environment:
Master node:
#Download the Scala package:
wget -O "scala-2.12.2.rpm" "https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.rpm"
#Install the rpm package:
rpm -ivh scala-2.12.2.rpm
#Add SCALA_HOME
vim /etc/profile
#Add the following:
#Scala Home
export SCALA_HOME=/usr/share/scala
#Reload the configuration
source /etc/profile
WorkerN nodes:
#Copy the package over with scp
scp scala-2.12.2.rpm root@workerN:/root
#The remaining steps are the same as on the master node
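As with Java, a quick check on every node after reloading the profile (a sketch):
scala -version #should report Scala 2.12.2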
IV. Hadoop 2.7.3 fully distributed setup
MASTER node:
1. Download the binary package:
wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
2. Extract it and move it to the appropriate directory
My habit is to keep software under /opt:
tar -xvf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 /opt
3. Edit the relevant configuration files:
(1)/etc/profile:
Add the following:
#hadoop environment
export HADOOP_HOME=/opt/hadoop-2.7.3/
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
(2)$HADOOP_HOME/etc/hadoop/hadoop-env.sh
Set JAVA_HOME as follows:
export JAVA_HOME=/usr/java/jdk1.8.0_112/
(3)$HADOOP_HOME/etc/hadoop/slaves
worker1
worker2
(4)$HADOOP_HOME/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.7.3/tmp</value>
    </property>
</configuration>
(5)$HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.7.3/hdfs/data</value>
    </property>
</configuration>
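The directories referenced above (hadoop.tmp.dir, dfs.namenode.name.dir and dfs.datanode.data.dir) live on the local filesystem of each node. Hadoop can normally create them itself, but pre-creating them is a cheap way to rule out permission problems; a sketch (if run on the master before the directory is copied to the workers, the empty directories travel along with it):
mkdir -p /opt/hadoop-2.7.3/tmp /opt/hadoop-2.7.3/hdfs/name /opt/hadoop-2.7.3/hdfs/data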
(6)$HADOOP_HOME/etc/hadoop/mapred-site.xml
Copy the template to create the xml file:
cp mapred-site.xml.template mapred-site.xml
Content:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
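Note that the mapreduce.jobhistory addresses above only respond if the JobHistory Server is running, and start-all.sh does not start it. If you want the job history UI, it can be started separately on the master; a sketch:
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver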
(7)$HADOOP_HOME/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
At this point the Hadoop setup on the master node is complete.
Before starting it, we still need to format the namenode:
hadoop namenode -format
WorkerN nodes:
(1) Copy the hadoop directory from the master node to the workers:
scp -r /opt/hadoop-2.7.3 root@workerN:/opt #replace N here with 1 or 2
(2) Edit /etc/profile:
The process is the same as on the master.
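Once the cluster has been started (see the start script in section VI), a quick health check from the master might look like this (a sketch; it assumes the PATH settings from /etc/profile):
jps #expect NameNode, SecondaryNameNode and ResourceManager on the master
hdfs dfsadmin -report #should list two live datanodes
yarn node -list #should list worker1 and worker2 as running NodeManagers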
V. Spark 2.1.0 fully distributed setup:
MASTER node:
1. Download the file:
wget -O "spark-2.1.0-bin-hadoop2.7.tgz" "http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0-bin-hadoop2.7.tgz"
2. Extract it and move it to the appropriate directory:
tar -xvf spark-2.1.0-bin-hadoop2.7.tgz
mv spark-2.1.0-bin-hadoop2.7 /opt
3. Edit the relevant configuration files:
(1)/etc/profile
#Spark environment
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
export PATH="$SPARK_HOME/bin:$PATH"
(2)$SPARK_HOME/conf/spark-env.sh
cp spark-env.sh.template spark-env.sh
#Configure as follows:
export SCALA_HOME=/usr/share/scala
export JAVA_HOME=/usr/java/jdk1.8.0_112/
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
(3)$SPARK_HOME/conf/slaves
cp slaves.template slaves
Configure as follows:
master
worker1
worker2
WorkerN nodes:
Copy the configured spark directory to the workerN nodes:
scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@workerN:/opt
Edit /etc/profile and add the Spark-related settings, just as on the MASTER node.
VI. Cluster start/stop scripts
The cluster startup script start-cluster.sh is as follows:
#!/bin/bash
echo -e "\033[31m ========Start The Cluster======== \033[0m"
echo -e "\033[31m Starting Hadoop Now !!! \033[0m"
/opt/hadoop-2.7.3/sbin/start-all.sh
echo -e "\033[31m Starting Spark Now !!! \033[0m"
/opt/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
echo -e "\033[31m The Result Of The Command \"jps\" : \033[0m"
jps
echo -e "\033[31m ========END======== \033[0m"
(Screenshot of the startup output omitted.)
The cluster shutdown script stop-cluster.sh is as follows:
#!/bin/bash
echo -e "\033[31m ===== Stopping The Cluster ====== \033[0m"
echo -e "\033[31m Stopping Spark Now !!! \033[0m"
/opt/spark-2.1.0-bin-hadoop2.7/sbin/stop-all.sh
echo -e "\033[31m Stopping Hadoop Now !!! \033[0m"
/opt/hadoop-2.7.3/sbin/stop-all.sh
echo -e "\033[31m The Result Of The Command \"jps\" : \033[0m"
jps
echo -e "\033[31m ======END======== \033[0m"
(Screenshot of the shutdown output omitted.)
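To use the scripts, keep them on the master (which can reach the workers without a password) and make them executable; a usage sketch:
chmod +x start-cluster.sh stop-cluster.sh
./start-cluster.sh #bring the whole cluster up
./stop-cluster.sh #shut it down again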
VII. Testing the cluster:
Here I simply use the most common example, WordCount, for testing.
1. Testing Hadoop
The content of the test source file wordcount.txt is:
Hello hadoop
hello spark
hello bigdata
Then run the following commands:
hadoop fs -mkdir -p /Hadoop/Input
hadoop fs -put wordcount.txt /Hadoop/Input
hadoop jar /opt/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /Hadoop/Input /Hadoop/Output
After the MapReduce job finishes, check the result:
hadoop fs -cat /Hadoop/Output/*
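For the three-line input above, the output should look roughly like the following (WordCount is case-sensitive, so Hello and hello are counted separately):
Hello	1
bigdata	1
hadoop	1
hello	2
spark	1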
The Hadoop cluster has been set up successfully!
2. Testing Spark
To keep things simple, we use spark-shell here for a simple wordcount test.
Since the test source file was already stored on HDFS while testing Hadoop, we can just use it directly here.
spark-shell
val file = sc.textFile("hdfs://master:9000/Hadoop/Input/wordcount.txt")
val rdd = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
rdd.collect()
rdd.foreach(println)
To exit, use the following command:
:quit
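Besides spark-shell, submitting a batch job with spark-submit is another quick way to exercise the standalone cluster. A sketch using the SparkPi example bundled with the distribution (the jar path below assumes the default layout of spark-2.1.0-bin-hadoop2.7; adjust it if yours differs):
spark-submit --master spark://master:7077 --class org.apache.spark.examples.SparkPi /opt/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 100
If everything is healthy, the job ends with a line like "Pi is roughly 3.14".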
That concludes this article.