This guide is based on Ubuntu 16.04. (If you repost it, please credit the source: http://www.cnblogs.com/zhangyongli2011/ . If you spot any mistakes, please leave a comment. Thanks.)
1. Preparation
1.1 Software versions
- Ubuntu 16.04.6 (ubuntu-16.04.6-server-amd64.iso)
- JDK 1.8 (jdk-8u201-linux-x64.tar.gz)
- Hadoop 2.7.7 (hadoop-2.7.7.tar.gz)
- Spark 2.1.0 (spark-2.1.0-bin-hadoop2.7.tgz)
1.2 Network plan
This guide sets up a cluster of three machines; their IP addresses and hostnames are listed below. For a single-machine setup, only one entry is needed.
192.168.241.132 master
192.168.241.133 slave1
192.168.241.134 slave2
1.3 Copying the packages
Copy the packages listed above into the /opt directory on all three machines:
- JDK 1.8
- Hadoop 2.7.7
- Spark 2.1.0
1.4 SSH settings
Edit /etc/ssh/sshd_config and set the following three options to yes:
PermitRootLogin yes
PermitEmptyPasswords yes
PasswordAuthentication yes
Restart the SSH service:
service ssh restart
This lets the root user log in directly and prepares for the passwordless SSH login configured later.
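If you prefer to script these edits, here is a minimal sketch using sed (assuming the stock Ubuntu 16.04 sshd_config, where the options may be commented out or set to other values):
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitEmptyPasswords.*/PermitEmptyPasswords yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
service ssh restart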
1.5 Binding IPs and setting hostnames
1.5.1 Edit /etc/hosts, add the IP bindings, and comment out the 127.0.1.1 line (leaving it uncommented interferes with the Hadoop cluster):
root@master:/opt# cat /etc/hosts
127.0.0.1 localhost
#127.0.1.1 ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.241.132 master
192.168.241.133 slave1
192.168.241.134 slave2
1.5.2 Edit /etc/hostname to set the hostname. (The hostname must match the name bound in /etc/hosts above.)
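For example, on the master (repeat with slave1 and slave2 on the other two machines):
echo master > /etc/hostname    # persists across reboots
hostname master                # applies to the running system immediately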
1.6 Passwordless SSH login (SSH must already be installed)
1. Generate an RSA key pair, pressing Enter through every prompt:
ssh-keygen -t rsa
2. Change into the current user's hidden .ssh directory:
cd ~/.ssh
3. Copy the public key and name the copy authorized_keys:
cp id_rsa.pub authorized_keys
After this step, running ssh localhost on the current machine logs in without a password.
If the machine has the ssh-copy-id command, you can push the key to another node with
ssh-copy-id root@<hostname of the second machine>
and enter that machine's password. After that, you can log in to the second machine directly with
ssh <hostname of the second machine>
The first run asks for confirmation of the host key; type yes (and the login password when prompted), and there are no further prompts afterwards.
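To distribute the key to every node in one pass, a small loop works (a sketch assuming the hostnames from section 1.2 and that password logins are still enabled, as set up in section 1.4):
for host in master slave1 slave2; do
    ssh-copy-id root@$host
done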
1.7 JDK installation (can be done on all three machines in parallel)
Download the jdk-8u201-linux-x64.tar.gz package, put it under /opt, and extract it.
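For reference, the extraction step (assuming the tarball was copied to /opt as in section 1.3):
cd /opt
tar -zxvf jdk-8u201-linux-x64.tar.gz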
1.7.1 Rename the extracted directory:
mv jdk1.8.0_201 jdk
1.7.2 Add the JDK environment variables to /etc/profile:
export JAVA_HOME=/opt/jdk
export JRE_HOME=/opt/jdk/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
1.7.3 Verify that the JDK is configured correctly:
source /etc/profile
java -version
Output like the following means the JDK is installed correctly:
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
1.8 Other configuration
1.8.1 Network configuration
Switch to a static IP by editing /etc/network/interfaces (adjust address, netmask, and gateway to your own network; the gateway must sit on the same subnet as the address):
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 192.168.241.132
netmask 255.255.255.0
gateway 192.168.20.1
Restart networking:
service networking restart
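If service networking restart does not pick up the change (which happens on some Ubuntu 16.04 systems), bouncing the interface itself usually works:
ifdown eth0 && ifup eth0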
1.8.2 DNS configuration
Method 1: permanent change
Edit /etc/resolvconf/resolv.conf.d/base (this file is empty by default) and add:
nameserver 119.6.6.6
After saving, run:
resolvconf -u
Check /etc/resolv.conf to confirm the setting has been picked up:
cat /etc/resolv.conf
Restart resolvconf:
/etc/init.d/resolvconf restart
Method 2: temporary change
Edit /etc/resolv.conf directly and add:
nameserver 119.6.6.6
Then restart resolvconf:
/etc/init.d/resolvconf restart
2. Hadoop deployment
2.1 Hadoop installation (can be done on all three machines in parallel)
- Download Hadoop 2.7.7 (hadoop-2.7.7.tar.gz)
- Extract it with tar -zxvf hadoop-2.7.7.tar.gz, then create tmp, dfs, dfs/name, dfs/node, and dfs/data under the Hadoop home directory:
cd /opt/hadoop-2.7.7
mkdir tmp
mkdir dfs
mkdir dfs/name
mkdir dfs/node
mkdir dfs/data
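Equivalently, the same directories can be created in one go with mkdir -p:
cd /opt/hadoop-2.7.7
mkdir -p tmp dfs/name dfs/node dfs/data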
2.2 Hadoop configuration
All of the following steps are performed in hadoop-2.7.7/etc/hadoop.
2.2.1 Edit hadoop-env.sh and set JAVA_HOME to the JDK installation directory:
export JAVA_HOME=/opt/jdk
2.2.2 Edit core-site.xml and add the following.
Here master is the hostname and /opt/hadoop-2.7.7/tmp is the directory created earlier:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/opt/hadoop-2.7.7/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
</configuration>
2.2.3 Edit hdfs-site.xml and add the following.
Here master is the hostname, and file:/opt/hadoop-2.7.7/dfs/name and file:/opt/hadoop-2.7.7/dfs/data are the directories created earlier:
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/hadoop-2.7.7/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/hadoop-2.7.7/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
Copy mapred-site.xml.template and rename the copy mapred-site.xml:
cp mapred-site.xml.template mapred-site.xml
2.2.4 Edit mapred-site.xml and add the following.
Here master is the hostname:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
2.2.5 Edit yarn-site.xml and add the following.
Here master is the hostname:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
2.2.6 Edit the slaves file and add the cluster nodes (one line per node for a multi-node cluster).
Add the following:
master
slave1
slave2
2.2.7 Setting up the Hadoop cluster
To configure the rest of the cluster, sync the contents of etc/hadoop to the other machines, so that steps 2.2.1-2.2.6 do not have to be repeated on each one:
cd /opt/hadoop-2.7.7/etc
scp -r hadoop root@<other machine's hostname>:/opt/hadoop-2.7.7/etc
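To push the configuration to both slaves in one step, a small loop along these lines works (assuming the hostnames from section 1.2):
cd /opt/hadoop-2.7.7/etc
for host in slave1 slave2; do
    scp -r hadoop root@$host:/opt/hadoop-2.7.7/etc
done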
2.3 Starting Hadoop
1. Format a new file system. In hadoop-2.7.7/bin, run:
./hadoop namenode -format
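Note: in Hadoop 2.x the hadoop namenode form prints a deprecation warning; the equivalent current command, run from the same bin directory, is
./hdfs namenode -format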
2. Start Hadoop. In hadoop-2.7.7/sbin, run:
./start-all.sh
Output like the following indicates a successful start:
root@master:/opt/hadoop-2.7.7/sbin# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-namenode-master.out
slave2: starting datanode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-datanode-slave2.out
master: starting datanode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-datanode-master.out
slave1: starting datanode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /opt/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-nodemanager-slave2.out
slave1: starting nodemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-nodemanager-slave1.out
master: starting nodemanager, logging to /opt/hadoop-2.7.7/logs/yarn-root-nodemanager-master.out
2.4 Checking the Hadoop cluster
Method 1: in hadoop-2.7.7/bin, run
./hdfs dfsadmin -report
and check the number of live datanodes; for example, Live datanodes (3) means all three nodes started successfully.
root@master:/opt/hadoop-2.7.7/bin# ./hdfs dfsadmin -report
Configured Capacity: 621051420672 (578.40 GB)
Present Capacity: 577317355520 (537.67 GB)
DFS Remaining: 577317281792 (537.67 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3):
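Another quick check (not shown in the report above) is to run jps on every node. With the slaves file from section 2.2.6, the master should list NameNode, SecondaryNameNode, ResourceManager, DataNode, and NodeManager, while slave1 and slave2 should list DataNode and NodeManager.
jps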
Method 2: open the YARN web UI on port 8088: http://192.168.241.132:8088/cluster/nodes

Method 3: open the HDFS web UI on port 50070: http://192.168.241.132:50070/

3. Spark deployment
3.1 Spark installation (can be done on all three machines in parallel)
- Download spark-2.1.0-bin-hadoop2.7.tgz, put it under /opt, and extract it.
- Add the Spark environment variables to /etc/profile:
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
export PATH=$JAVA_HOME/bin:$SPARK_HOME/bin:$PATH
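Reload the profile afterwards so the new variables are visible in the current shell (the same step used for the JDK in section 1.7.3):
source /etc/profile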
3.2 Spark configuration
1. In spark-2.1.0-bin-hadoop2.7/conf, copy spark-env.sh.template and rename the copy spark-env.sh:
cp spark-env.sh.template spark-env.sh
Edit spark-env.sh and add the following:
export JAVA_HOME=/opt/jdk
export SPARK_MASTER_IP=192.168.241.132
export SPARK_WORKER_MEMORY=8g
export SPARK_WORKER_CORES=4
export SPARK_EXECUTOR_MEMORY=4g
export HADOOP_HOME=/opt/hadoop-2.7.7/
export HADOOP_CONF_DIR=/opt/hadoop-2.7.7/etc/hadoop
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/jdk/jre/lib/amd64
2. Copy slaves.template to slaves:
cp slaves.template slaves
Edit the slaves file and add the following (one node per line for a multi-node cluster):
master
slave1
slave2
3.3 Configuring the Spark cluster
Sync the contents of spark-2.1.0-bin-hadoop2.7/conf to the other machines, so that step 3.2 does not have to be repeated on each one:
scp -r conf root@<other machine's hostname>:/opt/spark-2.1.0-bin-hadoop2.7
3.4 Starting Spark
Start Spark from spark-2.1.0-bin-hadoop2.7/sbin:
./start-all.sh
3.5 Checking the Spark cluster
Open the Spark master web UI: http://192.168.241.132:8080/

Note: when configuring the Spark cluster, make sure the contents on the worker nodes match those on the master.
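As an extra smoke test (beyond the web UI above), you can submit the bundled SparkPi example to the standalone master. This is only a sketch: the jar path matches the Spark 2.1.0 binary distribution for Hadoop 2.7, so adjust it if your layout differs.
cd /opt/spark-2.1.0-bin-hadoop2.7
./bin/spark-submit --master spark://master:7077 \
    --class org.apache.spark.examples.SparkPi \
    examples/jars/spark-examples_2.11-2.1.0.jar 100
A "Pi is roughly 3.14..." line in the driver output confirms the cluster accepted and ran the job.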
With that, both the Hadoop cluster and the Spark cluster are up and running.
