首先准備3台電腦或虛擬機,分別是Master,Worker1,Worker2,安裝操作系統(本文中使用CentOS7)。
1、配置集群,以下步驟在Master機器上執行
1.1、關閉防火牆:systemctl stop firewalld.service
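If you also want the firewall to stay off after a reboot, you can additionally disable it (optional; the rest of this guide only requires it to be stopped):
systemctl disable firewalld.service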
1.2. Give the machine a static IP address
1.2.1. Edit the network configuration
cd /etc/sysconfig/network-scripts/
vim ifcfg-eno16777736
Change the following settings (the file name depends on your network interface name):
BOOTPROTO=static
# static IP address, netmask and gateway
IPADDR=192.168.232.133
NETMASK=255.255.255.0
GATEWAY=192.168.232.2
# take the interface out of NetworkManager's control
NM_CONTROLLED=no
ONBOOT=yes
1.2.2. Restart the network service: systemctl restart network.service
1.3. Set the hostname: hostnamectl set-hostname Master
1.4. Edit /etc/hosts:
192.168.232.133 Master
192.168.232.134 Worker1
192.168.232.135 Worker2
1.5. Repeat the steps above (1.1 through 1.4) on Worker1 and Worker2, each with its own hostname and IP address.
1.6. Verify that the machines in the cluster can ping one another, for example: ping Worker1
2. Configure passwordless SSH login
2.1. Allow Master to log in to every Worker without a password
2.1.1. Generate a key pair on the Master node by running the following command on Master:
ssh-keygen -t rsa -P ''
This produces the key pair id_rsa and id_rsa.pub, stored by default in the "/root/.ssh" directory.
2.1.2. On the Master node, append id_rsa.pub to the authorized keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
2.1.3. In the SSH configuration file "/etc/ssh/sshd_config", make sure the following lines are enabled:
RSAAuthentication yes                      # enable RSA authentication
PubkeyAuthentication yes                   # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys    # path to the public key file (the same file populated above)
2.1.4. Restart the SSH service so the settings take effect: service sshd restart
2.1.5. Verify that passwordless login to the local machine works: ssh Master
2.1.6. Copy the public key to all of the Worker machines with scp:
scp /root/.ssh/id_rsa.pub root@Worker1:/root/
scp /root/.ssh/id_rsa.pub root@Worker2:/root/
2.2. Configure the Worker1 machine
2.2.1. Create the ".ssh" directory under "/root/" (skip this if it already exists):
mkdir /root/.ssh
2.2.2. Append Master's public key to Worker1's "authorized_keys" file:
cat /root/id_rsa.pub >> /root/.ssh/authorized_keys
2.2.3. Edit "/etc/ssh/sshd_config" and restart sshd, following steps 2.1.3 and 2.1.4 of the Master setup above.
2.2.4. From Master, log in to Worker1 over SSH without a password:
ssh Worker1
2.2.5. Delete the "id_rsa.pub" file from the "/root/" directory:
rm /root/id_rsa.pub
2.2.6. Repeat the five steps above to give Worker2 the same configuration.
2.3. Allow every Worker to log in to Master without a password
2.3.1. On the Worker1 node, generate a key pair and append its own public key to the "authorized_keys" file by running:
ssh-keygen -t rsa -P ''
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
2.3.2. Copy Worker1's public key "id_rsa.pub" to the "/root/" directory on the Master node:
scp /root/.ssh/id_rsa.pub root@Master:/root/
2.3.3. On the Master node, append Worker1's public key to Master's "authorized_keys" file:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
2.3.4. On the Master node, delete the "id_rsa.pub" file:
rm /root/id_rsa.pub
2.3.5. Test passwordless login from Worker1 to Master: ssh Master
2.4. Follow the same steps to set up passwordless login between Worker2 and Master. After this, Master can log in to every Worker without a password, and every Worker can log in to Master without a password.
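As a quick sanity check (an optional extra beyond the steps above), confirm that every direction logs in without a password prompt:
# on Master
ssh Worker1 hostname
ssh Worker2 hostname
# on Worker1 and Worker2
ssh Master hostname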
3. Install Java and Scala on Master; simply extract the downloaded archives with tar -xzvf ...
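For example, assuming the archives sit in the current directory and you want the install locations used later in this guide (/usr/etc/jdk1.8.0_161 and /usr/etc/scala-2.12.4); the archive file names below are illustrative and should be adjusted to the versions you actually downloaded:
tar -xzvf jdk-8u161-linux-x64.tar.gz -C /usr/etc/
tar -xzvf scala-2.12.4.tgz -C /usr/etc/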
4. Install and configure Hadoop on Master
4.1. Configure hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Master:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/etc/hadoop-2.7.5/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/etc/hadoop-2.7.5/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/usr/etc/hadoop-2.7.5/hdfs/namesecondary</value>
  </property>
</configuration>
4.2. Configure yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>Master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>Master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>Master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>Master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>Master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>Master:8088</value>
  </property>
</configuration>
4.3. Configure mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
4.4. Configure hadoop-env.sh
export JAVA_HOME=/usr/etc/jdk1.8.0_161
export HADOOP_HOME=/usr/etc/hadoop-2.7.5
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
4.5. Configure core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/etc/hadoop-2.7.5/tmp</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>true</value>
  </property>
</configuration>
4.6. Configure the slaves file
Worker1
Worker2
5. Install and configure Spark on Master
5.1. Configure spark-env.sh
export JAVA_HOME=/usr/etc/jdk1.8.0_161
export SCALA_HOME=/usr/etc/scala-2.12.4
export HADOOP_HOME=/usr/etc/hadoop-2.7.5
export HADOOP_CONF_DIR=/usr/etc/hadoop-2.7.5/etc/hadoop
export SPARK_MASTER_IP=Master
export SPARK_WORKER_MEMORY=1g
export SPARK_EXECUTOR_MEMORY=1g
export SPARK_DRIVER_MEMORY=500m
export SPARK_WORKER_CORES=2
export SPARK_HOME=/usr/etc/spark-2.3.0-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/usr/etc/hadoop-2.7.5/bin/hadoop classpath)
5.2. Configure spark-defaults.conf
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://Master:9000/historyserverforSpark
spark.yarn.historyServer.address Master:18080
spark.history.fs.logDirectory    hdfs://Master:9000/historyserverforSpark
spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
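Note that Spark does not create the event-log directory itself; if hdfs://Master:9000/historyserverforSpark is missing, applications can fail to start and the history server has nothing to read. Once HDFS is running (step 9 below), create it once:
hdfs dfs -mkdir -p /historyserverforSpark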
5.3. Configure the slaves file
Worker1
Worker2
6. Configure the environment variables in /etc/profile on Master, then apply them with source /etc/profile
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
export JAVA_HOME=/usr/etc/jdk1.8.0_161
export JRE_HOME=/usr/etc/jdk1.8.0_161/jre
export SCALA_HOME=/usr/etc/scala-2.12.4
export HADOOP_HOME=/usr/etc/hadoop-2.7.5
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
export SPARK_HOME=/usr/etc/spark-2.3.0-bin-hadoop2.7
export HIVE_HOME=/usr/etc/apache-hive-2.1.1-bin
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$SCALA_HOME/lib:$HADOOP_HOME/lib
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$HIVE_HOME/bin:$SCALA_HOME/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
7. From Master, use scp to copy the Java, Scala, Hadoop, and Spark directories and /etc/profile to the Worker1 and Worker2 machines.
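A sketch of the copy commands, using the install locations from the configuration above (adjust the paths if yours differ); the same commands are repeated for Worker2:
ssh Worker1 "mkdir -p /usr/etc"
scp -r /usr/etc/jdk1.8.0_161 /usr/etc/scala-2.12.4 /usr/etc/hadoop-2.7.5 /usr/etc/spark-2.3.0-bin-hadoop2.7 root@Worker1:/usr/etc/
scp /etc/profile root@Worker1:/etc/profile
# on each Worker the new variables take effect at the next login, or immediately after running: source /etc/profile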
8. On the Master machine, run hadoop namenode -format to format the NameNode (this initializes HDFS metadata; in Hadoop 2.x the preferred form is hdfs namenode -format).
9. On the Master machine, run start-dfs.sh to start the HDFS service; the HDFS web UI can then be reached in a browser at Master:50070.
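To confirm the daemons came up (an optional check), run jps on each node; Master should list NameNode and SecondaryNameNode, and each Worker should list DataNode:
jps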
10. On the Master machine, change into Spark's sbin directory and run ./start-all.sh to start the Spark standalone cluster (running it as ./start-all.sh avoids accidentally invoking Hadoop's script of the same name on the PATH); the Spark web UI can be reached in a browser at Master:8080.
11. On the Master machine, run start-history-server.sh to start the Spark history server; it can be viewed in a browser at Master:18080.
12. Test running an application on the cluster
12.1. Submit an application with spark-submit:
./spark-submit --class org.apache.spark.examples.SparkPi --master spark://Master:7077 ../examples/jars/spark-examples_2.11-2.3.0.jar 100000
--class: the fully qualified class name (package plus class name); --master: the master URL of the Spark cluster; the .jar argument: the path to the application jar; 100000: the number of tasks (slices) for the SparkPi example.
12.2. Start spark-shell and run a word-count program:
sc.textFile("/README.md").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).map(pair=>(pair._2,pair._1)).sortByKey(false,1).map(pair=>(pair._2,pair._1)).saveAsTextFile("/resdir/wordcount")
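Because fs.defaultFS points at HDFS, the paths "/README.md" and "/resdir/wordcount" refer to HDFS, so the input file has to be uploaded first (the source path below is just an example; any text file will do), and the result can then be inspected on HDFS:
hdfs dfs -put /usr/etc/spark-2.3.0-bin-hadoop2.7/README.md /
hdfs dfs -cat /resdir/wordcount/part-*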