Hive uses MapReduce as its execution engine by default (Hive on MR). Hive can also use Tez or Spark as its execution engine, known as Hive on Tez and Hive on Spark respectively. Because MapReduce writes all intermediate results to disk while Spark keeps them in memory, Spark is generally much faster than MapReduce. By default, Hive on Spark supports Spark in YARN mode.
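Once a Hive-compatible Spark build is in place, switching the engine is just a Hive configuration property. A minimal sketch, assuming Hive on Spark has already been wired up (the table name below is a placeholder):

# Switch the execution engine for the current Hive session only;
# hive.execution.engine accepts mr, tez, or spark.
# demo_table is a placeholder table name.
hive -e "set hive.execution.engine=spark; select count(*) from demo_table;"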
The environment deployed on the cluster I set up earlier is:
hadoop2.7.3
hive2.3.4
scala2.12.8
kafka2.12-2.10
jdk1.8_172
hbase1.3.3
sqoop1.4.7
zookeeper3.4.12
#java
export JAVA_HOME=/usr/java/jdk1.8.0_172-amd64
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
#hbase
export HBASE_HOME=/home/workspace/hbase-1.2.9
export PATH=$HBASE_HOME/bin:$PATH
#hadoop
export HADOOP_HOME=/home/workspace/hadoop-2.7.3
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
###enable hadoop native library
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
#hive
export HIVE_HOME=/home/workspace/software/apache-hive-2.3.4
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=.:$HIVE_HOME/bin:$PATH
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
export HCAT_HOME=$HIVE_HOME/hcatalog
export PATH=$HCAT_HOME/bin:$PATH
#Sqoop
export SQOOP_HOME=/home/workspace/software/sqoop-1.4.7.bin__hadoop-2.6.0
export PATH=$PATH:$SQOOP_HOME/bin
# zookeeper
export ZK_HOME=/home/workspace/software/zookeeper-3.4.12
export PATH=$ZK_HOME/bin:$PATH
#maven
export MAVEN_HOME=/home/workspace/software/apache-maven-3.6.0
export M2_HOME=$MAVEN_HOME
export PATH=$PATH:$MAVEN_HOME/bin
#scala
export SCALA_HOME=/home/workspace/software/scala-2.11.12
export PATH=$SCALA_HOME/bin:$PATH
#kafka
export KAFKA_HOME=/home/workspace/software/kafka_2.11-2.1.0
export PATH=$KAFKA_HOME/bin:$PATH
#kylin
export KYLIN_HOME=/home/workspace/software/apache-kylin-2.6.0
export KYLIN_CONF_HOME=$KYLIN_HOME/conf
export PATH=:$PATH:$KYLIN_HOME/bin:$CATALINE_HOME/bin
export tomcat_root=$KYLIN_HOME/tomcat   # variable name is lowercase
export hive_dependency=$HIVE_HOME/conf:$HIVE_HOME/lib/*:$HCAT_HOME/share/hcatalog/hive-hcatalog-core-2.3.4.jar   # variable name is lowercase
#spark
export SPARK_HOME=/home/workspace/software/spark-2.0.0
export PATH=:$PATH:$SPARK_HOME/bin
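If these exports live in /etc/profile (adjust to whichever profile file you actually use; this is an assumption), reload it so they take effect in the current shell, and do a quick sanity check:

source /etc/profile
echo $SPARK_HOME    # should print /home/workspace/software/spark-2.0.0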
Now I want to deploy Spark on top of this. Since the Spark version supported by Hive 2.3.4 is 2.0.0, I decided to deploy Spark 2.0.0. However, Spark 2.0.0 is built against Scala 2.11.8 by default, so I decided to compile the Spark source myself against Scala 2.12.8 and then deploy it. This article assumes all of the components above are already installed and only covers how to compile the Spark source; if you are unsure how to deploy the other components, see my related posts.
1. Download the Spark 2.0.0 source code
cd /home/workspace/software
wget http://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0.tgz
tar -xzf spark-2.0.0.tgz
cd spark-2.0.0
2. Modify pom.xml to build with Scala 2.12.8
vim pom.xml
Change the Scala dependency version to 2.12.8 (it was 2.11.8):
<scala.version>2.12.8</scala.version>
<scala.binary.version>2.12</scala.binary.version>
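Before starting a long build, it is worth confirming the edit took effect; a quick check from the source root (paths as used in this article):

cd /home/workspace/software/spark-2.0.0
grep -n "<scala.version>\|<scala.binary.version>" pom.xml | head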
3. Modify make-distribution.sh
cd /home/workspace/software/spark-2.0.0/dev
vim make-distribution.sh
Set VERSION, SCALA_VERSION, SPARK_HADOOP_VERSION, and SPARK_HIVE in it to the corresponding values.

SPARK_HIVE=1 means Hive support is packaged in; any other value means it is not.
This step is not strictly necessary; if the values are not hard-coded, the script will determine them itself through Maven (pulling from the Maven repositories as needed). Setting them directly just saves build time.
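For reference, a sketch of what the hard-coded values could look like for the versions used in this article (the script otherwise derives them via mvn help:evaluate, which is where the time saving comes from):

VERSION=2.0.0
SCALA_VERSION=2.12
SPARK_HADOOP_VERSION=2.7.3
SPARK_HIVE=1    # 1 = bundle Hive support; any other value = do not bundle it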
4. Download Zinc 0.3.9
Zinc is a long-running server version of SBT’s incremental compiler. When run locally as a background process, it speeds up builds of Scala-based projects like Spark. Developers who regularly recompile Spark with Maven will be the most interested in Zinc. The project site gives instructions for building and running zinc; OS X users can install it using brew install zinc.
If using the build/mvn package, zinc will automatically be downloaded and leveraged for all builds. This process will auto-start after the first time build/mvn is called and bind to port 3030 unless the ZINC_PORT environment variable is set. The zinc process can subsequently be shut down at any time by running build/zinc-<version>/bin/zinc -shutdown and will automatically restart whenever build/mvn is called.
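In other words, the port and lifecycle can be controlled like this (the port number below is arbitrary):

export ZINC_PORT=3031                    # optional: use a port other than the default 3030
./build/zinc-0.3.9/bin/zinc -shutdown    # stop the background zinc process at any time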
wget https://downloads.typesafe.com/zinc/0.3.9/zinc-0.3.9.tgz   # download zinc-0.3.9.tgz; if it is not downloaded in advance, the build will download it automatically
Extract zinc-0.3.9.tgz into /home/workspace/software/spark-2.0.0/build:
tar -xzvf zinc-0.3.9.tgz -C /home/workspace/software/spark-2.0.0/build
5. Download the Scala 2.12.8 binaries
wget https://downloads.lightbend.com/scala/2.12.8/scala-2.12.8.tgz   # download scala-2.12.8.tgz (the Scala compiler library); if it is not downloaded in advance, the build will download it automatically
tar -xzvf scala-2.12.8.tgz -C /home/workspace/software/spark-2.0.0/build
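After both archives are unpacked, the build directory should contain the two extracted folders (directory names assumed from the archive names) alongside the bundled build scripts:

ls /home/workspace/software/spark-2.0.0/build
# expected to include: scala-2.12.8/  zinc-0.3.9/  (next to the mvn and sbt wrapper scripts)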

6. Build Spark
cd /home/workspace/software/spark-2.0.0/dev
./make-distribution.sh --name "hadoop2.7.3-with-hive" --tgz -Dhadoop.version=2.7.3 -Dscala-2.12 -Phadoop-2.7 -Pyarn -Phive -Phive-thriftserver -Pparquet-provided -DskipTests clean package
# or
#./make-distribution.sh --name "hadoop2.7-with-hive" --tgz "-Pyarn,-Phive,-Phive-thriftserver,hadoop-provided,hadoop-2.7,parquet-provided,-Dscala-2.12,-Dhadoop.version=2.7.3,-DskipTests" clean package
#### Parameter notes:
# -DskipTests: do not run the test cases, but still compile the test classes into target/test-classes.
# -Dhadoop.version and -Phadoop-x.x: the Hadoop version; without them the default Hadoop version from the pom is used.
# -Pyarn: enable Hadoop YARN support; without it YARN is not supported.
# -Phive and -Phive-thriftserver: enable Hive support (including Hive JDBC) in Spark SQL; without them Hive is not supported.
# --with-tachyon: enable support for the Tachyon in-memory file system; without it Tachyon is not supported.
# --tgz: generate spark-$VERSION-bin.tgz in the root directory; without it no tgz is produced, only the /dist directory.
# --name: combined with --tgz, produces a spark-$VERSION-bin-$NAME.tgz package; without it NAME defaults to the Hadoop version.
# -Phadoop-provided: do not bundle the other Hadoop-ecosystem libraries. When deploying in YARN mode, omitting this can leave some files present in several different versions; with it, Hadoop-ecosystem projects such as ZooKeeper and Hadoop itself are not packaged.
Alternatively, build with Maven:
cd /home/workspace/software/spark-2.0.0
export MAVEN_OPTS="-Xmx6g -XX:MaxPermSize=2g -XX:ReservedCodeCacheSize=2g"   # not needed on JDK 1.8, but required on JDK versions below 1.8
./build/mvn -Dscala-2.12 -Phadoop-provided -Pparquet-provided -Phadoop-2.7 -Dhadoop.version=2.7.3 -Pyarn -Phive -Phive-thriftserver -DskipTests clean package
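Unlike make-distribution.sh, a plain Maven build does not produce a tgz; the jars land under the assembly module. A rough way to sanity-check the result (the scala-* directory name depends on the scala.binary.version set earlier):

ls assembly/target/scala-*/jars | head    # core Spark jars should be listed here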
The screenshots below are from a build run with ./make-distribution.sh.

The build takes roughly half an hour or more.

The compiled binary package is generated in the /home/workspace/software/spark-2.0.0 root directory.
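Given the --name and --tgz options above, the package should be named something like spark-2.0.0-bin-hadoop2.7.3-with-hive.tgz; a sketch of unpacking it for deployment (the target directory is just an example):

cd /home/workspace/software/spark-2.0.0
ls spark-*-bin-*.tgz                                                              # the generated distribution package
tar -xzf spark-2.0.0-bin-hadoop2.7.3-with-hive.tgz -C /home/workspace/software/   # example install location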

Note: if an error like the following appears during the build:
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) on project spark-core_2.11: Execution scala-test-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.2:testCompile failed. CompileFailed -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-core_2.11
then first run:
./change-scala-version.sh 2.11
ps -ef | grep zinc
kill -9 {zinc process id}
and then rebuild.
Build complete!
