(1) Add the dependencies to pom.xml
Note that the dependency versions must match the versions of the components shipped with CDH. Below is the pom.xml for CDH 6.3.2:
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>spark_lenovo</groupId> <artifactId>spark</artifactId> <version>1.0-SNAPSHOT</version> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <spark.scala.version>2.11</spark.scala.version> <spark.version>2.4.0</spark.version> <hadoop.version>3.0.0-cdh6.3.2</hadoop.version> <hbase.version>2.1.0-cdh6.3.2</hbase.version> <jar.scope>compile</jar.scope> </properties> <dependencies> <!--spark--> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-core_${spark.scala.version}</artifactId> <version>${spark.version}</version> <scope>${jar.scope}</scope> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-sql_${spark.scala.version}</artifactId> <version>${spark.version}</version> <scope>${jar.scope}</scope> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-hive_${spark.scala.version}</artifactId> <version>${spark.version}</version> <scope>${jar.scope}</scope> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-streaming_${spark.scala.version}</artifactId> <version>${spark.version}</version> <scope>${jar.scope}</scope> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-mllib_${spark.scala.version}</artifactId> <version>${spark.version}</version> <scope>${jar.scope}</scope> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-client</artifactId> <version>${hadoop.version}</version> <scope>${jar.scope}</scope> </dependency> <!--mysql jdbc驅動 --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>6.0.5</version> </dependency> <!-- <dependency>--> <!-- <groupId>mysql</groupId>--> <!-- <artifactId>mysql-connector-java</artifactId>--> <!-- <version>5.1.39</version>--> <!-- </dependency>--> <!-- <dependency>--> <!-- <groupId>junit</groupId>--> <!-- <artifactId>junit</artifactId>--> <!-- <version>4.12</version>--> <!-- </dependency>--> <!--hbase--> <dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase</artifactId> <version>${hbase.version}</version> </dependency> <dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase-server</artifactId> <version>${hbase.version}</version> </dependency> <dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase-client</artifactId> <version>${hbase.version}</version> </dependency> <dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase-common</artifactId> <version>${hbase.version}</version> </dependency> <dependency> <groupId>org.apache.hbase</groupId> <artifactId>hbase-mapreduce</artifactId> <version>${hbase.version}</version> </dependency> </dependencies> <build> <plugins> <!-- 編譯scala的插件 --> <plugin> <groupId>net.alchim31.maven</groupId> <artifactId>scala-maven-plugin</artifactId> <version>3.2.2</version> <executions> <execution> <goals> <goal>compile</goal> </goals> </execution> </executions> </plugin> <!-- 編譯java的插件 --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.5.1</version> <configuration> <source>1.8</source> <target>1.8</target> </configuration> 
</plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>2.4.1</version> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <filters> <filter> <artifact>*:*</artifact> <excludes> <exclude>META-INF/*.SF</exclude> <exclude>META-INF/*.DSA</exclude> <exclude>META-INF/*.RSA</exclude> </excludes> </filter> </filters> <transformers> <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer"> <resource>META-INF/spring.handlers</resource> </transformer> <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer"> <resource>META-INF/spring.schemas</resource> </transformer> <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> <mainClass>${groupId}.com.bigdata.CellPhoneToHbase</mainClass> </transformer> </transformers> <createDependencyReducedPom>false</createDependencyReducedPom> </configuration> </execution> </executions> </plugin> </plugins> </build> <repositories> <!-- 由於hadoop版本是cdh的,所以需要添加cdh倉庫--> <repository> <id>cloudera</id> <name>cloudera</name> <url>https://repository.cloudera.com/artifactory/cloudera-repos</url> </repository> </repositories> </project>
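For orientation, the kind of driver class this pom compiles and shades might look like the minimal sketch below. The real class behind the pom's mainClass (${groupId}.com.bigdata.CellPhoneToHbase) is not shown in this article, so the object name and the word-count body here are placeholders only:

package com.bigdata

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical stand-in for the real driver class; the word count is just
// a self-contained body so the example compiles and runs.
object WordCountDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCountDemo")
    val sc = new SparkContext(conf)
    sc.textFile(args(0))              // input path passed as the first argument
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)
    sc.stop()
  }
}

One design note on the pom: jar.scope is set to compile, so the Spark and Hadoop jars themselves end up inside the shaded jar. Switching it to provided would keep the fat jar much smaller, since the cluster already ships those libraries.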
(2) Packaging
A. Compile
When configuring the artifact, choosing "extract to the target JAR" packs all of the dependency jars into the output as well; choosing "copy to the output…" packages only the files you wrote yourself.
If you choose "extract to the target JAR", you will see the following:
Otherwise, you will see the following:
B. Build
In the dialog that pops up, click Build.
C. Inspect
Before packaging it looks like this:
After packaging it looks like this:
Open the jar with an archive tool to see its contents:
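If no GUI archive tool is at hand, the JDK's jar tool can list the entries just as well (assuming the artifact is named spark.jar, as in the submit command below):

jar tf spark.jar | head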
(3) Run the jar
Upload the jar to one of the Linux Spark node servers.
Run the following command:
spark-submit --class lenovo.didi202009demo --master local /data/lrxtest/spark.jar
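Here --class names the entry class inside the jar, and --master local runs the driver and executors in a single local JVM, which is handy for a first smoke test. To submit to the cluster instead, change the master; for example, on a CDH cluster running YARN (the usual setup, an assumption here) something like this should work:

spark-submit --class lenovo.didi202009demo --master yarn --deploy-mode cluster /data/lrxtest/spark.jar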
(4) Q&A
A. org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://Master11:9000/user
While running a Spark command that reads a local file, Spark complained that the file does not exist on HDFS…
Reading a file falls into two cases:
(First of all, make sure the file path is written correctly!)
1. If the error occurs while reading a file on HDFS, check whether the file actually exists there:
hdfs dfs -ls /   (append the path of the file you want to read after the /)
If it is not there, create it, or upload the local file to HDFS.
Upload a local file:
hdfs dfs -put /usr/local/spark/test.txt /user/
Create a directory:
hdfs dfs -mkdir -p /user/test/
2. If you are reading a local file, look closely at the command: the local file path must be prefixed with file: (see the sketch after these commands).
My mistake was exactly that I had left out the file: prefix.
The wrong command:
scala> sc.textFile("/usr/local/spark/test.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
The correct command:
sc.textFile("file:/usr/local/spark/test.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect