Debugging Spark on Windows


1. Background

(1) The usual Spark development workflow is to write the Spark code locally in IDEA or Eclipse, package it, deploy it to the driver node, and run it via spark-submit. However, whenever a runtime problem appears, such as a null pointer or a failed database connection, the code has to be modified, repackaged, and redeployed all over again. Is it possible to deploy only once?

(2) When a new Spark version is released and you want to try its new features right away, but there is no Spark cluster at hand, or the existing cluster is too old, how can you experiment with them?

2. Approach

(1) Instead of packaging for every test round, test and debug locally until everything passes, then package and deploy just once.

Spark supports a local master ("local mode"): when initializing SparkConf, simply set the master to "local[*]" or "local[1]".

(2) Building on local mode, a new Spark version can be debugged even without an existing Spark cluster.

Just add the new version as a dependency in the sbt or Maven build file.
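As an illustration, the sbt entry for a 2.4.1 release might look like the following (the version number and module list are examples; pick the artifacts your job actually needs):

```scala
// build.sbt (sketch): pull the new Spark release into the local build
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.1"
// add spark-sql, spark-streaming, etc. the same way if the job uses them
```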

(3) Set Spark's log level

Spark logs at the INFO level by default. If, say, I only want to see the few records returned by a take, that output gets buried in a pile of Spark log lines I then have to search through. So lower Spark's default log level via log4j.properties on the classpath, as follows:

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=ERROR
log4j.logger.org.spark_project=ERROR
log4j.logger.org.apache.spark=ERROR
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
log4j.logger.io.netty=ERROR
log4j.logger.org.apache.hadoop=FATAL


# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL

# Console (stdout) output
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p %c{1}:%L - %m%n
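Besides the properties file, Spark also exposes SparkContext.setLogLevel, which overrides the log level at runtime. A minimal sketch (it needs spark-core on the classpath, so it is not standalone):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object QuietLogs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[1]").setAppName("quiet"))
    // Accepted levels include ALL, DEBUG, INFO, WARN, ERROR, OFF
    sc.setLogLevel("ERROR")
    sc.parallelize(1 to 3).foreach(println)
    sc.stop()
  }
}
```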

(4) Test code

import org.apache.spark.{SparkConf, SparkContext}

object Test {

  def main(args: Array[String]): Unit = {
    // Local mode with a single worker thread; no cluster required
    val sc = new SparkContext(new SparkConf().setMaster("local[1]").setAppName("test"))
    println(sc.version) // print the Spark version actually on the classpath
    sc.parallelize(List(1, 2, 3, 4)).foreach(println) // trivial job to verify the setup
    sc.stop()
  }

}

  Run output:

log4j: Trying to find [log4j.xml] using context classloader sun.misc.Launcher$AppClassLoader@18b4aac2.
log4j: Trying to find [log4j.xml] using sun.misc.Launcher$AppClassLoader@18b4aac2 class loader.
log4j: Trying to find [log4j.xml] using ClassLoader.getSystemResource().
log4j: Trying to find [log4j.properties] using context classloader sun.misc.Launcher$AppClassLoader@18b4aac2.
log4j: Using URL [file:/E:/IntelliJWorkSpace/AIMind-backend/aimind_backend/pipeline-tools/target/classes/log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/E:/IntelliJWorkSpace/AIMind-backend/aimind_backend/pipeline-tools/target/classes/log4j.properties
log4j: Parsing for [root] with value=[INFO, console].
log4j: Level token is [INFO].
log4j: Category root set to INFO
log4j: Parsing appender named "console".
log4j: Parsing layout options for "console".
log4j: Setting property [conversionPattern] to [%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n].
log4j: End of parsing for "console".
log4j: Setting property [target] to [System.err].
log4j: Parsed "console" options.
log4j: Parsing for [org.spark_project.jetty] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category org.spark_project.jetty set to ERROR
log4j: Handling log4j.additivity.org.spark_project.jetty=[null]
log4j: Parsing for [org.spark_project] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category org.spark_project set to ERROR
log4j: Handling log4j.additivity.org.spark_project=[null]
log4j: Parsing for [org.apache.spark] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category org.apache.spark set to ERROR
log4j: Handling log4j.additivity.org.apache.spark=[null]
log4j: Parsing for [org.apache.hadoop.hive.metastore.RetryingHMSHandler] with value=[FATAL].
log4j: Level token is [FATAL].
log4j: Category org.apache.hadoop.hive.metastore.RetryingHMSHandler set to FATAL
log4j: Handling log4j.additivity.org.apache.hadoop.hive.metastore.RetryingHMSHandler=[null]
log4j: Parsing for [parquet] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category parquet set to ERROR
log4j: Handling log4j.additivity.parquet=[null]
log4j: Parsing for [io.netty] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category io.netty set to ERROR
log4j: Handling log4j.additivity.io.netty=[null]
log4j: Parsing for [org.apache.hadoop] with value=[FATAL].
log4j: Level token is [FATAL].
log4j: Category org.apache.hadoop set to FATAL
log4j: Handling log4j.additivity.org.apache.hadoop=[null]
log4j: Parsing for [org.apache.parquet] with value=[ERROR].
log4j: Level token is [ERROR].
log4j: Category org.apache.parquet set to ERROR
log4j: Handling log4j.additivity.org.apache.parquet=[null]
log4j: Finished configuring.
2.4.1
1
2
3
4

3. References

(1) https://www.jianshu.com/p/c4b6ed734e72

(2) https://blog.csdn.net/weixin_41122339/article/details/81141913

Following the method in the two links above to debug Spark on Windows: download winutils.exe, configure the environment variables, restart Windows, add the Spark dependencies, and so on.
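The winutils.exe setup from those links amounts to an environment-variable configuration; a sketch, assuming winutils.exe was placed under C:\hadoop\bin (the install path is an assumption):

```
:: Windows cmd (the C:\hadoop location is assumed, not mandated)
setx HADOOP_HOME "C:\hadoop"
:: %HADOOP_HOME%\bin, which contains winutils.exe, must be on PATH
:: note: setx truncates values longer than 1024 characters; edit PATH
:: through the system environment-variables dialog if in doubt
setx PATH "%PATH%;C:\hadoop\bin"
```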

4. Troubleshooting

(1) After configuring Spark's log level as in the first link above, INFO and DEBUG messages from Spark still showed up. Single-stepping through it turned up a "Class path contains multiple SLF4J bindings." warning; the fix was to locate the local artifact repository and delete the jars carrying the extra SLF4J bindings, keeping only the intended one.

 

