Submitting jobs to Spark from an interactive PySpark session: fixing java.sql.SQLException: No suitable driver

Spinning up a local interactive Spark session in Jupyter and using Impala to fetch data for Spark failed:

from pyspark.sql import SparkSession

jdbc_url = "jdbc:impala://data1.hundun-new.sa:21050/rawdata;UseNativeQuery=1"

spark = SparkSession.builder \
    .appName("sa-test") \
    .master("local") \
    .getOrCreate()

# properties = {
#     "driver": "com.cloudera.ImpalaJDBC41",
#     "AuthMech": "1",
# #     "KrbRealm": "EXAMPLE.COM",
# #     "KrbHostFQDN": "impala.example.com",
#     "KrbServiceName": "impala"
# }

# df = spark.read.jdbc(url=jdbc_url, table="(/*SA(default)*/ SELECT date, event, count(*) AS c FROM events WHERE date=CURRENT_DATE() GROUP BY 1,2) a")
# Note: the parenthesized subquery needs a GROUP BY for the aggregate and an
# alias ("a") so Spark can wrap it in its own SELECT.
df = spark.read.jdbc(url=jdbc_url, table="(/*SA(production)*/ SELECT date, event, count(*) AS c FROM events WHERE date=CURRENT_DATE() GROUP BY 1,2) a")
df.select(df['date'], df['event'], df['c'] * 10000).show()


Py4JJavaError: An error occurred while calling o32.jdbc.
: java.sql.SQLException: No suitable driver
    at java.sql.DriverManager.getDriver(DriverManager.java:315)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:104)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.s

The error is plain to see: No suitable driver. java.sql.DriverManager could not find any registered JDBC driver that claims the jdbc:impala:// URL, because the Impala JDBC driver jar is not on the classpath. We need to add it before the read can succeed.

First, download the Impala JDBC driver:

http://repo.odysseusinc.com/artifactory/community-libs-release-local/com/cloudera/ImpalaJDBC41/2.6.3/ImpalaJDBC41-2.6.3.jar

Then, when building the SparkSession, point spark.driver.extraClassPath at the driver jar via config():

from pyspark.sql import SparkSession

jdbc_url = "jdbc:impala://data1.hundun-new.sa:21050/rawdata;UseNativeQuery=1"

spark = SparkSession.builder \
    .appName("sa-test") \
    .master("local") \
    .config('spark.driver.extraClassPath', '/usr/share/java/ImpalaJDBC41-2.6.3.jar') \
    .getOrCreate()
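
If executors also need the driver (on a real cluster rather than local mode), it can additionally be shipped with spark.jars, and the driver class can be named explicitly in the read so we no longer rely on DriverManager's URL-based lookup. A minimal sketch; the class name com.cloudera.impala.jdbc41.Driver is an assumption based on the Cloudera driver's usual packaging, so verify it against the jar you downloaded:

from pyspark.sql import SparkSession

jdbc_url = "jdbc:impala://data1.hundun-new.sa:21050/rawdata;UseNativeQuery=1"

spark = SparkSession.builder \
    .appName("sa-test") \
    .master("local") \
    .config('spark.driver.extraClassPath', '/usr/share/java/ImpalaJDBC41-2.6.3.jar') \
    .config('spark.jars', '/usr/share/java/ImpalaJDBC41-2.6.3.jar') \
    .getOrCreate()

# "driver" pins the class instead of letting DriverManager guess from the URL.
# The class name is an assumption; check META-INF/services/java.sql.Driver
# inside the jar for the real value.
df = spark.read.jdbc(
    url=jdbc_url,
    table="(SELECT date, event, count(*) AS c FROM events WHERE date=CURRENT_DATE() GROUP BY 1,2) a",
    properties={"driver": "com.cloudera.impala.jdbc41.Driver"},
)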

 

On Stack Overflow I also found another approach, quoted here from the question's EDIT:

EDIT

The answers from How to load jar dependencies in IPython Notebook are already listed in the link I shared myself, and do not work for me. I already tried to configure the environment variable from the notebook:

import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--driver-class-path /path/to/postgresql.jar --jars /path/to/postgresql.jar'

There's nothing wrong with the file path or the file itself since it works fine when I specify it and run the pyspark-shell.
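
For completeness, here is a sketch of how that environment-variable route is normally wired up. A common reason it silently fails is ordering: PYSPARK_SUBMIT_ARGS is only read when the JVM is launched, so it must be set before the first SparkSession is created in the kernel, and when set from a plain Python/Jupyter process the value generally has to end with the pyspark-shell token. The jar path below reuses the location from the example above:

import os

# Must run before any SparkSession/SparkContext exists in this kernel;
# the trailing "pyspark-shell" token is required when launching PySpark
# from a plain Python process such as a Jupyter kernel.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--driver-class-path /usr/share/java/ImpalaJDBC41-2.6.3.jar '
    '--jars /usr/share/java/ImpalaJDBC41-2.6.3.jar '
    'pyspark-shell'
)

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("sa-test") \
    .master("local") \
    .getOrCreate()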

 

 

Reference:

Spark Configuration: https://spark.apache.org/docs/latest/configuration.html

How to specify driver class path when using pyspark within a jupyter notebook? https://stackoverflow.com/questions/51772350/how-to-specify-driver-class-path-when-using-pyspark-within-a-jupyter-notebook

