【Spark】Survey of the Best Approach for Connecting to ClickHouse


Spark JDBC Approach

Query pushdown investigation:

Spark JDBC connecting to MySQL:

context.sparkSession.read.format("jdbc").options(config.toSparkJDBCMap).load()
  .filter("phone = '13725848961'")                             // SQL-expression filter: eligible for pushdown
  .filter(row => row.getAs[String]("phone").startsWith("123")) // lambda filter: evaluated inside Spark
  .selectExpr("title")

SQL log generated on the MySQL side:

2021-03-11T03:34:52.019572Z    234927 Connect    root@180.167.157.90 on aut_house using SSL/TLS
2021-03-11T03:34:52.030608Z    234927 Query    /* mysql-connector-java-8.0.17 (Revision: 16a712ddb3f826a1933ab42b0039f7fb9eebc6ec) */SELECT  @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@collation_server AS collation_server, @@collation_connection AS collation_connection, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_write_timeout AS net_write_timeout, @@performance_schema AS performance_schema, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS transaction_isolation, @@wait_timeout AS wait_timeout
2021-03-11T03:34:52.047211Z    234927 Query    SET NAMES utf8mb4
2021-03-11T03:34:52.057985Z    234927 Query    SET character_set_results = NULL
2021-03-11T03:34:52.068195Z    234927 Query    SET autocommit=1
2021-03-11T03:34:52.079011Z    234927 Query    SELECT `title` FROM house WHERE (`phone` IS NOT NULL) AND (`phone` = '13725848961')

Spark's physical plan:

== Physical Plan ==
*(1) Filter <function1>.apply
+- *(1) Scan JDBCRelation(house) [numPartitions=1] [title#1,phone#4] PushedFilters: [*IsNotNull(phone), *EqualTo(phone,13725848961)], ReadSchema: struct<title:string,phone:string>
root
 |-- title: string (nullable = true)
 |-- phone: string (nullable = true)

 

Preliminary conclusion: Spark JDBC does support query pushdown. The SQL-expression filter and the column selection (selectExpr) are pushed down to the database, while the lambda filter is not; it remains in the plan as Filter <function1>.apply and is evaluated inside Spark.
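
To see exactly which predicates were accepted, explain() prints a PushedFilters list on the scan node. A minimal, self-contained sketch (the JDBC URL, credentials, and connection details are placeholders, not the original config; spark is an active SparkSession):

// Sketch: inspect which predicates Spark JDBC pushes down.
// URL, user, and password below are placeholders.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://127.0.0.1:3306/aut_house")
  .option("dbtable", "house")
  .option("user", "root")
  .option("password", "***")
  .load()

df.filter("phone = '13725848961'")                              // shows up under PushedFilters
  .filter(row => row.getAs[String]("phone").startsWith("123"))  // stays as a Spark-side Filter node
  .selectExpr("title")
  .explain()                                                    // prints a plan like the one above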

  • ClickHouse:
== Physical Plan ==
*(1) Project [service#0]
+- *(1) Scan JDBCRelation(tbtest) [numPartitions=1] [service#0] PushedFilters: [*IsNotNull(metric), *EqualTo(metric,CPU_Idle_Time_alan)], ReadSchema: struct<service:string>

Experiments show that Spark JDBC also pushes queries down when connecting to ClickHouse.
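
For reference, a minimal sketch of such a connection, assuming the clickhouse-jdbc driver is on the classpath (host and database are placeholders; older driver releases register ru.yandex.clickhouse.ClickHouseDriver, current ones com.clickhouse.jdbc.ClickHouseDriver):

// Sketch: Spark JDBC read against ClickHouse over the HTTP port (8123).
// Host and database are placeholders.
val ch = spark.read
  .format("jdbc")
  .option("driver", "ru.yandex.clickhouse.ClickHouseDriver")
  .option("url", "jdbc:clickhouse://ch-host:8123/default")
  .option("dbtable", "tbtest")
  .load()

ch.filter("metric = 'CPU_Idle_Time_alan'")
  .select("service")
  .explain()  // the scan node should list EqualTo(metric,...) under PushedFilters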

Future optimization direction:

A ClickHouse query goes through a distributed view (Distributed table) that pulls data from every node and then merges the results in one place. That role is quite similar to Spark's driver side. So a future direction is to write a custom Spark source in which each partition pulls data directly from one node and the merge happens on the Spark side, skipping the distributed-view step entirely.
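
A minimal sketch of that idea, assuming each shard's local (non-distributed) table is reachable over JDBC; the shard host list and the local table name tbtest_local are assumptions for illustration:

// Sketch: one JDBC read per ClickHouse shard against its local table,
// unioned in Spark so the merge no longer goes through the Distributed table.
// Shard hosts and the local table name are hypothetical.
val shardHosts = Seq("ch-node1:8123", "ch-node2:8123", "ch-node3:8123")

val perShard = shardHosts.map { host =>
  spark.read
    .format("jdbc")
    .option("url", s"jdbc:clickhouse://$host/default")
    .option("dbtable", "tbtest_local")  // local table behind the Distributed one
    .load()
}

// Each read is a single Spark partition by default, so the union runs
// one task per shard; filters are still pushed down to each node.
val merged = perShard.reduce(_ union _)
merged.filter("metric = 'CPU_Idle_Time_alan'").select("service").show()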

