Data Lake - Apache Hudi


Hudi Features

  • As a data lake, handles unstructured data, log data, and structured data

  • Supports fast upsert/delete, with pluggable indexing

  • Table schema support

  • Small-file management and compaction

  • ACID semantics and multi-version guarantees, with rollback support

  • Savepoints as recovery points for user data

  • Supports multiple analysis engines: Spark, Hive, Presto

Building Hudi

git clone https://github.com/apache/hudi.git && cd hudi
mvn clean package -DskipTests

Hudi is tightly coupled to Spark.

Run spark-shell to test Hudi

bin/spark-shell \
  --packages org.apache.spark:spark-avro_2.11:2.4.5 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --jars /Users/macwei/IdeaProjects/hudi-master/packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-0.6.1-SNAPSHOT.jar

Writing data with Hudi

// spark-shell
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._

val tableName = "hudi_trips_cow"
val basePath = "file:///tmp/hudi_trips_cow"
val dataGen = new DataGenerator

// spark-shell
val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Overwrite).
  save(basePath)
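
Upserts go through the same write path. A minimal sketch of updating existing records, following the same 0.6.x quickstart utilities used above; note the Append save mode, which upserts into the existing table instead of recreating it:

// spark-shell: generate updates for some of the inserted trips and upsert them
val updates = convertToStringList(dataGen.generateUpdates(10))
val updateDf = spark.read.json(spark.sparkContext.parallelize(updates, 2))

updateDf.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").        // ts decides which record wins on a key collision
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Append).                                  // Append upserts; Overwrite would recreate the table
  save(basePath)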

Reading Hudi data:

val tripsSnapshotDF = spark.
  read.
  format("hudi").
  load(basePath + "/*/*/*/*")

tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")

spark.sql("select fare, begin_lon, begin_lat, ts from hudi_trips_snapshot where fare > 20.0").show()
  
+------------------+-------------------+-------------------+-------------+
|              fare|          begin_lon|          begin_lat|           ts|
+------------------+-------------------+-------------------+-------------+
| 64.27696295884016| 0.4923479652912024| 0.5731835407930634|1609771934700|
| 93.56018115236618|0.14285051259466197|0.21624150367601136|1610087553306|
| 33.92216483948643| 0.9694586417848392| 0.1856488085068272|1609982888463|
| 27.79478688582596| 0.6273212202489661|0.11488393157088261|1610187369637|
|34.158284716382845|0.46157858450465483| 0.4726905879569653|1610017361855|
|  43.4923811219014| 0.8779402295427752| 0.6100070562136587|1609795685223|
| 66.62084366450246|0.03844104444445928| 0.0750588760043035|1609923236735|
| 41.06290929046368| 0.8192868687714224|  0.651058505660742|1609838517703|
+------------------+-------------------+-------------------+-------------+


spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from  hudi_trips_snapshot").show()


+-------------------+--------------------+----------------------+---------+----------+------------------+
|_hoodie_commit_time|  _hoodie_record_key|_hoodie_partition_path|    rider|    driver|              fare|
+-------------------+--------------------+----------------------+---------+----------+------------------+
|     20210110225218|3c7ef0e7-86fb-444...|  americas/united_s...|rider-213|driver-213| 64.27696295884016|
|     20210110225218|222db9ca-018b-46e...|  americas/united_s...|rider-213|driver-213| 93.56018115236618|
|     20210110225218|3fc72d76-f903-4ca...|  americas/united_s...|rider-213|driver-213|19.179139106643607|
|     20210110225218|512b0741-e54d-426...|  americas/united_s...|rider-213|driver-213| 33.92216483948643|
|     20210110225218|ace81918-0e79-41a...|  americas/united_s...|rider-213|driver-213| 27.79478688582596|
|     20210110225218|c76f82a1-d964-4db...|  americas/brazil/s...|rider-213|driver-213|34.158284716382845|
|     20210110225218|73145bfc-bcb2-424...|  americas/brazil/s...|rider-213|driver-213|  43.4923811219014|
|     20210110225218|9e0b1d58-a1c4-47f...|  americas/brazil/s...|rider-213|driver-213| 66.62084366450246|
|     20210110225218|b8fccca1-9c28-444...|    asia/india/chennai|rider-213|driver-213|17.851135255091155|
|     20210110225218|6144be56-cef9-43c...|    asia/india/chennai|rider-213|driver-213| 41.06290929046368|
+-------------------+--------------------+----------------------+---------+----------+------------------+
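
The feature list above also mentions deletes. A sketch of deleting records, again following the 0.6.x quickstart pattern (generateDeletes comes from QuickstartUtils and OPERATION_OPT_KEY from DataSourceWriteOptions, both already imported; details may differ across versions):

// spark-shell: delete two of the records written above
// fetch keys for two existing records
val ds = spark.sql("select uuid, partitionpath from hudi_trips_snapshot").limit(2)
val deletes = dataGen.generateDeletes(ds.collectAsList())
val deleteDf = spark.read.json(spark.sparkContext.parallelize(deletes, 2))

deleteDf.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(OPERATION_OPT_KEY, "delete").           // issue deletes instead of upserts
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Append).                                  // deletes must append to the existing table
  save(basePath)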

Comparison

Typical tools for importing data into Hadoop: Maxwell, Canal, Flume, Sqoop.

Hudi is a general-purpose alternative:

  • Hudi supports downstream queries from Presto and Spark SQL

  • Hudi storage relies on HDFS

  • Hudi can act as a data source or a database, scaling to PB-level data

Concepts

Timeline: a timeline of all actions performed on the table, each identified by a timestamp (instant)

state: the current state of an instant on the timeline

Atomic write operations

compaction: background activity that reconciles differential data structures in Hudi, e.g. merging row-based delta logs into columnar base files

rollback: rolls back an unsuccessful write, removing any partially written files

savepoint: marks a point that data can be restored to

Every operation passes through the following states (these can be observed on the table's timeline, as sketched after this list):

  • Requested: the action has been scheduled but not yet started
  • Inflight: the action is currently being performed
  • Completed: the action has finished
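
Each state is materialized as a file under the table's .hoodie directory. A quick way to see this for the quickstart table written above (plain JVM file listing, not a Hudi API; exact file naming varies by action type and version):

// spark-shell: list timeline files for the table at /tmp/hudi_trips_cow
// completed commits end in ".commit"; pending states carry
// ".requested" / ".inflight" suffixes (naming may vary by version)
import java.io.File

val timelineDir = new File("/tmp/hudi_trips_cow/.hoodie")
timelineDir.listFiles().map(_.getName)
  .filter(n => n.contains("commit") || n.contains("requested") || n.contains("inflight"))
  .sorted
  .foreach(println)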

Hudi provides two table types:

  • CopyOnWrite: suited to full-volume data; purely columnar storage; each write performs a synchronous merge that rewrites the affected files
  • MergeOnRead: suited to incremental data; combines columnar (Parquet) base files with row-based (Avro) storage; updates are recorded in delta (log) files, and compaction, run synchronously or asynchronously, produces new file versions, giving lower write latency (the table type is chosen at write time; see the sketch after this list)
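
The table type is selected with a write option. A minimal sketch, assuming the 0.6.x DataSourceWriteOptions constants (TABLE_TYPE_OPT_KEY, MOR_TABLE_TYPE_OPT_VAL) already imported above, and reusing the quickstart DataFrame df; the path and table name here are hypothetical:

// spark-shell: write the same quickstart data as a MergeOnRead table
val morPath = "file:///tmp/hudi_trips_mor"             // hypothetical path for the MOR copy

df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(TABLE_TYPE_OPT_KEY, MOR_TABLE_TYPE_OPT_VAL).  // default is COPY_ON_WRITE
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, "hudi_trips_mor").
  mode(Overwrite).
  save(morPath)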

Hudi query types:

  • Snapshot query: queries the latest snapshot of the table; for a MergeOnRead table it merges the latest base and delta data on the fly, while for a CopyOnWrite table it reads the Parquet files directly; snapshot tables also accept upsert and delete operations
  • Incremental query: sees only data written to the table after a given commit (see the sketch after this list)
  • Read-optimized query: sees the table as of a given commit/compaction, reading only the columnar base files
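
An incremental-query sketch against the table written above, following the 0.6.x quickstart; the constants come from DataSourceReadOptions, already imported, and it assumes at least two commits exist (e.g. the insert plus the upsert above):

// spark-shell: read only records written after a chosen commit time
val commits = spark.sql("select distinct(_hoodie_commit_time) as commitTime " +
  "from hudi_trips_snapshot order by commitTime").map(k => k.getString(0)).take(50)
val beginTime = commits(commits.length - 2)       // pick the second-to-last commit

val tripsIncrementalDF = spark.read.format("hudi").
  option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
  option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).   // lower bound on commit time
  load(basePath)

tripsIncrementalDF.createOrReplaceTempView("hudi_trips_incremental")
spark.sql("select _hoodie_commit_time, fare, ts from hudi_trips_incremental where fare > 20.0").show()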
