Spark: accessing an RDD by converting it to a Dataset


1) Create a case class

scala> case class People(name:String,age:Long)
defined class People

2) Create a Dataset

scala> val caseClassDS = Seq(People("Andy",32)).toDS()
caseClassDS: org.apache.spark.sql.Dataset[People] = [name: string, age: bigint]

This way, People has not only a type but also a schema, which makes it more convenient to work with.
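The title talks about converting an RDD, and the same typed Dataset can also be built from an RDD. A minimal sketch, assuming the default spark session in spark-shell (peopleRDD and the sample rows are just illustrations; spark.implicits._ is already imported in the shell, which is what provides .toDS()):

scala> val peopleRDD = spark.sparkContext.makeRDD(Seq(People("Andy", 32), People("Justin", 19)))  // hypothetical sample data
scala> val peopleDS = peopleRDD.toDS()  // the implicit Encoder for the People case class drives the conversion
scala> peopleDS.show()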

3) Type caseClassDS. and press Tab. You will see many available methods; you can call show, limit, and so on.

scala> caseClassDS.
agg describe intersect reduce toDF
alias distinct isLocal registerTempTable toJSON
apply drop isStreaming repartition toJavaRDD
as dropDuplicates javaRDD rollup toLocalIterator
cache dtypes join sample toString
checkpoint except joinWith schema transform
coalesce explain limit select union
col explode map selectExpr unionAll
collect filter mapPartitions show unpersist
collectAsList first na sort where
columns flatMap orderBy sortWithinPartitions withColumn
count foreach persist sparkSession withColumnRenamed
createGlobalTempView foreachPartition printSchema sqlContext withWatermark
createOrReplaceTempView groupBy queryExecution stat write
createTempView groupByKey randomSplit storageLevel writeStream
crossJoin head randomSplitAsList take
cube inputFiles rdd takeAsList

These are much the same as the methods on a DataFrame.
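For example, because the value is typed as Dataset[People], the functional operators take the case class directly. A small sketch using methods from the list above:

scala> caseClassDS.filter(p => p.age > 30).show()  // typed filter over People fields
scala> caseClassDS.map(p => p.name).show()         // returns a Dataset[String]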

4) Let's try createGlobalTempView

scala> caseClassDS.createGlobalTempView("People")

5) Now we can query it with spark.sql; since this is a global temp view, we reference it through the global_temp database, and a SELECT statement returns exactly what we want.

scala> spark.sql("select * from global_temp.People").show()
+----+---+
|name|age|
+----+---+
|Andy| 32|
+----+---+
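A global temp view is registered in the global_temp database and, unlike a regular temp view, stays visible to other SparkSessions in the same application. A quick way to verify this, assuming nothing beyond the view created above:

scala> spark.newSession().sql("select * from global_temp.People").show()  // same rows from a fresh session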
