Accessing data in Spark by converting an RDD into a typed Dataset


1) Create a case class

scala> case class People(name:String,age:Long)
defined class People
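
The shell pre-imports spark.implicits._, which is what makes toDS() available below; in a standalone application you have to create the SparkSession and do that import yourself. A minimal sketch of the same setup outside the shell (the object and app names are illustrative):

import org.apache.spark.sql.SparkSession

object DatasetDemo {
  // Define the case class at top level (not inside main) so that
  // Spark can derive an Encoder[People] for it.
  case class People(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DatasetDemo")
      .master("local[*]") // local mode, for the sketch only
      .getOrCreate()

    // In spark-shell this import happens automatically.
    import spark.implicits._

    val caseClassDS = Seq(People("Andy", 32)).toDS()
    caseClassDS.show()

    spark.stop()
  }
}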

2) Create a Dataset

scala> val caseClassDS = Seq(People("Andy",32)).toDS()
caseClassDS: org.apache.spark.sql.Dataset[People] = [name: string, age: bigint]

This way People not only has a type but also a schema (name: string, age: bigint), which makes it much more convenient to work with.
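Since the point is converting an RDD, here is a minimal sketch of the same conversion starting from an RDD rather than a Seq (the variable name and sample data are illustrative):

scala> val peopleDS = spark.sparkContext
     |   .makeRDD(List(("Andy", 32L)))
     |   .map { case (name, age) => People(name, age) }
     |   .toDS()
peopleDS: org.apache.spark.sql.Dataset[People] = [name: string, age: bigint]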

3) Type caseClassDS. and press Tab: you will find many methods available here, including show, limit, and so on.

scala> caseClassDS.
agg describe intersect reduce toDF
alias distinct isLocal registerTempTable toJSON
apply drop isStreaming repartition toJavaRDD
as dropDuplicates javaRDD rollup toLocalIterator
cache dtypes join sample toString
checkpoint except joinWith schema transform
coalesce explain limit select union
col explode map selectExpr unionAll
collect filter mapPartitions show unpersist
collectAsList first na sort where
columns flatMap orderBy sortWithinPartitions withColumn
count foreach persist sparkSession withColumnRenamed
createGlobalTempView foreachPartition printSchema sqlContext withWatermark
createOrReplaceTempView groupBy queryExecution stat write
createTempView groupByKey randomSplit storageLevel writeStream
crossJoin head randomSplitAsList take
cube inputFiles rdd takeAsList

This is roughly the same set of methods a DataFrame offers; the difference in how they are used is sketched below.
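Because the Dataset is typed, operators such as filter and map can take ordinary Scala functions over People objects instead of column expressions. A quick sketch of what that looks like in the shell, assuming the one-row Dataset created above:

scala> caseClassDS.filter(p => p.age > 30).show()
+----+---+
|name|age|
+----+---+
|Andy| 32|
+----+---+

scala> caseClassDS.map(p => p.name).show()
+-----+
|value|
+-----+
| Andy|
+-----+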

4) Now let's try createGlobalTempView:

scala> caseClassDS.createGlobalTempView("People")

5) Now we can query it with spark.sql: a plain select statement returns whatever we want. Note that createGlobalTempView registers the view in the system-preserved global_temp database, so the query must qualify the name as global_temp.People.

scala> spark.sql("select * from global_temp.People").show()
+----+---+
|name|age|
+----+---+
|Andy| 32|
+----+---+
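
A global temporary view is tied to the Spark application rather than to a single session, so it should also be visible from a fresh session. A quick sketch to verify, using the cross-session behavior documented for Spark 2.x:

scala> spark.newSession().sql("select * from global_temp.People").show()
+----+---+
|name|age|
+----+---+
|Andy| 32|
+----+---+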
