Selecting Multiple Columns from a Spark DataFrame


// Build a sample DataFrame (in spark-shell, sc and spark.implicits._ are already available)
val df = sc.parallelize(Seq(
  (0, "cat26", 30.9),
  (1, "cat67", 28.5),
  (2, "cat56", 39.6),
  (3, "cat8", 35.6))).toDF("Hour", "Category", "Value")

// Alternatively, the column names could be read from a file into a List
val cols = List("Hour", "Value")

scala> df.select(cols.head, cols.tail: _*).show
+----+-----+
|Hour|Value|
+----+-----+
|   1| 28.5|
|   3| 35.6|
|   2| 39.6|
|   0| 30.9|
+----+-----+
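
The head/tail split is needed because the varargs overload select(col: String, cols: String*) takes the first column name as a separate argument. An equivalent approach is to map the names to Column objects and pass the whole list as varargs. Below is a minimal, self-contained sketch of that variant; the appName and local master are illustrative assumptions.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object SelectColumns {
  def main(args: Array[String]): Unit = {
    // Assumes a local SparkSession; adjust master/appName for your environment.
    val spark = SparkSession.builder()
      .appName("select-columns")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(
      (0, "cat26", 30.9),
      (1, "cat67", 28.5),
      (2, "cat56", 39.6),
      (3, "cat8", 35.6)
    ).toDF("Hour", "Category", "Value")

    val cols = List("Hour", "Value")

    // Convert each name to a Column and splat the list as varargs,
    // avoiding the head/tail split.
    df.select(cols.map(col): _*).show()

    spark.stop()
  }
}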
