Pitfalls of createDataFrame from an RDD in Spark


Scala:

    import org.apache.spark.ml.linalg.Vectors

    val data = Seq(
      (7, Vectors.dense(0.0, 0.0, 18.0, 1.0), 1.0),
      (8, Vectors.dense(0.0, 1.0, 12.0, 0.0), 0.0),
      (9, Vectors.dense(1.0, 0.0, 15.0, 0.1), 0.0)
    )
    val df = spark.createDataset(data).toDF("id", "features", "clicked")

Python:

    from pyspark.ml.linalg import Vectors

    df = spark.createDataFrame([
        (7, Vectors.dense([0.0, 0.0, 18.0, 1.0]), 1.0),
        (8, Vectors.dense([0.0, 1.0, 12.0, 0.0]), 0.0),
        (9, Vectors.dense([1.0, 0.0, 15.0, 0.1]), 0.0)
    ], ["id", "features", "clicked"])
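
The example above builds the DataFrame from a local list. As a minimal sketch of the same construction starting from an actual RDD (assuming a SparkSession named spark; the parallelize call is only for illustration), createDataFrame also accepts an RDD of tuples and infers the schema from the column-name list:

    from pyspark.ml.linalg import Vectors

    # Build an RDD of plain tuples first, then let createDataFrame infer the schema
    rdd = spark.sparkContext.parallelize([
        (7, Vectors.dense([0.0, 0.0, 18.0, 1.0]), 1.0),
        (8, Vectors.dense([0.0, 1.0, 12.0, 0.0]), 0.0),
        (9, Vectors.dense([1.0, 0.0, 15.0, 0.1]), 0.0)
    ])
    df = spark.createDataFrame(rdd, ["id", "features", "clicked"])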
If you are starting from a pair RDD instead:

    # stratified_CV_data is a pair RDD of (label, features) tuples
    stratified_CV_data = training_data.union(test_data)
    # An explicit schema can be passed instead of the column-name list
    # (StructType/StructField/IntegerType come from pyspark.sql.types, VectorUDT from pyspark.ml.linalg):
    # schema = StructType([
    #     StructField("label", IntegerType(), True),
    #     StructField("features", VectorUDT(), True)])
    vectorized_CV_data = sqlContext.createDataFrame(stratified_CV_data, ["label", "features"])  # or (..., schema)

This conversion is required because Spark's cross-validation only accepts a DataFrame as its dataset, which is rather annoying!
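
As a rough sketch of the step that forces this requirement (the LogisticRegression estimator, parameter grid, and column names here are placeholders, not from the original post), CrossValidator.fit expects a DataFrame with feature and label columns and will not accept an RDD:

    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator
    from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

    # Illustrative estimator and grid; any spark.ml estimator works the same way
    lr = LogisticRegression(featuresCol="features", labelCol="label")
    grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
    cv = CrossValidator(estimator=lr,
                        estimatorParamMaps=grid,
                        evaluator=BinaryClassificationEvaluator(labelCol="label"),
                        numFolds=3)
    # fit() takes the DataFrame built above; passing an RDD here fails
    cv_model = cv.fit(vectorized_CV_data)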

