Spark LR (linear regression): setting the vector type (VectorUDT) when converting an RDD to a DataFrame


  import org.apache.spark.SparkConf
  import org.apache.spark.sql.{Row, SparkSession}
  import org.apache.spark.sql.types.{DoubleType, StructField, StructType}
  import org.apache.spark.ml.linalg.Vectors
  import org.apache.spark.ml.regression.LinearRegression

  System.setProperty("hadoop.home.dir", "C:\\hadoop-2.7.2")
  val spark = SparkSession.builder()
    .config(new SparkConf().setAppName("LR").setMaster("local[*]"))
    .config("spark.sql.warehouse.dir", "file:///")
    .getOrCreate()

  val sc = spark.sparkContext

  val rdd = sc.textFile("C:\\Users\\Daxin\\Documents\\GitHub\\OptimizedRF\\sql_data\\LRDATA")


  val schemaString = "label features"
  //  val fields = schemaString.split(" ").map(StructField(_, StringType, true))
  //  Use org.apache.spark.ml.linalg.SQLDataTypes.VectorType in place of org.apache.spark.ml.linalg.VectorUDT (a type that is private to the spark package)
  val fields = Array(StructField("label", DoubleType, true), StructField("features", org.apache.spark.ml.linalg.SQLDataTypes.VectorType, true))

  val rowRdd = rdd.map { line =>
    val cols = line.split(",") // split once instead of twice per line
    // second column is the label, first column is the single feature
    Row(cols(1).toDouble, Vectors.dense(cols(0).toDouble))
  }
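The per-line parsing inside that map can be sketched in plain Scala, independent of Spark (the helper name `parseLine` is hypothetical; it mirrors the `feature,label` column order assumed above, leaving the `Row`/`Vectors.dense` wrapping to the job itself):

```scala
object LineParser {
  // Parse one CSV line of the form "feature,label" into (label, features).
  def parseLine(line: String): (Double, Array[Double]) = {
    val cols = line.split(",")
    val label = cols(1).toDouble             // second column is the label
    val features = Array(cols(0).toDouble)   // first column is the single feature
    (label, features)
  }
}
```

In the job, each tuple would then be wrapped as `Row(label, Vectors.dense(features))` before `createDataFrame`.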

  val schema = StructType(fields)


  val Array(train, test) = spark.createDataFrame(rowRdd, schema).randomSplit(Array[Double](0.6, 0.4))

  val lr = new LinearRegression()
    .setMaxIter(100)
    .setRegParam(0.3)
    .setElasticNetParam(0.8) //.setTol(0.01) // convergence tolerance


  val lrModel = lr.fit(train)

  println(lrModel.transform(test).columns.toBuffer)

  lrModel.transform(test).select("label", "prediction").show()
  
  println(s"Coefficients: ${lrModel.coefficients} Intercept: ${lrModel.intercept}")

 

