pyspark - Logistic Regression


I found this while tidying up old files - it feels like code from ages ago, but on re-reading it still holds up; at least the comments are pretty clear. Back then I was truly a dedicated "just call the library" guy....

# Logistic regression - the standard recipe

from pyspark.ml.feature import VectorAssembler
import pandas as pd

# 1. Prepare the data - a sample dataset
sample_dataset = [
    (0, "male", 37, 10, "no", 3, 18, 7, 4),
    (0, "female", 27, 4, "no", 4, 14, 6, 4),
    (0, "female", 32, 15, "yes", 1, 12, 1, 4),
    (0, "male", 57, 15, "yes", 5, 18, 6, 5),
    (0, "male", 22, 0.75, "no", 2, 17, 6, 3),
    (0, "female", 32, 1.5, "no", 2, 17, 5, 5),
    (0, "female", 22, 0.75, "no", 2, 12, 1, 3),
    (0, "male", 57, 15, "yes", 2, 14, 4, 4),
    (0, "female", 32, 15, "yes", 4, 16, 1, 2),
    (0, "male", 22, 1.5, "no", 4, 14, 4, 5),
    (0, "male", 37, 15, "yes", 2, 20, 7, 2),
    (0, "male", 27, 4, "yes", 4, 18, 6, 4),
    (0, "male", 47, 15, "yes", 5, 17, 6, 4),
    (0, "female", 22, 1.5, "no", 2, 17, 5, 4),
    (0, "female", 27, 4, "no", 4, 14, 5, 4),
    (0, "female", 37, 15, "yes", 1, 17, 5, 5),
    (0, "female", 37, 15, "yes", 2, 18, 4, 3),
    (0, "female", 22, 0.75, "no", 3, 16, 5, 4),
    (0, "female", 22, 1.5, "no", 2, 16, 5, 5),
    (0, "female", 27, 10, "yes", 2, 14, 1, 5),
    (1, "female", 32, 15, "yes", 3, 14, 3, 2),
    (1, "female", 27, 7, "yes", 4, 16, 1, 2),
    (1, "male", 42, 15, "yes", 3, 18, 6, 2),
    (1, "female", 42, 15, "yes", 2, 14, 3, 2),
    (1, "male", 27, 7, "yes", 2, 17, 5, 4),
    (1, "male", 32, 10, "yes", 4, 14, 4, 3),
    (1, "male", 47, 15, "yes", 3, 16, 4, 2),
    (0, "male", 37, 4, "yes", 2, 20, 6, 4)
]

columns = ["affairs", "gender", "age", "label", "children", "religiousness", "education", "occupation", "rating"]

# Build the DataFrame in pandas for convenience, then convert to Spark
pdf = pd.DataFrame(sample_dataset, columns=columns)

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("lr-demo").getOrCreate()
df = spark.createDataFrame(pdf)

# 2. Feature selection: "affairs" is the target, the rest are features -
#    in real work this is the most tedious part: many tables, lots of cleaning
df2 = df.select("affairs", "age", "religiousness", "education", "occupation", "rating")

# 3. Assemble features - merge the feature columns into a single "features" column.
#    Categorical columns would need one-hot encoding before assembly (a sketch
#    follows this step); that part is fairly tedious.
# 3.1 Columns that go into the feature vector
colArray2 = ["age", "religiousness", "education", "occupation", "rating"]
# 3.2 Compute the feature vector
df3 = VectorAssembler().setInputCols(colArray2).setOutputCol("features").transform(df2)
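
# Aside - a minimal sketch of that one-hot step, in case a categorical column
# such as "gender" were also used as a feature. Assumes Spark >= 3.0, where
# OneHotEncoder is an estimator; "gender_idx"/"gender_vec" are illustrative names.
from pyspark.ml.feature import StringIndexer, OneHotEncoder
indexed = StringIndexer(inputCol="gender", outputCol="gender_idx").fit(df).transform(df)
encoded = OneHotEncoder(inputCols=["gender_idx"], outputCols=["gender_vec"]).fit(indexed).transform(indexed)
# "gender_vec" could then be added to the VectorAssembler input columns.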

# 4. Split into training and test sets (randomly)
trainDF, testDF = df3.randomSplit([0.8, 0.2])
# print("training set:")
# trainDF.show(10)
# print("test set:")
# testDF.show(10)

# 5. Train the model
from pyspark.ml.classification import LogisticRegression
# 5.1 Create the logistic regression estimator
lr = LogisticRegression()
# 5.2 Fit the model
model = lr.setLabelCol("affairs").setFeaturesCol("features").fit(trainDF)
# 5.3 Predict on the test set
model.transform(testDF).show()
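
# A quick sanity check on the fitted model - coefficients and intercept are
# standard attributes of the fitted LogisticRegressionModel:
print("coefficients:", model.coefficients)
print("intercept:", model.intercept)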

# todo
# 6. Evaluation, cross-validation, saving, packaging.....
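
# A minimal sketch of what step 6 could look like; the metric, the grid values
# and the save path below are illustrative choices, not requirements.
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# 6.1 Evaluate: area under the ROC curve on the held-out test set
evaluator = BinaryClassificationEvaluator(labelCol="affairs", metricName="areaUnderROC")
print("test AUC:", evaluator.evaluate(model.transform(testDF)))

# 6.2 Cross-validate over a small regularization grid (lr already has
#     labelCol/featuresCol set from step 5.2)
grid = ParamGridBuilder().addGrid(lr.regParam, [0.0, 0.01, 0.1]).build()
cv = CrossValidator(estimator=lr, estimatorParamMaps=grid, evaluator=evaluator, numFolds=3)
cvModel = cv.fit(trainDF)

# 6.3 Persist the best model
cvModel.bestModel.save("/tmp/lr_affairs_model")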

This is mainly kept as a historical note, and also as a cautionary example: if you just call libraries without understanding the underlying principles, you will find that ML is actually rather boring - at least judging from the code recipe above.

