Model Fusion: Notes on Stacking and Parameter Tuning


1. Regression

We train two regressors, GBDT and XGBoost, and stack them.

We reuse the estimators whose hyperparameters were tuned earlier:

from sklearn.ensemble import GradientBoostingRegressor
from xgboost import XGBRegressor

gbdt_nxf = GradientBoostingRegressor(learning_rate=0.06, n_estimators=250,
                                     min_samples_split=700, min_samples_leaf=70, max_depth=6,
                                     max_features='sqrt', subsample=0.8, random_state=75)
xgb_nxf = XGBRegressor(learning_rate=0.06, max_depth=6, n_estimators=200, random_state=75)

  

First, set up the matrices that stacking will fill in:

import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Use KFold rather than StratifiedKFold here: stratification requires
# discrete class labels, which a regression target does not have.
kf = KFold(n_splits=5, random_state=75, shuffle=True)

from sklearn.metrics import r2_score

train_proba = np.zeros((len(gbdt_train_data),2))
train_proba = pd.DataFrame(train_proba)
train_proba.columns = ['gbdt_nxf','xgb_nxf']

test_proba = np.zeros((len(gbdt_test_data),2))
test_proba = pd.DataFrame(test_proba)
test_proba.columns = ['gbdt_nxf','xgb_nxf']

  

reg_names = ['gbdt_nxf','xgb_nxf']

for i, reg in enumerate([gbdt_nxf, xgb_nxf]):
    pred_list = []
    col = reg_names[i]
    for train_index, val_index in kf.split(gbdt_train_data, gbdt_train_label):
        x_train = gbdt_train_data.loc[train_index, :].values
        y_train = gbdt_train_label[train_index]
        x_val = gbdt_train_data.loc[val_index, :].values
        y_val = gbdt_train_label[val_index]

        reg.fit(x_train, y_train)
        # Out-of-fold prediction: fills this fold's rows of the new training set.
        y_vali = reg.predict(x_val)
        train_proba.loc[val_index, col] = y_vali
        print('%s cv r2 %s' % (col, r2_score(y_val, y_vali)))

        # Each fold's model also predicts the full test set.
        y_testi = reg.predict(gbdt_test_data.values)
        pred_list.append(y_testi)
    # Average the five fold models' test-set predictions.
    test_proba.loc[:, col] = np.mean(np.array(pred_list), axis=0)

The best R² is 0.79753, which is still not particularly good.

The scheme: with 5-fold cross-validation, each fold's model predicts the entire test set, giving five predictions whose average becomes the new test set. The new training set is assembled from the out-of-fold predictions, i.e. in each fold the model trained on the other four folds predicts the held-out fold.

Because there are two base regressors, GBDT and XGBoost, train_proba ends up with two columns.

Finally, we fit a second-level regressor on the new training set and predict the new test set to get the final result.
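For reference, newer scikit-learn releases (0.22+) ship this exact scheme as StackingRegressor. A minimal sketch, reusing the tuned base models and data variables from above; note that, unlike the manual loop, StackingRegressor refits each base model on the full training set before predicting the test set instead of averaging the five fold models:

from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge

stack = StackingRegressor(
    estimators=[('gbdt', gbdt_nxf), ('xgb', xgb_nxf)],  # tuned base models from above
    final_estimator=Ridge(alpha=2.0),                   # linear meta-model
    cv=5)                                               # 5-fold out-of-fold predictions
stack.fit(gbdt_train_data, gbdt_train_label)
stack_pred = stack.predict(gbdt_test_data)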

 

# Second-level model. Note: the original post used LogisticRegression here,
# but that is a classifier; for a regression target a regularized linear
# regressor such as Ridge is the appropriate analogue.
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

scaler = StandardScaler()
scaler.fit(train_proba)
train_proba = scaler.transform(train_proba)
test_proba = scaler.transform(test_proba)

ridge = Ridge(alpha=2.0, random_state=24)

kf = KFold(n_splits=5, random_state=75, shuffle=True)
r2_list = []
pred_list = []
for train_index, val_index in kf.split(train_proba, gbdt_train_label):  # the labels are still the original training labels
    x_train = train_proba[train_index]
    y_train = gbdt_train_label[train_index]
    x_val = train_proba[val_index]
    y_val = gbdt_train_label[val_index]

    ridge.fit(x_train, y_train)
    y_vali = ridge.predict(x_val)
    print('ridge stacking cv r2 %s' % (r2_score(y_val, y_vali)))

    r2_list.append(r2_score(y_val, y_vali))

    y_testi = ridge.predict(test_proba)
    pred_list.append(y_testi)

print(ridge.coef_)  # the original post noted severe overfitting at this step
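The loop above collects each fold model's test-set predictions in pred_list but never combines them; averaging them, exactly as was done for the base models, yields the final stacked prediction:

final_pred = np.mean(np.array(pred_list), axis=0)  # average the 5 fold predictions
print('mean stacking cv r2 %.5f' % np.mean(r2_list))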

 

 

2. Classification

After tuning each single model, we can combine them with stacking.

As the figure above illustrated, we split the dataset into 5 equal folds for cross-training: train on 4 of the folds, use the trained model to predict the remaining fold, and at the same time predict the test set. After the 5 CV rounds we have a prediction for every training-set sample, plus 5 sets of predictions for the test set, which we average. Each base model contributes one such group of predictions. Finally, a second-level model is trained on these predictions to produce the stacking result; this stacking model is usually a linear model.

Stacking is a bit like a neural network: the base models act as the lower layers that extract features from the input, as shown in the figure below:

First, we define DataFrame structures to store the intermediate predictions:

train_proba = np.zeros((len(train), 6))
train_proba = pd.DataFrame(train_proba)
train_proba.columns = ['rf','ada','etc','gbc','sk_xgb','sk_lgb']

test_proba = np.zeros((len(test), 6))
test_proba = pd.DataFrame(test_proba)
test_proba.columns = ['rf','ada','etc','gbc','sk_xgb','sk_lgb']

Define the base models and run the cross-validated training and prediction:

from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier, ExtraTreesClassifier)
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

rf = RandomForestClassifier(n_estimators=700, max_depth=13, min_samples_split=30,
                            min_weight_fraction_leaf=0.0, random_state=24, verbose=0)

ada = AdaBoostClassifier(n_estimators=450, learning_rate=0.1, random_state=24)

gbc = GradientBoostingClassifier(learning_rate=0.08, n_estimators=150, max_depth=9,
                                 min_samples_leaf=70, min_samples_split=900,
                                 max_features='sqrt', subsample=0.8, random_state=10)

etc = ExtraTreesClassifier(n_estimators=290, max_depth=12, min_samples_split=30, random_state=24)

sk_xgb = XGBClassifier(learning_rate=0.05, n_estimators=400,
                       min_child_weight=20, max_depth=3, subsample=0.8, colsample_bytree=0.8,
                       reg_lambda=1., random_state=10)

sk_lgb = LGBMClassifier(num_leaves=31, max_depth=3, learning_rate=0.03, n_estimators=600,
                        subsample=0.8, colsample_bytree=0.9, objective='binary',
                        min_child_weight=0.001, subsample_freq=1, min_child_samples=10,
                        reg_alpha=0.0, reg_lambda=0.0, random_state=10, n_jobs=-1,
                        silent=True, importance_type='split')

from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

kf = StratifiedKFold(n_splits=5, random_state=233, shuffle=True)

clf_name = ['rf','ada','etc','gbc','sk_xgb','sk_lgb']
for i,clf in enumerate([rf,ada,etc,gbc,sk_xgb,sk_lgb]):
    pred_list = []
    col = clf_name[i] 
    for train_index, val_index in kf.split(train,label):
        X_train = train.loc[train_index,:].values
        y_train = label[train_index]
        X_val = train.loc[val_index,:].values
        y_val = label[val_index]

        clf.fit(X_train, y_train)
        y_vali = clf.predict_proba(X_val)[:,1]
        train_proba.loc[val_index,col] = y_vali
        print("%s cv auc %s" % (col, roc_auc_score(y_val, y_vali)))

        y_testi = clf.predict_proba(test.values)[:,1]
        pred_list.append(y_testi)

    test_proba.loc[:,col] = np.mean(np.array(pred_list),axis=0)
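As an aside, the whole two-level pipeline can also be written with scikit-learn's built-in StackingClassifier (0.22+). A minimal sketch reusing the base models defined above; as with StackingRegressor, it refits the base models on the full training set before predicting the test set:

from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

stack = StackingClassifier(
    estimators=[('rf', rf), ('ada', ada), ('etc', etc),
                ('gbc', gbc), ('sk_xgb', sk_xgb), ('sk_lgb', sk_lgb)],
    final_estimator=LogisticRegression(C=0.5, random_state=24),
    cv=StratifiedKFold(n_splits=5, random_state=233, shuffle=True),
    stack_method='predict_proba')  # stack the predicted probabilities
stack.fit(train, label)
stack_pred = stack.predict_proba(test)[:, 1]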

Use logistic regression for the final stacking (here the target is a class label, so LogisticRegression is the right choice):

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

scaler = StandardScaler()
train_proba = train_proba.values
test_proba = test_proba.values

scaler.fit(train_proba)
train_proba = scaler.transform(train_proba)
test_proba = scaler.transform(test_proba)


lr = LogisticRegression(tol=0.0001, C=0.5, random_state=24, max_iter=10)

kf = StratifiedKFold(n_splits=5,random_state=244,shuffle=True)
auc_list = []
pred_list = []
for train_index, val_index in kf.split(train_proba,label):
    X_train = train_proba[train_index]
    y_train = label[train_index]
    X_val = train_proba[val_index]
    y_val = label[val_index]

    lr.fit(X_train, y_train)
    y_vali = lr.predict_proba(X_val)[:,1]
    print("lr stacking cv auc %s" % (roc_auc_score(y_val, y_vali)))

    auc_list.append(roc_auc_score(y_val, y_vali))

    y_testi = lr.predict_proba(test_proba)[:,1]
    pred_list.append(y_testi)

print(lr.coef_, lr.n_iter_)
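As in the regression case, the final stacked prediction is the average of the five fold models' test predictions, and the mean CV AUC summarizes the stacker:

final_pred = np.mean(np.array(pred_list), axis=0)  # average the 5 fold predictions
print('mean lr stacking cv auc %.4f' % np.mean(auc_list))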

The final AUC scores of each base model and the stacking model are as follows:

They are 0.8415, 0.8506, 0.8511, 0.8551, 0.8572, 0.8580, and 0.8584, respectively.

 

