Parameter-tuning walkthrough for a binary-classification GBDT:
Aarshay Jain summarized a tuning methodology for Gradient Tree Boosting. How should we gauge each parameter's influence on overall model performance? Based on experience, Aarshay offers this view: the maximum number of leaf nodes (max_leaf_nodes) and the maximum tree depth (max_depth) affect overall model performance more than the minimum samples required to split a node (min_samples_split), the minimum samples per leaf (min_samples_leaf), and the minimum weighted fraction per leaf (min_weight_fraction_leaf), while the maximum number of features considered per split (max_features) has the least influence.
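One way to act on this ordering is to tune the influential parameters first and freeze each stage's winner before moving on. Below is a minimal sketch of that idea as a hypothetical helper of my own; the grids are placeholders rather than values from this article, and the scoring/cv settings simply mirror the searches used later:

# Hypothetical sketch: sequential grid search following the influence
# ordering above. The grids are placeholders, not values from this article.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.grid_search import GridSearchCV  # pre-0.20 scikit-learn, as in the code below

def tune_sequentially(X, y, ordered_grids, base_params):
    params = dict(base_params)
    for grid in ordered_grids:
        gs = GridSearchCV(GradientBoostingClassifier(**params),
                          param_grid=grid, scoring='roc_auc', cv=5)
        gs.fit(X, y)
        params.update(gs.best_params_)  # freeze this stage's winner
    return params

ordered_grids = [
    {'max_depth': range(3, 14, 2)},               # most influential first
    {'min_samples_split': range(100, 801, 200)},
    {'min_samples_leaf': range(20, 101, 20)},
    {'max_features': ['sqrt', None]},             # least influential last
]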
1: Results with default parameters
#!/usr/bin/python
# -*- coding: UTF-8 -*-
# NOTE: this script targets Python 2 and an older scikit-learn (< 0.20),
# where the cross_validation and grid_search modules still exist.
import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import cross_validation, metrics
from sklearn.grid_search import GridSearchCV
import matplotlib.pylab as plt

train = pd.read_csv('train_modified.csv')
print(train.shape)
target = 'Disbursed'  # Disbursed is the binary classification target
IDcol = 'ID'
train['Disbursed'].value_counts()

# Exclude the ID and label columns from the training features
x_columns = [x for x in train.columns if x not in [target, IDcol]]
X = train[x_columns]
y = train['Disbursed']

gbm0 = GradientBoostingClassifier(random_state=10)
gbm0.fit(X, y)
y_pred = gbm0.predict(X)
y_predprob = gbm0.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9852
AUC Score (Train): 0.900531
2: First tune the learning rate and the number of iterations (n_estimators), using grid search
#### Tuning: improve the model's generalization ability ####
## 1: Start with the learning rate and the number of iterations (n_estimators) ##
param_test1 = {'n_estimators': range(20, 81, 10)}
gsearch1 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, min_samples_split=300,
                                         min_samples_leaf=20, max_depth=8,
                                         max_features='sqrt', subsample=0.8,
                                         random_state=10),
    param_grid=param_test1, scoring='roc_auc', iid=False, cv=5)
gsearch1.fit(X, y)
print(gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_)
Result:
([mean: 0.81285, std: 0.01967, params: {'n_estimators': 20},
mean: 0.81438, std: 0.01947, params: {'n_estimators': 30},
mean: 0.81451, std: 0.01933, params: {'n_estimators': 40},
mean: 0.81618, std: 0.01848, params: {'n_estimators': 50},
mean: 0.81751, std: 0.01736, params: {'n_estimators': 60},
mean: 0.81547, std: 0.01900, params: {'n_estimators': 70},
mean: 0.81299, std: 0.01860, params: {'n_estimators': 80}],
{'n_estimators': 60},
0.8175146087398375)
3: Grid-search the maximum tree depth (max_depth) and the minimum number of samples required to split an internal node (min_samples_split).
## 2: Next, tune the maximum tree depth and the minimum samples required to split an internal node ##
param_test2 = {'max_depth': range(3, 14, 2), 'min_samples_split': range(100, 801, 200)}
gsearch2 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
                                         min_samples_leaf=20, max_features='sqrt',
                                         subsample=0.8, random_state=10),
    param_grid=param_test2, scoring='roc_auc', iid=False, cv=5)
gsearch2.fit(X, y)
print(gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_)
Result:
([mean: 0.82688, std: 0.01287, params: {'min_samples_split': 100, 'max_depth': 3},
mean: 0.82711, std: 0.01443, params: {'min_samples_split': 300, 'max_depth': 3},
mean: 0.82733, std: 0.01351, params: {'min_samples_split': 500, 'max_depth': 3},
mean: 0.82872, std: 0.01281, params: {'min_samples_split': 700, 'max_depth': 3},
mean: 0.83417, std: 0.01043, params: {'min_samples_split': 100, 'max_depth': 5},
mean: 0.83332, std: 0.00986, params: {'min_samples_split': 300, 'max_depth': 5},
mean: 0.83429, std: 0.01283, params: {'min_samples_split': 500, 'max_depth': 5},
mean: 0.83271, std: 0.01234, params: {'min_samples_split': 700, 'max_depth': 5},
mean: 0.83680, std: 0.01234, params: {'min_samples_split': 100, 'max_depth': 7},
mean: 0.83857, std: 0.00731, params: {'min_samples_split': 300, 'max_depth': 7},
mean: 0.84071, std: 0.00962, params: {'min_samples_split': 500, 'max_depth': 7},
mean: 0.83569, std: 0.00971, params: {'min_samples_split': 700, 'max_depth': 7},
mean: 0.83299, std: 0.01172, params: {'min_samples_split': 100, 'max_depth': 9},
mean: 0.83595, std: 0.01123, params: {'min_samples_split': 300, 'max_depth': 9},
mean: 0.83638, std: 0.01125, params: {'min_samples_split': 500, 'max_depth': 9},
mean: 0.83715, std: 0.01208, params: {'min_samples_split': 700, 'max_depth': 9},
mean: 0.82469, std: 0.01214, params: {'min_samples_split': 100, 'max_depth': 11},
mean: 0.83347, std: 0.01178, params: {'min_samples_split': 300, 'max_depth': 11},
mean: 0.83508, std: 0.00896, params: {'min_samples_split': 500, 'max_depth': 11},
mean: 0.83521, std: 0.00894, params: {'min_samples_split': 700, 'max_depth': 11},
mean: 0.81720, std: 0.00604, params: {'min_samples_split': 100, 'max_depth': 13},
mean: 0.82795, std: 0.00703, params: {'min_samples_split': 300, 'max_depth': 13},
mean: 0.83100, std: 0.01037, params: {'min_samples_split': 500, 'max_depth': 13},
mean: 0.83068, std: 0.01322, params: {'min_samples_split': 700, 'max_depth': 13}],
{'min_samples_split': 500, 'max_depth': 7}, 0.8407071633581745)
4: Grid-search min_samples_split together with the minimum number of samples per leaf (min_samples_leaf). Because the previous best min_samples_split (500) interacts with the other tree parameters, both are re-tuned here jointly over a larger range.
## 3: Jointly tune min_samples_split and min_samples_leaf ##
param_test3 = {'min_samples_split': range(800, 1900, 200),
               'min_samples_leaf': range(60, 101, 10)}
gsearch3 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                         max_features='sqrt', subsample=0.8,
                                         random_state=10),
    param_grid=param_test3, scoring='roc_auc', iid=False, cv=5)
gsearch3.fit(X, y)
print(gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_)
Result:
([mean: 0.83701, std: 0.01077, params: {'min_samples_split': 800, 'min_samples_leaf': 60},
mean: 0.83857, std: 0.01021, params: {'min_samples_split': 1000, 'min_samples_leaf': 60},
mean: 0.83881, std: 0.01227, params: {'min_samples_split': 1200, 'min_samples_leaf': 60},
mean: 0.83888, std: 0.01307, params: {'min_samples_split': 1400, 'min_samples_leaf': 60},
mean: 0.83784, std: 0.00984, params: {'min_samples_split': 1600, 'min_samples_leaf': 60},
mean: 0.83786, std: 0.01061, params: {'min_samples_split': 1800, 'min_samples_leaf': 60},
mean: 0.83782, std: 0.01003, params: {'min_samples_split': 800, 'min_samples_leaf': 70},
mean: 0.84019, std: 0.00898, params: {'min_samples_split': 1000, 'min_samples_leaf': 70},
mean: 0.83636, std: 0.01161, params: {'min_samples_split': 1200, 'min_samples_leaf': 70},
mean: 0.83853, std: 0.01137, params: {'min_samples_split': 1400, 'min_samples_leaf': 70},
mean: 0.83704, std: 0.01165, params: {'min_samples_split': 1600, 'min_samples_leaf': 70},
mean: 0.83580, std: 0.01045, params: {'min_samples_split': 1800, 'min_samples_leaf': 70},
mean: 0.83803, std: 0.01045, params: {'min_samples_split': 800, 'min_samples_leaf': 80},
mean: 0.83781, std: 0.00986, params: {'min_samples_split': 1000, 'min_samples_leaf': 80},
mean: 0.83623, std: 0.00951, params: {'min_samples_split': 1200, 'min_samples_leaf': 80},
mean: 0.83769, std: 0.01154, params: {'min_samples_split': 1400, 'min_samples_leaf': 80},
mean: 0.83702, std: 0.00951, params: {'min_samples_split': 1600, 'min_samples_leaf': 80},
mean: 0.83577, std: 0.00995, params: {'min_samples_split': 1800, 'min_samples_leaf': 80},
mean: 0.83738, std: 0.01087, params: {'min_samples_split': 800, 'min_samples_leaf': 90},
mean: 0.83844, std: 0.01101, params: {'min_samples_split': 1000, 'min_samples_leaf': 90},
mean: 0.83736, std: 0.01128, params: {'min_samples_split': 1200, 'min_samples_leaf': 90},
mean: 0.83831, std: 0.01234, params: {'min_samples_split': 1400, 'min_samples_leaf': 90},
mean: 0.83574, std: 0.01086, params: {'min_samples_split': 1600, 'min_samples_leaf': 90},
mean: 0.83559, std: 0.00917, params: {'min_samples_split': 1800, 'min_samples_leaf': 90},
mean: 0.83753, std: 0.01140, params: {'min_samples_split': 800, 'min_samples_leaf': 100},
mean: 0.83955, std: 0.00958, params: {'min_samples_split': 1000, 'min_samples_leaf': 100},
mean: 0.83774, std: 0.01172, params: {'min_samples_split': 1200, 'min_samples_leaf': 100},
mean: 0.83926, std: 0.01207, params: {'min_samples_split': 1400, 'min_samples_leaf': 100},
mean: 0.83473, std: 0.01047, params: {'min_samples_split': 1600, 'min_samples_leaf': 100},
mean: 0.83620, std: 0.01099, params: {'min_samples_split': 1800, 'min_samples_leaf': 100}],
{'min_samples_split': 1000, 'min_samples_leaf': 70}, 0.84018902830047)
5: Refit the model with the tuned parameters:
## 4: Refit after tuning ##
gbm1 = GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                  min_samples_leaf=70, min_samples_split=1000,
                                  max_features='sqrt', subsample=0.8, random_state=10)
gbm1.fit(X, y)
y_pred = gbm1.predict(X)
y_predprob = gbm1.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9854
AUC Score (Train): 0.881348
6: Next, grid-search the maximum number of features (max_features)
## 5: Tune max_features ##
param_test4 = {'max_features': range(7, 20, 2)}
gsearch4 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                         min_samples_leaf=70, min_samples_split=1000,
                                         subsample=0.8, random_state=10),
    param_grid=param_test4, scoring='roc_auc', iid=False, cv=5)
gsearch4.fit(X, y)
print(gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_)
Result:
([mean: 0.84019, std: 0.00898, params: {'max_features': 7},
mean: 0.83522, std: 0.01191, params: {'max_features': 9},
mean: 0.83695, std: 0.01174, params: {'max_features': 11},
mean: 0.83795, std: 0.00909, params: {'max_features': 13},
mean: 0.83795, std: 0.01095, params: {'max_features': 15},
mean: 0.83505, std: 0.01040, params: {'max_features': 17},
mean: 0.83691, std: 0.00895, params: {'max_features': 19}],
{'max_features': 7}, 0.84018902830047)
7: Grid-search the subsampling ratio (subsample):
## 6: Tune the subsample ratio ##
param_test5 = {'subsample': [0.6, 0.7, 0.75, 0.8, 0.85, 0.9]}
gsearch5 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                         min_samples_leaf=70, min_samples_split=1000,
                                         max_features=7, random_state=10),
    param_grid=param_test5, scoring='roc_auc', iid=False, cv=5)
gsearch5.fit(X, y)
print(gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_)
Result:
([mean: 0.83467, std: 0.01074, params: {'subsample': 0.6},
mean: 0.83408, std: 0.01156, params: {'subsample': 0.7},
mean: 0.83390, std: 0.01317, params: {'subsample': 0.75},
mean: 0.84019, std: 0.00898, params: {'subsample': 0.8},
mean: 0.83988, std: 0.01040, params: {'subsample': 0.85},
mean: 0.83780, std: 0.01114, params: {'subsample': 0.9}],
{'subsample': 0.8}, 0.84018902830047)
8: We can halve the learning rate and double the maximum number of iterations to improve the model's generalization ability. Refit the model:
## 7: Shrink the learning rate and increase the iterations to improve generalization ##
gbm2 = GradientBoostingClassifier(learning_rate=0.05, n_estimators=120, max_depth=7,
                                  min_samples_leaf=70, min_samples_split=1000,
                                  max_features=7, subsample=0.8, random_state=10)
gbm2.fit(X, y)
y_pred = gbm2.predict(X)
y_predprob = gbm2.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9854
AUC Score (Train): 0.882404
9: Continue shrinking the learning rate and increasing the number of iterations
gbm3 = GradientBoostingClassifier(learning_rate=0.01, n_estimators=600, max_depth=7,
                                  min_samples_leaf=70, min_samples_split=1000,
                                  max_features=7, subsample=0.8, random_state=10)
gbm3.fit(X, y)
y_pred = gbm3.predict(X)
y_predprob = gbm3.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9854
AUC Score (Train): 0.884857
Shrinking the learning rate while increasing the number of iterations adds some degree of fit while preserving generalization ability.
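To see where the extra trees stop paying off at a small learning rate, scikit-learn's staged_predict_proba can replay a fitted ensemble one boosting iteration at a time. A minimal sketch, reusing gbm3, X, y, and metrics from the code above; note that training AUC only measures fit, so judging generalization this way would need a held-out split, which the text above does not set up:

# Sketch: training AUC after each boosting stage of the fitted gbm3.
# staged_predict_proba yields the probability estimates after each iteration.
stage_auc = [metrics.roc_auc_score(y, proba[:, 1])
             for proba in gbm3.staged_predict_proba(X)]
best_stage = stage_auc.index(max(stage_auc)) + 1
print("Best training AUC %f reached at %d trees" % (max(stage_auc), best_stage))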
Reducing overfitting:
1: Boosting mainly focuses on reducing bias, i.e., how far the algorithm's expected prediction deviates from the truth. The model has strong fitting capacity and fits the data more closely at each iteration, which keeps the bias small; to also keep the variance small, choose a simple model, i.e., a small tree depth, which guards against overfitting.
2: Split the training set into two parts: use one part to train the GBDT model, and feed the other part into the model to generate GBDT features, as in the sketch below.
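A minimal sketch of that feature-generation step, assuming the usual leaf-index encoding (each sample is represented by the leaves it lands in, one-hot encoded); the 50/50 split, the GBDT hyperparameters, and the downstream LogisticRegression are illustrative assumptions, not part of the text above. X, y, and GradientBoostingClassifier are reused from the script earlier in this section:

# Sketch of GBDT feature generation (pre-0.20 scikit-learn, matching the code above).
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

# Part one trains the GBDT; part two is pushed through it to get leaf features.
# The 50/50 split is an illustrative assumption.
X_gbdt, X_feat, y_gbdt, y_feat = train_test_split(X, y, test_size=0.5, random_state=10)

gbdt = GradientBoostingClassifier(n_estimators=60, max_depth=7, random_state=10)
gbdt.fit(X_gbdt, y_gbdt)

# apply() returns, for every tree, the index of the leaf each sample falls into;
# one-hot encoding those indices yields sparse binary GBDT features.
leaf_idx = gbdt.apply(X_feat)[:, :, 0]  # shape: (n_samples, n_estimators)
enc = OneHotEncoder()
gbdt_features = enc.fit_transform(leaf_idx)

# Illustrative downstream model trained on the generated features.
lr = LogisticRegression()
lr.fit(gbdt_features, y_feat)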