Tuning a GBDT for binary classification:
Aarshay Jain has summarized a tuning methodology for Gradient Tree Boosting. How should we gauge each parameter's influence on overall model performance? Based on his experience, Aarshay's view is that the maximum number of leaf nodes (max_leaf_nodes) and the maximum tree depth (max_depth) influence overall model performance more than the minimum samples required to split a node (min_samples_split), the minimum samples per leaf (min_samples_leaf), and the minimum weighted fraction per leaf (min_weight_fraction_leaf), while the number of features considered per split (max_features) has the least influence.
1: Results with default parameters
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import metrics
from sklearn.grid_search import GridSearchCV  # scikit-learn < 0.18; newer versions use sklearn.model_selection

train = pd.read_csv('train_modified.csv')
print(train.shape)
target = 'Disbursed'  # Disbursed is the binary classification label
IDcol = 'ID'
print(train['Disbursed'].value_counts())

x_columns = [x for x in train.columns if x not in [target, IDcol]]  # drop the ID and label columns from the training data
X = train[x_columns]
y = train['Disbursed']

gbm0 = GradientBoostingClassifier(random_state=10)
gbm0.fit(X, y)
y_pred = gbm0.predict(X)
y_predprob = gbm0.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9852
AUC Score (Train): 0.900531
2: Start with the step size (learning_rate) and the number of iterations (n_estimators), tuned by grid search
### Tuning to improve the model's generalization ability ###
## 1: Start with the step size (learning_rate) and the number of iterations (n_estimators) ##
param_test1 = {'n_estimators': range(20, 81, 10)}
gsearch1 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, min_samples_split=300,
                                         min_samples_leaf=20, max_depth=8,
                                         max_features='sqrt', subsample=0.8,
                                         random_state=10),
    param_grid=param_test1, scoring='roc_auc', iid=False, cv=5)
gsearch1.fit(X, y)
# grid_scores_ is the pre-0.18 attribute; newer versions expose cv_results_ instead
print(gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_)
Result:
([mean: 0.81285, std: 0.01967, params: {'n_estimators': 20},
mean: 0.81438, std: 0.01947, params: {'n_estimators': 30},
mean: 0.81451, std: 0.01933, params: {'n_estimators': 40},
mean: 0.81618, std: 0.01848, params: {'n_estimators': 50},
mean: 0.81751, std: 0.01736, params: {'n_estimators': 60},
mean: 0.81547, std: 0.01900, params: {'n_estimators': 70},
mean: 0.81299, std: 0.01860, params: {'n_estimators': 80}],
{'n_estimators': 60},
0.8175146087398375)
3: Grid-search the maximum tree depth (max_depth) and the minimum number of samples required to split an internal node (min_samples_split).
## 2: Next, tune the maximum tree depth and the minimum samples required to split an internal node ##
param_test2 = {'max_depth': range(3, 14, 2), 'min_samples_split': range(100, 801, 200)}
gsearch2 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
                                         min_samples_leaf=20, max_features='sqrt',
                                         subsample=0.8, random_state=10),
    param_grid=param_test2, scoring='roc_auc', iid=False, cv=5)
gsearch2.fit(X, y)
print(gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_)
Result:
([mean: 0.82688, std: 0.01287, params: {'min_samples_split': 100, 'max_depth': 3},
mean: 0.82711, std: 0.01443, params: {'min_samples_split': 300, 'max_depth': 3},
mean: 0.82733, std: 0.01351, params: {'min_samples_split': 500, 'max_depth': 3},
mean: 0.82872, std: 0.01281, params: {'min_samples_split': 700, 'max_depth': 3},
mean: 0.83417, std: 0.01043, params: {'min_samples_split': 100, 'max_depth': 5},
mean: 0.83332, std: 0.00986, params: {'min_samples_split': 300, 'max_depth': 5},
mean: 0.83429, std: 0.01283, params: {'min_samples_split': 500, 'max_depth': 5},
mean: 0.83271, std: 0.01234, params: {'min_samples_split': 700, 'max_depth': 5},
mean: 0.83680, std: 0.01234, params: {'min_samples_split': 100, 'max_depth': 7},
mean: 0.83857, std: 0.00731, params: {'min_samples_split': 300, 'max_depth': 7},
mean: 0.84071, std: 0.00962, params: {'min_samples_split': 500, 'max_depth': 7},
mean: 0.83569, std: 0.00971, params: {'min_samples_split': 700, 'max_depth': 7},
mean: 0.83299, std: 0.01172, params: {'min_samples_split': 100, 'max_depth': 9},
mean: 0.83595, std: 0.01123, params: {'min_samples_split': 300, 'max_depth': 9},
mean: 0.83638, std: 0.01125, params: {'min_samples_split': 500, 'max_depth': 9},
mean: 0.83715, std: 0.01208, params: {'min_samples_split': 700, 'max_depth': 9},
mean: 0.82469, std: 0.01214, params: {'min_samples_split': 100, 'max_depth': 11},
mean: 0.83347, std: 0.01178, params: {'min_samples_split': 300, 'max_depth': 11},
mean: 0.83508, std: 0.00896, params: {'min_samples_split': 500, 'max_depth': 11},
mean: 0.83521, std: 0.00894, params: {'min_samples_split': 700, 'max_depth': 11},
mean: 0.81720, std: 0.00604, params: {'min_samples_split': 100, 'max_depth': 13},
mean: 0.82795, std: 0.00703, params: {'min_samples_split': 300, 'max_depth': 13},
mean: 0.83100, std: 0.01037, params: {'min_samples_split': 500, 'max_depth': 13},
mean: 0.83068, std: 0.01322, params: {'min_samples_split': 700, 'max_depth': 13}],
{'min_samples_split': 500, 'max_depth': 7}, 0.8407071633581745)
4: Grid-search the minimum samples required to split an internal node (min_samples_split) together with the minimum samples per leaf (min_samples_leaf). We fix max_depth=7 from the previous step, but since min_samples_split interacts with min_samples_leaf, we search it again over a wider range rather than fixing it at 500.
## 3: Then tune min_samples_split together with min_samples_leaf ##
param_test3 = {'min_samples_split': range(800, 1900, 200), 'min_samples_leaf': range(60, 101, 10)}
gsearch3 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                         max_features='sqrt', subsample=0.8,
                                         random_state=10),
    param_grid=param_test3, scoring='roc_auc', iid=False, cv=5)
gsearch3.fit(X, y)
print(gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_)
Result:
([mean: 0.83701, std: 0.01077, params: {'min_samples_split': 800, 'min_samples_leaf': 60},
mean: 0.83857, std: 0.01021, params: {'min_samples_split': 1000, 'min_samples_leaf': 60},
mean: 0.83881, std: 0.01227, params: {'min_samples_split': 1200, 'min_samples_leaf': 60},
mean: 0.83888, std: 0.01307, params: {'min_samples_split': 1400, 'min_samples_leaf': 60},
mean: 0.83784, std: 0.00984, params: {'min_samples_split': 1600, 'min_samples_leaf': 60},
mean: 0.83786, std: 0.01061, params: {'min_samples_split': 1800, 'min_samples_leaf': 60},
mean: 0.83782, std: 0.01003, params: {'min_samples_split': 800, 'min_samples_leaf': 70},
mean: 0.84019, std: 0.00898, params: {'min_samples_split': 1000, 'min_samples_leaf': 70},
mean: 0.83636, std: 0.01161, params: {'min_samples_split': 1200, 'min_samples_leaf': 70},
mean: 0.83853, std: 0.01137, params: {'min_samples_split': 1400, 'min_samples_leaf': 70},
mean: 0.83704, std: 0.01165, params: {'min_samples_split': 1600, 'min_samples_leaf': 70},
mean: 0.83580, std: 0.01045, params: {'min_samples_split': 1800, 'min_samples_leaf': 70},
mean: 0.83803, std: 0.01045, params: {'min_samples_split': 800, 'min_samples_leaf': 80},
mean: 0.83781, std: 0.00986, params: {'min_samples_split': 1000, 'min_samples_leaf': 80},
mean: 0.83623, std: 0.00951, params: {'min_samples_split': 1200, 'min_samples_leaf': 80},
mean: 0.83769, std: 0.01154, params: {'min_samples_split': 1400, 'min_samples_leaf': 80},
mean: 0.83702, std: 0.00951, params: {'min_samples_split': 1600, 'min_samples_leaf': 80},
mean: 0.83577, std: 0.00995, params: {'min_samples_split': 1800, 'min_samples_leaf': 80},
mean: 0.83738, std: 0.01087, params: {'min_samples_split': 800, 'min_samples_leaf': 90},
mean: 0.83844, std: 0.01101, params: {'min_samples_split': 1000, 'min_samples_leaf': 90},
mean: 0.83736, std: 0.01128, params: {'min_samples_split': 1200, 'min_samples_leaf': 90},
mean: 0.83831, std: 0.01234, params: {'min_samples_split': 1400, 'min_samples_leaf': 90},
mean: 0.83574, std: 0.01086, params: {'min_samples_split': 1600, 'min_samples_leaf': 90},
mean: 0.83559, std: 0.00917, params: {'min_samples_split': 1800, 'min_samples_leaf': 90},
mean: 0.83753, std: 0.01140, params: {'min_samples_split': 800, 'min_samples_leaf': 100},
mean: 0.83955, std: 0.00958, params: {'min_samples_split': 1000, 'min_samples_leaf': 100},
mean: 0.83774, std: 0.01172, params: {'min_samples_split': 1200, 'min_samples_leaf': 100},
mean: 0.83926, std: 0.01207, params: {'min_samples_split': 1400, 'min_samples_leaf': 100},
mean: 0.83473, std: 0.01047, params: {'min_samples_split': 1600, 'min_samples_leaf': 100},
mean: 0.83620, std: 0.01099, params: {'min_samples_split': 1800, 'min_samples_leaf': 100}],
{'min_samples_split': 1000, 'min_samples_leaf': 70}, 0.84018902830047)
5: Refit with the tuned parameters (expect the training-set AUC to drop relative to the default model: the added constraints curb overfitting at the cost of training fit):
## 4: Refit after tuning ##
gbm1 = GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                  min_samples_leaf=70, min_samples_split=1000,
                                  max_features='sqrt', subsample=0.8, random_state=10)
gbm1.fit(X, y)
y_pred = gbm1.predict(X)
y_predprob = gbm1.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9854
AUC Score (Train): 0.881348
6: Next, grid-search the maximum number of features considered per split (max_features)
## 5: Tune max_features ##
param_test4 = {'max_features': range(7, 20, 2)}
gsearch4 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                         min_samples_leaf=70, min_samples_split=1000,
                                         subsample=0.8, random_state=10),
    param_grid=param_test4, scoring='roc_auc', iid=False, cv=5)
gsearch4.fit(X, y)
print(gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_)
Result:
([mean: 0.84019, std: 0.00898, params: {'max_features': 7},
mean: 0.83522, std: 0.01191, params: {'max_features': 9},
mean: 0.83695, std: 0.01174, params: {'max_features': 11},
mean: 0.83795, std: 0.00909, params: {'max_features': 13},
mean: 0.83795, std: 0.01095, params: {'max_features': 15},
mean: 0.83505, std: 0.01040, params: {'max_features': 17},
mean: 0.83691, std: 0.00895, params: {'max_features': 19}],
{'max_features': 7}, 0.84018902830047)
7: Grid-search the subsampling fraction (subsample):
## 6: Tune the subsample fraction ##
param_test5 = {'subsample': [0.6, 0.7, 0.75, 0.8, 0.85, 0.9]}
gsearch5 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60, max_depth=7,
                                         min_samples_leaf=70, min_samples_split=1000,
                                         max_features=7, random_state=10),
    param_grid=param_test5, scoring='roc_auc', iid=False, cv=5)
gsearch5.fit(X, y)
print(gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_)
Result:
([mean: 0.83467, std: 0.01074, params: {'subsample': 0.6},
mean: 0.83408, std: 0.01156, params: {'subsample': 0.7},
mean: 0.83390, std: 0.01317, params: {'subsample': 0.75},
mean: 0.84019, std: 0.00898, params: {'subsample': 0.8},
mean: 0.83988, std: 0.01040, params: {'subsample': 0.85},
mean: 0.83780, std: 0.01114, params: {'subsample': 0.9}],
{'subsample': 0.8}, 0.84018902830047)
8: We can halve the step size and double the number of iterations to improve the model's generalization ability, then refit:
## 7: Reduce the step size and increase the iterations to improve generalization ##
gbm2 = GradientBoostingClassifier(learning_rate=0.05, n_estimators=120, max_depth=7,
                                  min_samples_leaf=70, min_samples_split=1000,
                                  max_features=7, subsample=0.8, random_state=10)
gbm2.fit(X, y)
y_pred = gbm2.predict(X)
y_predprob = gbm2.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9854
AUC Score (Train): 0.882404
9: Reduce the step size further and increase the number of iterations again
gbm3 = GradientBoostingClassifier(learning_rate=0.01, n_estimators=600, max_depth=7,
                                  min_samples_leaf=70, min_samples_split=1000,
                                  max_features=7, subsample=0.8, random_state=10)
gbm3.fit(X, y)
y_pred = gbm3.predict(X)
y_predprob = gbm3.predict_proba(X)[:, 1]
print("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))
Result:
Accuracy : 0.9854
AUC Score (Train): 0.884857
Reducing the step size while increasing the number of iterations adds some fit on top of the model while preserving its generalization ability.
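To see how much each additional tree contributes, the training AUC can be traced stage by stage. A minimal sketch, assuming the fitted gbm3 and the X, y from the steps above (for a true picture of generalization this curve should be computed on held-out data, not the training set):

from sklearn import metrics

# staged_predict_proba yields the predicted class probabilities after each
# boosting stage, so the AUC can be tracked as trees are added
staged_auc = [metrics.roc_auc_score(y, proba[:, 1])
              for proba in gbm3.staged_predict_proba(X)]
print("AUC after 100 / 300 / 600 trees: %.6f / %.6f / %.6f"
      % (staged_auc[99], staged_auc[299], staged_auc[-1]))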
Reducing overfitting:
1: Boosting mainly focuses on reducing bias, i.e., the gap between the algorithm's expected prediction and the true values; the model itself has strong fitting capacity and fits the data more closely at each iteration, which keeps the bias small. To keep the variance small as well, choose a simple base model, i.e., a small decision tree depth, which guards against overfitting (see the first sketch after this list).
2: Split the training set into two parts: use one part to train the GBDT model, and feed the other part into that model to generate GBDT features (see the second sketch after this list).
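A minimal sketch for point 1, assuming the X, y from above: comparing cross-validated AUC for a shallow versus a deep GBDT makes the variance effect of tree depth visible (the two depths and the other settings here are illustrative, not tuned).

from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in >= 0.18
from sklearn.ensemble import GradientBoostingClassifier

for depth in (3, 13):
    clf = GradientBoostingClassifier(n_estimators=60, max_depth=depth, random_state=10)
    scores = cross_val_score(clf, X, y, scoring='roc_auc', cv=5)
    # a deeper tree tends to show a higher spread (std) across folds
    print("max_depth=%d: mean AUC %.5f, std %.5f" % (depth, scores.mean(), scores.std()))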
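A minimal sketch for point 2, assuming the same X and y (the hyperparameters are illustrative, not tuned): train a GBDT on one half of the data, use apply() to map the other half to per-tree leaf indices, one-hot encode those indices, and train a downstream logistic regression on them.

from sklearn.cross_validation import train_test_split  # sklearn.model_selection in >= 0.18
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn import metrics

# One half trains the GBDT; the other half is transformed into GBDT features
X_gbdt, X_lr, y_gbdt, y_lr = train_test_split(X, y, test_size=0.5, random_state=10)

gbdt = GradientBoostingClassifier(n_estimators=60, max_depth=7, random_state=10)
gbdt.fit(X_gbdt, y_gbdt)

# apply() returns the leaf index each sample falls into for every tree;
# for binary classification the shape is (n_samples, n_estimators, 1)
leaves = gbdt.apply(X_lr)[:, :, 0]
lr_features = OneHotEncoder().fit_transform(leaves)  # sparse one-hot leaf indicators

lr = LogisticRegression()
lr.fit(lr_features, y_lr)
y_predprob = lr.predict_proba(lr_features)[:, 1]
print("AUC of LR on GBDT features (train): %f" % metrics.roc_auc_score(y_lr, y_predprob))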