Reposted from: https://blog.csdn.net/IqqIqqIqqIqq/article/details/78857411
1 Implementation with sklearn
```python
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
import numpy as np

# Load the Boston housing dataset as an example
# (note: load_boston was removed in scikit-learn 1.2)
boston = load_boston()
X = boston["data"]
Y = boston["target"]
names = boston["feature_names"]

rf = RandomForestRegressor()
rf.fit(X, Y)
print("Features sorted by their score:")
print(sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), names),
             reverse=True))
```
Output:

```
Features sorted by their score: [(0.5298, 'LSTAT'), (0.4116, 'RM'), (0.0252, 'DIS'), (0.0172, 'CRIM'), (0.0065, 'NOX'), (0.0035, 'PTRATIO'), (0.0021, 'TAX'), (0.0017, 'AGE'), (0.0012, 'B'), (0.0008, 'INDUS'), (0.0004, 'RAD'), (0.0001, 'CHAS'), (0.0, 'ZN')]
```
There are a few caveats when ranking features by impurity-based importance:
(1) Feature selection based on impurity reduction is biased toward variables with many categories (high cardinality).
(2) When correlated features are present, once one of them is selected, the importance of the other correlated features drops sharply, because the impurity they could have reduced has already been removed by the earlier feature.
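Caveat (2) can be seen concretely with a small sketch (the synthetic data and the `max_features` setting below are my own illustration, not from the original post): a feature and a near-duplicate of it end up sharing the importance that either one would receive alone.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
x = rng.randn(500)
# Column 1 is a near-duplicate of column 0; columns 2 and 3 are pure noise
X = np.column_stack([x, x + 0.01 * rng.randn(500),
                     rng.randn(500), rng.randn(500)])
y = x + 0.1 * rng.randn(500)

# max_features=2 makes each split consider only a random subset of features,
# so both correlated copies get used by some trees
rf = RandomForestRegressor(max_features=2, random_state=0).fit(X, y)
print(rf.feature_importances_.round(3))
```

The two correlated columns split nearly all of the importance between them, while the noise columns get close to zero; neither copy alone reflects the full predictive value of the underlying signal.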
2 Mean decrease accuracy
This method directly measures each feature's impact on the model's predictive accuracy. The basic idea is to randomly permute the values in one feature column and observe how much the model's accuracy drops. For an unimportant feature, the permutation has little effect on accuracy; for an important feature, it degrades accuracy substantially.
An example of this method:
```python
# Continues from the snippet above (boston, names, np already defined).
# sklearn.cross_validation was removed; ShuffleSplit now lives in
# sklearn.model_selection with keyword arguments.
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from collections import defaultdict

X = boston["data"]
Y = boston["target"]
rf = RandomForestRegressor()
scores = defaultdict(list)

# Cross-validate the scores on a number of different random splits of the data
ss = ShuffleSplit(n_splits=100, test_size=0.3)
for train_idx, test_idx in ss.split(X):
    X_train, X_test = X[train_idx], X[test_idx]
    Y_train, Y_test = Y[train_idx], Y[test_idx]
    rf.fit(X_train, Y_train)
    acc = r2_score(Y_test, rf.predict(X_test))
    for i in range(X.shape[1]):
        X_t = X_test.copy()
        np.random.shuffle(X_t[:, i])
        shuff_acc = r2_score(Y_test, rf.predict(X_t))
        scores[names[i]].append((acc - shuff_acc) / acc)

print("Features sorted by their score:")
print(sorted([(round(np.mean(score), 4), feat)
              for feat, score in scores.items()], reverse=True))
```
Output:

```
Features sorted by their score: [(0.7276, 'LSTAT'), (0.5675, 'RM'), (0.0867, 'DIS'), (0.0407, 'NOX'), (0.0351, 'CRIM'), (0.0233, 'PTRATIO'), (0.0168, 'TAX'), (0.0122, 'AGE'), (0.005, 'B'), (0.0048, 'INDUS'), (0.0043, 'RAD'), (0.0004, 'ZN'), (0.0001, 'CHAS')]
```