Recursive feature elimination (RFE) trains a base model over several rounds; after each round it removes the features with the smallest weight coefficients, then trains the next round on the reduced feature set.
The sklearn documentation explains it as follows: for predictive models that assign weights to features (e.g., the coefficients of a linear model), RFE selects features by recursively shrinking the feature set under consideration. First, the model is trained on the original features, assigning each feature a weight. Then, the features with the smallest absolute weights are pruned from the feature set. This recursion repeats until the number of remaining features reaches the desired count.
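A minimal from-scratch sketch may help make this loop concrete. It assumes a linear model whose coef_ stores one weight per feature; the iris data and LogisticRegression are illustrative choices, not part of the original text:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
n_features_to_select = 2
remaining = list(range(X.shape[1]))  # indices of surviving features
while len(remaining) > n_features_to_select:
    model = LogisticRegression(max_iter=1000).fit(X[:, remaining], y)
    # coef_ has shape (n_classes, n_surviving); aggregate |weights| per feature
    importance = np.abs(model.coef_).sum(axis=0)
    weakest = int(np.argmin(importance))  # smallest-weight feature this round
    del remaining[weakest]                # eliminate it, then retrain
print("Selected feature indices:", remaining)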
RFECV performs RFE inside a cross-validation loop to select the optimal number of features. A set of d features has 2^d − 1 non-empty subsets, so exhaustive search is impractical; instead, an external estimator (e.g., an SVM) is specified, RFE produces a nested sequence of candidate feature subsets, and the cross-validation error is computed for each candidate size. The size with the smallest error is chosen.
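The selection loop that RFECV automates can be sketched as follows, assuming step=1; the dataset and estimator are again illustrative, and the real RFECV performs the elimination inside each cross-validation split rather than scoring precomputed subsets:

from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)
scores = []
for k in range(1, X.shape[1] + 1):
    # keep the top-k features ranked by RFE, then score the reduced model
    pipe = make_pipeline(
        RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=k),
        LogisticRegression(max_iter=1000),
    )
    scores.append(cross_val_score(pipe, X, y, cv=5).mean())
best_k = scores.index(max(scores)) + 1
print("Best number of features:", best_k)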
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

iris = load_iris()
# Recursive feature elimination; returns the data reduced to the selected features.
# estimator: the base model used to score features
# n_features_to_select: the number of features to keep
RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=2).fit_transform(iris.data, iris.target)
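Keeping a reference to the fitted selector (rather than calling fit_transform directly) also exposes which features survived, continuing from the snippet above:

selector = RFE(estimator=LogisticRegression(max_iter=1000),
               n_features_to_select=2).fit(iris.data, iris.target)
print(selector.support_)   # boolean mask over the original features
print(selector.ranking_)   # selected features get rank 1; larger rank = eliminated earlier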
Recursive feature elimination: an example of recursive feature elimination, showing the relevance of individual pixels in a digit classification task.
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
# Create the RFE object and rank each pixel
svc = SVC(kernel="linear", C=1)
rfe = RFE(estimator=svc, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
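Because n_features_to_select=1, the elimination runs all the way down to a single pixel, so ranking_ assigns each of the 64 pixels a distinct rank (1 = kept longest, 64 = dropped first); reshaping that vector back to the 8x8 image grid is what produces the heat map.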
Recursive feature elimination with cross-validation: an example of recursive feature elimination that automatically tunes the number of selected features via cross-validation.
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV
from sklearn.datasets import make_classification
# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000, n_features=25, n_informative=3,
n_redundant=2, n_repeated=0, n_classes=8,
n_clusters_per_class=1, random_state=0)
# Create the RFE object and compute a cross-validated score.
svc = SVC(kernel="linear")
# The "accuracy" scoring is proportional to the number of correct
# classifications
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(2),
scoring='accuracy')
rfecv.fit(X, y)
print("Optimal number of features : %d" % rfecv.n_features_)
# Plot number of features VS. cross-validation scores
# Note: grid_scores_ was deprecated in scikit-learn 1.0 and removed in 1.2;
# cv_results_["mean_test_score"] is its replacement.
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (mean accuracy)")
mean_scores = rfecv.cv_results_["mean_test_score"]
plt.plot(range(1, len(mean_scores) + 1), mean_scores)
plt.show()
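Once fitted, the selector also works as a transformer, so the data can be reduced to the chosen features directly:

X_reduced = rfecv.transform(X)  # shape (1000, rfecv.n_features_)
print(rfecv.support_[:10])      # boolean mask over the original 25 features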