Feature Selection: Variance Threshold, Chi-Square Test, Mutual Information, Recursive Feature Elimination, L1 Norm, Tree Models


Reposted from: https://www.cnblogs.com/jasonfreak/p/5448385.html

Feature selection is mainly approached from two aspects:

  • Whether the feature is dispersed: a dispersed feature has a large variance, so differences in its values can carry information about the target.
  • Correlation between the feature and the target: features that are highly correlated with the target are preferred.
  • In addition, feature selection sometimes needs to treat categorical and continuous variables differently.

1. Filter methods: score each feature by its dispersion or its correlation with the target, then select features by setting a score threshold or the number of features to keep.
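All of the filter methods below follow the same scikit-learn pattern: a scoring function plus a selector. As a minimal sketch of that shared interface (here using SelectKBest with the ANOVA F-score f_classif, a scoring function not otherwise covered in this post):

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
# Score every feature against the target, then keep the k best
skb = SelectKBest(score_func=f_classif, k=2)
X_new = skb.fit_transform(X, y)
print(skb.scores_)        # one score per original feature
print(skb.get_support())  # boolean mask of the retained features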

Variance threshold: recommended for screening numerical features

Compute the variance of each feature, then keep the features whose variance exceeds a given threshold.

from sklearn.feature_selection import VarianceThreshold
from sklearn.datasets import load_iris
import pandas as pd

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))
# Recommended for numerical features; for categorical features,
# consider the proportion of each category instead
ts = 0.5
vt = VarianceThreshold(threshold=ts)
vt.fit(X_df)
# Inspect the variance of each feature
dict_variance = {}
for i, j in zip(X_df.columns.values, vt.variances_):
    dict_variance[i] = j
# Get the names of the retained features (VarianceThreshold keeps
# features whose variance is strictly greater than the threshold)
ls = list()
for i, j in dict_variance.items():
    if j > ts:
        ls.append(i)
X_new = pd.DataFrame(vt.transform(X_df), columns=ls)

Chi-square test: recommended for screening categorical variables in classification problems

The classical chi-square test measures the correlation between a categorical independent variable and a categorical dependent variable. Suppose the independent variable has N possible values and the dependent variable has M. For each pair (i, j), compare the observed frequency of samples where the independent variable equals i and the dependent variable equals j against the expected frequency, and build the statistic:

$$\chi^2 = \sum_{i=1}^{N} \sum_{j=1}^{M} \frac{(A_{ij} - E_{ij})^2}{E_{ij}}$$

where $A_{ij}$ is the observed frequency and $E_{ij}$ the expected frequency under independence.
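To make the statistic concrete, here is a small hand computation from a contingency table; the observed counts below are made up purely for illustration (note that sklearn's chi2 function computes its statistic differently, treating feature values as frequencies):

import numpy as np

# Observed frequencies A_ij: rows = values of the feature, columns = classes (made-up numbers)
observed = np.array([[10.0, 20.0],
                     [30.0, 40.0]])
row_sums = observed.sum(axis=1, keepdims=True)
col_sums = observed.sum(axis=0, keepdims=True)
total = observed.sum()
# Expected frequency under independence: E_ij = (row total * column total) / N
expected = row_sums * col_sums / total
chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(chi2_stat)  # large values suggest the feature and target are dependent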

from sklearn.feature_selection import chi2
from sklearn.datasets import load_iris
import pandas as pd

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))
# chi2 returns the chi-square statistic and p-value for each feature
# (renamed from the original to avoid shadowing the imported function)
chi2_stats, pval = chi2(X_df, y)
dict_feature = {}
for i, j in zip(X_df.columns.values, chi2_stats):
    dict_feature[i] = j
# Sort the dict by value in descending order
ls = sorted(dict_feature.items(), key=lambda item: item[1], reverse=True)
# Number of features to keep
k = 2
ls_new_feature = []
for i in range(k):
    ls_new_feature.append(ls[i][0])
X_new = X_df[ls_new_feature]

Mutual information: recommended for screening categorical variables in classification problems

Classical mutual information also evaluates the correlation between a categorical independent variable and a categorical dependent variable. To handle quantitative data, the maximal information coefficient (MIC) was proposed. Mutual information is computed as:
$$I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \log \frac{p(x, y)}{p(x)\,p(y)}$$
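As a concrete illustration of this formula, a short sketch that computes I(X;Y) directly from a made-up joint distribution:

import numpy as np

# Made-up joint distribution p(x, y): rows index x, columns index y (entries sum to 1)
p_xy = np.array([[0.1, 0.2],
                 [0.3, 0.4]])
p_x = p_xy.sum(axis=1, keepdims=True)  # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)  # marginal p(y)
# I(X;Y) = sum over x, y of p(x,y) * log(p(x,y) / (p(x) * p(y)))
mi = (p_xy * np.log(p_xy / (p_x * p_y))).sum()
print(mi)  # 0 means independent; larger means more shared information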
from sklearn.datasets import load_iris
import pandas as pd
# mutual_info_classif measures the mutual information between features and a discrete target
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))
# Build the column indices of the features to treat as discrete
feature_cat = ["A", "D"]
discrete_features = []
feature = X_df.columns.values.tolist()
for name in feature_cat:
    if name in feature:
        discrete_features.append(feature.index(name))
mu = mutual_info_classif(X_df, y, discrete_features=discrete_features,
                         n_neighbors=3, copy=True, random_state=None)
dict_feature = {}
for i, j in zip(X_df.columns.values, mu):
    dict_feature[i] = j
# Sort the dict by value in descending order
ls = sorted(dict_feature.items(), key=lambda item: item[1], reverse=True)
# Number of features to keep
k = 2
ls_new_feature = []
for i in range(k):
    ls_new_feature.append(ls[i][0])
X_new = X_df[ls_new_feature]
from sklearn.datasets import load_iris
import pandas as pd
# mutual_info_regression measures the mutual information between features and a
# continuous target (the iris target is discrete; it is reused here only for demonstration)
from sklearn.feature_selection import mutual_info_regression

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))
# Build the column indices of the features to treat as discrete
feature_cat = ["A", "D"]
discrete_features = []
feature = X_df.columns.values.tolist()
for name in feature_cat:
    if name in feature:
        discrete_features.append(feature.index(name))
mu = mutual_info_regression(X_df, y, discrete_features=discrete_features,
                            n_neighbors=3, copy=True, random_state=None)
dict_feature = {}
for i, j in zip(X_df.columns.values, mu):
    dict_feature[i] = j
# Sort the dict by value in descending order
ls = sorted(dict_feature.items(), key=lambda item: item[1], reverse=True)
# Number of features to keep
k = 2
ls_new_feature = []
for i in range(k):
    ls_new_feature.append(ls[i][0])
X_new = X_df[ls_new_feature]

2. Wrapper methods

Recursive feature elimination: train a base model over multiple rounds; after each round, eliminate the features with the smallest weights, then train the next round on the remaining feature set.
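The procedure just described is plain RFE, which scikit-learn also exposes directly, without the cross-validation wrapper used further below. A minimal sketch, assuming a RandomForestClassifier base model and keeping 2 features:

from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
# Drop one feature per round until only 2 remain
rfe = RFE(estimator=RandomForestClassifier(), n_features_to_select=2, step=1)
rfe.fit(X, y)
print(rfe.support_)  # boolean mask of retained features
print(rfe.ranking_)  # rank 1 = retained; larger rank = eliminated earlier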

 

from sklearn.datasets import load_iris
import pandas as pd
from sklearn.feature_selection import RFECV
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))
# step=0.5 removes 50% of the remaining features per round;
# cross-validation picks the best number of features
rfecv = RFECV(estimator=RandomForestClassifier(), step=0.5, cv=5,
              scoring=None, n_jobs=-1)
rfecv.fit(X_df, y)
# Number of features retained
rfecv.n_features_
# Boolean mask marking retained features as True
rfecv.support_
feature_new = X_df.columns.values[rfecv.support_]
# Cross-validation scores (newer scikit-learn versions expose cv_results_ instead)
rfecv.grid_scores_

3. Embedded methods

L1-norm based: use a base model with an L1 penalty term; besides selecting features, this also reduces dimensionality.

from sklearn.datasets import load_iris
import pandas as pd
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))
# The L1 penalty needs a solver that supports it, e.g. liblinear
sf = SelectFromModel(estimator=LogisticRegression(penalty="l1", C=0.1, solver="liblinear"),
                     threshold=None, prefit=False, norm_order=1)
sf.fit(X_df, y)
X_new = X_df[X_df.columns.values[sf.get_support()]]
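After fitting, SelectFromModel exposes the inner model as sf.estimator_, so (continuing from the snippet above) you can check which coefficients the L1 penalty drove to zero:

# Coefficients of the fitted L1 logistic regression, one row per class;
# features whose coefficients are (near) zero across classes get dropped
print(sf.estimator_.coef_)
print(sf.get_support())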

Tree-model based feature selection:

Among tree models, GBDT can also be used as the base model for feature selection, combining the SelectFromModel class from sklearn.feature_selection with a GBDT model.

 

from sklearn.datasets import load_iris
import pandas as pd
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)
X_df = pd.DataFrame(X, columns=list("ABCD"))
sf = SelectFromModel(estimator=GradientBoostingClassifier(),
                     threshold=None, prefit=False, norm_order=1)
sf.fit(X_df, y)
X_new = X_df[X_df.columns.values[sf.get_support()]]
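With threshold=None and a tree-based estimator, SelectFromModel uses the mean feature importance as the cutoff. Continuing from the snippet above, you can inspect the importances and the cutoff that was actually applied:

print(sf.estimator_.feature_importances_)  # importances from the fitted GBDT
print(sf.threshold_)                       # the cutoff actually used (the mean)
print(sf.get_support())                    # features at or above the cutoff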
