[Repost] Data Preprocessing: One-Hot Encoding


Original article: http://blog.csdn.net/dulingtingzi/article/details/51374487

Where the problem comes from

In many machine learning tasks, features are not always continuous values; they may be categorical.

For example, consider the following three features:

["male", "female"]

["from Europe", "from US", "from Asia"]

["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]

These features are much more efficient to work with when represented as integers. For example:

["male", "from US", "uses Internet Explorer"] 表示為[0, 1, 3]

["female", "from Asia", "uses Chrome"]表示為[1, 2, 1]

However, even after converting to integers, this data cannot be fed directly into a classifier. Classifiers typically assume that input data is continuous and ordered, whereas the integers above carry no meaningful order - they were assigned arbitrarily.

One-hot encoding

One possible way to solve this problem is one-hot encoding.

One-hot encoding, also called 1-of-N encoding, uses an N-bit state register to encode N states: each state gets its own register bit, and at any given time exactly one bit is set.

For example:

Natural binary codes for six states: 000, 001, 010, 011, 100, 101

One-hot codes: 000001, 000010, 000100, 001000, 010000, 100000
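As a quick illustration, here is a minimal sketch in numpy (numpy and the six-state example are assumptions used only for illustration): rows of an identity matrix are exactly such one-hot codes.

import numpy as np

states = [0, 1, 2, 3, 4, 5]               # the six states listed above
one_hot = np.eye(len(states), dtype=int)[states]
print(one_hot)                            # one row per state, with exactly one bit set per row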

Put differently: if a feature has m possible values, one-hot encoding turns it into m binary features. These features are mutually exclusive, and only one of them is active at a time, so the encoded data becomes sparse.

The main benefits are:

  1. It works around the fact that many classifiers cannot handle categorical attribute data directly

  2. To a certain extent it also expands the feature set

Example

Here is a simple example based on Python and scikit-learn:

from sklearn import preprocessing

enc = preprocessing.OneHotEncoder()
enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
enc.transform([[0, 1, 3]]).toarray()

Output:

array([[ 1.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  1.]])
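The nine output columns arise because the three input columns take 2, 3 and 4 distinct values respectively (2 + 3 + 4 = 9); the sample [0, 1, 3] activates the first value of column 1, the second value of column 2 and the fourth value of column 3. As a sketch of how to check this, with a reasonably recent scikit-learn (0.20 or later - an assumption about the installed version) the fitted encoder exposes the values it learned per column:

print(enc.categories_)
# e.g. [array([0, 1]), array([0, 1, 2]), array([0, 1, 2, 3])]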

 

How should normalization be done when discrete and continuous features coexist?
Summarized from the following discussion:
https://www.quora.com/What-are-good-ways-to-handle-discrete-and-continuous-inputs-together
Summary:
1. Every raw feature must be normalized separately. Suppose feature A ranges over [-1000, 1000] and feature B over [-1, 1].
With logistic regression the model computes w1*x1 + w2*x2; because x1 is so much larger in magnitude, x2 contributes almost nothing.
So the features must be normalized, each one on its own.
2. Common ways to normalize continuous features (see the sketch after this list):
   2.1 Rescale bounded continuous features: all continuous inputs that are bounded are rescaled to [-1, 1] via x = (2x - max - min)/(max - min).
   2.2 Standardize all continuous features: for every continuous feature, compute its mean u and standard deviation s and set x = (x - u)/s, giving mean 0 and unit variance.
3. Handling discrete features:

a) Binarize categorical/discrete features: for all categorical features, represent them as multiple boolean features. For example, instead of one feature called marriage_status, have 3 boolean features - married_status_single, married_status_married, married_status_divorced - and set them to 1 or -1 as appropriate. So for every categorical feature you add k binary features, where k is the number of values that categorical feature takes. In other words, discrete features are simply one-hot encoded: a feature with k possible values is represented with k dimensions.
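To make points 1-3 above concrete, here is a minimal sketch using pandas and scikit-learn; the column names (age, income, marriage_status) and the toy values are invented for illustration:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder, StandardScaler

# Hypothetical data mixing continuous and categorical columns.
df = pd.DataFrame({
    "age": [23, 45, 31, 52],                              # bounded continuous feature
    "income": [1200.0, -300.0, 800.0, 150.0],             # continuous feature
    "marriage_status": ["single", "married", "divorced", "married"],
})

preprocess = ColumnTransformer([
    ("rescale", MinMaxScaler(feature_range=(-1, 1)), ["age"]),  # 2.1: linear rescale to [-1, 1]
    ("standardize", StandardScaler(), ["income"]),              # 2.2: mean 0, unit variance
    ("onehot", OneHotEncoder(), ["marriage_status"]),           # 3a: one binary column per value
])

X = preprocess.fit_transform(df)
print(X)  # each row: rescaled age, standardized income, three one-hot columns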

 

There are good reasons for using one-hot encoding on discrete features; it is not an arbitrary choice. The reasons are laid out in the following points:
1. Why do we binarize categorical features?
We binarize the categorical input so that they can be thought of as a vector from the Euclidean space (we call this embedding the vector in the Euclidean space). In other words, one-hot encoding embeds the values of a discrete feature in Euclidean space: each value of the feature corresponds to a point in that space.
 
2. Why do we embed the feature vectors in the Euclidean space?
Because many algorithms for classification/regression/clustering etc. require computing distances between features or similarities between features, and many definitions of distance and similarity are defined over features in Euclidean space. So we would like our features to lie in the Euclidean space as well. Common similarity measures such as cosine similarity are likewise defined over vectors in Euclidean space.


3. Why does embedding the feature vector in Euclidean space require us to binarize categorical features?
Let us take an example of a dataset with just one feature (say job_type) and let us say it takes three values 1, 2, 3.
Now take the three feature vectors x_1 = (1), x_2 = (2), x_3 = (3). What are the Euclidean distances between x_1 and x_2, x_2 and x_3, and x_1 and x_3? d(x_1, x_2) = 1, d(x_2, x_3) = 1, d(x_1, x_3) = 2. This says that job type 1 is closer to job type 2 than to job type 3. Does that make sense? Can we even rationally define a proper distance between different job types? In many cases of categorical features we cannot properly define a distance between the different values the feature takes, and in such cases isn't it fairer to assume that all values are equally far away from each other?
Now see what happens when we binarize the same feature vectors: x_1 = (1, 0, 0), x_2 = (0, 1, 0), x_3 = (0, 0, 1). The distance between any pair of them is sqrt(2). So, essentially, when we binarize the input we implicitly state that all values of the categorical feature are equally far away from each other - which is usually the more reasonable assumption.
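A minimal sketch of this distance argument in numpy (the three job_type values are just the example above):

import numpy as np

# Integer encoding: the distances depend on the arbitrary labels.
x1, x2, x3 = np.array([1.0]), np.array([2.0]), np.array([3.0])
print(np.linalg.norm(x1 - x2), np.linalg.norm(x2 - x3), np.linalg.norm(x1 - x3))  # 1.0 1.0 2.0

# One-hot encoding: every pair of values is equally far apart.
x1, x2, x3 = np.eye(3)
print(np.linalg.norm(x1 - x2), np.linalg.norm(x2 - x3), np.linalg.norm(x1 - x3))  # all sqrt(2)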
4. About the original question
Note that our reason for binarizing categorical features is independent of the number of values the categorical feature takes, so yes, even if the categorical feature takes 1000 values, we would still prefer to binarize. In short, one-hot encoding of discrete features exists to make distance computation reasonable.
5. Are there cases when we can avoid binarization?
Yes. As figured out above, the reason we binarize is that we want some meaningful distance relationship between the different values. As long as there already is a meaningful distance relationship, we can avoid binarizing the categorical feature. For example, suppose you are building a classifier to decide whether a webpage is an important entity page (a page important to a particular entity), and one feature is the rank of the webpage in the search results for that entity. Then 1] the rank feature is categorical, but 2] rank 1 and rank 2 are clearly closer to each other than rank 1 and rank 3, so the rank feature already defines a meaningful distance relationship, and in this case we do not have to binarize it.

More generally, if you can cluster the categorical values into disjoint subsets such that the subsets have a meaningful distance relationship amongst them, then you do not have to binarize fully; instead you can split only over these clusters. For example, if a categorical feature has 1000 values but you can split them into two groups of, say, 400 and 600, and within each group the values have a meaningful distance relationship, then instead of fully binarizing you can just add 2 features, one for each cluster, and that should be fine.
In short: one-hot encoding exists to make distance computation reasonable; if the feature is discrete but distances between its values are already meaningful without it, there is no need to one-hot encode.
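A minimal sketch of this partial binarization; the split of the 1000 values into two clusters (0-399 and 400-999) is a made-up assumption used only for illustration:

import numpy as np

values = np.array([10, 450, 399, 999])        # raw categorical codes out of 1000 possible values
group = (values >= 400).astype(int)           # hypothetical cluster assignment: 0-399 vs 400-999

# Instead of 1000 one-hot columns, add just two indicator features, one per cluster.
cluster_features = np.column_stack([(group == 0).astype(int), (group == 1).astype(int)])
print(cluster_features)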
 
After a discrete feature has been one-hot encoded, each resulting dimension can itself be treated as a continuous feature, so the same normalization methods apply to it: rescale each dimension to [-1, 1] or standardize it to mean 0 and unit variance.
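For example (a minimal sketch; whether standardizing 0/1 indicator columns actually helps depends on the downstream model):

from sklearn.preprocessing import OneHotEncoder, StandardScaler

enc = OneHotEncoder()
onehot = enc.fit_transform([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]]).toarray()

# Each one-hot column is a 0/1 column; it can be standardized like any continuous feature.
scaled = StandardScaler().fit_transform(onehot)
print(scaled)  # every column now has mean 0 and unit variance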
 
Some cases do not require feature normalization at all:
     It depends on your ML algorithm. Some methods require almost no effort to normalize features or to handle continuous and discrete features together, like tree-based methods: C4.5, CART, random forest, bagging or boosting. But most parametric models (generalized linear models, neural networks, SVM, etc.) and methods using distance metrics (KNN, kernels, etc.) require careful work to achieve good results. Standard approaches include binarizing all categorical features and scaling all continuous features to zero mean and unit variance.
     In short: tree-based methods (random forests, bagging, boosting, etc.) do not need feature normalization; parametric models and distance-based models do.

Why one-hot encoding handles the discrete values of categorical data
First, one-hot encoding uses an N-bit state register to encode N states.
E.g. high / medium / low are not separable as a single ordinal value; after encoding them with three bits they become separable and act as mutually independent events.
This is similar to an SVM: features that are not linearly separable become separable after being projected into a higher-dimensional space.
GBDT does not perform well on high-dimensional sparse matrices, and even on low-dimensional sparse matrices it is not necessarily better than an SVM.
Tree models do not really need one-hot encoding:
For a decision tree, one-hot encoding essentially just increases the depth of the tree.
A tree model dynamically builds a mechanism similar to one-hot encoding plus feature crossing during training:
1. One or more features are ultimately turned into a leaf node that acts as the encoding; the one-hot bits can be viewed as independent events (the three bits in the example above).
2. A decision tree has no notion of feature magnitude, only of which part of a feature's distribution a value falls into.
One-hot encoding can address linear separability, but for tree models it is often no better than label encoding.
A drawback of one-hot encoding after dimensionality reduction:
Features that could be crossed before the reduction may no longer be crossable afterwards.
How a tree model trains:
The number of nodes on the path from the root to a leaf is the number of times features have been crossed, so a tree model performs feature crossing by itself.
E.g. "is it long?" { no -> ("is it yellow?" yes -> pomelo, no -> apple), yes -> banana }. This is effectively "round" crossed with "yellow": one-hot encoding shape (round, long) and colour (yellow, red) directly would give a sample of degree 4.
Using the leaf nodes of a tree model as the result of feature crossing avoids unnecessary crossing operations and shrinks the dimensionality and the candidate set of crossing degrees.
E.g. full degree-2 crossing gives an 8-dimensional feature vector, whereas the tree needs only 3 leaf nodes.
A tree model consumes less computation and fewer resources than one-hot + high-degree Cartesian products + lasso.
This is why a linear model can be stacked on top of a tree model:
an n*m input -> after training the tree we know which leaf each sample falls into -> output the leaf index -> an n*1 matrix -> one-hot encode it -> an n*o matrix (o is the number of leaves) -> train a linear model on it (see the sketch below).
A typical instance: GBDT + LR, i.e. tree leaf indices fed as one-hot features into a linear model.
Advantage: it saves the time and space of doing explicit feature crossing.
If a model is trained on one-hot features alone, the features are treated as independent of each other.
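A minimal sketch of this leaf-index stacking with scikit-learn (a recent version is assumed; the dataset and hyperparameters are invented for illustration). GradientBoostingClassifier.apply returns, for every sample, the index of the leaf it falls into in every tree; those indices are one-hot encoded and fed to a linear model:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

# Toy n*m input.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the tree model.
gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
gbdt.fit(X_train, y_train)

# 2. Leaf index of every sample in every tree (drop the trailing class axis for binary labels).
leaves_train = gbdt.apply(X_train)[:, :, 0]
leaves_test = gbdt.apply(X_test)[:, :, 0]

# 3. One-hot encode the leaf indices: an n x o sparse matrix, where o is the total number of leaves.
enc = OneHotEncoder(handle_unknown="ignore")
onehot_train = enc.fit_transform(leaves_train)
onehot_test = enc.transform(leaves_test)

# 4. Stack a linear model on top of the encoded leaves.
lr = LogisticRegression(max_iter=1000)
lr.fit(onehot_train, y_train)
print("test accuracy:", lr.score(onehot_test, y_test))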
A way to think about existing models: G(l(tensor)), where
l(.) is the model applied at each node, and
G(.) is the topology by which nodes are connected.
Neural network: l(.) is a logistic regression unit,
G(.) is full connection between layers.
Decision tree: l(.) is LR,
G(.) is a tree-shaped connection.
Possible innovations: take l(.) to be naive Bayes, an SVM, a single-layer NN, etc.,
and choose different ways for G(.) to pass information.

