Using sklearn to build a simple binary classifier on the MNIST handwritten-digit dataset (and, along the way, learning about model evaluation metrics such as accuracy, precision, recall, and the confusion matrix)


 

 

 

 

I. Fetching the MNIST handwritten-digit dataset

Note that running the code below may fail to download the dataset directly. You can download it in advance from here (https://download.csdn.net/download/x454045816/10157075) and put it in the mldata folder, after which the code runs without errors. (Also note that fetch_mldata is deprecated, as the warnings below show; on scikit-learn >= 0.20 the same data can be loaded with fetch_openml('mnist_784', version=1).)

In [6]:
from sklearn.datasets import fetch_mldata
mnist=fetch_mldata("MNIST original",data_home='./')
mnist
 
/home/lxy/env/lib/python3.5/site-packages/sklearn/utils/deprecation.py:77: DeprecationWarning: Function fetch_mldata is deprecated; fetch_mldata was deprecated in version 0.20 and will be removed in version 0.22
  warnings.warn(msg, category=DeprecationWarning)
/home/lxy/env/lib/python3.5/site-packages/sklearn/utils/deprecation.py:77: DeprecationWarning: Function mldata_filename is deprecated; mldata_filename was deprecated in version 0.20 and will be removed in version 0.22
  warnings.warn(msg, category=DeprecationWarning)
Out[6]:
{'COL_NAMES': ['label', 'data'],
 'DESCR': 'mldata.org dataset: mnist-original',
 'data': array([[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
 'target': array([0., 0., 0., ..., 9., 9., 9.])}
 

Extract the feature matrix X and the label vector y from the dataset:

In [8]:
X,y=mnist["data"],mnist["target"]
print(X.shape)
print(y.shape)
 
(70000, 784)
(70000,)
 

We can see that the dataset contains 70,000 images with 784 features each (because each image is 28×28 = 784 pixels, and each pixel value lies between 0 and 255).
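To confirm the 28×28 structure, any row of X can be reshaped back into an image grid. A minimal sketch using a synthetic pixel vector in place of a real MNIST row (the shape and value range are the same):

```python
import numpy as np

# Synthetic stand-in for one row of X: 784 pixel values in 0..255
rng = np.random.default_rng(0)
flat = rng.integers(0, 256, size=784, dtype=np.uint8)

# Reshape back into the original 28x28 pixel grid
image = flat.reshape(28, 28)

print(image.shape)  # (28, 28)
```

With a real row, `plt.imshow(image, cmap='binary')` would display the digit.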

 

II. Split the dataset and create a test set. Note that after the test set is created, you will work only with the training set for a long stretch; only at the very end of the project, when you are ready to launch the classifier, should you use the test set.

1. MNIST is conventionally split so that the first 60,000 images form the training set and the last 10,000 the test set.

In [10]:
X_train,X_test,y_train,y_test=X[:60000],X[60000:],y[:60000],y[60000:]

2. Shuffle the training set so that every cross-validation fold contains a similar mix of digits (some learning algorithms are also sensitive to the order of the training instances):

In [11]:
import numpy as np
shuffle_index=np.random.permutation(60000)
X_train,y_train=X_train[shuffle_index],y_train[shuffle_index]
 

III. Training a binary classifier

To simplify the problem, we try to recognize just one of the digits 0-9, say a binary classifier that detects the digit 5. The output is either "5" or "not 5".

1. Create binary target vectors from the original labels y_train and y_test

In [13]:
y_train_5=(y_train==5)
y_test_5=(y_test==5)
print(type(y_train_5))
print(y_train_5)
 
<class 'numpy.ndarray'>
[False False False ... False False False]
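The same boolean-mask idiom, applied to a tiny hypothetical label array:

```python
import numpy as np

y_small = np.array([5., 0., 4., 1., 9., 5.])  # hypothetical labels
y_small_5 = (y_small == 5)                    # elementwise comparison

print(y_small_5.tolist())  # [True, False, False, False, False, True]
print(y_small_5.sum())     # 2 (True counts as 1)
```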
 

2. Use a stochastic gradient descent (SGD) classifier

Its advantage is that it can handle very large datasets efficiently.

In [14]:
from sklearn.linear_model import SGDClassifier
sgd_clf=SGDClassifier(random_state=42)
 

3. Train the model on the training set

In [15]:
sgd_clf.fit(X_train,y_train_5)
 
/home/lxy/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:144: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  FutureWarning)
Out[15]:
SGDClassifier(alpha=0.0001, average=False, class_weight=None,
       early_stopping=False, epsilon=0.1, eta0=0.0, fit_intercept=True,
       l1_ratio=0.15, learning_rate='optimal', loss='hinge', max_iter=None,
       n_iter=None, n_iter_no_change=5, n_jobs=None, penalty='l2',
       power_t=0.5, random_state=42, shuffle=True, tol=None,
       validation_fraction=0.1, verbose=0, warm_start=False)
 

IV. Evaluating the classifier

1. Use cross_val_score to perform k-fold cross-validation: the training set is split into k folds, and each fold in turn serves as the validation set while the remaining k-1 folds are used for training, yielding one accuracy score per (validation set, training set) combination, i.e. k scores in total.

In [36]:
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf,X_train,y_train_5,scoring="accuracy",cv=3)
 
/home/lxy/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:144: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  FutureWarning)
/home/lxy/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:144: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  FutureWarning)
/home/lxy/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:144: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  FutureWarning)
Out[36]:
array([0.96045, 0.9627 , 0.96615])
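To make the fold mechanics concrete, here is a hand-rolled sketch of 3-fold scoring in pure NumPy, using a trivial majority-class model as a stand-in for sgd_clf (the helper names are made up for illustration; this does not reproduce the scores above):

```python
import numpy as np

def kfold_accuracy(X, y, k=3):
    """Score a trivial majority-class 'model' with k-fold cross-validation."""
    idx = np.arange(len(X))
    scores = []
    for val_idx in np.array_split(idx, k):
        train_idx = np.setdiff1d(idx, val_idx)
        # "train": remember the majority class of the k-1 training folds
        majority = bool(np.bincount(y[train_idx].astype(int)).argmax())
        # "predict": always output that majority class on the validation fold
        preds = np.full(len(val_idx), majority, dtype=bool)
        scores.append((preds == y[val_idx]).mean())
    return np.array(scores)

X_toy = np.zeros((12, 2))
y_toy = np.array([False] * 10 + [True] * 2)  # imbalanced, like y_train_5
scores = kfold_accuracy(X_toy, y_toy)
print(scores)  # [1.  1.  0.5]
```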
 

At first glance, cross-validation gives the model an average accuracy above 95%.

2. However, we can write a very simple custom "never 5" binary classifier that does no actual training and just hard-codes its predictions:

Its fit method performs no actual training.

Its predict method likewise uses no training; it simply returns a len(X)-row, 1-column numpy array filled with False as the model's predictions.

A note on numpy.zeros(shape, dtype=float, order='C'):

shape specifies the number of rows and columns of the array;

dtype specifies the element type; with dtype=bool the entries are initialized to False.
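For example:

```python
import numpy as np

z = np.zeros((3, 1), dtype=bool)  # 3 rows, 1 column
print(z.shape)  # (3, 1)
print(z.any())  # False -- every entry is initialized to False
```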

In [40]:
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
    def fit(self,X,y=None):
        # no actual training
        pass
    def predict(self,X):
        # always predict "not a 5"
        return np.zeros((len(X),1),dtype=bool)
In [41]:
never_5_clf=Never5Classifier()
In [42]:
cross_val_score(never_5_clf,X_train,y_train_5,scoring="accuracy",cv=3)
Out[42]:
array([0.9094 , 0.9114 , 0.90815])
 

With this hand-crafted predictor that has no real predictive ability, accuracy on the "not 5" task still reaches about 90%. The reason is that only about 10% of the images are 5s, so if you always guess that an image is not a 5, you will be right about 90% of the time.

This small example shows that accuracy is generally not a good performance measure, especially on skewed datasets where some classes are far more frequent than others.
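The effect is easy to reproduce on a tiny synthetic label vector with a 10% positive rate:

```python
import numpy as np

# 10% of labels are True ("is 5"), mirroring MNIST's class balance
y_toy = np.array([False] * 9 + [True])

# Always predict the majority class ("not 5")
preds = np.zeros(len(y_toy), dtype=bool)

accuracy = (preds == y_toy).mean()
print(accuracy)  # 0.9 -- high accuracy despite zero predictive power
```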

 

3. Using precision and recall to evaluate model performance

 

(1) First, you need to understand the confusion matrix (look it up if needed): the numbers on its diagonal count the instances whose predicted label matches the actual label.

To compute a confusion matrix we need predicted values for the labels y, which we obtain with cross_val_predict.

(2) About the cross_val_predict function:

cross_val_score uses cross-validation to return performance scores, whereas cross_val_predict also uses cross-validation but returns predictions instead: for each sample, it returns the prediction made when that sample was in the validation fold, i.e. by a model that never saw it during training.
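A conceptual pure-NumPy sketch of that behavior (fit_predict and the toy majority model are made-up helpers for illustration, not sklearn API; the real cross_val_predict clones and refits the estimator for each fold):

```python
import numpy as np

def manual_cv_predict(fit_predict, X, y, k=3):
    """Each sample is predicted by a model that never saw it in training."""
    n = len(X)
    preds = np.empty(n, dtype=y.dtype)
    for val_idx in np.array_split(np.arange(n), k):
        train_idx = np.setdiff1d(np.arange(n), val_idx)
        preds[val_idx] = fit_predict(X[train_idx], y[train_idx], X[val_idx])
    return preds

# Toy "model": always predict the training folds' majority class
def majority_fit_predict(X_tr, y_tr, X_val):
    majority = y_tr.sum() * 2 > len(y_tr)
    return np.full(len(X_val), majority, dtype=bool)

X_toy = np.zeros((6, 2))
y_toy = np.array([False, False, False, False, True, True])
out = manual_cv_predict(majority_fit_predict, X_toy, y_toy)
print(out)  # all False: the minority True class is never predicted
```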

In [47]:
from sklearn.model_selection import cross_val_predict
y_train_pred=cross_val_predict(sgd_clf,X_train,y_train_5,cv=3)
# print(len(y_train_pred)==len(y_train))  # sanity check: one prediction per training sample
 
/home/lxy/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:144: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  FutureWarning)
/home/lxy/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:144: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  FutureWarning)
/home/lxy/env/lib/python3.5/site-packages/sklearn/linear_model/stochastic_gradient.py:144: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  FutureWarning)
 

sklearn can compute the confusion matrix for us:

In [48]:
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5,y_train_pred)
Out[48]:
array([[53562,  1017],
       [ 1197,  4224]])
 

Each row of the confusion matrix represents an actual class, and each column a predicted class. The first row is the "not 5" (negative) class: 53,562 images were correctly classified as "not 5" (true negatives), while the remaining 1,017 were wrongly classified as "5" (false positives). The second row is the "5" (positive) class: 1,197 images were wrongly classified as "not 5" (false negatives), and the remaining 4,224 were correctly classified as "5" (true positives). A perfect classifier would have only true negatives and true positives, so its confusion matrix would have nonzero values only on the main diagonal (top-left to bottom-right).

 

(3) From the confusion matrix we obtain precision and recall:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

The figure in Hands-On Machine Learning with Scikit-Learn and TensorFlow illustrating TP, FP, FN, and TN in a confusion matrix is a helpful visual aid here. [figure not reproduced]

Also, be careful to distinguish accuracy from precision: accuracy = (TP + TN) / (TP + FP + TN + FN).
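Plugging the confusion-matrix counts from above into these formulas reproduces the scores that precision_score and recall_score return below:

```python
# Counts from the confusion matrix above: [[TN, FP], [FN, TP]]
TN, FP, FN, TP = 53562, 1017, 1197, 4224

precision = TP / (TP + FP)
recall = TP / (TP + FN)
accuracy = (TP + TN) / (TP + TN + FP + FN)

print(round(precision, 4))  # 0.806
print(round(recall, 4))     # 0.7792
print(round(accuracy, 4))   # 0.9631
```

Note that accuracy (96.3%) is far higher than either precision or recall, again because of the class imbalance.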

In [49]:
from sklearn.metrics import precision_score,recall_score
precision_score(y_train_5,y_train_pred)
Out[49]:
0.8059530623926732
In [50]:
recall_score(y_train_5,y_train_pred)
Out[50]:
0.7791920309905921

