Deep Learning 10, UFLDL Tutorial: Convolution and Pooling exercise (Stanford Deep Learning Tutorial)


Preface

Theory: the UFLDL tutorial and http://www.cnblogs.com/tornadomeet/archive/2013/04/09/3009830.html

Environment: Windows 7, MATLAB 2015b, 16 GB RAM, 2 TB HDD

Experiment: Exercise: Convolution and Pooling. Extract features from 2000 64x64 RGB images (a subset of the STL-10 dataset) to form a training set and train a softmax classifier on it; then extract features from 3200 64x64 RGB images (another subset of the STL-10 dataset) to form a test set, feed them to the trained softmax classifier, and classify these 3200 images into 4 classes: airplane, car, cat, dog.

Background notes

1. How do we extract features from the 2000 64x64 RGB images to form the training set, and from the 3200 64x64 RGB images to form the test set?

The RGB images here are 64x64, much larger than the 8x8 patches used in all the previous exercises. Extracting features directly from such large images with the method of Deep Learning 9 (UFLDL tutorial: linear decoder exercise) would be far too expensive, so instead we use an indirect approach that exploits an inherent property of natural images: the statistics of one part of a natural image are the same as those of any other part. This stationarity means that a feature L_feature learned on one region A of a natural image can also be applied to any other region B; in other words, the same learned feature can be used at every position of the image.

So how do we extract the L_feature feature (learned on a small region A) over the whole large image? By convolution: convolving the feature with the large image extracts the L_feature response over the whole image (the reason is explained below). The resulting feature map is only slightly smaller than the image itself: if the feature learned from small patches is 8x8 and the image is 64x64, the map is (64-8+1) x (64-8+1) = 57x57. Using these maps directly as training data would still be very expensive (the input layer would still need 57*57*3 units), so we further apply pooling: with a pooling dimension of 19, the 57x57 map is divided into 3x3 = 9 non-overlapping 19x19 regions, and each region is reduced to a single value (the region mean for mean pooling, the maximum for max pooling), turning the 57x57 map into a 3x3 map. In this way the 2000 64x64 RGB images are reduced to 3x3 feature maps, which form the training set; the test set is obtained in the same way.
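A minimal sketch of this size arithmetic (imageDim, patchDim and poolDim match the variables used in the exercise code further below; the sketch is purely illustrative):

    imageDim = 64;  patchDim = 8;  poolDim = 19;
    convolvedDim = imageDim - patchDim + 1;        % 57: side length of one convolved feature map
    pooledDim    = floor(convolvedDim / poolDim);  % 3 : side length after mean pooling over 19x19 regions
    fprintf('convolved: %dx%d, pooled: %dx%d\n', convolvedDim, convolvedDim, pooledDim, pooledDim);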

      The concrete procedure:

      Step 1: randomly sample 100,000 8x8 RGB patches from the STL-10 dataset (i.e. sampled 8x8 patches from the STL-10 dataset) and preprocess them: first subtract the mean (note: this is not per-sample mean subtraction), then apply ZCA whitening.

      Step 2: use a linear decoder on the preprocessed data to extract M color features.

The first two steps were already carried out in Deep Learning 9 (UFLDL tutorial: linear decoder exercise).

      Step 3: convolve each of the 2000 64x64 RGB images with each of the M features obtained in step 2, so that the M learned features are extracted from every 64x64 RGB image, giving 2000*M convolved feature maps in total.

      Step 4: pool these 2000*M feature maps to reduce their dimensionality, which gives the training set. The test set is obtained in the same way.

The last two steps are the content of this exercise.

2. Why, after learning the feature L_feature from one region A of a natural image, is the way to extract the L_feature feature over the whole image to convolve the feature with the image?

First, we need to be clear about the following:

① What the convolution operation computes (illustrated by the convolution schematic in the tutorial);

② How a feature F, learned from a dataset X with a sparse autoencoder, is extracted from another dataset Y:

If we train a sparse autoencoder on dataset X and its learned parameters are opttheta, then the feature F is the hidden-layer activation of that autoencoder, i.e. (loosely) F = sigmoid(X * opttheta); more precisely, F = sigmoid(W1*X + b1), where W1 and b1 are the first-layer weights and bias contained in opttheta. Feeding dataset Y through the same trained autoencoder gives the feature F extracted from Y, i.e. F = sigmoid(W1*Y + b1).

This point was already covered in items 1 and 2 of the "Experiment contents and steps" section of Deep Learning 7 (UFLDL tutorial: Self-Taught Learning exercise).
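A minimal sketch of point ② (W1 and b1 are assumed to be the first-layer weights and bias recovered from opttheta, and Y a visibleSize x numSamples data matrix; in the exercise code this is what feedForwardAutoencoder computes):

    F = 1 ./ (1 + exp(-bsxfun(@plus, W1 * Y, b1)));   % hidden activations = features F extracted from Y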

③ We must also be clear that the method used to learn the feature L_feature from region A of a natural image is the linear decoder, whose first layer is effectively a sparse autoencoder (suppose training it on A gives network parameters opttheta1). The L_feature feature we keep talking about is simply the activation of this first-layer sparse autoencoder, i.e. L_feature = sigmoid(A * opttheta1) (in the loose notation of point ②).

With these three points clear, we can argue as follows:

Suppose the feature L_feature is 8x8. To extract the L_feature feature over the whole image, we take every 8x8 region Q of the image in turn and pass it through the sparse autoencoder with parameters opttheta1 described above; the activation obtained is the L_feature feature extracted from Q, i.e. L_feature = sigmoid(Q * opttheta1) (again in the loose notation above). The features extracted from all 8x8 regions, put together, are the L_feature feature of the whole image. This is exactly the process Ng describes: take the learned feature as a detector and apply it at every position of the image. The process is illustrated below:

(Figure: convolution schematic.)
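A minimal MATLAB sketch of this sliding-window computation for a single hidden unit (here W1 is assumed to be that unit's 1x64 weight row, b1 its scalar bias, and im one 64x64 channel of the image; the convolution described next reproduces this loop far more efficiently):

    patchDim = 8;
    respDim  = size(im, 1) - patchDim + 1;          % 57 for a 64x64 image
    response = zeros(respDim, respDim);
    for r = 1:respDim
        for c = 1:respDim
            Q = im(r:r+patchDim-1, c:c+patchDim-1);             % one 8x8 region of the image
            response(r, c) = 1 / (1 + exp(-(W1 * Q(:) + b1)));  % hidden-unit activation on region Q
        end
    end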

 

This whole process is essentially a convolution, which is why the resulting features are called convolved features: the process is essentially the convolution of opttheta1 with the whole natural image, with only two differences:

a. In the convolution operation, opttheta1 is applied to each region Q in flipped order, whereas when we compute the L_feature feature directly, opttheta1 is not flipped. So, to use convolution directly, we first flip opttheta1 and then convolve it with the whole image; the result is then exactly the L_feature feature. That is why the cnnConvolve function in cnnConvolve.m flips the feature with this line:

                           feature = rot90(squeeze(feature),2);

Ng's original version uses this line instead:

                          feature = flipud(fliplr(squeeze(feature)));

By comparison, rot90 runs faster, so I made this change here.
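A quick check that the two calls are equivalent for the 2-D case used here:

    feature = rand(8, 8);
    assert(isequal(rot90(feature, 2), flipud(fliplr(feature))))   % 180-degree rotation equals flipping both axes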

b. The full convolution also includes values computed using zero-padded borders, which we do not need, so the cnnConvolve function in cnnConvolve.m uses:

convolvedImage = convolvedImage + conv2(im, feature, 'valid');

The 'valid' argument makes conv2 return only the part of the convolution computed without the zero-padded borders.

 

In summary, convolving this feature with the natural image extracts the L_feature feature of the whole image.

  

3. Some MATLAB functions

squeeze: remove singleton dimensions

Usage: B = squeeze(A)

Returns a matrix B with the same elements as A but with all singleton dimensions removed; a singleton dimension is one for which size(A,dim) = 1.
squeeze has no effect on a two-dimensional array;
if A is a row or column vector or a 1x1 (scalar) value, then B = A.

For example, the 2x1x3 array Y = rand(2,1,3) has a singleton dimension: each page has only one column:

Y(:,:,1) =
    0.5194
    0.8310

Y(:,:,2) =
    0.0346
    0.0535

Y(:,:,3) =
    0.5297
    0.6711

The command Z = squeeze(Y) produces a 2x3 matrix:

Z =
    0.5194    0.0346    0.5297
    0.8310    0.0535    0.6711

 

rot90(X)

Ng's tutorial uses: W = flipud(fliplr(W));

This can be replaced by rot90(W,2), which runs faster.

Usage: rot90(X), where X is a matrix.

Function: rot90 rotates a matrix counterclockwise by 90 degrees. Y = rot90(X) returns X rotated counterclockwise by 90 degrees as a new matrix Y; X itself is unchanged.

rot90(X,2) rotates X by 180 degrees and returns the result as a new matrix; X itself is unchanged.

rot90(X,n), where n is a positive integer, rotates X counterclockwise by 90*n degrees and returns the result as a new matrix; X itself is unchanged.

 

conv2

Syntax: C = conv2(A,B)
        C = conv2(Hcol,Hrow,A)
        C = conv2(...,'shape')

Description:

C = conv2(A,B) computes the 2-D convolution of matrices A and B; if [Ma,Na] = size(A) and [Mb,Nb] = size(B), then size(C) = [Ma+Mb-1, Na+Nb-1];

C = conv2(Hcol,Hrow,A) convolves A with the vector Hcol along the columns and with the vector Hrow along the rows;

C = conv2(...,'shape') specifies which part of the 2-D convolution conv2 returns; shape can take the following values:

           'full'  (default) returns the full 2-D convolution;
           'same'  returns the central part of the convolution, the same size as A;
           'valid' returns only the part of the convolution computed without zero-padded edges; when size(A) > size(B), size(C) = [Ma-Mb+1, Na-Nb+1]
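A quick illustration of the three shape options with the sizes used in this exercise (a 64x64 image and an 8x8 feature):

    A = rand(64, 64);  B = rand(8, 8);
    size(conv2(A, B))            % 'full'  : 71x71 = (64+8-1) x (64+8-1)
    size(conv2(A, B, 'same'))    % 'same'  : 64x64, same size as A
    size(conv2(A, B, 'valid'))   % 'valid' : 57x57 = (64-8+1) x (64-8+1)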

permute

Syntax:

B = permute(A,order)

Rearranges the dimensions of A in the order specified by the vector order. B contains exactly the same elements as A, but because the dimensions are rearranged, the subscripts needed to access a given element differ between A and B. The elements of order must all be distinct.

Three dimensions:

a = rand(2,3,4); % a 3-D array whose dimensions have lengths 2, 3, 4
% swap the first and second dimensions:
permute(a,[2,1,3]) % becomes a 3x2x4 array

Two dimensions:

The 2-D case is more intuitive: with a = [1, 2+j; 3+2*j, 4+5*j], permute(a,[2,1]) swaps rows (x) and columns (y). This differs from the transpose a' (try it and see), which is why permute is called a non-conjugate transpose.
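A quick check of the difference for a complex matrix:

    a = [1, 2+1j; 3+2j, 4+5j];
    permute(a, [2,1])   % non-conjugate transpose: [1, 3+2i; 2+1i, 4+5i]
    a'                  % conjugate transpose:     [1, 3-2i; 2-1i, 4-5i]
    a.'                 % a.' is MATLAB's non-conjugate transpose, identical to permute(a, [2,1])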

4. Good programming practices

① Ng's code always checks itself: whether it is the earlier gradient computations or the convolution and pooling in this exercise, Ng always verifies the result after computing it. This is a good habit; at the very least it guarantees that the key steps are free of errors.

② The code can be checked with statements like:

assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

and

if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
    fprintf('Convolved feature does not match activation from autoencoder\n');
end

 

5. Relevant formulas

The cost function \textstyle J_{\rm sparse}(W,b) is built as follows:

 
\begin{align}
J(W,b)
&= \left[ \frac{1}{m} \sum_{i=1}^m J(W,b;x^{(i)},y^{(i)}) \right]
                       + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2
 \\
&= \left[ \frac{1}{m} \sum_{i=1}^m \left( \frac{1}{2} \left\| h_{W,b}(x^{(i)}) - y^{(i)} \right\|^2 \right) \right]
                       + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2
\end{align}

\begin{align}
{\rm KL}(\rho || \hat\rho_j) = \rho \log \frac{\rho}{\hat\rho_j} + (1-\rho) \log \frac{1-\rho}{1-\hat\rho_j},
\quad \text{where} \quad
\hat\rho_j = \frac{1}{m} \sum_{i=1}^m \left[ a^{(2)}_j(x^{(i)}) \right]
\end{align}

\begin{align}
J_{\rm sparse}(W,b) = J(W,b) + \beta \sum_{j=1}^{s_2} {\rm KL}(\rho || \hat\rho_j),
\end{align} 

The formulas needed to compute the gradient:


\begin{align}
\delta_i^{(3)} = - (y_i - \hat{x}_i)
\end{align}
where y is the desired output.

\begin{align}
\delta^{(2)}_i =
  \left( \left( \sum_{j=1}^{s_{2}} W^{(2)}_{ji} \delta^{(3)}_j \right)
+ \beta \left( - \frac{\rho}{\hat\rho_i} + \frac{1-\rho}{1-\hat\rho_i} \right) \right) f'(z^{(2)}_i) .
\end{align}  where \textstyle f'(z^{(l)}_i) = a^{(l)}_i (1 - a^{(l)}_i)

 \begin{align}
\nabla_{W^{(l)}} J(W,b;x,y) &= \delta^{(l+1)} (a^{(l)})^T, \\
\nabla_{b^{(l)}} J(W,b;x,y) &= \delta^{(l+1)}.
\end{align}

Set \textstyle \Delta W^{(l)} := 0 and \textstyle \Delta b^{(l)} := 0, then for each training example accumulate:

 \textstyle \Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J(W,b;x,y)

 \textstyle \Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J(W,b;x,y)

 

\begin{align}
\nabla_{W^{(l)}} J(W,b) &= \left( \frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)} \\
\nabla_{b^{(l)}} J(W,b) &= \frac{1}{m} \Delta b^{(l)}.
\end{align}
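A minimal MATLAB sketch of the sparsity terms above (a2 is assumed to be the hiddenSize x m matrix of hidden activations, rho the sparsity target and beta its weight):

    rhoHat = mean(a2, 2);                                                       % \hat\rho_j averaged over the m examples
    klPenalty = sum(rho*log(rho./rhoHat) + (1-rho)*log((1-rho)./(1-rhoHat)));   % sum_j KL(rho || \hat\rho_j)
    sparsityTerm = beta * (-rho./rhoHat + (1-rho)./(1-rhoHat));                 % the beta(...) term added inside delta^(2)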

 

Questions

1. From the code we can see that the 100,000 small patches are preprocessed (zero-meaning and ZCA whitening), but the 2000 and 3200 64x64 RGB images are not. Why? I feel I have not fully grasped when preprocessing is needed, when it is not, and why. For example, in Deep Learning 4 (UFLDL tutorial: PCA in 2D exercise), why is the 2-D data not zero-meaned, while natural images are zero-meaned first?

 

 

Experiment steps

1. Initialize the parameters and load the results of the previous exercise, i.e. the color features extracted from the 100,000 8x8 RGB patches, and visualize those features.

2. Load 8 64x64 images (used to test whether the convolution and pooling are correct), then implement the convolution function cnnConvolve.m and check that it is correct.

3. Implement the pooling function cnnPool.m and check that it is correct.

4. Load the 2000 64x64 RGB images, extract their convolved features convolvedFeaturesThis with the convolution function implemented above, then extract the pooled features pooledFeaturesTrain from convolvedFeaturesThis with the pooling function; these are the training set for the softmax classifier. Load the 3200 64x64 RGB images and, in the same way, extract the pooled features pooledFeaturesTest; these are the test set for the softmax classifier.

5. Train the softmax classifier on the training set pooledFeaturesTrain and its labels, obtaining the model parameters softmaxModel.

6. Use the trained classifier softmaxModel to classify the test set pooledFeaturesTest, which gives the classification of the 3200 64x64 RGB images.

 

Results

Accuracy: 80.313%

Time taken to extract the convolved and pooled features from all training and test data:

Elapsed time is 2644.920372 seconds.

Feature visualization result:



Code
cnnExercise.m
%% CS294A/CS294W Convolutional Neural Networks Exercise

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  convolutional neural networks exercise. In this exercise, you will only
%  need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
%  this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageDim = 64;         % image dimension
imageChannels = 3;     % number of channels (rgb, so 3)

patchDim = 8;          % patch dimension
numPatches = 50000;    % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units 
outputSize = visibleSize;   % number of output units
hiddenSize = 400;           % number of hidden units 

epsilon = 0.1;           % epsilon for ZCA whitening

poolDim = 19;          % dimension of pooling region

%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn 
%  features from color patches. If you have completed the linear decoder
%  execise, use the features that you have obtained from that exercise, 
%  loading them into optTheta. Recall that we have to keep around the 
%  parameters used in whitening (i.e., the ZCA whitening matrix and the
%  meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with 
% the optimal parameters:

optTheta =  zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
ZCAWhite =  zeros(visibleSize, visibleSize);
meanPatch = zeros(visibleSize, 1);
load STL10Features.mat;

% --------------------------------------------------------------------

% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

displayColorNetwork( (W*ZCAWhite)');

%%======================================================================
%% STEP 2: Implement and test convolution and pooling
%  In this step, you will implement convolution and pooling, and test them
%  on a small part of the data set to ensure that you have implemented
%  these two functions correctly. In the next step, you will actually
%  convolve and pool the features with the STL10 images.

%% STEP 2a: Implement convolution
%  Implement convolution in the function cnnConvolve in cnnConvolve.m

% Note that we have to preprocess the images in the exact same way 
% we preprocessed the patches before we can obtain the feature activations.

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels

%% Use only the first 8 images to test whether convolution and pooling work correctly
convImages = trainImages(:, :, :, 1:8);  % format: trainImages(r, c, channel, image number)

% NOTE: Implement cnnConvolve in cnnConvolve.m first!
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);

%% STEP 2b: Checking your convolution
%  To ensure that you have convolved the features correctly, we have
%  provided some code to compare the results of your convolution with
%  activations from the sparse autoencoder

% For 1000 random points
for i = 1:1000    
    featureNum = randi([1, hiddenSize]);
    imageNum = randi([1, 8]);
    imageRow = randi([1, imageDim - patchDim + 1]);
    imageCol = randi([1, imageDim - patchDim + 1]);    
   
    patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
    patch = patch(:);  % reshape the 3-D patch into a single column vector
    patch = patch - meanPatch;
    patch = ZCAWhite * patch;  % whitened data
    
    features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch); 

    if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
        fprintf('Convolved feature does not match activation from autoencoder\n');
        fprintf('Feature Number    : %d\n', featureNum);
        fprintf('Image Number      : %d\n', imageNum);
        fprintf('Image Row         : %d\n', imageRow);
        fprintf('Image Column      : %d\n', imageCol);
        fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
        fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));       
        error('Convolved feature does not match activation from autoencoder');
    end 
end

disp('Congratulations! Your convolution code passed the test.');

%% STEP 2c: Implement pooling
%  Implement pooling in the function cnnPool in cnnPool.m

% NOTE: Implement cnnPool in cnnPool.m first!
pooledFeatures = cnnPool(poolDim, convolvedFeatures);

%% STEP 2d: Checking your pooling
%  To ensure that you have implemented pooling, we will use your pooling
%  function to pool over a test matrix and check the results.

testMatrix = reshape(1:64, 8, 8);
expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                  mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];
            
testMatrix = reshape(testMatrix, 1, 1, 8, 8);
        
pooledFeatures = squeeze(cnnPool(4, testMatrix));

if ~isequal(pooledFeatures, expectedMatrix)
    disp('Pooling incorrect');
    disp('Expected');
    disp(expectedMatrix);
    disp('Got');
    disp(pooledFeatures);
else
    disp('Congratulations! Your pooling code passed the test.');
end

%%======================================================================
%% STEP 3: Convolve and pool with the dataset
%  In this step, you will convolve each of the features you learned with
%  the full large images to obtain the convolved features. You will then
%  pool the convolved features to obtain the pooled features for
%  classification.
%
%  Because the convolved features matrix is very large, we will do the
%  convolution and pooling 50 features at a time to avoid running out of
%  memory. Reduce this number if necessary

stepSize = 50;
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
load stlTestSubset.mat  % loads numTestImages,  testImages,  testLabels

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );

tic();

for convPart = 1:(hiddenSize / stepSize)
    
    featureStart = (convPart - 1) * stepSize + 1;
    featureEnd = convPart * stepSize;
    
    fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);  
    Wt = W(featureStart:featureEnd, :);
    bt = b(featureStart:featureEnd);    
    
    fprintf('Convolving and pooling train images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        trainImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
    toc();
    clear convolvedFeaturesThis pooledFeaturesThis;
    
    fprintf('Convolving and pooling test images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        testImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
    toc();

    clear convolvedFeaturesThis pooledFeaturesThis;

end


% You might want to save the pooled features since convolution and pooling takes a long time
save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
toc();

%%======================================================================
%% STEP 4: Use pooled features for classification
%  Now, you will use your pooled features to train a softmax classifier,
%  using softmaxTrain from the softmax exercise.
%  Training the softmax classifer for 1000 iterations should take less than
%  10 minutes.

% Add the path to your softmax solution, if necessary
% addpath /path/to/solution/

% Setup parameters for softmax
softmaxLambda = 1e-4;
numClasses = 4;
% Reshape the pooledFeatures to form an input vector for softmax
softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]); % move the image-number dimension (dim 2) of pooledFeaturesTrain to the end
softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
    numTrainImages);
softmaxY = trainLabels;

options = struct;
options.maxIter = 200;
softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,...
    numClasses, softmaxLambda, softmaxX, softmaxY, options);

%%======================================================================
%% STEP 5: Test classifer
%  Now you will test your trained classifer against the test images

softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
softmaxY = testLabels;

[pred] = softmaxPredict(softmaxModel, softmaxX);
acc = (pred(:) == softmaxY(:));
acc = sum(acc) / size(acc, 1);
fprintf('Accuracy: %2.3f%%\n', acc * 100);

% You should expect to get an accuracy of around 80% on the test images.
 
        
cnnConvolve.m
function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
% Convolutional feature extraction: convolve every feature with every large image in images and return the convolution results
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  patchDim - patch (feature) dimension
%  numFeatures - number of features
%  images - large images to convolve with, matrix in the form
%           images(r, c, channel, image number)
%  W, b - W, b for features from the sparse autoencoder
%  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
%                        preprocessing
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%                      i.e. the result of convolving feature featureNum with image imageNum is stored at
%                      row imageRow, column imageCol of convolvedFeatures(featureNum, imageNum, :, :),
%                      and each of those two dimensions has size imageDim - patchDim + 1

numImages = size(images, 4);     % number of images
imageDim = size(images, 1);      % number of rows in each image
imageChannels = size(images, 3); % number of channels in each image

patchSize = patchDim*patchDim;
assert(numFeatures == size(W,1), 'W should have numFeatures rows');
assert(patchSize*imageChannels == size(W,2), 'W should have patchSize*imageChannels cols');


% Instructions:
%   Convolve every feature with every large image here to produce the 
%   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1) 
%   matrix convolvedFeatures, such that 
%   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times: 
%   Convolving with 100 images should take less than 3 minutes 
%   Convolving with 5000 images should take around an hour
%   (So to save time when testing, you should convolve with less images, as
%   described earlier)

% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps

WT = W*ZCAWhite;           % equivalent network weights (ZCA whitening folded into W)
b_mean = b - WT*meanPatch; % equivalent bias (mean subtraction folded into b)


% --------------------------------------------------------

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:3

      % Obtain the feature (patchDim x patchDim) needed during the convolution
      % ---- YOUR CODE HERE ----
      offset = (channel-1)*patchSize;
      feature = reshape(WT(featureNum,offset+1:offset+patchSize), patchDim, patchDim); % take out one patchDim x patchDim weight patch
      im  = images(:,:,channel,imageNum);
      
      
      % ------------------------

      % Flip the feature matrix because of the definition of convolution, as explained later
      feature = rot90(squeeze(feature),2);
      
      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum));

      % Convolve "feature" with "im", adding the result to convolvedImage
      % be sure to do a 'valid' convolution
      % ---- YOUR CODE HERE ----

      convolvedoneChannel = conv2(im, feature, 'valid');    % convolve this feature with this channel of the image
      convolvedImage = convolvedImage + convolvedoneChannel; % sum over the 3 channels: they act like 3 input feature maps, as in the input to later layers of a CNN
            
      % ------------------------

    end
    
    % Subtract the bias unit (correcting for the mean subtraction as well)
    % Then, apply the sigmoid function to get the hidden activation
    % ---- YOUR CODE HERE ----

    convolvedImage = sigmoid(convolvedImage+b_mean(featureNum));
    
    
    % ------------------------
    
    % The convolved feature is the sum of the convolved values for all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end


end
function sigm = sigmoid(x)
    sigm = 1./(1+exp(-x));
end
 
        
cnnPool.m
function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%                   i.e. the pooled result for feature featureNum on image imageNum is stored at
%                   row poolRow, column poolCol of pooledFeatures(featureNum, imageNum, :, :)
%     

numImages = size(convolvedFeatures, 2);   % number of images
numFeatures = size(convolvedFeatures, 1); % number of convolved features
convolvedDim = size(convolvedFeatures, 3);% dimension of each convolved feature map

pooledFeatures = zeros(numFeatures, numImages, floor(convolvedDim / poolDim), floor(convolvedDim / poolDim));

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Now pool the convolved features in regions of poolDim x poolDim,
%   to obtain the 
%   numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim) 
%   matrix pooledFeatures, such that
%   pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the 
%   value of the featureNum feature for the imageNum image pooled over the
%   corresponding (poolRow, poolCol) pooling region 
%   (see http://ufldl/wiki/index.php/Pooling )
%   
%   Use mean pooling here.
% -------------------- YOUR CODE HERE --------------------

resultDim  = floor(convolvedDim / poolDim);
for imageNum = 1:numImages   % for each image
    for featureNum = 1:numFeatures  % for each feature
        for poolRow = 1:resultDim
            offsetRow = 1+(poolRow-1)*poolDim;
            for poolCol = 1:resultDim
                offsetCol = 1+(poolCol-1)*poolDim;
                patch = convolvedFeatures(featureNum,imageNum,offsetRow:offsetRow+poolDim-1,...
                    offsetCol:offsetCol+poolDim-1); % extract one poolDim x poolDim pooling region
                pooledFeatures(featureNum,imageNum,poolRow,poolCol) = mean(patch(:)); % mean pooling
            end
        end
    end
end


end
 
        

 

 

 

 

 

 

References

UFLDL Tutorial

http://www.cnblogs.com/tornadomeet/archive/2013/04/09/3009830.html

http://www.cnblogs.com/tornadomeet/archive/2013/03/25/2980766.html






 
