Deep Learning 9 - UFLDL Tutorial: Linear Decoder Exercise (Stanford Deep Learning Tutorial)


Preface

Exercise: Learning color features with Sparse Autoencoders. In this exercise, a linear decoder is used to extract color features from 100,000 8×8 RGB image patches; these features will be used in the next exercise.

Theory: Linear Decoders: http://www.cnblogs.com/tornadomeet/archive/2013/04/08/3007435.html

Notes on the Exercise

1. Why use a linear decoder rather than the (stacked) sparse autoencoders used in earlier exercises? That is, what does the linear decoder buy us?

Ng explains this in the lecture. A linear decoder does not require the input data to lie in [0, 1], whereas the autoencoders used in earlier exercises do. In an ordinary sparse autoencoder the output is a3 = f(z3), where f is typically the sigmoid function, so a3 lies in (0, 1). Since a sparse autoencoder's output layer should reproduce its input (a3 ≈ x), x must also lie in (0, 1); in other words, the data fed into the network must first be scaled to [0, 1]. That condition holds in some domains (e.g., MNIST digit recognition in an earlier exercise), but not in others: data that has been PCA-whitened, for example, is not confined to [0, 1]. The linear decoder solves this: keep the sigmoid activation in the hidden layer, but use a linear activation in the output layer, most simply the identity function.
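Below is a minimal sketch with toy sizes and hypothetical variable names (not the exercise's code) showing the only two places where the linear decoder differs from the ordinary sparse autoencoder: the output activation and, consequently, the output-layer delta.

% Toy forward pass: the sigmoid hidden layer is shared by both variants.
x  = rand(8, 5);                        % 5 toy inputs in [0, 1]
W1 = randn(3, 8);  b1 = zeros(3, 1);
W2 = randn(8, 3);  b2 = zeros(8, 1);
z2 = W1 * x + repmat(b1, 1, 5);
a2 = 1 ./ (1 + exp(-z2));               % sigmoid hidden activation
z3 = W2 * a2 + repmat(b2, 1, 5);

a3_sigmoid = 1 ./ (1 + exp(-z3));       % ordinary autoencoder: output in (0, 1)
a3_linear  = z3;                        % linear decoder: identity, unbounded

% Output-layer deltas: the f'(z3) factor is a3 .* (1 - a3) for the sigmoid
% decoder but exactly 1 for the identity, so it drops out below.
delta3_sigmoid = -(x - a3_sigmoid) .* a3_sigmoid .* (1 - a3_sigmoid);
delta3_linear  = -(x - a3_linear);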

 

2. In this exercise each column of patches is one sample, yet the preprocessing before ZCA whitening zero-means each row of patches (i.e., each dimension: compute each dimension's mean over all samples, then subtract that mean from the dimension). Why, when earlier exercises zero-meaned each column, i.e., each sample (per-example mean subtraction)?

① Earlier exercises used grayscale images; here the images are RGB. Zero-meaning each column would average across the three color channels, which is wrong: pixel statistics are not stationary across different color channels, and per-example mean subtraction (zero-meaning each sample on its own) is only valid when the data are stationary (see: Data Preprocessing).

For more on stationarity, see: http://lidequan12345.blog.163.com/blog/static/28985036201177892790

② Earlier exercises used natural images, in which pixel statistics are the same everywhere and neighboring pixels are correlated; the patches here are artificially cut image blocks that do not share that property. The sketch below contrasts the two kinds of mean subtraction.
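A minimal sketch with a hypothetical matrix X (columns are samples, as with patches) contrasting per-dimension and per-example mean subtraction:

X = rand(192, 1000);                     % toy data; columns are samples

% Per-dimension (this exercise): one mean per row, taken over all samples
X_dim = bsxfun(@minus, X, mean(X, 2));

% Per-example (earlier exercises): one mean per column, over dimensions
X_ex  = bsxfun(@minus, X, mean(X, 1));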

 

3. Why are the learned weights visualized with displayColorNetwork( (W*ZCAWhite)') instead of display_Network( (W1)') as in earlier exercises?

Because in this exercise the patches are ZCA-whitened before entering the network (becoming ZCAWhite * patches), the first hidden layer actually computes W*ZCAWhite * patches. The effective weights mapping the raw patches to the hidden layer are therefore W*ZCAWhite, and that is the matrix worth visualizing.
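A minimal sketch (toy stand-ins mirroring the exercise's sizes, 192 = 8×8×3 inputs and 400 hidden units, not the exercise's actual data) of why the two views are the same map, by associativity of matrix multiplication:

rawPatches = rand(192, 10);              % toy stand-in for the raw patches
ZCAWhite   = randn(192, 192);            % toy stand-in for the whitening matrix
W = randn(400, 192);  b = zeros(400, 1);

z2a = W * (ZCAWhite * rawPatches) + repmat(b, 1, 10);  % whiten, then map
z2b = (W * ZCAWhite) * rawPatches + repmat(b, 1, 10);  % compose the maps first
disp(max(abs(z2a(:) - z2b(:))));         % ~0 up to floating-point error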

 

4. What is the difference between PCA whitening and ZCA whitening? That is, why does this exercise not use PCA whitening?

PCA whitening: rotates the data onto its principal components and rescales each component to unit variance. It decorrelates the data and is often combined with dimensionality reduction.

ZCA whitening: applies PCA whitening and then rotates back into the original coordinate system. It also decorrelates the data, but among whitening transforms it stays closest to the original data, so the learned features remain interpretable in pixel space, which is why it is used here. The sketch below shows how both transforms are built.
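A minimal sketch on toy data (illustrative names, mirroring the [u, s, v] = svd(sigma) step in the exercise code) showing how the two whitening transforms come from the same eigendecomposition:

X = randn(3, 1000);
X = bsxfun(@minus, X, mean(X, 2));       % zero-mean each dimension
sigma = X * X' / size(X, 2);             % covariance matrix
[U, S, V] = svd(sigma);
scale = diag(1 ./ sqrt(diag(S) + 0.1));  % 0.1 plays the role of epsilon

Xpca = scale * U' * X;                   % PCA whitening: rotate, then rescale
Xzca = U * scale * U' * X;               % ZCA whitening: rotate back afterwards
% Both give approximately identity covariance; Xzca stays closest to X.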

 

5. A useful programming technique:

Get comfortable with function handles, e.g. patches = bsxfun(@minus, patches, meanPatch);

Without a handle, each call to a named function triggers a full path search, which directly hurts speed when the function is called many times; a handle binds the function once at creation, effectively storing a pointer to it, somewhat like a reference in C++. This was already mentioned in Deep Learning 1 - UFLDL Tutorial: Sparse Autoencoder Exercise (Stanford Deep Learning Tutorial), but it is worth repeating. A short sketch follows.
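A minimal sketch of the two handle styles this exercise relies on; f and costFun are illustrative names, not from the exercise code:

f = @sin;                                % handle to an existing function
y = arrayfun(f, 0:pi/4:pi);              % repeated calls go through the handle

% Anonymous handles capture extra arguments at creation time; this is how
% the exercise passes fixed parameters to computeNumericalGradient and minFunc.
costFun = @(x) sum((x - 1).^2);          % toy cost with minimum at x == 1
disp(costFun([0 1 2]));                  % prints 2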

 

Experiment Steps

1. Initialize the parameters and write sparseAutoencoderLinearCost.m, the function that computes the linear decoder's cost and gradient; it is mostly a small modification of sparseAutoencoderCost.m. Then verify the gradient implementation with numerical gradient checking.

2. Load the data and apply ZCA whitening preprocessing to the raw data.

3. Learn features: train the whole linear decoder network with the L-BFGS algorithm to obtain the network weights optTheta.

4. Visualize the features learned by the first layer.

 

Experiment Results

Raw data:

Data after ZCA whitening:

Feature visualization, i.e. the features learned by the first layer:

Code

linearDecoderExercise.m

%% CS294A/CS294W Linear Decoder Exercise

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  linear decoder exercise. For this exercise, you will only need to modify
%  the code in sparseAutoencoderLinearCost.m. You will not need to modify
%  any code in this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageChannels = 3;     % number of channels (rgb, so 3)

patchDim   = 8;          % patch dimension
numPatches = 100000;   % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units 
outputSize  = visibleSize;   % number of output units
hiddenSize  = 400;           % number of hidden units 

sparsityParam = 0.035; % desired average activation of the hidden units.
lambda = 3e-3;         % weight decay parameter       
beta = 5;              % weight of sparsity penalty term       

epsilon = 0.1;           % epsilon for ZCA whitening

%%======================================================================
%% STEP 1: Create and modify sparseAutoencoderLinearCost.m to use a linear decoder,
%          and check gradients
%  You should copy sparseAutoencoderCost.m from your earlier exercise 
%  and rename it to sparseAutoencoderLinearCost.m. 
%  Then you need to rename the function from sparseAutoencoderCost to
%  sparseAutoencoderLinearCost, and modify it so that the sparse autoencoder
%  uses a linear decoder instead. Once that is done, you should check 
% your gradients to verify that they are correct.

% NOTE: Modify sparseAutoencoderCost first!

% To speed up gradient checking, we will use a reduced network and some
% dummy patches

debugHiddenSize = 5;
debugvisibleSize = 8;
patches = rand([8 10]);
theta = initializeParameters(debugHiddenSize, debugvisibleSize); 

[cost, grad] = sparseAutoencoderLinearCost(theta, debugvisibleSize, debugHiddenSize, ...
                                           lambda, sparsityParam, beta, ...
                                           patches);

% Check gradients
numGrad = computeNumericalGradient( @(x) sparseAutoencoderLinearCost(x, debugvisibleSize, debugHiddenSize, ...
                                                  lambda, sparsityParam, beta, ...
                                                  patches), theta);

% Use this to visually compare the gradients side by side
disp([numGrad grad]); 

diff = norm(numGrad-grad)/norm(numGrad+grad);
% Should be small. In our implementation, these values are usually less than 1e-9.
disp(diff); 

assert(diff < 1e-9, 'Difference too large. Check your gradient computation again');

% NOTE: Once your gradients check out, you should run step 0 again to
%       reinitialize the parameters

%%======================================================================
%% STEP 2: Learn features on small patches
%  In this step, you will use your sparse autoencoder (which now uses a 
%  linear decoder) to learn features on small patches sampled from related
%  images.

%% STEP 2a: Load patches
%  In this step, we load 100k patches sampled from the STL10 dataset and
%  visualize them. Note that these patches have been scaled to [0,1]

load stlSampledPatches.mat  % the .mat file itself defines the variable 'patches', which is how it appears in the workspace
figure;
displayColorNetwork(patches(:, 1:100)); 

%% STEP 2b: Apply preprocessing
%  In this sub-step, we preprocess the sampled patches, in particular, 
%  ZCA whitening them. 
% 
%  In a later exercise on convolution and pooling, you will need to replicate 
%  exactly the preprocessing steps you apply to these patches before 
%  using the autoencoder to learn features on them. Hence, we will save the
%  ZCA whitening and mean image matrices together with the learned features
%  later on.

% Subtract mean patch (hence zeroing the mean of the patches)
meanPatch = mean(patches, 2);  % mean of each row (each dimension) over all samples; a per-column (per-example) mean would average across the three color channels, which is wrong here
patches = bsxfun(@minus, patches, meanPatch);

% Apply ZCA whitening
sigma = patches * patches' / numPatches; % covariance matrix
[u, s, v] = svd(sigma);
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u';
patches = ZCAWhite * patches;

figure;
displayColorNetwork(patches(:, 1:100));

%% STEP 2c: Learn features
%  You will now use your sparse autoencoder (with linear decoder) to learn
%  features on the preprocessed patches. This should take around 45 minutes.

theta = initializeParameters(hiddenSize, visibleSize);

% Use minFunc to minimize the function
addpath minFunc/

options = struct;
options.Method = 'lbfgs'; 
options.maxIter = 400;
options.display = 'on';

[optTheta, cost] = minFunc( @(p) sparseAutoencoderLinearCost(p, ...
                                   visibleSize, hiddenSize, ...
                                   lambda, sparsityParam, ...
                                   beta, patches), ...
                              theta, options);

% Save the learned features and the preprocessing matrices for use in 
% the later exercise on convolution and pooling
fprintf('Saving learned features and preprocessing matrices...\n');                          
save('STL10Features.mat', 'optTheta', 'ZCAWhite', 'meanPatch');
fprintf('Saved\n');

%% STEP 2d: Visualize learned features

W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
figure;
displayColorNetwork( (W*ZCAWhite)');

 

sparseAutoencoderLinearCost.m

function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                            lambda, sparsityParam, beta, data)
%Compute the linear decoder cost function and its gradient
% visibleSize: number of input units
% hiddenSize: number of hidden units
% lambda: weight decay parameter
% sparsityParam: sparsity parameter (target average activation)
% beta: weight of the sparsity penalty term
% data: training set
% theta: parameter vector containing W1, W2, b1, b2
% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
% -------------------- YOUR CODE HERE --------------------                                    
% The input theta is a vector because minFunc only deals with vectors. In
% this step, we will convert theta to matrix format such that they follow
% the notation in the lecture notes.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Loss and gradient variables (your code needs to compute these values)
m = size(data, 2); % number of training examples

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the loss for the Sparse Autoencoder and gradients
%                W1grad, W2grad, b1grad, b2grad
%
%  Hint: 1) data(:,i) is the i-th example
%        2) your computation of loss and gradients should match the size
%        above for loss, W1grad, W2grad, b1grad, b2grad

% z2 = W1 * x + b1
% a2 = f(z2)
% z3 = W2 * a2 + b2
% h_Wb = a3 = f(z3)

z2 = W1 * data + repmat(b1, [1, m]);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, [1, m]);
a3 = z3; % identity activation in the output layer: this is the linear decoder

% Sparsity penalty: rhohats is each hidden unit's mean activation over the
% batch; KLsum is its KL divergence from the target activation rho.
rhohats = mean(a2,2);
rho = sparsityParam;
KLsum = sum(rho * log(rho ./ rhohats) + (1-rho) * log((1-rho) ./ (1-rhohats)));


squares = (a3 - data).^2;
squared_err_J = (1/2) * (1/m) * sum(squares(:));              % squared-error term
weight_decay_J = (lambda/2) * (sum(W1(:).^2) + sum(W2(:).^2));% weight decay term
sparsity_J = beta * KLsum;                                    % sparsity penalty term

cost = squared_err_J + weight_decay_J + sparsity_J;           % total cost

% delta3 = -(data - a3) .* fprime(z3);
% with a linear (identity) decoder, fprime(z3) = 1, so the factor drops out
delta3 = -(data - a3);
beta_term = beta * (- rho ./ rhohats + (1-rho) ./ (1-rhohats));
delta2 = ((W2' * delta3) + repmat(beta_term, [1,m]) ) .* a2 .* (1-a2);

W2grad = (1/m) * delta3 * a2' + lambda * W2;   % gradient of W2
b2grad = (1/m) * sum(delta3, 2);               % gradient of b2
W1grad = (1/m) * delta2 * data' + lambda * W1; % gradient of W1
b1grad = (1/m) * sum(delta2, 2);               % gradient of b1

%-------------------------------------------------------------------
% Convert weights and bias gradients to a compressed form
% This step will concatenate and flatten all your gradients to a vector
% which can be used in the optimization method.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end
%-------------------------------------------------------------------
% We are giving you the sigmoid function, you may find this function
% useful in your computation of the loss and the gradients.
function sigm = sigmoid(x)

    sigm = 1 ./ (1 + exp(-x));
end

 

displayColorNetwork.m

 

function displayColorNetwork(A)

% display receptive field(s) or basis vector(s) for image patches
%
% A         the basis, with patches as column vectors

% In case the midpoint is not set at 0, we shift it dynamically
if min(A(:)) >= 0
    A = A - mean(A(:)); % zero-mean the whole matrix
end

cols = round(sqrt(size(A, 2)));% number of patch tiles per row of the big image

channel_size = size(A,1) / 3;
dim = sqrt(channel_size);   % pixels per side of each patch
dimp = dim+1;
rows = ceil(size(A,2)/cols);   % number of patch tiles per column of the big image
B = A(1:channel_size,:);                   % red channel
C = A(channel_size+1:channel_size*2,:);    % green channel
D = A(2*channel_size+1:channel_size*3,:);  % blue channel
B=B./(ones(size(B,1),1)*max(abs(B)));% normalize each patch to [-1, 1]
C=C./(ones(size(C,1),1)*max(abs(C)));
D=D./(ones(size(D,1),1)*max(abs(D)));
% Initialization of the image
I = ones(dim*rows+rows-1,dim*cols+cols-1,3);

%Transfer features to this image matrix
for i=0:rows-1
  for j=0:cols-1
      
    if i*cols+j+1 > size(B, 2)
        break
    end
    
    % This sets the patch
    I(i*dimp+1:i*dimp+dim,j*dimp+1:j*dimp+dim,1) = ...
         reshape(B(:,i*cols+j+1),[dim dim]);
    I(i*dimp+1:i*dimp+dim,j*dimp+1:j*dimp+dim,2) = ...
         reshape(C(:,i*cols+j+1),[dim dim]);
    I(i*dimp+1:i*dimp+dim,j*dimp+1:j*dimp+dim,3) = ...
         reshape(D(:,i*cols+j+1),[dim dim]);

  end
end

I = I + 1; % shift I from [-1, 1] to [0, 2]
I = I / 2; % scale I from [0, 2] to [0, 1]
imagesc(I); 
axis equal  % equal aspect ratio so pixels are square
axis off    % hide axis ticks, labels, and background

end

 

 

References

Linear Decoders

http://www.cnblogs.com/tornadomeet/archive/2013/04/08/3007435.html

http://www.cnblogs.com/tornadomeet/archive/2013/03/25/2980766.html

 

