Deep learning: 22 (linear decoder exercise)


 

Preface:

This post is an exercise in applying a linear decoder. For background on linear decoders, see Deep learning: 17 (Linear Decoders, Convolution, Pooling); the experiment follows the steps in Exercise: Implement deep networks for digit classification. The experiment trains a sparse autoencoder with a linear decoder to learn patch features from the STL-10 dataset, and this time the learned weights are for RGB image patches.

 

Background:

PCA whitening scales every dimension of the data to unit variance, while ZCA whitening only requires the variances of all dimensions to be equal (they need not be 1). The two are also typically used for different purposes: PCA whitening is mainly used for dimensionality reduction together with decorrelation, whereas ZCA whitening is mainly used for decorrelation while keeping the whitened data as close as possible to the original.
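As a minimal sketch of the difference (hypothetical 2-D data; the variable names are illustrative, not from the exercise code):

% Contrast PCA and ZCA whitening on correlated, zero-mean 2-D data.
X = randn(2, 1000);
X(2,:) = 0.8 * X(1,:) + 0.2 * X(2,:);        % correlate the two dimensions
X = bsxfun(@minus, X, mean(X, 2));           % zero the mean of each dimension
sigma = X * X' / size(X, 2);                 % covariance matrix
[u, s, v] = svd(sigma);
xPCAwhite = diag(1 ./ sqrt(diag(s))) * u' * X;      % rotate, then rescale
xZCAwhite = u * diag(1 ./ sqrt(diag(s))) * u' * X;  % additionally rotate back
% Both now have (near-)identity covariance, but xZCAwhite stays closest to X:
disp(xPCAwhite * xPCAwhite' / size(X, 2));
disp(xZCAwhite * xZCAwhite' / size(X, 2));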

Some MATLAB notes:

The benefit of a function handle is that a function can be passed as an argument into another function, which can then evaluate it internally as many times as it needs, for example when computing derivatives or integrals. If instead you had to pass in precomputed values, a fair amount of setup code would be needed before every call, which is cumbersome; with a handle, that logic lives inside the callee and nothing extra has to be written at each call site.
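For instance, a minimal sketch (numDeriv is a made-up helper, not part of the exercise code):

% Pass the function itself as a handle; the callee evaluates it wherever needed.
f = @(x) x.^2 + 3*x;                                    % any scalar function
numDeriv = @(g, x) (g(x + 1e-4) - g(x - 1e-4)) / 2e-4;  % central difference
disp(numDeriv(f, 2));                                   % approximately 7 = f'(2)

This is exactly the pattern used below, where @(x) sparseAutoencoderLinearCost(...) is handed to computeNumericalGradient and minFunc.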

MATLAB variables can be saved with the save function as .mat files. MATLAB's Current Folder shows them with the .mat extension, but Windows Explorer hides the extension and describes the file type as "Microsoft Access Table Shortcut"; that is merely the label Windows associates with the .mat extension by default, and the file is still an ordinary MATLAB data file.
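A minimal save/load sketch (the file name myData.mat is just an example):

A = magic(4);
labels = {'a', 'b', 'c'};
save('myData.mat', 'A', 'labels');   % writes myData.mat to the current folder
clear A labels
load('myData.mat');                  % restores A and labels into the workspace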

Notes on the experiment:

In Ng's tutorial and exercises, the input sample matrix stores one sample per column, so the number of columns equals the total number of samples.
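This column-per-sample layout lets a whole batch be pushed through a layer with a single matrix product, e.g. (a sketch with this exercise's sizes and random stand-in weights):

X  = rand(192, 1000);                     % 1000 samples, one per column
W1 = 0.01 * randn(400, 192);              % stand-in encoder weights
b1 = zeros(400, 1);
Z2 = W1 * X + repmat(b1, 1, size(X, 2));  % all 1000 pre-activations at once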

A matrix of size 64×100,000 is certainly no problem for MATLAB; even the full 192×100,000 patch matrix used here is only about 150 MB at double precision.

In this exercise, ZCA whitening is applied to the patches, and the mean subtraction is done per dimension, i.e., each row of the patch matrix has its own mean subtracted (this seems more principled than the per-patch mean subtraction used in an earlier post; that version was applied to natural images, where every dimension has roughly the same statistics, so subtracting a single mean per patch can be justified there, though it still seems less reliable). Since ZCA whitening is used, the transformed vectors are not reduced in dimension; the whitening only removes correlations and equalizes the variance of every dimension. Note also that whitening does not need to be applied to the original large images: you whiten whatever data is actually fed into the network for training, and since small patches are trained on here, it is the small patches that get whitened.
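A sketch of the two mean-subtraction conventions (hypothetical small matrix; rows are dimensions and columns are samples, matching this exercise's layout):

patches      = rand(192, 5);                           % stand-in patch matrix
perDimMean   = mean(patches, 2);                       % one mean per dimension
patchesHere  = bsxfun(@minus, patches, perDimMean);    % convention used here
perPatchMean = mean(patches, 1);                       % one mean per sample
patchesAlt   = bsxfun(@minus, patches, perPatchMean);  % natural-image convention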

The data and variable settings for this experiment are as follows:

The full training matrix is 192×100,000. Each training patch is 8×8 over 3 channels, so the input layer has 192 = 8×8×3 nodes (each column stacks the R, G and B channels in order). The hidden layer has 400 units, the weight decay coefficient is 0.003, and the sparsity penalty weight is 5, with sparsity targeted at an average hidden activation of 3.5%. During ZCA whitening, 0.1 is added in the denominator (the epsilon term) to keep the scaling from blowing up.
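For reference, the total number of parameters packed into theta can be checked as follows (a sketch using the values above; the layout [W1(:); W2(:); b1(:); b2(:)] matches the unpacking code further down):

visibleSize = 8 * 8 * 3;    % 192 input units
hiddenSize  = 400;
numParams   = 2 * hiddenSize * visibleSize + hiddenSize + visibleSize;
fprintf('theta has %d entries\n', numParams);   % 154192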

Because a linear decoder is used, the output layer's activation function is the identity, i.e., the output a3 simply equals its pre-activation z3. This also slightly reduces the computation inside the cost function, since the f'(z3) factor drops out of the output-layer delta.
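A toy sketch of that one change (hypothetical 3-unit layers, 2 samples):

a2     = rand(3, 2);  W2 = randn(3, 3);  b2 = zeros(3, 1);  data = rand(3, 2);
z3     = W2 * a2 + repmat(b2, 1, 2);
a3     = z3;                 % linear decoder: f is the identity, so a3 = z3
delta3 = -(data - a3);       % no a3.*(1-a3) factor, since f'(z3) = 1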

At the end, the program visualizes the learned network weights. This visualization already folds in the whitening step, so what is shown is the composition of the whitening and the sparse autoencoder. The display call is displayColorNetwork((W*ZCAWhite)');

Why (W*ZCAWhite)'? First, W*ZCAWhite is used because when a sample x is fed into the network, the hidden pre-activation is equivalent to W*ZCAWhite*x (plus the bias), so each row of W*ZCAWhite is the effective filter of one hidden unit. Second, since each row of W*ZCAWhite corresponds to one hidden unit but displayColorNetwork displays one small image block per column, the matrix has to be transposed.
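The composition claim is easy to verify numerically (a sketch with random stand-ins for the learned quantities):

W1       = randn(400, 192);          % stand-in for the learned encoder weights
ZCAWhite = randn(192, 192);          % stand-in for the whitening matrix
b1       = zeros(400, 1);
x        = rand(192, 1);             % a raw, mean-subtracted patch
z2a = W1 * (ZCAWhite * x) + b1;      % whiten first, then encode
z2b = (W1 * ZCAWhite) * x + b1;      % single combined linear map: same result
fprintf('difference: %g\n', norm(z2a - z2b));   % ~0 up to rounding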

 

Experiment results:

[Figure: a random sample of the raw patches]

[Figure: the same patches after ZCA whitening]

[Figure: the 400 learned features]

 

Main code of the experiment:

%% CS294A/CS294W Linear Decoder Exercise

%  Instructions
%  ------------
% 
%  This file contains code that helps you get started on the
%  linear decoder exercise. For this exercise, you will only need to modify
%  the code in sparseAutoencoderLinearCost.m. You will not need to modify
%  any code in this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageChannels = 3;     % number of channels (rgb, so 3)

patchDim   = 8;          % patch dimension
numPatches = 100000;   % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units 
outputSize  = visibleSize;   % number of output units
hiddenSize  = 400;           % number of hidden units (note: more hidden units than inputs)

sparsityParam = 0.035; % desired average activation of the hidden units.
lambda = 3e-3;         % weight decay parameter       
beta = 5;              % weight of sparsity penalty term       

epsilon = 0.1;           % epsilon for ZCA whitening

%%======================================================================
%% STEP 1: Create and modify sparseAutoencoderLinearCost.m to use a linear decoder,
%          and check gradients
%  You should copy sparseAutoencoderCost.m from your earlier exercise 
%  and rename it to sparseAutoencoderLinearCost.m. 
%  Then you need to rename the function from sparseAutoencoderCost to
%  sparseAutoencoderLinearCost, and modify it so that the sparse autoencoder
%  uses a linear decoder instead. Once that is done, you should check 
% your gradients to verify that they are correct.

% NOTE: Modify sparseAutoencoderCost first!

% To speed up gradient checking, we will use a reduced network and some
% dummy patches

debugHiddenSize = 5;
debugvisibleSize = 8;
patches = rand([8 10]); % 10 random dummy samples, each an 8-dimensional column vector with entries in [0,1]
theta = initializeParameters(debugHiddenSize, debugvisibleSize); 

[cost, grad] = sparseAutoencoderLinearCost(theta, debugvisibleSize, debugHiddenSize, ...
                                           lambda, sparsityParam, beta, ...
                                           patches);

% Check gradients
numGrad = computeNumericalGradient( @(x) sparseAutoencoderLinearCost(x, debugvisibleSize, debugHiddenSize, ...
                                                  lambda, sparsityParam, beta, ...
                                                  patches), theta);

% Use this to visually compare the gradients side by side
disp([numGrad grad]); 

diff = norm(numGrad-grad)/norm(numGrad+grad);
% Should be small. In our implementation, these values are usually less than 1e-9.
disp(diff); 

assert(diff < 1e-9, 'Difference too large. Check your gradient computation again');

% NOTE: Once your gradients check out, you should run step 0 again to
%       reinitialize the parameters

%%======================================================================
%% STEP 2: Learn features on small patches
%  In this step, you will use your sparse autoencoder (which now uses a 
%  linear decoder) to learn features on small patches sampled from related
%  images.

%% STEP 2a: Load patches
%  In this step, we load 100k patches sampled from the STL10 dataset and
%  visualize them. Note that these patches have been scaled to [0,1]

load stlSampledPatches.mat

displayColorNetwork(patches(:, 1:100));

%% STEP 2b: Apply preprocessing
%  In this sub-step, we preprocess the sampled patches, in particular, 
%  ZCA whitening them. 
% 
%  In a later exercise on convolution and pooling, you will need to replicate 
%  exactly the preprocessing steps you apply to these patches before 
%  using the autoencoder to learn features on them. Hence, we will save the
%  ZCA whitening and mean image matrices together with the learned features
%  later on.

% Subtract mean patch (hence zeroing the mean of the patches)
meanPatch = mean(patches, 2);  % note: subtract the mean of each dimension (row), not the per-patch mean used in some earlier exercises
patches = bsxfun(@minus, patches, meanPatch); % zero the mean of every dimension

% Apply ZCA whitening
sigma = patches * patches' / numPatches;
[u, s, v] = svd(sigma);
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u'; % compute the ZCA whitening matrix
patches = ZCAWhite * patches;
figure
displayColorNetwork(patches(:, 1:100));

%% STEP 2c: Learn features
%  You will now use your sparse autoencoder (with linear decoder) to learn
%  features on the preprocessed patches. This should take around 45 minutes.

theta = initializeParameters(hiddenSize, visibleSize);

% Use minFunc to minimize the function
addpath minFunc/

options = struct;
options.Method = 'lbfgs'; 
options.maxIter = 400;
options.display = 'on';

[optTheta, cost] = minFunc( @(p) sparseAutoencoderLinearCost(p, ...
                                   visibleSize, hiddenSize, ...
                                   lambda, sparsityParam, ...
                                   beta, patches), ...
                              theta, options); % note how the cost function's fixed arguments are bound into the handle

% Save the learned features and the preprocessing matrices for use in 
% the later exercise on convolution and pooling
fprintf('Saving learned features and preprocessing matrices...\n');                          
save('STL10Features.mat', 'optTheta', 'ZCAWhite', 'meanPatch');
fprintf('Saved\n');

%% STEP 2d: Visualize learned features

W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
figure;
% Why (W*ZCAWhite)'? W*ZCAWhite is used because for each input sample x the
% hidden pre-activation is equivalent to W*ZCAWhite*x; and since each row of
% W*ZCAWhite is one hidden unit's filter while displayColorNetwork shows one
% image block per column, the matrix must be transposed.
displayColorNetwork( (W*ZCAWhite)');

 

 sparseAutoencoderLinearCost.m:

function [cost,grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                            lambda, sparsityParam, beta, data)
% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
% -------------------- YOUR CODE HERE --------------------                                    
% The input theta is a vector because minFunc only deals with vectors. In
% this step, we convert theta into matrix form so that it follows the
% notation of the lecture notes.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Loss and gradient variables (your code needs to compute these values)
m = size(data, 2); % number of training samples

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the loss for the Sparse Autoencoder and gradients
%                W1grad, W2grad, b1grad, b2grad
%
%  Hint: 1) data(:,i) is the i-th example
%        2) your computation of loss and gradients should match the size
%        above for loss, W1grad, W2grad, b1grad, b2grad

% z2 = W1 * x + b1
% a2 = f(z2)
% z3 = W2 * a2 + b2
% h_Wb = a3 = f(z3)

z2 = W1 * data + repmat(b1, [1, m]);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, [1, m]);
a3 = z3;

rhohats = mean(a2,2);
rho = sparsityParam;
KLsum = sum(rho * log(rho ./ rhohats) + (1-rho) * log((1-rho) ./ (1-rhohats)));


squares = (a3 - data).^2;
squared_err_J = (1/2) * (1/m) * sum(squares(:));
weight_decay_J = (lambda/2) * (sum(W1(:).^2) + sum(W2(:).^2));
sparsity_J = beta * KLsum;

cost = squared_err_J + weight_decay_J + sparsity_J; % total cost

% delta3 = -(data - a3) .* fprime(z3);
% but for the linear decoder f(z3) = z3, so fprime(z3) = 1
delta3 = -(data - a3);
beta_term = beta * (- rho ./ rhohats + (1-rho) ./ (1-rhohats));
delta2 = ((W2' * delta3) + repmat(beta_term, [1,m]) ) .* a2 .* (1-a2);

W2grad = (1/m) * delta3 * a2' + lambda * W2;
b2grad = (1/m) * sum(delta3, 2);
W1grad = (1/m) * delta2 * data' + lambda * W1;
b1grad = (1/m) * sum(delta2, 2);

%-------------------------------------------------------------------
% Convert weights and bias gradients to a compressed form
% This step will concatenate and flatten all your gradients to a vector
% which can be used in the optimization method.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end
%-------------------------------------------------------------------
% We are giving you the sigmoid function, you may find this function
% useful in your computation of the loss and the gradients.
function sigm = sigmoid(x)

    sigm = 1 ./ (1 + exp(-x));
end

 

References:

   Deep learning: 17 (Linear Decoders, Convolution, Pooling)

   Exercise: Implement deep networks for digit classification

 

 

 

 

 

