Neural Networks for a Multi-class Classification Problem: Digit Recognition

1. Visualizing the Data

The training set contains 5000 digit images. Each image is 20x20 pixels and is stored flattened as a 1x400 vector, so the input is a 5000x400 matrix and the output is a 5000x1 vector of labels. Coursera also provides a function that renders the grayscale values back into images, but it is not essential for solving the problem.
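As a quick orientation, the following minimal sketch loads the data and confirms the shapes described above. It assumes the Coursera exercise file ex4data1.mat containing the variables X and y; adjust the file name if yours differs:

load('ex4data1.mat');                              % assumed file name; provides X and y
fprintf('X: %d x %d\n', size(X, 1), size(X, 2));   % expect 5000 x 400
fprintf('y: %d x %d\n', size(y, 1), size(y, 2));   % expect 5000 x 1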

2. Designing the Neural Network

Since each sample is a 1x400 vector, the input layer needs 400 units. There are 10 digits to predict, so the output layer needs 10 units. The problem is not very complex, so a single hidden layer suffices (the early autonomous-driving experiments used a network with 3 hidden layers); we set the number of hidden units to 25.

Define three variables to store the number of units in each layer, so they can be reused later.

input_layer_size = 400;
hidden_layer_size = 25;
output_layer_size = 10;

3. Writing the Cost Function (nnCostFunction)

3.1 Implementing the necessary helper functions

A few operations come up again and again when implementing a BP (backpropagation) neural network. If they are not wrapped into functions, it is easy to lose track of what you are doing. A quick sanity check of these helpers follows the list.

  • The sigmoid function

    function g = sigmoid(z)
    	g = 1 ./ (1+exp(-z));
    end
    
  • The addBias function

    Our layer-size variables do not account for the bias unit, but during forward propagation the input to the next layer must include the bias unit's parameter \(\theta_0\), so we wrap adding the bias column into a function.

    function out = addBias(X)
    	% prepend a column of ones (the bias unit) to X
    	out = [ones(size(X,1), 1), X];
    end
    
  • The oneHot function

    One-hot encoding converts the m*1 label vector y into an m*output_layer_size matrix. Each row contains output_layer_size entries, indicating whether the corresponding output unit should be 0 or 1.

    function out = oneHot(y, output_layer_size)
    	% map each label y(i) to a row with a 1 in column y(i) and 0 elsewhere
    	out = zeros(size(y,1), output_layer_size);
    	for i = 1:size(y,1)
    		out(i, y(i)) = 1;
    	end
    end
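A quick sanity check of the three helpers on toy values (the numbers below are made up, chosen only so the expected results are easy to verify by hand):

A = [1 2; 3 4];             % 2x2 toy matrix
disp(sigmoid(0))            % expect 0.5
disp(addBias(A))            % expect [1 1 2; 1 3 4]
disp(oneHot([2; 3], 3))     % expect [0 1 0; 0 0 1]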
    

3.2 Standardizing the matrix definitions

When implementing a neural network, dimension mismatches are probably the error beginners hit most often. Each time one appears, you typically have to re-derive the correct dimensions starting from the input layer and then rewrite how, and in what order, the matrices are multiplied. This usually happens because the code was written without a consistent convention: the same matrix ends up m*n in one place and n*m in another. If you fix the meaning of rows and columns for every matrix before writing any code, you can correct such errors quickly, or avoid them altogether.

Here we follow the matrix conventions used in the coursera lecture notes:

| Matrix | Symbol | Size | Row meaning | Column meaning |
| --- | --- | --- | --- | --- |
| Input matrix | \(X, z^{(i)}\) | m * n | number of samples | number of units in that layer (number of features) |
| Parameter matrix | \(\Theta^{(j)}\) | b * (a+1) | number of units in this layer (one row per activation function in this layer) | number of units in the previous layer + 1 (features per activation function, plus the bias term) |
| Output matrix | \(Y, a^{(i)}\) | m * k | number of samples | number of units in that layer (number of outputs) |

(Here a is the number of units in the previous layer and b the number of units in this layer.)

For brevity we also write: input_layer_size = n, hidden_layer_size = l, output_layer_size = K.
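With these conventions fixed, a quick dimension check catches mis-shaped matrices before any propagation is run. This is only a sketch and assumes X, Theta1 and Theta2 already exist in the workspace:

% Expected shapes under the conventions above:
%   X      : m x n       (5000 x 400)
%   Theta1 : l x (n+1)   (25 x 401)
%   Theta2 : K x (l+1)   (10 x 26)
assert(size(Theta1, 2) == size(X, 2) + 1);       % hidden layer sees n inputs + bias
assert(size(Theta2, 2) == size(Theta1, 1) + 1);  % output layer sees l inputs + bias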

3.3 Implementing Forward Propagation (Feedforward)

Forward propagation computes the output value of each output unit; it can be fully vectorized.

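The equations of the vectorized forward pass, written to match the code below (a bias column of ones is prepended at each layer by addBias), are:

\[a^{(1)} = X,\qquad z^{(2)} = [\mathbf{1}\ \ a^{(1)}]\,(\Theta^{(1)})^T,\qquad a^{(2)} = g(z^{(2)})\]

\[z^{(3)} = [\mathbf{1}\ \ a^{(2)}]\,(\Theta^{(2)})^T,\qquad h_\theta(X) = a^{(3)} = g(z^{(3)})\]

where \(g\) is the sigmoid function.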

Forward propagation of the network is defined inside the nnCostFunction function.

Its arguments are [ nn_params (the unrolled vector of all parameters \(\theta\)), input_layer_size, hidden_layer_size, output_layer_size, X, y, lambda ].

function [J grad] = nnCostFunction(nn_params, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   output_layer_size, ...
                                   X, y, lambda)
% Use reshape to rebuild the Theta1 and Theta2 matrices from the unrolled vector nn_params.
% Note that both Theta1 and Theta2 include the column for the bias unit.
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 output_layer_size, (hidden_layer_size + 1));
% number of training samples
m = size(X, 1);
% cost value
J = 0;
% gradient matrices
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));

% initialization done
% ==============================================================================
% ==============================================================================
% Part 1: forward propagation

% forward propagation
z2 = addBias(X) * Theta1';
a2 = sigmoid(z2);
z3 = addBias(a2) * Theta2';
a3 = sigmoid(z3);
% one-hot encode the labels
encodeY = oneHot(y, output_layer_size);

% tempTheta1/tempTheta2 are used for the regularization term: the thetas of the bias units are set to 0
tempTheta2 = Theta2;
tempTheta2(:,1) = 0;
tempTheta1 = Theta1;
tempTheta1(:,1) = 0;

J = 1/m * sum(sum(-encodeY .* log(a3) - (1-encodeY) .* log(1-a3))) ...
    + lambda/(2*m) * (sum(sum(tempTheta1 .^ 2)) + sum(sum(tempTheta2 .^ 2)));

The cost function formula:

\[J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Big[-y_k^{(i)}\log\big(h_\theta(x^{(i)})_k\big) - (1-y_k^{(i)})\log\big(1-h_\theta(x^{(i)})_k\big)\Big] + \frac{\lambda}{2m}\Big[\sum_{i=1}^{l}\sum_{j=1}^{n}\big(\Theta^{(1)}_{ij}\big)^2 + \sum_{i=1}^{K}\sum_{j=1}^{l}\big(\Theta^{(2)}_{ij}\big)^2\Big] \]

Note that the bias columns are excluded from the regularization term, which is why the code uses tempTheta1 and tempTheta2.

Neural networks have high capacity and can fit very complex models, so without regularization they are prone to overfitting: the training error becomes very small while the generalization error stays large.

3.4 Implementing Backpropagation

This section is the core of the backpropagation algorithm, and also the hardest part to understand and the messiest to code. It is bound to give beginners a headache; I was stuck on it for three days.

It involves the chain rule and matrix calculus. The chain rule is a must-know; if you are not comfortable with matrix calculus, you can infer the shape of each result from the matrix dimensions instead (slower, but it works).

I recommend a video on bilibili that walks through the gradient computation of backpropagation in detail. The same author also posted a Python implementation of Andrew Ng's digit-recognition network; I have not watched it, but the overall approach should be similar to the MATLAB version, so it can serve as an explanation of backpropagation:

手把手教大家實現吳恩達深度學習作業第二周06-反向傳播推導

Backpropagation solves the gradient computation: using the chain rule and matrix calculus we derive the gradient with respect to each parameter \(\theta\), as sketched below.
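In vectorized form (consistent with the code that follows), the error terms and gradients are:

\[\delta^{(3)} = a^{(3)} - Y,\qquad \delta^{(2)} = \big(\delta^{(3)}\,\Theta^{(2)}_{:,\,2:\mathrm{end}}\big) \odot a^{(2)} \odot \big(1-a^{(2)}\big)\]

\[\frac{\partial J}{\partial \Theta^{(2)}} = \frac{1}{m}\big(\delta^{(3)}\big)^T[\mathbf{1}\ \ a^{(2)}] + \frac{\lambda}{m}\tilde\Theta^{(2)},\qquad \frac{\partial J}{\partial \Theta^{(1)}} = \frac{1}{m}\big(\delta^{(2)}\big)^T[\mathbf{1}\ \ X] + \frac{\lambda}{m}\tilde\Theta^{(1)}\]

Here \(Y\) is the one-hot matrix encodeY, \(\odot\) denotes element-wise multiplication, and \(\tilde\Theta^{(1)}, \tilde\Theta^{(2)}\) are the parameter matrices with their bias columns zeroed out (tempTheta1, tempTheta2).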

% Gradient for Theta2: delta3 = a3 - encodeY
Theta2_grad = 1/m * (a3 - encodeY)' * addBias(a2) + lambda * tempTheta2 / m;
% Gradient for Theta1: delta2 = (delta3 * Theta2(:,2:end)) .* a2 .* (1-a2), written in transposed form
Theta1_grad = 1/m * (Theta2(:,2:end)' * (a3 - encodeY)' .* a2' .* (1 - a2') * addBias(X)) + lambda * tempTheta1 / m;
% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];
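Before training on the full data set, it is worth comparing these analytic gradients against numerical ones. The following is only a sketch: the helper numericalGradient and the tiny 2-3-2 network are made up for illustration (the Coursera exercise provides its own gradient-checking script):

function numgrad = numericalGradient(costFunc, theta)
	% central-difference approximation of the gradient of costFunc at theta
	% (save this as numericalGradient.m)
	numgrad = zeros(size(theta));
	e = 1e-4;
	for p = 1:numel(theta)
		perturb    = zeros(size(theta));
		perturb(p) = e;
		numgrad(p) = (costFunc(theta + perturb) - costFunc(theta - perturb)) / (2 * e);
	end
end

% usage sketch: a tiny random 2-3-2 network keeps the loop cheap
tiny_params = rand(3 * (2 + 1) + 2 * (3 + 1), 1) * 0.24 - 0.12;  % 17 unrolled parameters
tinyX = rand(5, 2);  tinyY = [1; 2; 1; 2; 1];
costOnly  = @(p) nnCostFunction(p, 2, 3, 2, tinyX, tinyY, 0);
[~, grad] = nnCostFunction(tiny_params, 2, 3, 2, tinyX, tinyY, 0);
numgrad   = numericalGradient(costOnly, tiny_params);
disp(norm(numgrad - grad) / norm(numgrad + grad));  % should be tiny, e.g. < 1e-9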

4. Training the Network with an Advanced Optimizer (Learning parameters using fmincg)

4.1 Randomly initializing the parameters

function W = randInitializeWeights(L_in, L_out)

W = zeros(L_out, 1 + L_in);

% Randomly initialize the weights to small values
epsilon_init = 0.12;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;

end

The importance of random initialization needs no elaboration: it breaks the symmetry between units so that they can learn different features. This is basic material; if it is unclear, rewatch that lecture of Andrew Ng's course.

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, output_layer_size);
% unroll into a single vector so the parameters are easy to pass around
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];

4.2 Calling the advanced optimizer to train the network

Below is the advanced optimization function fmincg provided by coursera. Learn how it is used before calling it; if you cannot get it to work, fall back to fminunc, which is somewhat slower.

function [X, fX, i] = fmincg(f, X, options, P1, P2, P3, P4, P5)
% Minimize a continuous differentialble multivariate function. Starting point
% is given by "X" (D by 1), and the function named in the string "f", must
% return a function value and a vector of partial derivatives. The Polack-
% Ribiere flavour of conjugate gradients is used to compute search directions,
% and a line search using quadratic and cubic polynomial approximations and the
% Wolfe-Powell stopping criteria is used together with the slope ratio method
% for guessing initial step sizes. Additionally a bunch of checks are made to
% make sure that exploration is taking place and that extrapolation will not
% be unboundedly large. The "length" gives the length of the run: if it is
% positive, it gives the maximum number of line searches, if negative its
% absolute gives the maximum allowed number of function evaluations. You can
% (optionally) give "length" a second component, which will indicate the
% reduction in function value to be expected in the first line-search (defaults
% to 1.0). The function returns when either its length is up, or if no further
% progress can be made (ie, we are at a minimum, or so close that due to
% numerical problems, we cannot get any closer). If the function terminates
% within a few iterations, it could be an indication that the function value
% and derivatives are not consistent (ie, there may be a bug in the
% implementation of your "f" function). The function returns the found
% solution "X", a vector of function values "fX" indicating the progress made
% and "i" the number of iterations (line searches or function evaluations,
% depending on the sign of "length") used.
%
% Usage: [X, fX, i] = fmincg(f, X, options, P1, P2, P3, P4, P5)
%
% See also: checkgrad 
%
% Copyright (C) 2001 and 2002 by Carl Edward Rasmussen. Date 2002-02-13
%
%
% (C) Copyright 1999, 2000 & 2001, Carl Edward Rasmussen
% 
% Permission is granted for anyone to copy, use, or modify these
% programs and accompanying documents for purposes of research or
% education, provided this copyright notice is retained, and note is
% made of any changes that have been made.
% 
% These programs and documents are distributed without any warranty,
% express or implied.  As the programs were written for research
% purposes only, they have not been tested to the degree that would be
% advisable in any important application.  All use of these programs is
% entirely at the user's own risk.
%
% [ml-class] Changes Made:
% 1) Function name and argument specifications
% 2) Output display
%

% Read options
if exist('options', 'var') && ~isempty(options) && isfield(options, 'MaxIter')
    length = options.MaxIter;
else
    length = 100;
end


RHO = 0.01;                            % a bunch of constants for line searches
SIG = 0.5;       % RHO and SIG are the constants in the Wolfe-Powell conditions
INT = 0.1;    % don't reevaluate within 0.1 of the limit of the current bracket
EXT = 3.0;                    % extrapolate maximum 3 times the current bracket
MAX = 20;                         % max 20 function evaluations per line search
RATIO = 100;                                      % maximum allowed slope ratio

argstr = ['feval(f, X'];                      % compose string used to call function
for i = 1:(nargin - 3)
  argstr = [argstr, ',P', int2str(i)];
end
argstr = [argstr, ')'];

if max(size(length)) == 2, red=length(2); length=length(1); else red=1; end
S=['Iteration '];

i = 0;                                            % zero the run length counter
ls_failed = 0;                             % no previous line search has failed
fX = [];
[f1 df1] = eval(argstr);                      % get function value and gradient
i = i + (length<0);                                            % count epochs?!
s = -df1;                                        % search direction is steepest
d1 = -s'*s;                                                 % this is the slope
z1 = red/(1-d1);                                  % initial step is red/(|s|+1)

while i < abs(length)                                      % while not finished
  i = i + (length>0);                                      % count iterations?!

  X0 = X; f0 = f1; df0 = df1;                   % make a copy of current values
  X = X + z1*s;                                             % begin line search
  [f2 df2] = eval(argstr);
  i = i + (length<0);                                          % count epochs?!
  d2 = df2'*s;
  f3 = f1; d3 = d1; z3 = -z1;             % initialize point 3 equal to point 1
  if length>0, M = MAX; else M = min(MAX, -length-i); end
  success = 0; limit = -1;                     % initialize quanteties
  while 1
    while ((f2 > f1+z1*RHO*d1) || (d2 > -SIG*d1)) && (M > 0) 
      limit = z1;                                         % tighten the bracket
      if f2 > f1
        z2 = z3 - (0.5*d3*z3*z3)/(d3*z3+f2-f3);                 % quadratic fit
      else
        A = 6*(f2-f3)/z3+3*(d2+d3);                                 % cubic fit
        B = 3*(f3-f2)-z3*(d3+2*d2);
        z2 = (sqrt(B*B-A*d2*z3*z3)-B)/A;       % numerical error possible - ok!
      end
      if isnan(z2) || isinf(z2)
        z2 = z3/2;                  % if we had a numerical problem then bisect
      end
      z2 = max(min(z2, INT*z3),(1-INT)*z3);  % don't accept too close to limits
      z1 = z1 + z2;                                           % update the step
      X = X + z2*s;
      [f2 df2] = eval(argstr);
      M = M - 1; i = i + (length<0);                           % count epochs?!
      d2 = df2'*s;
      z3 = z3-z2;                    % z3 is now relative to the location of z2
    end
    if f2 > f1+z1*RHO*d1 || d2 > -SIG*d1
      break;                                                % this is a failure
    elseif d2 > SIG*d1
      success = 1; break;                                             % success
    elseif M == 0
      break;                                                          % failure
    end
    A = 6*(f2-f3)/z3+3*(d2+d3);                      % make cubic extrapolation
    B = 3*(f3-f2)-z3*(d3+2*d2);
    z2 = -d2*z3*z3/(B+sqrt(B*B-A*d2*z3*z3));        % num. error possible - ok!
    if ~isreal(z2) || isnan(z2) || isinf(z2) || z2 < 0 % num prob or wrong sign?
      if limit < -0.5                               % if we have no upper limit
        z2 = z1 * (EXT-1);                 % the extrapolate the maximum amount
      else
        z2 = (limit-z1)/2;                                   % otherwise bisect
      end
    elseif (limit > -0.5) && (z2+z1 > limit)         % extraplation beyond max?
      z2 = (limit-z1)/2;                                               % bisect
    elseif (limit < -0.5) && (z2+z1 > z1*EXT)       % extrapolation beyond limit
      z2 = z1*(EXT-1.0);                           % set to extrapolation limit
    elseif z2 < -z3*INT
      z2 = -z3*INT;
    elseif (limit > -0.5) && (z2 < (limit-z1)*(1.0-INT))  % too close to limit?
      z2 = (limit-z1)*(1.0-INT);
    end
    f3 = f2; d3 = d2; z3 = -z2;                  % set point 3 equal to point 2
    z1 = z1 + z2; X = X + z2*s;                      % update current estimates
    [f2 df2] = eval(argstr);
    M = M - 1; i = i + (length<0);                             % count epochs?!
    d2 = df2'*s;
  end                                                      % end of line search

  if success                                         % if line search succeeded
    f1 = f2; fX = [fX' f1]';
    fprintf('%s %4i | Cost: %4.6e\r', S, i, f1);
    s = (df2'*df2-df1'*df2)/(df1'*df1)*s - df2;      % Polack-Ribiere direction
    tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
    d2 = df1'*s;
    if d2 > 0                                      % new slope must be negative
      s = -df1;                              % otherwise use steepest direction
      d2 = -s'*s;    
    end
    z1 = z1 * min(RATIO, d1/(d2-realmin));          % slope ratio but max RATIO
    d1 = d2;
    ls_failed = 0;                              % this line search did not fail
  else
    X = X0; f1 = f0; df1 = df0;  % restore point from before failed line search
    if ls_failed || i > abs(length)          % line search failed twice in a row
      break;                             % or we ran out of time, so we give up
    end
    tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
    s = -df1;                                                    % try steepest
    d1 = -s'*s;
    z1 = 1/(1-d1);                     
    ls_failed = 1;                                    % this line search failed
  end
  if exist('OCTAVE_VERSION')
    fflush(stdout);
  end
end
fprintf('\n');

The calling code:

options = optimset('MaxIter', 50);

%  You should also try different values of lambda
lambda = 1;

% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   output_layer_size, X, y, lambda);

% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 output_layer_size, (hidden_layer_size + 1));

5. Computing Training Error / Tuning Parameters / Model Evaluation (Trying out different learning settings)

function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of an input given a trained neural network
%   p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
%   trained weights of a neural network (Theta1, Theta2)

% Useful values
m = size(X, 1);
num_labels = size(Theta2, 1);

% You need to return the following variables correctly 
p = zeros(size(X, 1), 1);

h1 = sigmoid([ones(m, 1) X] * Theta1');
h2 = sigmoid([ones(m, 1) h1] * Theta2');
[dummy, p] = max(h2, [], 2);

% =========================================================================
end

Calling the function:

pred = predict(Theta1, Theta2, X);
fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

By adjusting the parameters we can observe how the training-set accuracy changes:

| \(\lambda\) | Max Iterations | Accuracy |
| --- | --- | --- |
| 1 | 50 | 94.34% ~ 96.00% |
| 1 | 100 | 98.64% |
| 1.5 | 1000 | 99.04% |
| 1.5 | 5000 (torture for the computer) | 99.26% |
| 2 | 2000 | 98.68% |

These numbers are all training-set (empirical) accuracies, whereas our real goal is to reduce the generalization error. Coursera does not provide a test set, however, so we skip measuring generalization error for now; strictly speaking, that measurement should also be part of the workflow.
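If an estimate of generalization is wanted anyway, one option is to hold out part of the 5000 samples before training. A minimal sketch follows; the 80/20 split and the variable names are arbitrary illustration choices, not part of the original exercise:

idx    = randperm(size(X, 1));             % shuffle sample indices
split  = round(0.8 * size(X, 1));          % e.g. 80% train / 20% validation
Xtrain = X(idx(1:split), :);       ytrain = y(idx(1:split));
Xval   = X(idx(split+1:end), :);   yval   = y(idx(split+1:end));
% ... train on Xtrain/ytrain exactly as above, then:
predVal = predict(Theta1, Theta2, Xval);
fprintf('Validation Set Accuracy: %f\n', mean(double(predVal == yval)) * 100);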

6. Visualizing the Hidden Layer

This part is less essential, but it lets us see what the hidden units are actually doing.

For each hidden unit, find a [1,400] input vector that drives its activation close to 1 (meaning the unit is responding strongly to some pattern while the other units stay close to 0), and reshape that vector into a 20x20 pixel image.

fprintf('\nVisualizing Neural Network... \n')

displayData(Theta1(:, 2:end));

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

