"Machine Learning" Study Notes (2): Neural Networks


When solving simple classification problems, linear regression and logistic regression are enough. But for more complex problems (for example, recognizing the type of car in an image), the earlier linear models may not produce satisfactory results, and as the datasets grow, their computational cost also becomes enormous. We therefore need to learn a nonlinear system: the neural network.

While studying, I mainly followed the online course provided by Professor Andrew Ng, and many parts of this article draw on the materials he provides on the MOOC.

Please credit the source when reposting: http://blog.csdn.net/u010278305

For complex nonlinear classification problems, neural networks have proven to be a better algorithm than either linear regression or logistic regression. In fact, a neural network can also be viewed as a combination of logistic regression units (stacked, cascaded, and so on).

A typical neural network model is shown in the figure below:

[Figure: a three-layer neural network consisting of an input layer, a hidden layer, and an output layer]

The model above consists of three parts: the input layer, the hidden layer, and the output layer. The input layer receives the feature values, and the output of the output layer serves as the basis for our classification. Taking the recognition of a 20*20 handwritten-digit image as an example, the input to the input layer can be the pixel values of the 20*20=400 pixels, i.e., a1 in the model, and the output of the output layer can be read as the probability that the image shows one of the digits 0 through 9. Each node in the hidden layer and the output layer can in fact be seen as the result of a logistic regression. The logistic regression model looks like this (as shown below):

[Figure: a single logistic regression unit, computing a = g(theta' * x) where g is the sigmoid function]
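To make this building block concrete, here is a minimal MATLAB sketch of one such unit; the feature vector x and the weights theta are made-up values for illustration:

% One node of the network is just a logistic regression unit:
% a sigmoid applied to a weighted sum of the node's inputs.
x = [0.5; 0.1; 0.9];            % made-up input features
theta = [0.2; -0.4; 0.3; 0.1];  % made-up weights; the first entry is the bias
a = 1 ./ (1 + exp(-(theta' * [1; x])))   % the node's activation, a value in (0,1)

Stacking many such units layer after layer is exactly what gives the network its nonlinear expressive power.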

Given the neural network model, our goal is to solve for the parameters theta inside it. For that we also need to know the model's cost function and the "gradient" at each node.

The cost function is defined as follows:

J(\Theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Big[-y_k^{(i)}\log\big(h_\Theta(x^{(i)})_k\big) - \big(1-y_k^{(i)}\big)\log\big(1-h_\Theta(x^{(i)})_k\big)\Big] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\big(\Theta_{ji}^{(l)}\big)^2

The gradient of the cost function with respect to theta at each node can be computed with the backpropagation algorithm. The idea behind backpropagation is that, although we cannot directly observe the outputs of the hidden layers, we do know the output of the output layer; by propagating the error backwards, we can deduce the error terms, and hence the gradients, of the earlier layers.

We use the following model as an example to explain the idea and procedure of backpropagation:

[Figure: a four-layer neural network with one input layer, two hidden layers, and one output layer]

Unlike the first model given above, this one has two hidden layers.

To get familiar with this model, we first need to understand the forward propagation process. For this model, forward propagation proceeds as follows:

a^{(1)} = x
z^{(2)} = \Theta^{(1)} a^{(1)}, \quad a^{(2)} = g(z^{(2)}) \ (\text{add } a_0^{(2)})
z^{(3)} = \Theta^{(2)} a^{(2)}, \quad a^{(3)} = g(z^{(3)}) \ (\text{add } a_0^{(3)})
z^{(4)} = \Theta^{(3)} a^{(3)}, \quad a^{(4)} = h_\Theta(x) = g(z^{(4)})

Here the meaning of parameters such as a1 and z2 can be inferred by analogy with the first neural network model given in this article.

We then define the error term delta with the following meaning (it will be needed later when deriving the gradients):

\delta_j^{(l)} = \text{the "error" of node } j \text{ in layer } l

The error delta is computed as follows:

\delta^{(4)} = a^{(4)} - y
\delta^{(3)} = (\Theta^{(3)})^T \delta^{(4)} \,.*\, g'(z^{(3)})
\delta^{(2)} = (\Theta^{(2)})^T \delta^{(3)} \,.*\, g'(z^{(2)})
(there is no \delta^{(1)}, since the input layer carries no error)

We then obtain the gradients at the nodes via the backpropagation algorithm, which proceeds as follows:

Set \Delta^{(l)} = 0 for every layer l.
For each training example t = 1, \ldots, m:
    set a^{(1)} = x^{(t)} and forward-propagate to compute a^{(l)} for l = 2, \ldots, L;
    compute \delta^{(L)} = a^{(L)} - y^{(t)}, then \delta^{(L-1)}, \ldots, \delta^{(2)} as above;
    accumulate \Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)} (a^{(l)})^T.
Finally \partial J / \partial \Theta_{ij}^{(l)} = \frac{1}{m}\Delta_{ij}^{(l)} + \frac{\lambda}{m}\Theta_{ij}^{(l)} for j \ge 1, and \frac{1}{m}\Delta_{ij}^{(l)} for the bias column j = 0.

With the cost function and the gradient function in hand, we can first verify our gradients numerically. After that, as before, we can call MATLAB's fminunc function to find the optimal theta parameters.
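The numerical check compares the backpropagation gradient with a two-sided finite-difference approximation (in the course code this is wrapped up in checkNNGradients). Below is a minimal sketch of the idea, assuming costFunc is a handle that returns the cost for an unrolled parameter vector theta:

% Approximate the gradient numerically by central differences
% and compare it with the gradient from backpropagation.
epsilon = 1e-4;
numgrad = zeros(size(theta));
for p = 1:numel(theta)
    perturb = zeros(size(theta));
    perturb(p) = epsilon;
    numgrad(p) = (costFunc(theta + perturb) - costFunc(theta - perturb)) / (2 * epsilon);
end
% numgrad should match the analytic gradient to a relative
% difference on the order of 1e-9; if it does not, the
% backpropagation implementation has a bug.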

Note that when initializing the theta parameters, they must be given random values rather than being fixed at 0 or some other constant; this breaks the symmetry and avoids ending up, after training, with every node in a layer having identical parameters.
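A common recipe (used in the course exercises under the name randInitializeWeights) is to draw each weight uniformly from a small interval around zero; the half-width 0.12 below is the value suggested in the exercises:

function W = randInitializeWeights(L_in, L_out)
% Randomly initialize the weights of a layer with L_in incoming
% connections and L_out outgoing connections, to break symmetry.
% The extra column accounts for the bias unit.
epsilon_init = 0.12;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
end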

The code that computes the cost and the gradient is given below:

function [J grad] = nnCostFunction(nn_params, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, ...
                                   X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
%   [J grad] = NNCOSTFUNCTION(nn_params, input_layer_size, hidden_layer_size, ...
%   num_labels, X, y, lambda) computes the cost and gradient of the neural network. The
%   parameters for the neural network are "unrolled" into the vector
%   nn_params and need to be converted back into the weight matrices. 
% 
%   The returned parameter grad should be an "unrolled" vector of the
%   partial derivatives of the neural network.
%

% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

% Setup some useful variables
m = size(X, 1);
         
% You need to return the following variables correctly 
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
%               following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
%         variable J. After implementing Part 1, you can verify that your
%         cost function computation is correct by verifying the cost
%         computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
%         Theta1_grad and Theta2_grad. You should return the partial derivatives of
%         the cost function with respect to Theta1 and Theta2 in Theta1_grad and
%         Theta2_grad, respectively. After implementing Part 2, you can check
%         that your implementation is correct by running checkNNGradients
%
%         Note: The vector y passed into the function is a vector of labels
%               containing values from 1..K. You need to map this vector into a 
%               binary vector of 1's and 0's to be used with the neural network
%               cost function.
%
%         Hint: We recommend implementing backpropagation using a for-loop
%               over the training examples if you are implementing it for the 
%               first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
%         Hint: You can implement this around the code for
%               backpropagation. That is, you can compute the gradients for
%               the regularization separately and then add them to Theta1_grad
%               and Theta2_grad from Part 2.
%
% ---------- Part 1: feedforward pass to compute the cost ----------
J_tmp=zeros(m,1);
for i=1:m
    % Recode the label y(i) as a one-hot vector of length num_labels
    y_vec=zeros(num_labels,1);
    y_vec(y(i))=1;
    % Forward propagation: input layer -> hidden layer -> output layer
    a1 = [1; X(i,:)'];      % add the bias unit
    z2=Theta1*a1;
    a2=sigmoid(z2);
    a2=[1; a2];             % add the bias unit
    z3=Theta2*a2;
    a3=sigmoid(z3);
    hThetaX=a3;
    % Unregularized cost contribution of example i
    J_tmp(i)=sum(-y_vec.*log(hThetaX)-(1-y_vec).*log(1-hThetaX));
end
J=1/m*sum(J_tmp);
% Add the regularization term; the bias columns are excluded
J=J+lambda/(2*m)*(sum(sum(Theta1(:,2:end).^2))+sum(sum(Theta2(:,2:end).^2)));

% ---------- Part 2: backpropagation to compute the gradients ----------
Delta1 = zeros( hidden_layer_size, (input_layer_size + 1));
Delta2 = zeros( num_labels, (hidden_layer_size + 1));
for t=1:m
    % One-hot encoding of the label, as above
    y_vec=zeros(num_labels,1);
    y_vec(y(t))=1;
    % Forward pass for example t
    a1 = [1; X(t,:)'];
    z2=Theta1*a1;
    a2=sigmoid(z2);
    a2=[1; a2];
    z3=Theta2*a2;
    a3=sigmoid(z3);
    % Error at the output layer
    delta_3=a3-y_vec;
    % Error at the hidden layer; the leading 0 is a placeholder for
    % the bias unit, which is dropped again right afterwards
    gz2=[0;sigmoidGradient(z2)];
    delta_2=Theta2'*delta_3.*gz2;
    delta_2=delta_2(2:end);
    % Accumulate the gradient contributions
    Delta2=Delta2+delta_3*a2';
    Delta1=Delta1+delta_2*a1';
end
Theta1_grad=1/m*Delta1;
Theta2_grad=1/m*Delta2;

% ---------- Part 3: regularize the gradients ----------
% Zero out the bias columns first so that they are not regularized
Theta1(:,1)=0;
Theta1_grad=Theta1_grad+lambda/m*Theta1;
Theta2(:,1)=0;
Theta2_grad=Theta2_grad+lambda/m*Theta2;
% -------------------------------------------------------------

% =========================================================================

% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];


end
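The code above calls two helper functions, sigmoid and sigmoidGradient, which are not listed in this post. For completeness, here is a minimal sketch of them, matching the standard definitions used in the course exercises:

function g = sigmoid(z)
% Sigmoid function, applied element-wise.
g = 1.0 ./ (1.0 + exp(-z));
end

function g = sigmoidGradient(z)
% Derivative of the sigmoid: g'(z) = g(z).*(1 - g(z)),
% evaluated element-wise.
g = sigmoid(z) .* (1 - sigmoid(z));
end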

Finally, to sum up: for a typical neural network, the training process is as follows:

1. Randomly initialize the weights theta;
2. Implement forward propagation to compute hTheta(x) for every example;
3. Implement code to compute the cost function J(theta);
4. Implement backpropagation to compute the partial derivatives of J(theta) with respect to theta;
5. Use gradient checking to verify the backpropagation results, then disable the check;
6. Use gradient descent or an advanced optimization method (e.g., fminunc) together with backpropagation to minimize J(theta).

Following these steps, we can obtain the parameters theta of the neural network. A sketch of the whole pipeline is given below.
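Putting the pieces together, a minimal training script might look like the following sketch. The layer sizes, lambda, and the data X and y are placeholder assumptions here (in the course exercises the optimizer fmincg is used instead of fminunc; both accept the same kind of cost-function handle):

% Hypothetical sizes and settings; replace with your own.
input_layer_size  = 400;   % e.g. 20*20 pixel images
hidden_layer_size = 25;
num_labels        = 10;
lambda            = 1;
% X (m x 400) and y (m x 1, labels 1..10) are assumed to be loaded.

% Step 1: random initialization, then unroll into a single vector
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];

% Steps 2-4 live inside nnCostFunction; steps 5-6: optimize
options = optimset('GradObj', 'on', 'MaxIter', 50);
costFunction = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                                   num_labels, X, y, lambda);
[nn_params, cost] = fminunc(costFunction, initial_nn_params, options);

% Reshape the optimal parameters back into weight matrices
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));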

Please credit the source when reposting: http://blog.csdn.net/u010278305

 

