Andrew Ng Machine Learning, Week 5: Neural Networks (Cost Function and Backpropagation)


5.1 Cost Function

Suppose the training set is: {(x^(1), y^(1)), (x^(2), y^(2)), ..., (x^(m), y^(m))}

L = total no. of layers in the network

s_l = no. of units (not counting the bias unit) in layer l

K = number of output units/classes

For the network shown in the figure, L = 4, s_1 = 3, s_2 = 5, s_3 = 5, s_4 = 4.

Cost function for regularized logistic regression:

J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2

Cost function for the neural network (summing over all K output units, and regularizing every non-bias weight in every layer):

J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[y_k^{(i)}\log\left(h_\Theta(x^{(i)})\right)_k + (1-y_k^{(i)})\log\left(1-\left(h_\Theta(x^{(i)})\right)_k\right)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2

 
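As a concrete check of the formula above, here is a minimal NumPy sketch that computes the regularized cost for a network with one hidden layer. The function and variable names (`nn_cost`, `Theta1`, `Theta2`) are my own illustration, not from the course materials:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost(Theta1, Theta2, X, Y, lam):
    """Regularized cost J(Theta) for a 3-layer sigmoid network.

    X: (m, n) inputs; Y: (m, K) one-hot labels.
    Theta1, Theta2: weight matrices whose first column multiplies the bias unit.
    """
    m = X.shape[0]
    # Forward propagation, prepending a bias unit to each layer's activations.
    a1 = np.hstack([np.ones((m, 1)), X])
    a2 = np.hstack([np.ones((m, 1)), sigmoid(a1 @ Theta1.T)])
    h = sigmoid(a2 @ Theta2.T)                 # (m, K) outputs h_Theta(x)
    # Cross-entropy term: sum over the m examples and the K output units.
    J = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m
    # Regularization term: skip the bias column of each Theta.
    reg = lam / (2 * m) * (np.sum(Theta1[:, 1:] ** 2) + np.sum(Theta2[:, 1:] ** 2))
    return J + reg
```

With all weights zero every output is sigmoid(0) = 0.5, so each of the K output units contributes log 2 per example and the unregularized cost is K·log 2.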

5.2 Backpropagation

An accessible explanation of the backpropagation algorithm: http://blog.csdn.net/shijing_0214/article/details/51923547
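To make the idea concrete, the sketch below computes the backpropagated errors δ and the resulting gradients for a single training example in a 3-layer sigmoid network. It is my own minimal version; the names (`backprop_single`, `Theta1`, `Theta2`) are illustrative, not the course's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_single(Theta1, Theta2, x, y):
    """Gradients of the unregularized cost for one example (x, y)."""
    # Forward pass, keeping the intermediate values needed later.
    a1 = np.concatenate([[1.0], x])            # input with bias unit
    z2 = Theta1 @ a1
    a2 = np.concatenate([[1.0], sigmoid(z2)])  # hidden layer with bias unit
    z3 = Theta2 @ a2
    a3 = sigmoid(z3)                           # network output h_Theta(x)
    # Output-layer error: delta3 = a3 - y (sigmoid + cross-entropy cost).
    delta3 = a3 - y
    # Hidden-layer error, dropping Theta2's bias column and applying g'(z2).
    delta2 = (Theta2[:, 1:].T @ delta3) * sigmoid(z2) * (1 - sigmoid(z2))
    # Gradients are outer products of each layer's error and input activations.
    grad1 = np.outer(delta2, a1)
    grad2 = np.outer(delta3, a2)
    return grad1, grad2
```

The simple form δ^(3) = a^(3) − y holds because the output layer pairs the sigmoid with the cross-entropy cost; gradient checking, comparing each gradient entry against (J(Θ+ε) − J(Θ−ε)) / 2ε, is the standard way to verify such code.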

 

 

 5.3 Training a neural network

 

The hidden layers usually all have the same number of units. Using more hidden layers generally gives better results, but increases the computational cost.

Training a Neural Network

  1. Randomly initialize the weights
  2. Implement forward propagation to get h_Θ(x^(i)) for any x^(i)
  3. Implement the cost function
  4. Implement backpropagation to compute partial derivatives
  5. Use gradient checking to confirm that your backpropagation works. Then disable gradient checking.
  6. Use gradient descent or a built-in optimization function to minimize the cost function with the weights in theta.
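The six steps above can be sketched end to end. The version below is my own minimal illustration, assuming sigmoid units and plain batch gradient descent in place of a built-in optimizer; it covers steps 1, 2, 4, and 6, and omits gradient checking (step 5) for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, n_hidden=5, lam=0.0, lr=0.5, iters=5000, seed=0):
    """Train a 3-layer sigmoid network by batch gradient descent."""
    m, n = X.shape
    K = Y.shape[1]
    rng = np.random.default_rng(seed)
    # Step 1: random initialization breaks symmetry between hidden units.
    eps = 0.12
    Theta1 = rng.uniform(-eps, eps, (n_hidden, n + 1))
    Theta2 = rng.uniform(-eps, eps, (K, n_hidden + 1))
    ones = np.ones((m, 1))
    for _ in range(iters):
        # Step 2: forward propagation over all m examples at once.
        a1 = np.hstack([ones, X])
        a2 = np.hstack([ones, sigmoid(a1 @ Theta1.T)])
        h = sigmoid(a2 @ Theta2.T)
        # Step 4: backpropagation, vectorized over the m examples.
        d3 = h - Y
        d2 = (d3 @ Theta2[:, 1:]) * a2[:, 1:] * (1 - a2[:, 1:])
        G1 = d2.T @ a1 / m
        G2 = d3.T @ a2 / m
        # Regularization gradient skips the bias columns.
        G1[:, 1:] += lam / m * Theta1[:, 1:]
        G2[:, 1:] += lam / m * Theta2[:, 1:]
        # Step 6: gradient-descent update of the weights.
        Theta1 -= lr * G1
        Theta2 -= lr * G2
    return Theta1, Theta2

def predict(Theta1, Theta2, X):
    """Predicted class = index of the largest output unit."""
    m = X.shape[0]
    a2 = sigmoid(np.hstack([np.ones((m, 1)), X]) @ Theta1.T)
    h = sigmoid(np.hstack([np.ones((m, 1)), a2]) @ Theta2.T)
    return np.argmax(h, axis=1)
```

On a simple toy problem, such as the AND function with one-hot labels, a few thousand iterations suffice for `predict` to recover the correct classes.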

 

