Neural Network Study Notes: Loss Function Definition and Derivative Proof



Loss Function (Cross-Entropy Loss)

The loss function, backpropagation, and gradient computation together make up the training process of a recurrent neural network.

The softmax activation function and the loss function are used together.
Softmax takes an input vector (one raw score per class) and converts it into a probability in (0, 1) for each class.
The loss function then compares the softmax output \(\hat{y}\) against the expected result \(y\) and computes the loss \(L\) using cross-entropy.
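
As a concrete illustration, here is a minimal NumPy sketch of these two steps. The helper names (`softmax`, `cross_entropy`) and the sample scores are assumptions for the example, not from the original notes.

```python
import numpy as np

def softmax(z):
    # Shift by max(z) for numerical stability; outputs are in (0, 1) and sum to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(y, y_hat):
    # L = -sum_k y_k * log(y_hat_k), where y is the one-hot expected result.
    return -np.sum(y * np.log(y_hat))

z = np.array([2.0, 1.0, 0.1])   # raw class scores (illustrative values)
y = np.array([1.0, 0.0, 0.0])   # one-hot expected result
y_hat = softmax(z)
print(y_hat)                    # ~ [0.659, 0.242, 0.099]
print(cross_entropy(y, y_hat))  # ~ 0.417
```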

The cross-entropy loss function

\[L_t(y_t, \hat{y_t}) = - y_t \log \hat{y_t} \\ L(y, \hat{y}) = - \sum_{t} y_t \log \hat{y_t} \\ \frac{ \partial L_t } { \partial z_t } = \hat{y_t} - y_t \\ \text{where} \\ z_t = s_t V \quad (s_t \text{: hidden state at time } t \text{, } V \text{: output weight matrix}) \\ \hat{y_t} = \operatorname{softmax}(z_t) \\ y_t \text{ : the expected output at time } t \text{ for training input } x \text{, taken from the training data} \]
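
Because the gradient formula above collapses softmax plus cross-entropy into a single subtraction, the backward pass through the output layer is very cheap. A minimal sketch, assuming illustrative values for \(z_t\) and a one-hot \(y_t\) (the helper names are mine, not from the notes):

```python
import numpy as np

def softmax(z):
    # Repeated from the sketch above so this block is self-contained.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def grad_wrt_logits(z, y):
    # dL/dz = softmax(z) - y: one forward pass plus one subtraction.
    return softmax(z) - y

z = np.array([2.0, 1.0, 0.1])
y = np.array([1.0, 0.0, 0.0])
print(grad_wrt_logits(z, y))    # ~ [-0.341, 0.242, 0.099]
```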

Proof

\[\begin{align} \frac{ \partial L_t } { \partial z_t } & = \frac{ \partial \left ( - \sum_{k} y_k \log \hat{y_k} \right ) } { \partial z_t } \\ & = - \sum_{k} y_k \frac{ \partial \log \hat{y_k} } { \partial z_t } \\ & = - \sum_{k} y_k \frac {1} {\hat{y_k}} \cdot \frac{ \partial \hat{y_k} } { \partial z_t } \\ & = - \left ( y_t \frac {1} {\hat{y_t}} \cdot \frac{ \partial \hat{y_t} } { \partial z_t } \right ) - \left ( \sum_{k \ne t} y_k \frac {1} {\hat{y_k}} \cdot \frac{ \partial \hat{y_k} } { \partial z_t } \right ) \\ & \because \text{the softmax derivative: } \frac{ \partial \hat{y_k} } { \partial z_t } = \begin{cases} ( 1 - \hat{y_t} ) \hat{y_t} & k = t \\ -\hat{y_t} \hat{y_k} & k \ne t \end{cases} \\ & = - \left ( y_t \frac {1} {\hat{y_t}} \cdot ( 1 - \hat{y_t} ) \hat{y_t} \right ) - \left ( \sum_{k \ne t} y_k \frac {1} {\hat{y_k}} \cdot (-\hat{y_t} \hat{y_k}) \right ) \\ & = - \left ( y_t \cdot ( 1 - \hat{y_t} ) \right ) - \left ( \sum_{k \ne t} y_k \cdot (-\hat{y_t}) \right ) \\ & = - y_t + y_t \hat{y_t} + \left ( \sum_{k \ne t} y_k \hat{y_t} \right ) \\ & = - y_t + \hat{y_t} \left ( \sum_{k} y_k \right ) \\ & \because \sum_{k} y_k = 1 \text{ (one-hot encoding)} \\ & = \hat{y_t} - y_t \end{align} \]
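
The proven identity \(\frac{ \partial L_t } { \partial z_t } = \hat{y_t} - y_t\) can also be sanity-checked numerically by comparing it against a central finite-difference gradient. A self-contained sketch (all values illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def loss(z, y):
    # Cross-entropy of softmax(z) against the one-hot target y.
    return -np.sum(y * np.log(softmax(z)))

z = np.array([2.0, 1.0, 0.1])
y = np.array([1.0, 0.0, 0.0])

analytic = softmax(z) - y                  # the formula just proven

eps = 1e-6
numeric = np.zeros_like(z)
for i in range(len(z)):
    d = np.zeros_like(z)
    d[i] = eps
    # Central-difference approximation of dL/dz_i.
    numeric[i] = (loss(z + d, y) - loss(z - d, y)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-6))   # True
```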
