1. Introduction to the Backpropagation Algorithm
The Error Back Propagation algorithm, BP for short, consists of a forward pass that propagates signals and a backward pass that propagates errors. Its key idea is to compute each layer's error from the error of the layer after it, which greatly reduces the amount of computation.
Suppose there are \(N\) training examples \(\{(\bm x^{(1)}, \bm y^{(1)}), \cdots, (\bm x^{(N)}, \bm y^{(N)})\}\), and the output is \(n_L\)-dimensional, i.e. \(\bm y^{(i)} = (y_1^{(i)},\cdots,y_{n_L}^{(i)})\).
2. Forward Propagation of Information
Take layer 2 as an example:
\[z_1^{(2)} = w_{11}^{(2)} x_1 + w_{12}^{(2)} x_2 + w_{13}^{(2)} x_3 + b_1^{(2)} \\ z_2^{(2)} = w_{21}^{(2)} x_1 + w_{22}^{(2)} x_2 + w_{23}^{(2)} x_3 + b_2^{(2)} \\ z_3^{(2)} = w_{31}^{(2)} x_1 + w_{32}^{(2)} x_2 + w_{33}^{(2)} x_3 + b_3^{(2)} \\ a_1^{(2)} = f(z_1^{(2)}) \\ a_2^{(2)} = f(z_2^{(2)}) \\ a_3^{(2)} = f(z_3^{(2)}) \\ \]
In vectorized form, the equations above become
\[\bm z^{(2)} = \bm W^{(2)} \cdot \bm a^{(1)} + \bm b^{(2)} \\ \bm a^{(2)} = f(\bm z^{(2)}) \]
Similarly, for a general layer:
\[\bm z^{(l)} = \bm W^{(l)} \cdot \bm a^{(l-1)} + \bm b^{(l)} \quad (2 \leq l \leq L) \\ \bm a^{(l)} = f(\bm z^{(l)}) \]
For an \(L\)-layer neural network, the final output is \(\bm a^{(L)}\).
From the input layer to the output layer, information flows forward as
\[\bm {x = a^{(1)} \to z^{(2)} \to \cdots \to a^{(L-1)} \to z^{(L)} \to a^{(L)} = y} \]
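To make this flow concrete, here is a minimal NumPy sketch of the forward pass for a network like the one above; the layer sizes and the sigmoid activation are assumptions chosen for illustration, not part of the derivation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy sizes matching the example: 3 inputs, 3 hidden units, 2 outputs (L = 3).
rng = np.random.default_rng(0)
W = {2: rng.standard_normal((3, 3)), 3: rng.standard_normal((2, 3))}
b = {2: rng.standard_normal((3, 1)), 3: rng.standard_normal((2, 1))}

def forward(x):
    """a(1) = x; z(l) = W(l) @ a(l-1) + b(l); a(l) = f(z(l))."""
    a = {1: x.reshape(-1, 1)}
    z = {}
    for l in (2, 3):
        z[l] = W[l] @ a[l - 1] + b[l]
        a[l] = sigmoid(z[l])
    return z, a

z, a = forward(np.array([0.5, -0.2, 0.1]))
print(a[3])  # the network output a(L)
```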
3. Error Backpropagation
For a single training example \((\bm x^{(i)}, \bm y^{(i)})\), writing \(\bm a^{(i)}\) for the network output \(\bm a^{(L)}\) computed from input \(\bm x^{(i)}\), the cost function is
\[E^{(i)} = \frac 1 2 ||\bm{y^{(i)}-a^{(i)}}||^2 = \frac 1 2 \sum _{k=1}^{n_L} (y_k^{(i)}-a_k^{(i)})^2 \]
To keep the notation light (typing all those superscripts is hard work too), the superscript \(^{(i)}\) is dropped from here on and the cost is written simply as \(E\).
The total loss function is
\[E_{total} = \frac 1 N \sum _{i=1}^N E^{(i)} \]
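Both definitions translate directly into code; a small sketch follows, where the values in `Y_pred` are made-up stand-ins for the network outputs \(\bm a\):

```python
import numpy as np

def cost(y, a):
    # E(i) = 1/2 * ||y - a||^2 for a single example
    return 0.5 * np.sum((y - a) ** 2)

Y = np.array([[1.0, 0.0], [0.0, 1.0]])       # N = 2 target vectors
Y_pred = np.array([[0.8, 0.1], [0.3, 0.7]])  # corresponding network outputs
E_total = np.mean([cost(y, a) for y, a in zip(Y, Y_pred)])
print(E_total)  # average of the per-example costs
```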
Gradient descent is used to update the parameters \(w_{ij}^{(l)}, b_i^{(l)}\ (2 \leq l \leq L)\). The update rules are:
\[\bm W^{(l)} = \bm W^{(l)} - \mu \frac {\partial E_{total}}{\partial \bm W^{(l)}} = \bm W^{(l)} - \frac \mu N \sum _{i=1}^N \frac{\partial E^{(i)}}{\partial \bm W^{(l)}} \\ \bm b^{(l)} = \bm b^{(l)} - \mu \frac {\partial E_{total}}{\partial \bm b^{(l)}} = \bm b^{(l)} - \frac \mu N \sum _{i=1}^N \frac{\partial E^{(i)}}{\partial \bm b^{(l)}} \]
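A sketch of one such update step, assuming the per-example gradients are already available in lists `grads_W` and `grads_b` (hypothetical names; the real gradients come from the backpropagation derived below, and random placeholders stand in for them here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy parameter shapes as in the earlier example: W(2) is 3x3, W(3) is 2x3.
W = {2: rng.standard_normal((3, 3)), 3: rng.standard_normal((2, 3))}
b = {2: rng.standard_normal((3, 1)), 3: rng.standard_normal((2, 1))}
mu = 0.1  # learning rate

# Placeholder per-example gradients dE(i)/dW(l) and dE(i)/db(l) for N = 2 examples.
grads_W = [{l: rng.standard_normal(W[l].shape) for l in W} for _ in range(2)]
grads_b = [{l: rng.standard_normal(b[l].shape) for l in b} for _ in range(2)]

# W(l) <- W(l) - (mu / N) * sum_i dE(i)/dW(l), and likewise for b(l).
for l in W:
    W[l] -= mu * np.mean([g[l] for g in grads_W], axis=0)
    b[l] -= mu * np.mean([g[l] for g in grads_b], axis=0)
```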
3.1 Updating the Output-Layer Weight Parameters
Using a three-layer network (\(L = 3\)) with two output units as the running example, expand \(E\) in terms of the hidden layer:
\[E = \frac 1 2 ||\bm y - \bm a^{(3)}||^2 = \frac 1 2 [(y_1-a_1^{(3)})^2+(y_2-a_2^{(3)})^2] = \frac 1 2 [(y_1-f(z_1^{(3)}))^2+(y_2-f(z_2^{(3)}))^2] \\ = \frac 1 2 [(y_1-f(w_{11}^{(3)} a_1^{(2)} + w_{12}^{(3)} a_2^{(2)} + w_{13}^{(3)} a_3^{(2)} + b_1^{(3)}))^2+(y_2-f(w_{21}^{(3)} a_1^{(2)} + w_{22}^{(3)} a_2^{(2)} + w_{23}^{(3)} a_3^{(2)} + b_2^{(3)}))^2] \]
By the chain rule, taking the partial derivative with respect to a hidden-to-output weight gives:
\[\frac {\partial E}{\partial w_{11}^{(3)}} = \frac 1 2 \cdot 2 \cdot (y_1-a_1^{(3)})(-\frac{\partial a_1^{(3)}}{\partial w_{11}^{(3)}})=-(y_1-a_1^{(3)})f'(z_1^{(3)})a_1^{(2)} \]
Denote \(\frac{\partial E}{\partial z_i^{(l)}}\) by \(\delta _i^{(l)}\), i.e. \(\delta _i^{(l)} = \frac{\partial E}{\partial z_i^{(l)}}\). This is called the error term (or sensitivity); it measures how much that unit contributes to the final total error.
Then \(\frac {\partial E}{\partial w_{11}^{(3)}}\) can be written as:
\[\frac {\partial E}{\partial w_{11}^{(3)}} = \frac {\partial E}{\partial z_1^{(3)}} \cdot \frac {\partial z_1^{(3)}}{\partial w_{11}^{(3)}} = \delta _1^{(3)} a_1^{(2)} \]
Similarly,
\[\frac {\partial E}{\partial w_{12}^{(3)}} = \delta _1^{(3)} a_2^{(2)}, \quad \frac {\partial E}{\partial w_{13}^{(3)}} = \delta _1^{(3)} a_3^{(2)}, \quad \frac {\partial E}{\partial w_{21}^{(3)}} = \delta _2^{(3)} a_1^{(2)}, \quad \frac {\partial E}{\partial w_{22}^{(3)}} = \delta _2^{(3)} a_2^{(2)}, \quad \frac {\partial E}{\partial w_{23}^{(3)}} = \delta _2^{(3)} a_3^{(2)} \]
A key reason for introducing \(\delta _i^{(l)}\) is that \(\bm \delta ^{(l)}\) can be computed from \(\bm \delta ^{(l+1)}\), so previously computed results are fully reused and the whole computation is sped up. This is the core idea of the backpropagation algorithm.
Generalizing:
\[\delta _i^{(L)} = -(y_i-a_i^{(L)})f'(z_i^{(L)})\ (1 \leq i \leq n_L) \\ \frac {\partial E}{\partial w_{ij}^{(L)}} = \delta _i^{(L)} \cdot \ a_j^{(L-1)} \ (1 \leq i \leq n_L, 1 \leq j \leq n_{L-1}) \]
In vector form:
\[\bm \delta ^{(L)} = -(\bm y-\bm a^{(L)}) \odot f'(\bm z^{(L)}) \\ \triangledown _{\bm W^{(L)}} E = \bm \delta ^{(L)} \cdot (\bm a^{(L-1)})^T \]
Here \(\odot\) denotes the Hadamard product (element-wise product), which multiplies the corresponding entries of two matrices. \(\triangledown _{\bm W^{(L)}} E\) is a matrix whose entry in row \(i\), column \(j\) is the partial derivative of \(E\) with respect to the entry \(w_{ij}^{(L)}\) of \(\bm W^{(L)}\).
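As a sketch of these two formulas (sigmoid assumed for \(f\), with made-up values for a 2-output layer fed by 3 units):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z_L = np.array([[0.4], [-0.3]])           # z(L), 2 output units
a_L = sigmoid(z_L)                        # a(L)
a_prev = np.array([[0.2], [0.7], [0.1]])  # a(L-1), 3 units
y = np.array([[1.0], [0.0]])              # target

delta_L = -(y - a_L) * (a_L * (1.0 - a_L))  # delta(L); '*' is the Hadamard product
grad_W_L = delta_L @ a_prev.T               # dE/dW(L), an outer product of shape (2, 3)
print(grad_W_L)
```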
First compute the error of the last layer, then propagate it backward layer by layer to obtain the error terms of the earlier layers.
3.2 Updating the Weight Parameters of the Output and Hidden Layers
\[\frac {\partial E}{\partial w_{ij}^{(l)}} = \delta _i^{(l)} \cdot \ a_j^{(l-1)} \]
where the relation between \(\delta _i^{(l)}\) and \(\bm \delta ^{(l+1)}\) (note that it depends on all of the next layer's error terms, hence the vector) is derived as follows:
\[\delta _i^{(l)} = \frac {\partial E}{\partial z_i^{(l)}} = \sum _{j=1}^{n_{l+1}}\frac {\partial E}{\partial z_j^{(l+1)}} \frac{\partial z_j^{(l+1)}}{\partial z_i^{(l)}} = \sum _{j=1}^{n_{l+1}} \delta _j^{(l+1)}\frac{\partial z_j^{(l+1)}}{\partial z_i^{(l)}} \\ z_j^{(l+1)} = \sum _{k=1}^{n_{l}} w_{jk}^{(l+1)} a_k^{(l)} + b_j^{(l+1)} = \sum _{k=1}^{n_{l}} w_{jk}^{(l+1)} f(z_k^{(l)}) + b_j^{(l+1)}\\ \therefore \frac {\partial z_j^{(l+1)}}{\partial z_i^{(l)}} = \frac{\partial z_j^{(l+1)}}{\partial a_i^{(l)}} \frac{\partial a_i^{(l)}}{\partial z_i^{(l)}} = w_{ji}^{(l+1)} f'(z_i^{(l)}) \]
Substituting back:
\[\delta _i^{(l)} = \sum _{j=1}^{n_{l+1}} \delta _j^{(l+1)} w_{ji}^{(l+1)} f'(z_i^{(l)}) = (\sum _{j=1}^{n_{l+1}} \delta _j^{(l+1)} w_{ji}^{(l+1)}) \cdot f'(z_i^{(l)}) \]
In matrix (vector) form:
\[\bm \delta^{(l)} = ((\bm W^{(l+1)})^T \bm \delta^{(l+1)}) \odot \bm f'(\bm z^{(l)}) \]
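In code this recursion is a single line; a sketch with made-up shapes (layer \(l\) has 3 units, layer \(l+1\) has 2, sigmoid assumed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_next = np.random.default_rng(2).standard_normal((2, 3))  # W(l+1)
delta_next = np.array([[0.1], [-0.2]])                     # delta(l+1)
z_l = np.array([[0.5], [0.0], [-0.5]])                     # z(l)

a_l = sigmoid(z_l)
# delta(l) = (W(l+1)^T delta(l+1)) ⊙ f'(z(l)), with f'(z) = a(1 - a) for the sigmoid
delta_l = (W_next.T @ delta_next) * (a_l * (1.0 - a_l))
print(delta_l)  # shape (3, 1)
```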
When the activation \(f\) is the sigmoid function \(f(x) = \frac{1}{1+e^{-x}}\), it has the important property that
\[f'(x) = f(x)(1-f(x)) \]
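This means \(f'(\bm z^{(l)})\) can be obtained directly from the activations \(\bm a^{(l)}\) already stored during the forward pass, at the cost of one subtraction and one multiplication. A quick numerical sanity check of the identity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-3.0, 3.0, 7)
analytic = sigmoid(z) * (1.0 - sigmoid(z))                # f(z)(1 - f(z))
numeric = (sigmoid(z + 1e-5) - sigmoid(z - 1e-5)) / 2e-5  # central difference
print(np.allclose(analytic, numeric))  # True
```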
3.3 Updating the Bias Parameters of the Output and Hidden Layers
\[\frac{\partial E}{\partial b_i^{(l)}} = \frac{\partial E}{\partial z_i^{(l)}} \frac{\partial z_i^{(l)}}{\partial b_i^{(l)}} = \delta _i^{(l)} \]
In matrix form:
\[\triangledown _{\bm b^{(l)}} E = \bm \delta^{(l)} \]
3.4 The Four Core Equations
\[\begin{aligned} & \delta _i^{(L)} = -(y_i-a_i^{(L)})f'(z_i^{(L)}) \\ & \delta _i^{(l)} = (\sum _{j=1}^{n_{l+1}} \delta _j^{(l+1)}w_{ji}^{(l+1)})f'(z_i^{(l)}) \\ & \frac{\partial E}{\partial w_{ij}^{(l)}} = \delta _i^{(l)} a_j^{(l-1)} \\ & \frac{\partial E}{\partial b_i^{(l)}} = \delta _i^{(l)} \end{aligned} \]
In matrix form:
\[\begin{aligned} & \bm \delta^{(L)} = -(\bm y - \bm a^{(L)}) \odot \bm f'(\bm z^{(L)}) \\ & \bm \delta^{(l)} = ((\bm W^{(l+1)})^T \bm \delta^{(l+1)}) \odot \bm f'(\bm z^{(l)}) \\ & \triangledown_{\bm W^{(l)}} E = \frac{\partial E}{\partial \bm W^{(l)}} = \bm \delta^{(l)}(\bm a^{(l-1)})^T \\ & \triangledown_{\bm b^{(l)}} E = \frac{\partial E}{\partial \bm b^{(l)}} = \bm \delta^{(l)} \end{aligned} \]
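Putting the four equations together, here is a minimal end-to-end sketch for the three-layer running example (sigmoid activation, quadratic cost; the sizes and input values are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
sizes = {1: 3, 2: 3, 3: 2}  # layer widths; L = 3
W = {l: rng.standard_normal((sizes[l], sizes[l - 1])) for l in (2, 3)}
b = {l: rng.standard_normal((sizes[l], 1)) for l in (2, 3)}

def backprop(x, y):
    # Forward pass, caching a(l); f'(z(l)) is recovered as a(l)(1 - a(l)).
    a = {1: x.reshape(-1, 1)}
    for l in (2, 3):
        a[l] = sigmoid(W[l] @ a[l - 1] + b[l])
    # Equation 1: delta(L) = -(y - a(L)) ⊙ f'(z(L))
    delta = {3: -(y.reshape(-1, 1) - a[3]) * a[3] * (1.0 - a[3])}
    # Equation 2: delta(l) = (W(l+1)^T delta(l+1)) ⊙ f'(z(l))
    delta[2] = (W[3].T @ delta[3]) * a[2] * (1.0 - a[2])
    # Equations 3 and 4: the parameter gradients
    grad_W = {l: delta[l] @ a[l - 1].T for l in (2, 3)}
    grad_b = {l: delta[l] for l in (2, 3)}
    return grad_W, grad_b

# One gradient-descent step on a single example.
mu = 0.5
gW, gb = backprop(np.array([0.5, -0.2, 0.1]), np.array([1.0, 0.0]))
for l in (2, 3):
    W[l] -= mu * gW[l]
    b[l] -= mu * gb[l]
```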