The backpropagation algorithm rests on the multivariate chain rule. This post records a proof of the multivariate chain rule and a worked, step-by-step example of backpropagation.
Differentiation rule for multivariate composite functions (the multivariate chain rule)
Theorem
If the functions $u=\varphi(t)$ and $v=\psi(t)$ are both differentiable at the point $t$, and the function $z = f(u,v)$ has continuous partial derivatives at the corresponding point $(u,v)$ (this is the key condition), then the composite function $z = f[\varphi(t),\psi(t)]$ is differentiable at $t$, and:
$\displaystyle \frac{\mathrm{d}z}{\mathrm{d}t} = \frac{\partial z}{\partial u}\frac{\mathrm{d}u}{\mathrm{d}t}+ \frac{\partial z}{\partial v}\frac{\mathrm{d}v}{\mathrm{d}t} $
Proof
Suppose $t$ receives an increment $\Delta t$; the corresponding increments of $u = \varphi(t)$ and $v = \psi(t)$ are $\Delta u$ and $\Delta v$, and consequently $z=f(u,v)$ receives an increment $\Delta z$. Because $z = f(u,v)$ has continuous partial derivatives at $(u,v)$, the total increment $\Delta z$ can be written as (the sufficient condition for total differentiability):
$\displaystyle\Delta z = \frac{\partial z}{\partial u}\Delta u+ \frac{\partial z}{\partial v}\Delta v+\varepsilon_1\Delta u+\varepsilon_2\Delta v$
Here $\varepsilon_1\to0$ and $\varepsilon_2\to 0$ as $\Delta u\to 0$ and $\Delta v\to 0$.
Dividing both sides by $\Delta t$ gives:
$\displaystyle\frac{\Delta z}{\Delta t} = \frac{\partial z}{\partial u}\frac{\Delta u}{\Delta t}+ \frac{\partial z}{\partial v}\frac{\Delta v}{\Delta t}+\varepsilon_1\frac{\Delta u}{\Delta t}+\varepsilon_2\frac{\Delta v}{\Delta t}$
Now take the limit $\Delta t\to0$. Since $\varphi$ and $\psi$ are differentiable and hence continuous, $\Delta u\to0$ and $\Delta v\to0$, so $\varepsilon_1,\varepsilon_2\to0$ while $\frac{\Delta u}{\Delta t}$ and $\frac{\Delta v}{\Delta t}$ tend to the finite derivatives $\frac{\mathrm{d}u}{\mathrm{d}t}$ and $\frac{\mathrm{d}v}{\mathrm{d}t}$; the last two terms therefore vanish and we obtain:
$\displaystyle \frac{\mathrm{d}z}{\mathrm{d}t} = \frac{\partial z}{\partial u}\frac{\mathrm{d}u}{\mathrm{d}t}+ \frac{\partial z}{\partial v}\frac{\mathrm{d}v}{\mathrm{d}t} $
This completes the proof. (The crucial ingredient is the continuity of the partial derivatives: it guarantees the total differential exists, which is what lets the increment be split into the additive terms above.)
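As a quick numerical sanity check of the formula, the chain-rule value can be compared against a central finite difference of the composite function; the concrete functions below are arbitrary choices made only for illustration.

```python
import numpy as np

# Hypothetical example: z = f(u, v) = u*v + sin(v), with u = phi(t) = t^2 and v = psi(t) = e^t.
def phi(t): return t ** 2
def psi(t): return np.exp(t)
def f(u, v): return u * v + np.sin(v)

t = 0.7
u, v = phi(t), psi(t)

# Chain rule: dz/dt = (∂z/∂u)(du/dt) + (∂z/∂v)(dv/dt)
dz_dt_chain = v * (2 * t) + (u + np.cos(v)) * np.exp(t)

# Central finite difference of z(t) = f(phi(t), psi(t))
eps = 1e-6
dz_dt_numeric = (f(phi(t + eps), psi(t + eps)) - f(phi(t - eps), psi(t - eps))) / (2 * eps)

print(dz_dt_chain, dz_dt_numeric)  # the two values agree to within ~1e-9
```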
Most of the multivariate functions inside a neural network are affine (the activation functions are not, but they still meet the differentiability requirements), so the sufficient condition for total differentiability above is satisfied, and backpropagation can be used to compute the gradients of all parameters.
A worked example of the backpropagation algorithm
The backpropagation procedure and its derivation (click the link) are not difficult; they are essentially repeated applications of the multivariate chain rule above.
But once the theory is understood, how should the code be implemented? One puzzle is this: a single parameter $w$ in layer $n$ needs an accumulation over all the relevant derivatives of layer $n+1$, say $n$ terms; each of those layer-$(n+1)$ derivatives in turn needs an accumulation over all the relevant derivatives of layer $n+2$, say $m$ terms. So the derivative of this one layer-$n$ parameter already requires $n\times m$ additions across just two layers. Nesting further, the number of additions appears to grow exponentially, which is clearly unacceptable. The obvious remedy is to cache certain intermediate results so that this repeated work is avoided. The concrete example below makes this explicit.
Step-by-step derivation
Define a fully connected network whose layer sizes and activation functions are shown in the figure below:
As shown in the figure, the network uses MSE as its loss function and is trained with stochastic gradient descent, i.e. one sample is fed in at a time. In fact, with MSE the gradient used by batch gradient descent is simply the sum of the per-sample gradients in the batch, with no extra machinery beyond the stochastic case, so only stochastic gradient descent is worked out here.
Each layer's operations are labelled in the figure. Here $x\in R^4$ and $y\in R^3$ are the sample's feature vector and label vector; $h\in R^5$, $g\in R^6$, $f\in R^3$ are the output vectors of the input layer, the hidden layer, and the output layer; $w^k$ and $b^k$ are the weight matrix and bias vector of the affine transformation in layer $k$, and $i^k$ is the result of that affine transformation; $\delta(x)$ is the ReLU activation (applied elementwise to a vector) and $\sigma(x)$ is the Softmax activation (applied to the whole vector).
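To pin down the shapes before the derivation, here is a minimal NumPy sketch of the forward pass. The random initialisation, the ReLU on the first two layers, the softmax on the output layer, and the $\frac12$ factor in the MSE (which makes $\partial L/\partial f = f-y$ as used below) are assumptions made for illustration; the later snippets continue from this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the figure: x ∈ R^4, h ∈ R^5, g ∈ R^6, f ∈ R^3.
x = rng.normal(size=4)                         # feature vector
y = np.array([1.0, 0.0, 0.0])                  # label vector (arbitrary example)

w1, b1 = rng.normal(size=(5, 4)), np.zeros(5)  # input layer
w2, b2 = rng.normal(size=(6, 5)), np.zeros(6)  # hidden layer
w3, b3 = rng.normal(size=(3, 6)), np.zeros(3)  # output layer

def relu(v):            # δ, applied elementwise
    return np.maximum(v, 0.0)

def softmax(v):         # σ, applied to the whole vector
    e = np.exp(v - v.max())
    return e / e.sum()

# Forward pass: i^k is the affine result of layer k.
i1 = w1 @ x + b1;  h = relu(i1)
i2 = w2 @ h + b2;  g = relu(i2)
i3 = w3 @ g + b3;  f = softmax(i3)

L = 0.5 * np.sum((f - y) ** 2)                 # MSE loss with the 1/2 convention
```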
To make the links between steps easier to follow, the results of some steps are given capital-letter names below.
First compute the gradient of the loss $L$ with respect to $f$ and store it in $A$:
$ \displaystyle A =\frac{\partial L}{\partial f} = \left[ \begin{matrix} \frac{\partial L}{\partial f_1}\\ \frac{\partial L}{\partial f_2}\\ \frac{\partial L}{\partial f_3}\\ \end{matrix} \right] = \left[ \begin{matrix} f_1 - y_1\\ f_2 - y_2\\ f_3 - y_3\\ \end{matrix} \right] = f-y $
Next compute the derivative of $f$ with respect to $i^3$. Because the activation is Softmax, every $i^3_k$ participates in the computation of every $f_j$, so the result is a Jacobian matrix:
$\displaystyle \begin{aligned} B &= \frac{\partial f}{\partial i^3}= \left[ \begin{matrix} \frac{\partial f_1}{\partial i^3_1}&\frac{\partial f_1}{\partial i^3_2}&\frac{\partial f_1}{\partial i^3_3}\\ \frac{\partial f_2}{\partial i^3_1}&\frac{\partial f_2}{\partial i^3_2}&\frac{\partial f_2}{\partial i^3_3}\\ \frac{\partial f_3}{\partial i^3_1}&\frac{\partial f_3}{\partial i^3_2}&\frac{\partial f_3}{\partial i^3_3}\\ \end{matrix} \right]= \left[ \begin{matrix} \frac{\partial \sigma_1(i^3)}{\partial i^3_1}&\frac{\partial \sigma_1(i^3)}{\partial i^3_2}&\frac{\partial \sigma_1(i^3)}{\partial i^3_3}\\ \frac{\partial \sigma_2(i^3)}{\partial i^3_1}&\frac{\partial \sigma_2(i^3)}{\partial i^3_2}&\frac{\partial \sigma_2(i^3)}{\partial i^3_3}\\ \frac{\partial \sigma_3(i^3)}{\partial i^3_1}&\frac{\partial \sigma_3(i^3)}{\partial i^3_2}&\frac{\partial \sigma_3(i^3)}{\partial i^3_3}\\ \end{matrix} \right]\\& =\left[ \begin{matrix} f_1-f_1f_1&-f_1f_2&-f_1f_3\\ -f_1f_2&f_2-f_2f_2&-f_2f_3\\ -f_1f_3&-f_2f_3&f_3-f_3f_3\\ \end{matrix} \right] =\text{diag}(f)-ff^T \end{aligned} $
Then the gradient of $L$ with respect to $i^3$ is ($\cdot$ denotes matrix multiplication):
$\begin{gather} C =\frac{\partial L}{\partial i^3} = \left[ \begin{matrix} \frac{\partial L}{\partial i^3_1}\\ \frac{\partial L}{\partial i^3_2}\\ \frac{\partial L}{\partial i^3_3}\\ \end{matrix} \right] =\left[ \begin{matrix} \sum_{j=1}^3\frac{\partial L}{\partial f_j}\frac{\partial f_j}{\partial i^3_1}\\ \sum_{j=1}^3\frac{\partial L}{\partial f_j}\frac{\partial f_j}{\partial i^3_2}\\ \sum_{j=1}^3\frac{\partial L}{\partial f_j}\frac{\partial f_j}{\partial i^3_3}\\ \end{matrix} \right] =B^T\cdot A \tag{1}\end{gather}$
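Continuing the NumPy sketch above, $A$, $B$ and $C$ are one line each:

```python
A = f - y                          # ∂L/∂f
B = np.diag(f) - np.outer(f, f)    # ∂f/∂i^3, the softmax Jacobian
C = B.T @ A                        # ∂L/∂i^3, equation (1)
```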
The next step is the gradient of $L$ with respect to the parameters $w^3$, but $i^3$ sits between them, so first consider the derivative of $i^3$ with respect to $w^3$. $i^3$ is a vector and $w^3$ is a matrix, so differentiating every entry with respect to every entry would produce a three-dimensional tensor (I am not sure whether "Jacobian tensor" is standard terminology). Observe, however, that each $w^3_{jk}$ (row $j$, column $k$) participates only in the computation of $i^3_j$, so it suffices to take the derivative of the corresponding $i^3_j$ with respect to each $w^3_{jk}$, which gives a matrix (this step does not need to be computed in code; it is written out only for clarity):
$\begin{gather}\displaystyle \frac{\partial i^3}{\partial w^3} =\left[ \begin{matrix} \frac{\partial i_1^3}{\partial w_{11}^3}&\cdots&\frac{\partial i_1^3}{\partial w_{16}^3}\\ \frac{\partial i_2^3}{\partial w_{21}^3}&\cdots&\frac{\partial i_2^3}{\partial w_{26}^3}\\ \frac{\partial i_3^3}{\partial w_{31}^3}&\cdots&\frac{\partial i_3^3}{\partial w_{36}^3}\\ \end{matrix} \right] =\left[ \begin{matrix} g_1&\cdots &g_6\\ g_1&\cdots &g_6\\ g_1&\cdots &g_6\\ \end{matrix} \right] =\left[ \begin{matrix} g^T\\ g^T\\ g^T\\ \end{matrix} \right] \tag{2}\end{gather}$
The gradient of $L$ with respect to $w^3$ is then ($\times$ denotes elementwise multiplication):
$\begin{gather}\displaystyle \frac{\partial L}{\partial w^3} =\left[ \begin{matrix} \frac{\partial L}{\partial i_1^3}\frac{\partial i_1^3}{\partial w_{11}^3}&\cdots&\frac{\partial L}{\partial i_1^3}\frac{\partial i_1^3}{\partial w_{16}^3}\\ \frac{\partial L}{\partial i_2^3}\frac{\partial i_2^3}{\partial w_{21}^3}&\cdots&\frac{\partial L}{\partial i_2^3}\frac{\partial i_2^3}{\partial w_{26}^3}\\ \frac{\partial L}{\partial i_3^3}\frac{\partial i_3^3}{\partial w_{31}^3}&\cdots&\frac{\partial L}{\partial i_3^3}\frac{\partial i_3^3}{\partial w_{36}^3}\\ \end{matrix} \right] =\underbrace{\left[ \begin{matrix} C&C&\cdots&C \end{matrix} \right]}_{6\text{ columns}} \times \left[ \begin{matrix} g^T\\ g^T\\ g^T\\ \end{matrix} \right] =C\cdot g^T \tag{3}\end{gather}$
Next, the derivative of $L$ with respect to $b^3$. Since $b^3_k$ participates only in the computation of $i^3_k$, and that derivative is 1, this one is simple:
$\begin{gather} \displaystyle \frac{\partial L}{\partial b^3} =\left[ \begin{matrix} \frac{\partial L}{\partial i^3_1}\frac{\partial i^3_1}{\partial b^3_1}\\ \frac{\partial L}{\partial i^3_2}\frac{\partial i^3_2}{\partial b^3_2}\\ \frac{\partial L}{\partial i^3_3}\frac{\partial i^3_3}{\partial b^3_3}\\ \end{matrix} \right] =\frac{\partial L}{\partial i^3}=C \tag{4}\end{gather}$
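In the running sketch, equations (3) and (4) reduce to an outer product and a copy:

```python
grad_w3 = np.outer(C, g)   # ∂L/∂w^3 = C · g^T, equation (3); shape (3, 6)
grad_b3 = C.copy()         # ∂L/∂b^3 = C,       equation (4)
```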
Now the gradient has to be propagated back to the hidden layer. To compute the gradient of $L$ with respect to $g$, first compute the derivative of $i^3$ with respect to $g$ (again, this step is not computed in code; it is written out only for clarity):
$\begin{gather} \displaystyle \frac{\partial i^3}{\partial g} =\left[ \begin{matrix} \frac{\partial i_1^3}{\partial g_1}&\cdots&\frac{\partial i_1^3}{\partial g_6}\\ \frac{\partial i_2^3}{\partial g_1}&\cdots&\frac{\partial i_2^3}{\partial g_6}\\ \frac{\partial i_3^3}{\partial g_1}&\cdots&\frac{\partial i_3^3}{\partial g_6}\\ \end{matrix} \right] =w^3 \tag{5}\end{gather}$
Since every $g_j$ participates in the computation of every $i^3_k$, the derivative of $L$ with respect to $g_j$ follows from the multivariate chain rule:
$\displaystyle D =\frac{\partial L}{\partial g} =\left[ \begin{matrix} \frac{\partial L}{\partial g_1}\\ \vdots\\ \frac{\partial L}{\partial g_6}\\ \end{matrix} \right] =\left[ \begin{matrix} \frac{\partial L}{\partial i_1^3}\frac{\partial i_1^3}{\partial g_1} +\frac{\partial L}{\partial i_2^3}\frac{\partial i_2^3}{\partial g_1} +\frac{\partial L}{\partial i_3^3}\frac{\partial i_3^3}{\partial g_1}\\ \vdots\\ \frac{\partial L}{\partial i_1^3}\frac{\partial i_1^3}{\partial g_6} +\frac{\partial L}{\partial i_2^3}\frac{\partial i_2^3}{\partial g_6} +\frac{\partial L}{\partial i_3^3}\frac{\partial i_3^3}{\partial g_6}\\ \end{matrix} \right] ={w^3}^T\cdot C $
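In the running sketch this back-propagated gradient is a single matrix-vector product:

```python
D = w3.T @ C   # ∂L/∂g = (w^3)^T · C, shape (6,)
```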
With the gradient of $L$ with respect to $g$ in hand, the hidden-layer parameters can be handled. Again one step at a time: first the derivative of $g$ with respect to $i^2$. Because the hidden layer's activation is ReLU, an elementwise operation, each $g_k$ only needs to be differentiated with respect to its own $i^2_k$. The derivative itself is just a threshold test:
$\displaystyle E =\frac{\partial g}{\partial i^2} =\left[ \begin{matrix} \frac{\partial g_1}{\partial i_1^2}\\ \vdots\\ \frac{\partial g_6}{\partial i_6^2}\\ \end{matrix} \right] =\left[ \begin{matrix} \delta'(i_1^2)\\ \vdots\\ \delta'(i_6^2)\\ \end{matrix} \right] ,\;\;\; \delta'(x) = \left\{ \begin{matrix} 1,x\ge0\\ 0,x<0 \end{matrix} \right. $
Next the gradient of $L$ with respect to $i^2$. It differs from the gradient with respect to $i^3$ in equation $(1)$ because the two activation functions act differently:
$\displaystyle F =\frac{\partial L}{\partial i^2} =\left[ \begin{matrix} \frac{\partial L}{\partial i_1^2}\\ \vdots\\ \frac{\partial L}{\partial i_6^2}\\ \end{matrix} \right] =\left[ \begin{matrix} \frac{\partial L}{\partial g_1}\frac{\partial g_1}{\partial i_1^2}\\ \vdots\\ \frac{\partial L}{\partial g_6}\frac{\partial g_6}{\partial i_6^2}\\ \end{matrix} \right] = E\times D $
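In the running sketch, the ReLU derivative is a boolean mask and $F$ is an elementwise product:

```python
E = (i2 >= 0).astype(float)   # δ'(i^2), elementwise ReLU derivative
F = E * D                     # ∂L/∂i^2 = E × D (elementwise), shape (6,)
```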
For clarity, before computing the gradient of $L$ with respect to $w^2$, first write the derivative of $i^2$ with respect to $w^2$. As in equation $(2)$, the result is a 6-by-5 matrix:
$\displaystyle \frac{\partial i^2}{\partial w^2} =\left[ \begin{matrix} \frac{\partial i_1^2}{\partial w_{11}^2}&\cdots&\frac{\partial i_1^2}{\partial w_{15}^2}\\ \vdots&\vdots&\vdots\\ \frac{\partial i_6^2}{\partial w_{61}^2}&\cdots&\frac{\partial i_6^2}{\partial w_{65}^2}\\ \end{matrix} \right] =\left[ \begin{matrix} h_1&\cdots&h_5\\ &\vdots&\\ h_1&\cdots&h_5\\ \end{matrix} \right] =\left.\left[ \begin{matrix} h^T\\ \vdots\\ h^T\\ \end{matrix} \right]\right\}6\text{ rows} $
As in equation $(3)$, the gradient of $L$ with respect to $w^2$ is:
$\displaystyle \frac{\partial L}{\partial w^2} =\left[ \begin{matrix} \frac{\partial L}{\partial i_1^2}\frac{\partial i_1^2}{\partial w_{11}^2}&\cdots&\frac{\partial L}{\partial i_1^2}\frac{\partial i_1^2}{\partial w_{15}^2}\\ \vdots&\vdots&\vdots\\ \frac{\partial L}{\partial i_6^2}\frac{\partial i_6^2}{\partial w_{61}^2}&\cdots&\frac{\partial L}{\partial i_6^2}\frac{\partial i_6^2}{\partial w_{65}^2}\\ \end{matrix} \right] = \underbrace{\left[ \begin{matrix} F&\cdots&F \end{matrix} \right]}_{5\text{ columns}}\times \left.\left[ \begin{matrix} h^T\\ \vdots\\ h^T\\ \end{matrix} \right]\right\}6\text{ rows} =F\cdot h^T $
As in equation $(4)$, the gradient of $L$ with respect to $b^2$ is:
$\displaystyle \frac{\partial L}{\partial b^2}= \frac{\partial L}{\partial i^2}=F $
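As with the output layer, the hidden-layer parameter gradients in the running sketch are one outer product and a copy:

```python
grad_w2 = np.outer(F, h)   # ∂L/∂w^2 = F · h^T, shape (6, 5)
grad_b2 = F.copy()         # ∂L/∂b^2 = F
```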
At this point the parameter gradients of the output layer and the hidden layer are done. Only the input layer remains; its sole difference from the hidden layer is the number of units, and the propagation and gradient computations are identical, so it suffices to apply equation $(5)$ and the subsequent steps to the input layer.
Summary
The derivation above shows that in a fully connected network the gradient $\frac{\partial L}{\partial i^k}$ is the key quantity: it directly yields the gradients of the parameters $w^k$ and $b^k$, and it is what gets carried backward from layer to layer. Among the capital-letter variables above, only $\frac{\partial L}{\partial i^k}$ is reused in several computations. Therefore, in the backpropagation derived above, the only quantities that truly need to be cached across the iteration are $C$ and $F$; every other step can be folded into a single expression. A schematic of the iteration is shown below:
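The same iteration can also be written as a short loop. The sketch below (a continuation of the running example, not the only possible implementation) caches only the per-layer gradient $\partial L/\partial i^k$, the role played by $C$ and $F$ above, and folds everything else into one expression per layer; the ReLU mask on the first two layers is an assumption taken from the figure.

```python
params   = [(w1, b1), (w2, b2), (w3, b3)]
pre_acts = [i1, i2, i3]        # i^k from the forward pass
inputs   = [x, h, g]           # the input vector seen by each layer

grads = {}
# delta plays the role of ∂L/∂i^k; for the output layer it equals C.
delta = (np.diag(f) - np.outer(f, f)).T @ (f - y)
for k in (3, 2, 1):
    w, _ = params[k - 1]
    grads[f"w{k}"] = np.outer(delta, inputs[k - 1])   # ∂L/∂w^k = delta · (layer input)^T
    grads[f"b{k}"] = delta                            # ∂L/∂b^k = delta
    if k > 1:
        # Carry the gradient to the previous layer, then apply the ReLU mask.
        delta = (w.T @ delta) * (pre_acts[k - 2] >= 0)
```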