The mathematical definition of derivatives and partial derivatives
The definitions of derivative and partial derivative in references 1 and 2 are very clear: both are defined for a function with respect to its independent variables. Strictly by the mathematical definition, differentiation or partial differentiation only makes sense for a function with respect to an independent variable; any other usage is, formally speaking, incorrect. Yet many machine learning texts and open-source libraries speak of differentiating a scalar with respect to a vector, as in the following PyTorch example.
import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2 + 2
z = torch.sum(y)   # scalar: z = x1^2 + x2^2 + x3^2 + 6
z.backward()       # fills x.grad with dz/dx
print(x.grad)      # tensor([2., 4., 6.])
To explain briefly, let \(x=[x_1,x_2,x_3]\). Then
\[\begin{equation*} z=x_1^2+x_2^2+x_3^2+6 \end{equation*} \]
and therefore
\[\begin{equation*} \frac{\partial z}{\partial x_1}=2x_1 \end{equation*} \]
\[\begin{equation*} \frac{\partial z}{\partial x_2}=2x_2 \end{equation*} \]
\[\begin{equation*} \frac{\partial z}{\partial x_3}=2x_3 \end{equation*} \]
Substituting \(x_1=1.0\), \(x_2=2.0\), \(x_3=3.0\) gives
\[\begin{equation*} (\frac{\partial z}{\partial x_1},\frac{\partial z}{\partial x_2},\frac{\partial z}{\partial x_3})=(2x_1,2x_2,2x_3)=(2.0,4.0,6.0) \end{equation*} \]
This agrees with PyTorch's output. Viewed this way, the so-called "derivative of a scalar with respect to a vector" is really just the function's partial derivative with respect to each individual independent variable; the variables are merely collected into a vector. There is no contradiction with the mathematical definition.
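To make this correspondence concrete, here is a minimal sketch (my own check, not part of the original example) that compares autograd's result with the hand-derived gradient \(2x\):

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
z = torch.sum(x ** 2 + 2)        # z = x1^2 + x2^2 + x3^2 + 6
z.backward()

manual_grad = 2 * x.detach()     # hand-derived gradient: dz/dx_i = 2*x_i
print(x.grad)                    # tensor([2., 4., 6.])
print(torch.allclose(x.grad, manual_grad))   # True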
The role of the gradient argument of backward
Now consider the following problem. Given
\[\begin{equation*} y_1=x_1x_2x_3 \end{equation*} \]
\[\begin{equation*} y_2=x_1+x_2+x_3 \end{equation*} \]
\[\begin{equation*} y_3=x_1+x_2x_3 \end{equation*} \]
\[\begin{equation*} A=f(y_1,y_2,y_3) \end{equation*} \]
where the concrete form of the function \(f(y_1,y_2,y_3)\) is unknown, find
\[\begin{equation*} \frac{\partial A}{\partial x_1}=? \end{equation*} \]
\[\begin{equation*} \frac{\partial A}{\partial x_2}=? \end{equation*} \]
\[\begin{equation*} \frac{\partial A}{\partial x_3}=? \end{equation*} \]
By the chain rule for multivariable composite functions described in reference 2,
\[\begin{equation*} \frac{\partial A}{\partial x_1}=\frac{\partial A}{\partial y_1}\frac{\partial y_1}{\partial x_1}+\frac{\partial A}{\partial y_2}\frac{\partial y_2}{\partial x_1}+\frac{\partial A}{\partial y_3}\frac{\partial y_3}{\partial x_1} \end{equation*} \]
\[\begin{equation*} \frac{\partial A}{\partial x_2}=\frac{\partial A}{\partial y_1}\frac{\partial y_1}{\partial x_2}+\frac{\partial A}{\partial y_2}\frac{\partial y_2}{\partial x_2}+\frac{\partial A}{\partial y_3}\frac{\partial y_3}{\partial x_2} \end{equation*} \]
\[\begin{equation*} \frac{\partial A}{\partial x_3}=\frac{\partial A}{\partial y_1}\frac{\partial y_1}{\partial x_3}+\frac{\partial A}{\partial y_2}\frac{\partial y_2}{\partial x_3}+\frac{\partial A}{\partial y_3}\frac{\partial y_3}{\partial x_3} \end{equation*} \]
These three equations can be written as a single matrix product:
\[\begin{equation}\label{simple} [\frac{\partial A}{\partial x_1},\frac{\partial A}{\partial x_2},\frac{\partial A}{\partial x_3}]= [\frac{\partial A}{\partial y_1},\frac{\partial A}{\partial y_2},\frac{\partial A}{\partial y_3}] \left[ \begin{matrix} \frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} & \frac{\partial y_1}{\partial x_3} \\ \frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2} & \frac{\partial y_2}{\partial x_3} \\ \frac{\partial y_3}{\partial x_1} & \frac{\partial y_3}{\partial x_2} & \frac{\partial y_3}{\partial x_3} \end{matrix} \right] \end{equation} \]
where
\[\begin{equation*} \left[ \begin{matrix} \frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} & \frac{\partial y_1}{\partial x_3} \\ \frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2} & \frac{\partial y_2}{\partial x_3} \\ \frac{\partial y_3}{\partial x_1} & \frac{\partial y_3}{\partial x_2} & \frac{\partial y_3}{\partial x_3} \end{matrix} \right] \end{equation*} \]
is called the Jacobian matrix. The Jacobian can be computed directly from the given expressions. Therefore, as long as the values of \([\frac{\partial A}{\partial y_1},\frac{\partial A}{\partial y_2},\frac{\partial A}{\partial y_3}]\) are known, \([\frac{\partial A}{\partial x_1},\frac{\partial A}{\partial x_2},\frac{\partial A}{\partial x_3}]\) can be obtained even without knowing the concrete form of \(f(y_1,y_2,y_3)\). The remaining question is:
how do we obtain
\[\begin{equation*} [\frac{\partial A}{\partial y_1},\frac{\partial A}{\partial y_2},\frac{\partial A}{\partial y_3}] \end{equation*} \]
The answer: it is supplied through the gradient argument of PyTorch's backward function, and that is precisely what the gradient argument is for. What practical problems does it solve? Honestly, since I have only just started with PyTorch, I have not yet seen gradient used in real-world code. For now it can be understood through its mathematical meaning: we can ignore the concrete form of whatever functions lie between an intermediate result and the final output, supply a gradient at that intermediate point directly, and still obtain the partial derivatives with respect to each independent variable.
In code, the equations above can be expressed as follows:
# coding: utf-8
import torch

x1 = torch.tensor(1.0, requires_grad=True)
x2 = torch.tensor(2.0, requires_grad=True)
x3 = torch.tensor(3.0, requires_grad=True)

y = torch.zeros(3)
y[0] = x1 * x2 * x3   # y_1
y[1] = x1 + x2 + x3   # y_2
y[2] = x1 + x2 * x3   # y_3

# gradient supplies [dA/dy1, dA/dy2, dA/dy3]; f itself never has to be specified
y.backward(gradient=torch.tensor([0.1, 0.2, 0.3]))

print(x1.grad)   # 1.1
print(x2.grad)   # 1.4
print(x3.grad)   # 1.0
Following the derivation method used above,
\[\begin{equation*} \begin{split} [\frac{\partial A}{\partial x_1},\frac{\partial A}{\partial x_2},\frac{\partial A}{\partial x_3}] &=[\frac{\partial A}{\partial y_1},\frac{\partial A}{\partial y_2},\frac{\partial A}{\partial y_3}] \left[ \begin{matrix} x_2x_3 & x_1x_3 & x_1x_2 \\ 1 & 1 & 1 \\ 1 & x_3 & x_2 \end{matrix} \right] \\ &=[0.1,0.2,0.3] \left[ \begin{matrix} 6 & 3 & 2 \\ 1 & 1 & 1 \\ 1 & 3 & 2 \end{matrix} \right] \\ &=[1.1,1.4,1.0] \end{split} \end{equation*} \]
which matches the output of the code.
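As a cross-check, the same vector-Jacobian product can be computed explicitly. The sketch below is my own verification rather than part of the original code; it assumes a PyTorch version that provides torch.autograd.functional.jacobian, builds the Jacobian of y with respect to x, and left-multiplies it by the gradient vector:

import torch
from torch.autograd.functional import jacobian

def f(x):
    # y1 = x1*x2*x3, y2 = x1+x2+x3, y3 = x1 + x2*x3
    return torch.stack((x[0] * x[1] * x[2],
                        x[0] + x[1] + x[2],
                        x[0] + x[1] * x[2]))

x = torch.tensor([1.0, 2.0, 3.0])
v = torch.tensor([0.1, 0.2, 0.3])   # plays the role of [dA/dy1, dA/dy2, dA/dy3]

J = jacobian(f, x)   # J[i][j] = dy_i/dx_j
print(v @ J)         # tensor([1.1000, 1.4000, 1.0000]), matching backward()

Passing v via y.backward(gradient=v) computes this same product without ever materializing the full Jacobian, which is exactly why reverse-mode autograd is framed as a vector-Jacobian product.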
References
- Department of Mathematics, Tongji University, 高等數學 (Advanced Mathematics), 7th ed., Vol. 1, Higher Education Press, pp. 75-76, 2015.
- Department of Mathematics, Tongji University, 高等數學 (Advanced Mathematics), 7th ed., Vol. 2, Higher Education Press, pp. 78-80, 88-91, 2015.
- Calculus, 13th ed., p. 822, 2013.
- 詳解 Pytorch 自動微分里的 vector-Jacobian product (a detailed explanation of the vector-Jacobian product in PyTorch's automatic differentiation)
- PyTorch 的 backward 為什么有一個 grad_variables 參數? (Why does PyTorch's backward have a grad_variables parameter?)