Deep Learning: typical steps for training an image classifier with logistic regression.
Notes adapted from: https://xienaoban.github.io/posts/59595.html
1. Data Preprocessing
1.1 Vectorization
Flatten each image's height, width, and RGB channels into a single vector; the final X has shape (height*width*3, m), where m is the number of training examples. A minimal sketch of this step is shown below.
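A small NumPy sketch of the flattening, assuming the images are already loaded into an array; the name `images` and its 64x64 size are illustrative assumptions, not from the note:
import numpy as np

# assume m training images loaded as an array of shape (m, height, width, 3)
m, height, width = 209, 64, 64
images = np.random.randint(0, 256, size=(m, height, width, 3))

# flatten each image into one column, giving X of shape (height*width*3, m)
X = images.reshape(m, -1).T
print(X.shape)   # (12288, 209)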
1.2 Feature Normalization
For general data, use standardization:
- \(X_{scale} = \frac{X - X.mean(axis=0)}{X.std(axis=0)}\), i.e. \(z_i = \frac{x_i - \mu}{\sigma}\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of X. The resulting features lie roughly in the [-1, 1] range.
For images, Min-Max scaling can be used directly:
- Simply divide each feature by 255 (each pixel has R, G, B values in the range 0~255) so that all values lie in [0, 1]. A sketch of both scalings follows.
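A sketch of both scalings; `X` is the flattened image matrix from step 1.1, and `D` is a hypothetical general data matrix with examples as rows (matching the axis=0 formula above):
import numpy as np

# Min-Max scaling for images: every pixel is in 0~255, so dividing by 255
# puts each feature into [0, 1]
X = np.random.randint(0, 256, size=(12288, 209)).astype(np.float64)
X_scaled = X / 255.0

# standardization for a general data matrix D (examples as rows, features as columns)
D = np.random.randn(100, 5) * 3.0 + 7.0
D_scaled = (D - D.mean(axis=0)) / D.std(axis=0)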
2. Initialize Parameters
w and b are usually initialized randomly.
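A sketch of this initialization, assuming n = height*width*3 features per image and following the note's random choice (for plain logistic regression, initializing to zeros also works):
import numpy as np

n = 64 * 64 * 3                      # number of features per image (assumed 64x64 RGB)
w = np.random.randn(n, 1) * 0.01     # small random weights, shape (n, 1)
b = np.random.randn() * 0.01         # bias as a scalar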
3. Gradient Descent
Fit w and b to the training set with gradient descent.
- The number of iterations and the learning rate must be set beforehand.
Each iteration of the main loop performs the following steps:
3.1 Compute the Cost Function
For \(x^{(i)} \in X\), we have
\[z^{(i)} = w^Tx^{(i)} + b \]
\[ a^{(i)} = \hat{y}^{(i)} = \mathrm{sigmoid}(z^{(i)}) = \sigma(z^{(i)}) = \frac{1}{1 + e^{-z^{(i)}}} \]
\[\text{Loss function: } \mathcal{L}(a^{(i)}, y^{(i)}) = \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)}) \log(1-a^{(i)}) \]
\[A = (a^{(1)}, a^{(2)}, ... , a^{(m-1)}, a^{(m)}) = \sigma(w^TX+b) = \frac{1}{1+e^{-(w^TX+b)}} \]
\[\text{Cost function: } J(w,b) = \frac{1}{m} \sum^{m}_{i=1} \mathcal{L}(\hat{y}^{(i)}, y^{(i)}) = -\frac{1}{m} \sum^{m}_{i=1} \left( y^{(i)} \log(\hat{y}^{(i)}) + (1-y^{(i)}) \log(1-\hat{y}^{(i)}) \right) \]
# forward pass: activations for all m training examples
A = sigmoid(w.T.dot(X) + b)
# cross-entropy cost averaged over the m examples
cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
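The snippet above (and those below) relies on a sigmoid helper that the note does not define; a minimal NumPy version would be:
import numpy as np

def sigmoid(z):
    # element-wise sigmoid: 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))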
3.2 Compute the Gradients (Backpropagation)
That is, differentiate \(J = \dfrac{1}{m} \sum \mathcal{L}(a, y)\), which comes down to differentiating \(\mathcal{L}(a, y)\); the superscripts \((i)\) are omitted in the derivations below.
We want \(\dfrac{\partial J}{\partial w}\) and \(\dfrac{\partial J}{\partial b}\) (dw and db in code).
\[\dfrac{\partial \mathcal{L}}{\partial a} = \dfrac{\partial \mathcal{L}(a, y)}{\partial a} = -\frac{y}{a} + \frac{1-y}{1-a} \]
\[\dfrac{da}{dz} = \left(\frac{1}{1 + e^{-z}}\right)' = \dfrac{e^{-z}}{(1+e^{-z})^2} = \dfrac{1}{1+e^{-z}} - \dfrac{1}{(1+e^{-z})^2} = a - a^2 = a \cdot (1-a) \]
\[\dfrac{\partial \mathcal{L}}{\partial z} = \dfrac{\partial \mathcal{L}}{\partial a} \dfrac{da}{dz} = \left(-\dfrac{y}{a} + \dfrac{1-y}{1-a}\right) \cdot a \cdot (1-a) = a - y \]
\[\dfrac{\partial \mathcal{L}}{\partial w} = \dfrac{\partial \mathcal{L}}{\partial z} \dfrac{\partial z}{\partial w} = (a-y) \cdot x \]
\[\dfrac{\partial \mathcal{L}}{\partial b} = \dfrac{\partial \mathcal{L}}{\partial z} \dfrac{\partial z}{\partial b} = a-y \]
From \(J = \dfrac{1}{m} \sum \mathcal{L}(a, y)\) we finally obtain:
\[\dfrac{\partial J}{\partial w} = \dfrac{1}{m} \sum^{m}_{i=1} \dfrac{\partial \mathcal{L}}{\partial w} = \dfrac{1}{m} X(A-Y)^T \]
\[\dfrac{\partial J}{\partial b} = \dfrac{1}{m} \sum^{m}_{i=1} (a^{(i)} - y^{(i)}) \]
# gradients of the cost with respect to w and b
dw = X.dot((A - Y).T) / m
db = np.sum(A - Y) / m
3.3 Update w and b
w = w - learning_rate * dw
b = b - learning_rate * db
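Putting 3.1–3.3 together, a minimal sketch of the main training loop, reusing the sigmoid helper above; the function name `optimize` and its arguments are placeholders, not from the note:
def optimize(w, b, X, Y, num_iterations, learning_rate):
    # batch gradient descent for logistic regression
    m = X.shape[1]
    for i in range(num_iterations):
        # 3.1 forward pass and cost
        A = sigmoid(w.T.dot(X) + b)
        cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
        # 3.2 gradients
        dw = X.dot((A - Y).T) / m
        db = np.sum(A - Y) / m
        # 3.3 parameter update
        w = w - learning_rate * dw
        b = b - learning_rate * db
    return w, b, cost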
4. Predict on the Test Set
- Using the trained w and b, compute y_pred = sigmoid(wx + b) on the test set to obtain the predicted probability for each example.
- Threshold (round) the probability, e.g. predict 'yes' if it is greater than 0.7, otherwise 'no'. A sketch follows.
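A sketch of this prediction step, reusing the trained w, b and the sigmoid helper above; the 0.7 threshold follows the note's example (0.5 is the more common default), and the function name `predict` is a placeholder:
def predict(w, b, X_test, threshold=0.7):
    # predicted probabilities for every column (example) of X_test, shape (1, m_test)
    y_prob = sigmoid(w.T.dot(X_test) + b)
    # hard 0/1 labels: 1 ('yes') if the probability exceeds the threshold
    return (y_prob > threshold).astype(int)

y_pred = predict(w, b, X_test)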