For most programmers the first program is probably "hello world"; in deep learning, the "hello world" program is handwritten digit recognition.
In this post we walk through a handwritten digit recognition program in detail, which is a good way to build a basic picture of how deep learning works.
1. Initialize the weight and bias matrices and build the network architecture
import random  # used later by SGD to shuffle the training data

import numpy as np

class network:
    def __init__(self, sizes):
        # sizes: number of neurons in each layer, e.g. [784, 30, 10]
        self.num_layers = len(sizes)
        self.sizes = sizes
        # one bias column vector per layer (the input layer has none)
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        # one weight matrix per pair of adjacent layers, shaped (next_layer, prev_layer)
        self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]
When a network is instantiated, its weight and bias matrices are initialized. For example,
network0 = network([784, 30, 10])
initializes a 3-layer neural network whose layers have 784, 30, and 10 neurons respectively.
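As a quick sanity check of the shapes this produces (a sketch, using the class defined above):

net = network([784, 30, 10])
print(net.biases[0].shape, net.biases[1].shape)    # (30, 1) (10, 1)
print(net.weights[0].shape, net.weights[1].shape)  # (30, 784) (10, 30)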
2. How does backpropagation compute the gradient of the cost function?
The process can be summarized roughly as follows (the corresponding equations are given right after this list):
(1) Forward pass: compute the weighted input z and the activation a of every neuron.
(2) Compute the error of the output layer.
(3) Backpropagate, computing the error and the gradient for each layer.
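Written out, these are the standard backpropagation equations for a quadratic cost with sigmoid activations, which is exactly what the code below implements:

\delta^L = (a^L - y) \odot \sigma'(z^L)
\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)
\partial C / \partial b^l = \delta^l
\partial C / \partial w^l = \delta^l (a^{l-1})^T

where L denotes the output layer, \odot is element-wise multiplication, and \sigma' is the derivative of the activation function.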
The Python implementation is as follows:
def backprop(self, x, y):
    # gradient accumulators, one array per weight matrix / bias vector
    delta_w = [np.zeros(w.shape) for w in self.weights]
    delta_b = [np.zeros(b.shape) for b in self.biases]
    # forward pass: compute the weighted input z and the activation of every neuron
    zs = []
    activation = x
    activations = [x]
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # error of the output layer (the quadratic cost is used here)
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    delta_w[-1] = np.dot(delta, activations[-2].transpose())
    delta_b[-1] = delta
    # backward pass: propagate the error layer by layer
    for l in range(2, self.num_layers):
        delta = np.dot(self.weights[-l+1].transpose(), delta) * sigmoid_prime(zs[-l])
        delta_w[-l] = np.dot(delta, activations[-l-1].transpose())
        delta_b[-l] = delta
    return delta_w, delta_b
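The code above calls sigmoid and sigmoid_prime, which are not defined in this snippet. A minimal sketch of the standard definitions (the names are assumed from the calls above):

def sigmoid(z):
    # element-wise logistic function
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # derivative of the sigmoid
    return sigmoid(z) * (1.0 - sigmoid(z))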
3. How does gradient descent update the weights and biases?
Backpropagation gives the increments needed to update the weights and biases; a gradient descent step then applies the update.
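The update implemented below is standard mini-batch gradient descent: the gradients returned by backprop are summed over a mini-batch of size m, averaged, and scaled by the learning rate \eta (eta):

w \to w - \frac{\eta}{m} \sum_{x} \frac{\partial C_x}{\partial w}
b \to b - \frac{\eta}{m} \sum_{x} \frac{\partial C_x}{\partial b}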
def update_mini_batch(self, mini_batch, eta):
    # accumulators for the summed gradients over the mini-batch
    delta_w = [np.zeros(w.shape) for w in self.weights]
    delta_b = [np.zeros(b.shape) for b in self.biases]
    for x, y in mini_batch:
        # apply backpropagation to every sample in the mini-batch and
        # accumulate the weight and bias gradients
        delta_w_p, delta_b_p = self.backprop(x, y)
        delta_w = [dt_w + dt_w_p for dt_w, dt_w_p in zip(delta_w, delta_w_p)]
        delta_b = [dt_b + dt_b_p for dt_b, dt_b_p in zip(delta_b, delta_b_p)]
    # gradient descent step: average over the mini-batch and scale by the learning rate
    self.weights = [w - (eta/len(mini_batch))*nw for w, nw in zip(self.weights, delta_w)]
    self.biases = [b - (eta/len(mini_batch))*nb for b, nb in zip(self.biases, delta_b)]
def SGD(self, epochs, training_data, mini_batch_size, eta, test_data=None):
    if test_data:
        # n_tests would be used to report accuracy on test_data after each
        # epoch (the evaluation code is omitted here)
        n_tests = len(test_data)
    n_training_data = len(training_data)
    for i in range(epochs):
        # re-shuffle the training data at the start of every epoch
        random.shuffle(training_data)
        # split the training data into mini-batches
        mini_batches = [training_data[k:k+mini_batch_size]
                        for k in range(0, n_training_data, mini_batch_size)]
        for mini_batch in mini_batches:
            self.update_mini_batch(mini_batch, eta)
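Putting it all together, training would look roughly like this. This is a sketch: loading training_data is not shown, and the hyperparameter values (30 epochs, mini-batches of 10, eta = 3.0) are only illustrative:

# training_data is assumed to be a list of (x, y) tuples, where x is a
# (784, 1) input vector and y is a (10, 1) one-hot label vector
net = network([784, 30, 10])
net.SGD(epochs=30, training_data=training_data, mini_batch_size=10, eta=3.0)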