Overview
Details
1. Introduction to the Basics
The introduction to the basics of neural networks involves a large number of formulas and figures, which the site's online editor cannot handle well. I have therefore written a 13-page Word document and placed it in the downloadable archive; please download it and have a look. I also recorded a video you can skim for an overview.
2. Implementing a Neural Network Framework in Python
If you are not yet familiar with neural networks, make sure you master the basics in Part 1 before reading this part; otherwise you will not be able to follow the source code, because much of it is written directly from the formulas.
Here we split a deep neural network into layers: a data input layer, fully connected layers, activation layers, a loss layer, and so on; a dropout layer can also be added. To build a convolutional network you could further add convolution and pooling layers. The framework implemented in this demo is based on this layered structure: once each layer is implemented, you can assemble your own network as needed.
The core modules of the framework and what they do:
layer module: defines the layers that make up a neural network, including the data input layer, the fully connected layer, the activation layer, the loss layer, and so on.
function_for_layer module: defines the activation functions, the loss functions, the weight-initialization methods, and so on.
update_method module: the learning-rate schedules and the weight-update rules (such as mini-batch stochastic gradient descent).
net module: where you define your own neural network as needed.
Figure 1 shows a schematic of the framework.

In addition, the uploaded archive also contains a document describing the framework, which you can read alongside the source code. I also recorded a short video you can skim.
layer module:
Data input layer:
import numpy as np
import function_for_layer as ffl  # import name inferred from the calls further below

# batch_size, weights_decay and update_function are module-level globals of this
# module; they are expected to be set by the net module before training starts.

class data:
    def __init__(self):
        self.data_sample = 0
        self.data_label = 0
        self.output_sample = 0
        self.output_label = 0
        self.point = 0  # remembers where the next pull_data should start
    def get_data(self, sample, label):  # each row of sample is one example; each row of label is its label
        self.data_sample = sample
        self.data_label = label
    def shuffle(self):  # shuffle the sample order
        random_sequence = np.random.permutation(self.data_sample.shape[0])  # random.sample rejects NumPy arrays on Python 3
        self.data_sample = self.data_sample[random_sequence]
        self.data_label = self.data_label[random_sequence]
    def pull_data(self):  # push the next mini-batch to the outputs
        start = self.point
        end = start + batch_size
        output_index = np.arange(start, end)
        if end > self.data_sample.shape[0]:
            end = end - self.data_sample.shape[0]
            output_index = np.append(np.arange(start, self.data_sample.shape[0]), np.arange(0, end))
        self.output_sample = self.data_sample[output_index]
        self.output_label = self.data_label[output_index]
        self.point = end % self.data_sample.shape[0]
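To make the pointer logic concrete, here is a toy run of my own (not part of the framework): with ten samples and batch_size set to 4 as the module-level global, pull_data wraps around to the beginning once the pointer passes the end.
batch_size = 4   # module-level global used by pull_data (toy value for this example)
d = data()
d.get_data(np.arange(10).reshape(10, 1), np.arange(10).reshape(10, 1))
d.pull_data(); print(d.output_sample.ravel())   # [0 1 2 3]
d.pull_data(); print(d.output_sample.ravel())   # [4 5 6 7]
d.pull_data(); print(d.output_sample.ravel())   # [8 9 0 1]  <- wrapped around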
Fully connected layer:
class fully_connected_layer:
    def __init__(self, num_neuron_inputs, num_neuron_outputs):
        self.num_neuron_inputs = num_neuron_inputs
        self.num_neuron_outputs = num_neuron_outputs
        self.inputs = np.zeros((batch_size, num_neuron_inputs))
        self.outputs = np.zeros((batch_size, num_neuron_outputs))
        self.weights = np.zeros((num_neuron_inputs, num_neuron_outputs))
        self.bias = np.zeros(num_neuron_outputs)
        self.weights_previous_direction = np.zeros((num_neuron_inputs, num_neuron_outputs))
        self.bias_previous_direction = np.zeros(num_neuron_outputs)
        self.grad_weights = np.zeros((batch_size, num_neuron_inputs, num_neuron_outputs))
        self.grad_bias = np.zeros((batch_size, num_neuron_outputs))
        self.grad_inputs = np.zeros((batch_size, num_neuron_inputs))
        self.grad_outputs = np.zeros((batch_size, num_neuron_outputs))
    def initialize_weights(self):
        self.weights = ffl.xavier(self.num_neuron_inputs, self.num_neuron_outputs)
    # used during the forward pass to receive the layer's inputs
    def get_inputs_for_forward(self, inputs):
        self.inputs = inputs
    def forward(self):
        self.outputs = self.inputs.dot(self.weights) + np.tile(self.bias, (batch_size, 1))
    # used during the backward pass to receive the gradient w.r.t. the outputs
    def get_inputs_for_backward(self, grad_outputs):
        self.grad_outputs = grad_outputs
    def backward(self):
        # gradient of the weights: a 3-D array, one (inputs x outputs) slice per sample
        for i in np.arange(batch_size):
            self.grad_weights[i, :] = np.tile(self.inputs[i, :], (1, 1)).T \
                .dot(np.tile(self.grad_outputs[i, :], (1, 1))) + \
                self.weights * weights_decay
        # gradient of the bias
        self.grad_bias = self.grad_outputs
        # gradient of the inputs
        self.grad_inputs = self.grad_outputs.dot(self.weights.T)
    def update(self):
        # update the weights and the bias
        grad_weights_average = np.mean(self.grad_weights, 0)
        grad_bias_average = np.mean(self.grad_bias, 0)
        (self.weights, self.weights_previous_direction) = update_function(self.weights,
                                                                          grad_weights_average,
                                                                          self.weights_previous_direction)
        (self.bias, self.bias_previous_direction) = update_function(self.bias,
                                                                    grad_bias_average,
                                                                    self.bias_previous_direction)
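A good habit is to check backward() against a finite-difference estimate. The sketch below is my own addition, not part of the framework; it assumes it is run inside the layer module with the globals set to batch_size = 2 and weights_decay = 0 for this toy run.
# compare the analytic gradient of sum(outputs * grad_out) w.r.t. weights[0, 0]
# against a central finite difference
batch_size = 2
weights_decay = 0.0
fc = fully_connected_layer(3, 2)
fc.weights = np.random.randn(3, 2) * 0.1
x = np.random.randn(batch_size, 3)
grad_out = np.random.randn(batch_size, 2)
fc.get_inputs_for_forward(x)
fc.forward()
fc.get_inputs_for_backward(grad_out)
fc.backward()
analytic = np.sum(fc.grad_weights, 0)[0, 0]   # sum the per-sample gradients
eps = 1e-6
f = lambda w: np.sum((x.dot(w) + fc.bias) * grad_out)
w_plus = fc.weights.copy();  w_plus[0, 0] += eps
w_minus = fc.weights.copy(); w_minus[0, 0] -= eps
numerical = (f(w_plus) - f(w_minus)) / (2 * eps)
print(analytic, numerical)                    # the two values should agree closely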
Activation layer:
class activation_layer:
    def __init__(self, activation_function_name):
        if activation_function_name == 'sigmoid':
            self.activation_function = ffl.sigmoid
            self.der_activation_function = ffl.der_sigmoid
        elif activation_function_name == 'tanh':
            self.activation_function = ffl.tanh
            self.der_activation_function = ffl.der_tanh
        elif activation_function_name == 'relu':
            self.activation_function = ffl.relu
            self.der_activation_function = ffl.der_relu
        else:
            print('Unknown activation function name')
        self.inputs = 0
        self.outputs = 0
        self.grad_inputs = 0
        self.grad_outputs = 0
    def get_inputs_for_forward(self, inputs):
        self.inputs = inputs
    def forward(self):
        # apply the activation function element-wise
        self.outputs = self.activation_function(self.inputs)
    def get_inputs_for_backward(self, grad_outputs):
        self.grad_outputs = grad_outputs
    def backward(self):
        # chain rule: multiply the upstream gradient by the activation's derivative
        self.grad_inputs = self.grad_outputs * self.der_activation_function(self.inputs)
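A tiny usage sketch of my own (assuming the ffl functions above are available): a ReLU layer zeroes the negative inputs in the forward pass and masks the upstream gradient at the same positions in the backward pass.
ac = activation_layer('relu')
ac.get_inputs_for_forward(np.array([[-1.0, 2.0, -3.0, 4.0]]))
ac.forward()
print(ac.outputs)       # [[-0.  2. -0.  4.]] -- negative entries are zeroed
ac.get_inputs_for_backward(np.ones((1, 4)))
ac.backward()
print(ac.grad_inputs)   # [[0. 1. 0. 1.]] -- the gradient is masked at the same positions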
Loss layer:
class loss_layer:
    def __init__(self, loss_function_name):
        self.inputs = 0
        self.loss = 0
        self.accuracy = 0
        self.label = 0
        self.grad_inputs = 0
        if loss_function_name == 'SoftmaxWithLoss':
            self.loss_function = ffl.softmaxwithloss
            self.der_loss_function = ffl.der_softmaxwithloss
        elif loss_function_name == 'LeastSquareError':
            self.loss_function = ffl.least_square_error
            self.der_loss_function = ffl.der_least_square_error
        else:
            print('Unknown loss function name; please check it and try again')
    def get_label_for_loss(self, label):
        self.label = label
    def get_inputs_for_loss(self, inputs):
        self.inputs = inputs
    def compute_loss_and_accuracy(self):
        # accuracy on the current mini-batch
        if_equal = np.argmax(self.inputs, 1) == np.argmax(self.label, 1)
        self.accuracy = np.mean(if_equal)  # avoids integer division under Python 2
        # training loss
        self.loss = self.loss_function(self.inputs, self.label)
    def compute_gradient(self):
        self.grad_inputs = self.der_loss_function(self.inputs, self.label)
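For intuition, here is a small usage sketch of my own (not from the framework): a confident and correct two-sample prediction yields accuracy 1 and a small loss.
ll = loss_layer('SoftmaxWithLoss')
ll.get_inputs_for_loss(np.array([[5.0, 0.0, 0.0],
                                 [0.0, 5.0, 0.0]]))
ll.get_label_for_loss(np.array([[1.0, 0.0, 0.0],
                                [0.0, 1.0, 0.0]]))
ll.compute_loss_and_accuracy()
print(ll.accuracy, ll.loss)   # 1.0 and roughly 0.013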
function_for_layer module:
Definitions of the activation functions:
import numpy as np
from scipy import stats  # used by the xavier initializer below

# sigmoid and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
def der_sigmoid(x):
    return sigmoid(x) * (1 - sigmoid(x))

# tanh and its derivative
def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
def der_tanh(x):
    return 1 - tanh(x) * tanh(x)

# ReLU and its derivative
def relu(x):
    temp = np.zeros_like(x)
    if_bigger_zero = (x > temp)
    return x * if_bigger_zero
def der_relu(x):
    temp = np.zeros_like(x)
    if_bigger_equal_zero = (x >= temp)  # the derivative at zero is taken to be 1
    return if_bigger_equal_zero * np.ones_like(x)
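A quick finite-difference check (my own addition) confirms that the analytic derivatives above match a numerical estimate:
x = np.linspace(-3, 3, 7)
eps = 1e-6
for f, df in [(sigmoid, der_sigmoid), (tanh, der_tanh)]:
    numerical = (f(x + eps) - f(x - eps)) / (2 * eps)
    print(np.max(np.abs(numerical - df(x))))   # very small, on the order of 1e-10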
Definitions of the loss functions:
# SoftmaxWithLoss (softmax followed by cross-entropy) and its derivative
def softmaxwithloss(inputs, label):
    temp1 = np.exp(inputs)
    probability = temp1 / (np.tile(np.sum(temp1, 1), (inputs.shape[1], 1))).T
    temp3 = np.argmax(label, 1)  # column index of the true class for each sample
    temp4 = [probability[i, j] for (i, j) in zip(np.arange(label.shape[0]), temp3)]
    loss = -1 * np.mean(np.log(temp4))
    return loss
def der_softmaxwithloss(inputs, label):
    temp1 = np.exp(inputs)
    temp2 = np.sum(temp1, 1)  # a one-dimensional vector of row sums
    probability = temp1 / (np.tile(temp2, (inputs.shape[1], 1))).T
    gradient = probability - label
    return gradient
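One caveat: np.exp(inputs) can overflow when the logits are large. A common, mathematically equivalent variant (my suggestion, not part of the original code) subtracts the row-wise maximum before exponentiating:
def stable_softmax(inputs):
    shifted = inputs - np.max(inputs, 1, keepdims=True)   # does not change the probabilities
    temp1 = np.exp(shifted)
    return temp1 / np.sum(temp1, 1, keepdims=True)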
Weight initialization:
# Xavier initialization
def xavier(num_neuron_inputs, num_neuron_outputs):
    temp1 = np.sqrt(6) / np.sqrt(num_neuron_inputs + num_neuron_outputs + 1)
    weights = stats.uniform.rvs(-temp1, 2 * temp1, (num_neuron_inputs, num_neuron_outputs))
    return weights
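If you prefer not to depend on scipy.stats, an equivalent sketch using only NumPy (my own variant, keeping the same range) would be:
def xavier_np(num_neuron_inputs, num_neuron_outputs):
    limit = np.sqrt(6) / np.sqrt(num_neuron_inputs + num_neuron_outputs + 1)
    return np.random.uniform(-limit, limit, (num_neuron_inputs, num_neuron_outputs))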
update_method module:
Learning-rate schedules:
import numpy as np

# module-level globals needed below
momentum = 0.9
base_lr = 0      # initialized when the net is constructed
iteration = -1   # must be kept up to date by the training loop

########################### learning-rate schedules ####################################
# the "inv" schedule
def inv(gamma=0.0005, power=0.75):
    if iteration == -1:
        assert False, 'update_method.iteration must be set by the training loop'
    return base_lr * np.power((1 + gamma * iteration), -power)

# fixed learning rate
def fixed():
    return base_lr
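Because the schedule only reads module-level globals, the training loop has to keep iteration up to date, exactly as train.py does in Part 3. A toy run of my own, assuming base_lr = 0.1:
import update_method
update_method.base_lr = 0.1
for i in [0, 100, 400, 800]:
    update_method.iteration = i
    print(i, update_method.inv())   # 0.1, then roughly 0.096, 0.087, 0.078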
Mini-batch stochastic gradient descent:
# mini-batch stochastic gradient descent with momentum
def batch_gradient_descent(weights, grad_weights, previous_direction):
    lr = inv()
    direction = momentum * previous_direction + lr * grad_weights
    weights_now = weights - direction
    return (weights_now, direction)
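As a minimal illustration of the momentum update (my own toy example, not from the framework), the rule drives a one-dimensional quadratic f(w) = w**2, whose gradient is 2w, toward its minimum:
import update_method
update_method.base_lr = 0.1
w, previous_direction = 5.0, 0.0
for i in range(200):
    update_method.iteration = i
    w, previous_direction = update_method.batch_gradient_descent(w, 2.0 * w, previous_direction)
print(w)   # close to 0, the minimizer of w**2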
net module:
For example, define a four-layer neural network (this code sits in the net class constructor, since the snippets below all use self):
# build a four-layer neural network
self.inputs_train = layer.data()  # input layer for the training samples
self.inputs_test = layer.data()   # input layer for the test samples
self.fc1 = layer.fully_connected_layer(784, 50)
self.ac1 = layer.activation_layer('tanh')
self.fc2 = layer.fully_connected_layer(50, 50)
self.ac2 = layer.activation_layer('tanh')
self.fc3 = layer.fully_connected_layer(50, 10)
self.loss = layer.loss_layer('SoftmaxWithLoss')
Define some other interfaces for the network, for example loading the training and test samples:
def load_sample_and_label_train(self, sample, label):
    self.inputs_train.get_data(sample, label)
def load_sample_and_label_test(self, sample, label):
    self.inputs_test.get_data(sample, label)
Define the network's initialization interface:
def initial(self):
    self.fc1.initialize_weights()
    self.fc2.initialize_weights()
    self.fc3.initialize_weights()
Define the network's forward and backward passes during training:
def forward_train(self):
    self.inputs_train.pull_data()
    # the data layer exposes its batch as output_sample / output_label (see the data class above)
    self.fc1.get_inputs_for_forward(self.inputs_train.output_sample)
    self.fc1.forward()
    self.ac1.get_inputs_for_forward(self.fc1.outputs)
    self.ac1.forward()
    self.fc2.get_inputs_for_forward(self.ac1.outputs)
    self.fc2.forward()
    self.ac2.get_inputs_for_forward(self.fc2.outputs)
    self.ac2.forward()
    self.fc3.get_inputs_for_forward(self.ac2.outputs)
    self.fc3.forward()
    self.loss.get_inputs_for_loss(self.fc3.outputs)
    self.loss.get_label_for_loss(self.inputs_train.output_label)
    self.loss.compute_loss_and_accuracy()
def backward_train(self):
    self.loss.compute_gradient()
    self.fc3.get_inputs_for_backward(self.loss.grad_inputs)
    self.fc3.backward()
    self.ac2.get_inputs_for_backward(self.fc3.grad_inputs)
    self.ac2.backward()
    self.fc2.get_inputs_for_backward(self.ac2.grad_inputs)
    self.fc2.backward()
    self.ac1.get_inputs_for_backward(self.fc2.grad_inputs)
    self.ac1.backward()
    self.fc1.get_inputs_for_backward(self.ac1.grad_inputs)
    self.fc1.backward()
Define the network's forward pass during testing:
def forward_test(self):
    self.inputs_test.pull_data()
    self.fc1.get_inputs_for_forward(self.inputs_test.output_sample)
    self.fc1.forward()
    self.ac1.get_inputs_for_forward(self.fc1.outputs)
    self.ac1.forward()
    self.fc2.get_inputs_for_forward(self.ac1.outputs)
    self.fc2.forward()
    self.ac2.get_inputs_for_forward(self.fc2.outputs)
    self.ac2.forward()
    self.fc3.get_inputs_for_forward(self.ac2.outputs)
    self.fc3.forward()
    self.loss.get_inputs_for_loss(self.fc3.outputs)
    self.loss.get_label_for_loss(self.inputs_test.output_label)
    self.loss.compute_loss_and_accuracy()
Define the weight updates:
def update(self):
    self.fc1.update()
    self.fc2.update()
    self.fc3.update()
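The snippets above are all methods of a single net class; the exact file is in the archive. As a rough sketch of how the pieces could fit together, here is my own reconstruction; in particular, the way the hyper-parameters are pushed into the layer and update_method module globals is an assumption based on how train.py calls net.net(train_batch_size, lr, weight_decay) in Part 3.
import layer
import update_method

class net:
    def __init__(self, batch_size, lr, weight_decay):
        # expose the hyper-parameters as the module-level globals the layers rely on
        layer.batch_size = batch_size
        layer.weights_decay = weight_decay
        layer.update_function = update_method.batch_gradient_descent
        update_method.base_lr = lr
        # ... build the layers exactly as shown above (inputs_train, fc1, ac1, ..., loss) ...
    def turn_to_test(self, test_batch_size):
        # switch to the test batch size before calling forward_test (assumed behaviour)
        layer.batch_size = test_batch_size
    # load_sample_and_label_train / _test, initial, forward_train, backward_train,
    # forward_test and update are the methods listed above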
3. Using the Network Defined in the net Module to Recognize Handwritten Digits
In the net module of Part 2 we defined a 784*50*50*10 network; we now train it to recognize handwritten digits.
About the handwritten digit data: the MNIST dataset, maintained by Yann LeCun and others, contains 60,000 training samples and 10,000 test samples, and can be downloaded from the official site http://yann.lecun.com/exdb/mnist/index.html. The official files are in a binary format that is inconvenient to use, but don't worry: I have already converted them to MATLAB's .mat format; see /demo/data.mat in the archive. The digits look like this:

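Based on how train.py uses it, data.mat is expected to hold the images as rows of 784 pixel values and the labels as one-hot rows of length 10; a quick check of my own would be:
import scipy.io
data = scipy.io.loadmat('data.mat')
print(data['train_data'].shape, data['train_label'].shape)   # expected: (60000, 784) (60000, 10)
print(data['test_data'].shape, data['test_label'].shape)     # expected: (10000, 784) (10000, 10)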
Write a train.py file and use it to train and test the network.
# imports (inferred from the calls below)
import numpy as np
import scipy.io
import matplotlib.pyplot as plt
import net

# load the data
data = scipy.io.loadmat('data.mat')
train_label = data['train_label']
train_data = data['train_data']
test_label = data['test_label']
test_data = data['test_data']

# some important hyper-parameters
num_train = 800          # number of training iterations
lr = 0.1                 # base learning rate
weight_decay = 0.001
train_batch_size = 100
test_batch_size = 10000

# build the network and load the samples
solver = net.net(train_batch_size, lr, weight_decay)
solver.load_sample_and_label_train(train_data, train_label)
solver.load_sample_and_label_test(test_data, test_label)

# initialize the weights
solver.initial()

# storage for the training error
train_error = np.zeros(num_train)

# training loop
for i in range(num_train):
    print('iteration', i)
    net.layer.update_method.iteration = i
    solver.forward_train()
    solver.backward_train()
    solver.update()
    train_error[i] = solver.loss.loss
plt.plot(train_error)
plt.show()

# testing
solver.turn_to_test(test_batch_size)
solver.forward_test()
print('recognition rate on the test samples:', solver.loss.accuracy)
Running train.py produces the following.
The training-error curve during training:

The recognition rate on the test samples:

Of course, you can tune the hyper-parameters to push the recognition rate higher.
4. Project Directory Screenshot

