[Implementing a Convolutional Neural Network in Python] Defining the Training and Testing Process


Code source: https://github.com/eriklindernoren/ML-From-Scratch

Implementation of the convolutional layer Conv2D (with stride and padding): https://www.cnblogs.com/xiximayou/p/12706576.html

Implementation of the activation functions (sigmoid, softmax, tanh, relu, leakyrelu, elu, selu, softplus): https://www.cnblogs.com/xiximayou/p/12713081.html

Definition of the loss functions (mean squared error, cross-entropy loss): https://www.cnblogs.com/xiximayou/p/12713198.html

Implementation of the optimizers (SGD, Nesterov, Adagrad, Adadelta, RMSprop, Adam): https://www.cnblogs.com/xiximayou/p/12713594.html

Backward pass of the convolutional layer: https://www.cnblogs.com/xiximayou/p/12713930.html

Implementation of the fully connected layer: https://www.cnblogs.com/xiximayou/p/12720017.html

Implementation of the batch normalization layer: https://www.cnblogs.com/xiximayou/p/12720211.html

Implementation of the pooling layer: https://www.cnblogs.com/xiximayou/p/12720324.html

Implementation of padding2D: https://www.cnblogs.com/xiximayou/p/12720454.html

Implementation of the Flatten layer: https://www.cnblogs.com/xiximayou/p/12720518.html

Implementation of the upsampling layer UpSampling2D: https://www.cnblogs.com/xiximayou/p/12720558.html

Implementation of the Dropout layer: https://www.cnblogs.com/xiximayou/p/12720589.html

Implementation of the activation layer: https://www.cnblogs.com/xiximayou/p/12720622.html

 

First, here is the complete code:

from __future__ import print_function, division
from terminaltables import AsciiTable
import numpy as np
import progressbar
from mlfromscratch.utils import batch_iterator
from mlfromscratch.utils.misc import bar_widgets


class NeuralNetwork():
    """Neural Network. Deep Learning base model.
    Parameters:
    -----------
    optimizer: class
        The weight optimizer that will be used to tune the weights in order to
        minimize the loss.
    loss: class
        Loss function used to measure the model's performance. SquareLoss or CrossEntropy.
    validation_data: tuple
        A tuple containing validation data and labels (X, y)
    """
    def __init__(self, optimizer, loss, validation_data=None):
        self.optimizer = optimizer
        self.layers = []
        self.errors = {"training": [], "validation": []}
        self.loss_function = loss()
        self.progressbar = progressbar.ProgressBar(widgets=bar_widgets)

        self.val_set = None
        if validation_data:
            X, y = validation_data
            self.val_set = {"X": X, "y": y}

    def set_trainable(self, trainable):
        """ Method which enables freezing of the weights of the network's layers. """
        for layer in self.layers:
            layer.trainable = trainable

    def add(self, layer):
        """ Method which adds a layer to the neural network """
        # If this is not the first layer added then set the input shape
        # to the output shape of the last added layer
        if self.layers:
            layer.set_input_shape(shape=self.layers[-1].output_shape())

        # If the layer has weights that need to be initialized
        if hasattr(layer, 'initialize'):
            layer.initialize(optimizer=self.optimizer)

        # Add layer to the network
        self.layers.append(layer)

    def test_on_batch(self, X, y):
        """ Evaluates the model over a single batch of samples """
        y_pred = self._forward_pass(X, training=False)
        loss = np.mean(self.loss_function.loss(y, y_pred))
        acc = self.loss_function.acc(y, y_pred)

        return loss, acc

    def train_on_batch(self, X, y):
        """ Single gradient update over one batch of samples """
        y_pred = self._forward_pass(X)
        loss = np.mean(self.loss_function.loss(y, y_pred))
        acc = self.loss_function.acc(y, y_pred)
        # Calculate the gradient of the loss function wrt y_pred
        loss_grad = self.loss_function.gradient(y, y_pred)
        # Backpropagate. Update weights
        self._backward_pass(loss_grad=loss_grad)

        return loss, acc

    def fit(self, X, y, n_epochs, batch_size):
        """ Trains the model for a fixed number of epochs """
        for _ in self.progressbar(range(n_epochs)):
            
            batch_error = []
            for X_batch, y_batch in batch_iterator(X, y, batch_size=batch_size):
                loss, _ = self.train_on_batch(X_batch, y_batch)
                batch_error.append(loss)

            self.errors["training"].append(np.mean(batch_error))

            if self.val_set is not None:
                val_loss, _ = self.test_on_batch(self.val_set["X"], self.val_set["y"])
                self.errors["validation"].append(val_loss)

        return self.errors["training"], self.errors["validation"]

    def _forward_pass(self, X, training=True):
        """ Calculate the output of the NN """
        layer_output = X
        for layer in self.layers:
            layer_output = layer.forward_pass(layer_output, training)

        return layer_output

    def _backward_pass(self, loss_grad):
        """ Propagate the gradient 'backwards' and update the weights in each layer """
        for layer in reversed(self.layers):
            loss_grad = layer.backward_pass(loss_grad)

    def summary(self, name="Model Summary"):
        # Print model name
        print(AsciiTable([[name]]).table)
        # Network input shape (first layer's input shape)
        print ("Input Shape: %s" % str(self.layers[0].input_shape))
        # Iterate through network and get each layer's configuration
        table_data = [["Layer Type", "Parameters", "Output Shape"]]
        tot_params = 0
        for layer in self.layers:
            layer_name = layer.layer_name()
            params = layer.parameters()
            out_shape = layer.output_shape()
            table_data.append([layer_name, str(params), str(out_shape)])
            tot_params += params
        # Print network configuration table
        print(AsciiTable(table_data).table)
        print("Total Parameters: %d\n" % tot_params)

    def predict(self, X):
        """ Use the trained model to predict labels of X """
        return self._forward_pass(X, training=False)

Now let's go through the functions one by one:

1. __init__(): initialization. This sets up the optimizer, the list of model layers (layers), the error history (errors), the loss function (loss_function), and a progress bar (progressbar) for displaying training progress. bar_widgets is imported from mlfromscratch.utils.misc; let's see what it is:

bar_widgets = [
    'Training: ', progressbar.Percentage(), ' ', progressbar.Bar(marker="-", left="[", right="]"),
    ' ', progressbar.ETA()
]

2. set_trainable(): sets whether the network's layers should have their weights updated, which makes it possible to freeze layers.

3. add(): adds a module to the network, such as a convolutional layer, a pooling layer, or an activation layer. If it is not the first layer, its input shape is set from the previous layer's output shape, and its weights are initialized if it has any.
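
To illustrate how add() composes a network, here is a minimal sketch that builds a small CNN. The layer, optimizer, and loss classes (Conv2D, Activation, Flatten, Dense, Adam, CrossEntropy) are the ones implemented in the posts linked above; the import paths are assumptions based on the ML-From-Scratch repository layout:

from mlfromscratch.deep_learning import NeuralNetwork
from mlfromscratch.deep_learning.optimizers import Adam
from mlfromscratch.deep_learning.loss_functions import CrossEntropy
from mlfromscratch.deep_learning.layers import Conv2D, Activation, Flatten, Dense

# A small CNN for 10-class classification of 1x8x8 images.
# add() wires each layer's input shape to the previous layer's
# output shape and initializes its weights with the optimizer.
clf = NeuralNetwork(optimizer=Adam(), loss=CrossEntropy)
clf.add(Conv2D(n_filters=16, filter_shape=(3, 3), stride=1,
               input_shape=(1, 8, 8), padding='same'))
clf.add(Activation('relu'))
clf.add(Flatten())
clf.add(Dense(10))
clf.add(Activation('softmax'))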

4. test_on_batch(): evaluates the model on a single batch; no backpropagation is needed here.

5. train_on_batch(): performs a single training step on one batch: a forward pass computes the loss, and a backward pass updates the parameters.
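
Note that train_on_batch() and test_on_batch() only require the loss object to expose three methods: loss(), acc(), and gradient(). A minimal cross-entropy loss satisfying this interface might look like the following (an illustrative sketch based on the loss-function post linked above, not necessarily the exact repository code):

import numpy as np

class CrossEntropy:
    """ Minimal cross-entropy loss exposing the interface that
    train_on_batch()/test_on_batch() rely on. """
    def loss(self, y, p):
        # Clip predictions to avoid log(0)
        p = np.clip(p, 1e-15, 1 - 1e-15)
        return - y * np.log(p) - (1 - y) * np.log(1 - p)

    def acc(self, y, p):
        # Accuracy over one-hot encoded labels
        return np.mean(np.argmax(y, axis=1) == np.argmax(p, axis=1))

    def gradient(self, y, p):
        # d(loss)/d(p); this is the loss_grad passed to _backward_pass()
        p = np.clip(p, 1e-15, 1 - 1e-15)
        return - (y / p) + (1 - y) / (1 - p)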

6. fit(): feeds the data in for training (and validation, if validation data was supplied), given the number of epochs (n_epochs) and the batch size (batch_size). It reads the data with the generator batch_iterator(), which lives in data_manipulation.py under mlfromscratch.utils:

def batch_iterator(X, y=None, batch_size=64):
    """ Simple batch generator """
    n_samples = X.shape[0]
    for i in np.arange(0, n_samples, batch_size):
        begin, end = i, min(i+batch_size, n_samples)
        if y is not None:
            yield X[begin:end], y[begin:end]
        else:
            yield X[begin:end]
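
A quick illustration of how batch_iterator() slices the data (the last batch is simply smaller when batch_size does not divide the number of samples):

import numpy as np

X = np.arange(10).reshape(10, 1)
y = np.arange(10)

for X_batch, y_batch in batch_iterator(X, y, batch_size=4):
    print(X_batch.ravel(), y_batch)
# [0 1 2 3] [0 1 2 3]
# [4 5 6 7] [4 5 6 7]
# [8 9] [8 9]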

7. _forward_pass(): the forward pass through the model's layers.

8. _backward_pass(): the backward pass through the model's layers, propagating the loss gradient in reverse layer order and letting each layer update its weights.
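
Together, _forward_pass() and _backward_pass() only require each layer to implement forward_pass(X, training) and backward_pass(accum_grad), plus the small set of methods used by add() and summary(). A minimal fully connected layer satisfying this contract might look like this (an illustrative sketch; the real Dense layer from the post linked above is more complete):

import copy
import numpy as np

class TinyDense:
    """ Minimal fully connected layer showing the interface that
    NeuralNetwork expects from its layers. """
    def __init__(self, n_units, input_shape=None):
        self.n_units = n_units
        self.input_shape = input_shape
        self.trainable = True

    def set_input_shape(self, shape):
        self.input_shape = shape

    def initialize(self, optimizer):
        limit = 1 / np.sqrt(self.input_shape[0])
        self.W = np.random.uniform(-limit, limit, (self.input_shape[0], self.n_units))
        self.w0 = np.zeros((1, self.n_units))
        # One optimizer copy per parameter, so stateful optimizers
        # (e.g. Adam's moment estimates) are kept separate
        self.W_opt = copy.copy(optimizer)
        self.w0_opt = copy.copy(optimizer)

    def forward_pass(self, X, training=True):
        self.layer_input = X  # cache the input for the backward pass
        return X.dot(self.W) + self.w0

    def backward_pass(self, accum_grad):
        W = self.W  # save before updating; needed for the returned gradient
        if self.trainable:
            grad_w = self.layer_input.T.dot(accum_grad)
            grad_w0 = np.sum(accum_grad, axis=0, keepdims=True)
            self.W = self.W_opt.update(self.W, grad_w)
            self.w0 = self.w0_opt.update(self.w0, grad_w0)
        # Gradient w.r.t. this layer's input, handed to the previous layer
        return accum_grad.dot(W.T)

    def output_shape(self):
        return (self.n_units,)

    def parameters(self):
        # Used by summary()
        return np.prod(self.W.shape) + np.prod(self.w0.shape)

    def layer_name(self):
        # Used by summary()
        return self.__class__.__name__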

9. summary(): prints each layer's type, parameter count, and output shape, as well as the total number of parameters.

10. predict(): returns the trained model's predictions for X.
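
Putting it all together, a complete toy run might look like the following sketch. The dataset handling (to_categorical, sklearn's digits dataset) and the import paths are assumptions based on the repository's utilities:

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split

from mlfromscratch.utils import to_categorical
from mlfromscratch.deep_learning import NeuralNetwork
from mlfromscratch.deep_learning.optimizers import Adam
from mlfromscratch.deep_learning.loss_functions import CrossEntropy
from mlfromscratch.deep_learning.layers import Dense, Activation

# Load the 8x8 digits dataset, flattened to 64 features
data = datasets.load_digits()
X = data.data / 16.0              # scale pixel values to [0, 1]
y = to_categorical(data.target)   # one-hot encode the labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = NeuralNetwork(optimizer=Adam(), loss=CrossEntropy,
                    validation_data=(X_test, y_test))
clf.add(Dense(64, input_shape=(64,)))
clf.add(Activation('relu'))
clf.add(Dense(10))
clf.add(Activation('softmax'))

clf.summary(name="MLP")
train_err, val_err = clf.fit(X_train, y_train, n_epochs=20, batch_size=64)

y_pred = np.argmax(clf.predict(X_test), axis=1)
accuracy = np.mean(y_pred == np.argmax(y_test, axis=1))
print("Test accuracy: %.4f" % accuracy)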

It is not hard to see that this code borrows the design ideas of TensorFlow's Keras modules (the add()/fit()/summary()/predict() API).

 

