Code source: https://github.com/eriklindernoren/ML-From-Scratch
Concrete implementation of the Conv2D convolutional layer (with stride and padding): https://www.cnblogs.com/xiximayou/p/12706576.html
Implementation of activation functions (sigmoid, softmax, tanh, ReLU, LeakyReLU, ELU, SELU, softplus): https://www.cnblogs.com/xiximayou/p/12713081.html
Loss function definitions (mean squared error, cross-entropy loss): https://www.cnblogs.com/xiximayou/p/12713198.html
Optimizer implementations (SGD, Nesterov, Adagrad, Adadelta, RMSprop, Adam): https://www.cnblogs.com/xiximayou/p/12713594.html
Backpropagation through the convolutional layer: https://www.cnblogs.com/xiximayou/p/12713930.html
Fully connected layer implementation: https://www.cnblogs.com/xiximayou/p/12720017.html
Batch normalization layer implementation: https://www.cnblogs.com/xiximayou/p/12720211.html
Pooling layer implementation: https://www.cnblogs.com/xiximayou/p/12720324.html
Padding2D implementation: https://www.cnblogs.com/xiximayou/p/12720454.html
Flatten layer implementation: https://www.cnblogs.com/xiximayou/p/12720518.html
UpSampling2D (upsampling layer) implementation: https://www.cnblogs.com/xiximayou/p/12720558.html
import numpy as np

class Dropout(Layer):
    """A layer that randomly sets a fraction p of the output units of the
    previous layer to zero.

    Parameters:
    -----------
    p: float
        The probability that unit x is set to zero.
    """
    def __init__(self, p=0.2):
        self.p = p
        self._mask = None
        self.input_shape = None
        self.n_units = None
        self.pass_through = True
        self.trainable = True

    def forward_pass(self, X, training=True):
        c = (1 - self.p)
        if training:
            # Draw a Boolean mask: each unit survives with probability 1 - p
            self._mask = np.random.uniform(size=X.shape) > self.p
            c = self._mask
        # At inference, no units are dropped; outputs are scaled by (1 - p)
        # so their expected magnitude matches the training-time activations.
        return X * c

    def backward_pass(self, accum_grad):
        # Gradients flow only through the units that were kept in the forward pass
        return accum_grad * self._mask

    def output_shape(self):
        return self.input_shape
The core idea is to generate a mask that randomly deactivates neurons; the same mask is reused in the backward pass to gate the gradient.
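A minimal NumPy sketch of that mask mechanism, independent of the `Layer` base class (the variable names here are illustrative, not from the source repo):

```python
import numpy as np

p = 0.2                      # drop probability
X = np.ones((2, 5))          # dummy activations from the previous layer

# Training: draw a Boolean mask; each unit is kept with probability 1 - p.
mask = np.random.uniform(size=X.shape) > p
out_train = X * mask         # dropped units become exactly 0

# Inference: no masking; scale by (1 - p) so the expected activation
# matches what downstream layers saw during training.
out_test = X * (1 - p)

# Backward pass: the same mask gates the incoming gradient,
# so dropped units receive zero gradient.
grad = np.ones_like(X)
grad_in = grad * mask
```

Note this is "vanilla" dropout, which rescales at inference; many modern frameworks instead use inverted dropout, dividing by `1 - p` during training so inference needs no scaling.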
