Batch Normalization (batch_normalization)


To reduce the vanishing/exploding gradient problem in the early stages of training deep neural networks, Sergey Ioffe and Christian Szegedy proposed batch normalization. The technique adds an operation to the model just before the activation function of each layer: it zero-centers and normalizes the inputs, and then scales and shifts the result using two new parameters per layer (one for scaling, one for shifting). In other words, this operation lets the model learn the optimal scale and mean of the inputs for each layer.

How batch normalization works

(1) \(\mu_B = \frac{1}{m_B}\sum_{i=1}^{m_B}x^{(i)}\)  # empirical mean, computed over the whole mini-batch B

(2) \(\sigma_B^2 = \frac{1}{m_B}\sum_{i=1}^{m_B}(x^{(i)} - \mu_B)^2\)  # variance of the whole mini-batch B

(3) \(\hat{x}^{(i)} = \frac{x^{(i)} - \mu_B}{\sqrt{\sigma_B^2+\epsilon}}\)  # zero-center and normalize

(4) \(z^{(i)} = \gamma\,\hat{x}^{(i)} + \beta\)  # scale and shift the normalized input
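
The four formulas above can be sketched directly in NumPy. This is only a minimal illustration of the training-time forward pass, not the TensorFlow implementation; the array shape and the gamma, beta, and eps values are made up for the example.

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-3):
    # x: mini-batch of shape (m_B, n_features)
    mu = x.mean(axis=0)                     # (1) empirical mean of the mini-batch
    var = x.var(axis=0)                     # (2) empirical variance of the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # (3) zero-center and normalize
    return gamma * x_hat + beta             # (4) scale and shift with the learned parameters

x = np.random.randn(4, 3).astype(np.float32)              # 4 samples, 3 features
z = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))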

At test time there is no mini-batch from which to compute the empirical mean and standard deviation, so the mean and standard deviation of the whole training set can simply be used instead; these can be computed efficiently during training with a moving average.
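
A minimal sketch of how such a moving average can be maintained, assuming a momentum of 0.99 (the default of tf.layers.batch_normalization); the arrays and feature size below are made up for illustration:

import numpy as np

momentum, eps = 0.99, 1e-3
moving_mean, moving_var = np.zeros(3), np.ones(3)

# During training: after each mini-batch, fold the batch statistics into the running estimates
x_batch = np.random.randn(32, 3)
moving_mean = momentum * moving_mean + (1.0 - momentum) * x_batch.mean(axis=0)
moving_var  = momentum * moving_var  + (1.0 - momentum) * x_batch.var(axis=0)

# At test time: normalize with the moving statistics instead of the batch statistics
x_test = np.random.randn(5, 3)
x_test_hat = (x_test - moving_mean) / np.sqrt(moving_var + eps)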

However, batch normalization does add some complexity and runtime cost to the model, which slows down the network's predictions. So if you need fast inference, you may want to first check how plain ELU + He initialization performs before adding batch normalization.
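
For reference, the ELU + He-initialization baseline mentioned above is just a dense layer with those settings; the input and layer sizes below are hypothetical:

import tensorflow as tf

he_init = tf.contrib.layers.variance_scaling_initializer()   # He initialization
X = tf.placeholder(tf.float32, shape=(None, 784), name='X')
hidden = tf.layers.dense(X, 100, activation=tf.nn.elu,
                         kernel_initializer=he_init, name='hidden_elu_he')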

Using tf.layers.batch_normalization

Function signature

def batch_normalization(inputs,
                    axis=-1,
                    momentum=0.99,
                    epsilon=1e-3,
                    center=True,
                    scale=True,
                    beta_initializer=init_ops.zeros_initializer(),
                    gamma_initializer=init_ops.ones_initializer(),
                    moving_mean_initializer=init_ops.zeros_initializer(),
                    moving_variance_initializer=init_ops.ones_initializer(),
                    beta_regularizer=None,
                    gamma_regularizer=None,
                    beta_constraint=None,
                    gamma_constraint=None,
                    training=False,
                    trainable=True,
                    name=None,
                    reuse=None,
                    renorm=False,
                    renorm_clipping=None,
                    renorm_momentum=0.99,
                    fused=None,
                    virtual_batch_size=None,
                    adjustment=None):
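
Of these parameters, the ones that usually matter are momentum (the decay used when updating the moving mean and variance), epsilon (the small constant added to the variance before the square root), and training (whether to normalize with batch statistics or with the moving statistics). A minimal call might look like the following sketch; the placeholder shapes and the momentum value are chosen arbitrarily for illustration:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=(None, 100), name='inputs')
training = tf.placeholder_with_default(False, shape=(), name='training')
bn_out = tf.layers.batch_normalization(inputs, momentum=0.9, epsilon=1e-3, training=training)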

Usage notes

(1) Using batch_normalization takes three steps:

a. Set the activation function of the layer (convolutional or dense) to None
b. Apply batch_normalization
c. Apply the activation function

Example:
inputs = tf.layers.dense(inputs, self.n_neurons,
                         kernel_initializer=self.initializer,
                         name='hidden%d' % (layer + 1))
if self.batch_normal_momentum:
    inputs = tf.layers.batch_normalization(inputs, momentum=self.batch_normal_momentum,
                                           training=self._training)

inputs = self.activation(inputs, name='hidden%d_out' % (layer + 1))

(2) Set the training parameter to True during training and to False at test time, and pay particular attention to the use of update_ops:

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
These update ops must be run at every training step, e.g. with sess.run(update_ops).
Alternatively:
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)

A simple test on the MNIST dataset

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
mnist = input_data.read_data_sets('MNIST_data',one_hot=True)
x_train,y_train = mnist.train.images,mnist.train.labels
x_test,y_test = mnist.test.images,mnist.test.labels
# Output of read_data_sets:
# Extracting MNIST_data\train-images-idx3-ubyte.gz
# Extracting MNIST_data\train-labels-idx1-ubyte.gz
# Extracting MNIST_data\t10k-images-idx3-ubyte.gz
# Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
he_init = tf.contrib.layers.variance_scaling_initializer()
def dnn(inputs,n_hiddens=1,n_neurons=100,initializer=he_init,activation=tf.nn.elu,batch_normalization=None,training=None):
    for layer in range(n_hiddens):
        inputs = tf.layers.dense(inputs,n_neurons,kernel_initializer=initializer,name = 'hidden%d'%(layer+1))
        if batch_normalization is not None:   
            inputs = tf.layers.batch_normalization(inputs,momentum=batch_normalization,training=training)
        inputs = activation(inputs,name = 'hidden%d_out'%(layer+1))
    return inputs
tf.reset_default_graph()
n_inputs = 28*28
n_hidden = 100
n_outputs = 10

X = tf.placeholder(tf.float32,shape=(None,n_inputs),name='X')
Y = tf.placeholder(tf.float32,shape=(None,n_outputs),name='Y')  # one-hot labels must be float for the cross-entropy op

training = tf.placeholder_with_default(False,shape=(),name='training')
dnn_outputs = dnn(X,batch_normalization=0.9,training=training)  # enable batch normalization with momentum 0.9

logits = tf.layers.dense(dnn_outputs,n_outputs,kernel_initializer = he_init,name='logits')
y_proba = tf.nn.softmax(logits,name='y_proba')
xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=Y,logits=logits)  # pass raw logits, not the softmax output
loss = tf.reduce_mean(xentropy,name='loss')
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # moving mean/variance updates
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

correct = tf.equal(tf.argmax(Y,1),tf.argmax(y_proba,1))
accuracy = tf.reduce_mean(tf.cast(correct,tf.float32))

epoches = 20
batch_size = 100
np.random.seed(42)

init = tf.global_variables_initializer()
rnd_index = np.random.permutation(len(x_train))
n_batches = len(x_train) // batch_size
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(epoches):       
        for batch_index in np.array_split(rnd_index,n_batches):
            x_batch,y_batch = x_train[batch_index],y_train[batch_index]
            feed_dict = {X:x_batch,Y:y_batch,training:True}
            sess.run(train_op,feed_dict=feed_dict)
        loss_val,accuracy_val = sess.run([loss,accuracy],feed_dict={X:x_test,Y:y_test,training:False})
        print('epoch:{},loss:{},accuracy:{}'.format(epoch,loss_val,accuracy_val))

