TensorFlow: Adding Checkpoints to a Model


1. Checkpoints

Saving a model is not limited to the moment training finishes; it should also happen during training. A TensorFlow training run can be interrupted unexpectedly, and we naturally want to keep the parameters learned so far; otherwise the next run would have to start from scratch.

Saving the model during training in this way is conventionally called saving a checkpoint.

2. Adding Checkpoints

By adding checkpoints we produce checkpoint files that can be loaded later, and we can control how many of them are kept with the Saver argument max_to_keep. For example, max_to_keep=1 keeps at most one checkpoint file (the code below uses max_to_keep=15, so the 15 most recent are kept). When saving, pass the current epoch as global_step, as in the following code.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

train_x = np.linspace(-5, 3, 50)
train_y = train_x * 5 + 10 + np.random.random(50) * 10 - 5

plt.plot(train_x, train_y, 'r.')
plt.grid(True)
plt.show()

tf.reset_default_graph()

X = tf.placeholder(dtype=tf.float32)
Y = tf.placeholder(dtype=tf.float32)

w = tf.Variable(tf.random.truncated_normal([1]), name='Weight')
b = tf.Variable(tf.random.truncated_normal([1]), name='bias')

z = tf.multiply(X, w) + b

cost = tf.reduce_mean(tf.square(Y - z))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()

training_epochs = 20
display_step = 2


saver = tf.train.Saver(max_to_keep=15)
savedir = "model/"


if __name__ == '__main__':
    with tf.Session() as sess:
        sess.run(init)
        loss_list = []
        for epoch in range(training_epochs):
            for (x, y) in zip(train_x, train_y):
                sess.run(optimizer, feed_dict={X: x, Y: y})

            if epoch % display_step == 0:
                # Evaluate the loss over the full training set rather than
                # only the last sample of the epoch.
                loss = sess.run(cost, feed_dict={X: train_x, Y: train_y})
                loss_list.append(loss)
                print('Epoch:', epoch, ' Loss:', loss)

            w_, b_ = sess.run([w, b])  # fetching variables needs no feed_dict

            # Save a checkpoint each epoch; global_step appends "-{epoch}" to the filename.
            saver.save(sess, savedir + "linear.cpkt", global_step=epoch)

        print(" Finished ")
        print("W: ", w_, " b: ", b_, " loss: ", loss)
        plt.plot(train_x, train_x * w_ + b_, 'g-', train_x, train_y, 'r.')
        plt.grid(True)
        plt.show()

    load_epoch = 10

    with tf.Session() as sess2:
        sess2.run(tf.global_variables_initializer())
        saver.restore(sess2, savedir + "linear.cpkt-" + str(load_epoch))
        print(sess2.run([w, b]))

In the code above, saver.save(sess, savedir + "linear.cpkt", global_step=epoch) writes the current parameters to a checkpoint file. Had we created the saver with tf.train.Saver(max_to_keep=1), only a single checkpoint file would be kept, so each newly saved model would replace the previous one; with max_to_keep=15 as above, the 15 most recent checkpoints survive.
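The max_to_keep retention policy can be illustrated without running TensorFlow at all. Below is a minimal plain-Python sketch of the behavior (kept_checkpoints is a hypothetical helper for illustration, not part of the TensorFlow API; the filename pattern mirrors the example above):

```python
def kept_checkpoints(epochs, max_to_keep):
    """Sketch of Saver's max_to_keep policy: saver.save(..., global_step=e)
    writes files named "linear.cpkt-{e}", and only the newest max_to_keep
    checkpoint files remain on disk."""
    names = ["linear.cpkt-%d" % e for e in range(epochs)]
    return names[-max_to_keep:]

print(kept_checkpoints(5, 1))   # only the most recent checkpoint survives
```

With max_to_keep=15 and 20 epochs, the files for epochs 0 through 4 are deleted as newer ones are written.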

# Method 1: read the checkpoint index file in the directory.
ckpt = tf.train.get_checkpoint_state(savedir)
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess2, ckpt.model_checkpoint_path)

# Method 2: resolve the most recent checkpoint path directly.
kpt = tf.train.latest_checkpoint(savedir)
if kpt is not None:
    saver.restore(sess2, kpt)

Either of the two snippets above loads a checkpoint file; tf.train.latest_checkpoint(savedir) resolves the path of the most recent checkpoint. We can also save checkpoints only at chosen training steps, for example on every epoch that is a multiple of 5.
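To save only on epochs that are multiples of 5, gate the saver.save call on the epoch number. A plain-Python sketch of the selection logic (epochs_to_save is a hypothetical helper used for illustration):

```python
def epochs_to_save(training_epochs, save_every=5):
    """Epochs on which `saver.save(sess, path, global_step=epoch)` would run
    if the training loop only saves when epoch % save_every == 0."""
    return [e for e in range(training_epochs) if e % save_every == 0]

print(epochs_to_save(20))  # [0, 5, 10, 15]
```

In the training loop this simply becomes `if epoch % 5 == 0: saver.save(...)`.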

3. A Simpler Way to Save Checkpoints

Checkpoints can also be saved with less code via tf.train.MonitoredTrainingSession(), which handles saving and loading checkpoint files automatically. Unlike the approach above, it saves checkpoints based on wall-clock time: the save_checkpoint_secs argument sets how many seconds pass between saves.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

train_x = np.linspace(-5, 3, 50)
train_y = train_x * 5 + 10 + np.random.random(50) * 10 - 5

# plt.plot(train_x, train_y, 'r.')
# plt.grid(True)
# plt.show()

tf.reset_default_graph()

X = tf.placeholder(dtype=tf.float32)
Y = tf.placeholder(dtype=tf.float32)

w = tf.Variable(tf.random.truncated_normal([1]), name='Weight')
b = tf.Variable(tf.random.truncated_normal([1]), name='bias')

z = tf.multiply(X, w) + b

cost = tf.reduce_mean(tf.square(Y - z))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()

training_epochs = 30
display_step = 2


global_step = tf.train.get_or_create_global_step()

step = tf.assign_add(global_step, 1)

saver = tf.train.Saver()

savedir = "check-point/"

if __name__ == '__main__':
    # MonitoredTrainingSession creates and initializes variables itself (and
    # resumes from checkpoint_dir if a checkpoint already exists), so no
    # explicit sess.run(init) is needed here.
    with tf.train.MonitoredTrainingSession(checkpoint_dir=savedir + 'linear.cpkt', save_checkpoint_secs=5) as sess:
        loss_list = []
        for epoch in range(training_epochs):
            for (x, y) in zip(train_x, train_y):
                sess.run(optimizer, feed_dict={X: x, Y: y})

            if epoch % display_step == 0:
                # Evaluate the loss over the full training set rather than
                # only the last sample of the epoch.
                loss = sess.run(cost, feed_dict={X: train_x, Y: train_y})
                loss_list.append(loss)
                print('Epoch:', epoch, ' Loss:', loss)

            w_, b_ = sess.run([w, b])  # fetching variables needs no feed_dict
            # Advance global_step once per epoch; MonitoredTrainingSession
            # requires it to increase, otherwise checkpoint saving fails.
            sess.run(step)

        print(" Finished ")
        print("W: ", w_, " b: ", b_, " loss: ", loss)
        plt.plot(train_x, train_x * w_ + b_, 'g-', train_x, train_y, 'r.')
        plt.grid(True)
        plt.show()

    load_epoch = 10

    with tf.Session() as sess2:
        sess2.run(tf.global_variables_initializer())

        # saver.restore(sess2, savedir + 'linear.cpkt-' + str(load_epoch))

        # cpkt = tf.train.get_checkpoint_state(savedir)
        # if cpkt and cpkt.model_checkpoint_path:
        #     saver.restore(sess2, cpkt.model_checkpoint_path)
        #
        # checkpoint_dir above was savedir + 'linear.cpkt', so look there.
        kpt = tf.train.latest_checkpoint(savedir + 'linear.cpkt')
        if kpt is not None:
            saver.restore(sess2, kpt)

        print(sess2.run([w, b]))

In the code above we configured a checkpoint to be saved every 5 seconds of training; the default interval is 10 minutes. This time-based mode is better suited to training complex models on large datasets. Note that when using this method you must define a global_step variable and increment it after each batch (or sample); otherwise an error will be raised.
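The time-based save policy itself is simple to sketch in plain Python. TimedSaver below is a hypothetical illustration of what save_checkpoint_secs means, not how MonitoredTrainingSession is actually implemented:

```python
class TimedSaver:
    """Trigger a save whenever at least `secs` seconds have elapsed
    since the last save (the first call always saves)."""
    def __init__(self, secs=5):
        self.secs = secs
        self.last = None  # timestamp of the last save, None before the first

    def should_save(self, now):
        if self.last is None or now - self.last >= self.secs:
            self.last = now
            return True
        return False

ts = TimedSaver(secs=5)
print([t for t in range(13) if ts.should_save(t)])  # [0, 5, 10]
```

Because saves are driven by elapsed time rather than step count, a slow model may save many checkpoints per epoch while a fast one saves only occasionally.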

 

