How to Save and Load TensorFlow Models
Saving a model
import tensorflow as tf

w1 = tf.Variable(tf.constant(2.0, shape=[1]), name="w1-name")
w2 = tf.Variable(tf.constant(3.0, shape=[1]), name="w2-name")
a = tf.placeholder(dtype=tf.float32, name="a-name")
b = tf.placeholder(dtype=tf.float32, name="b-name")
y = a * w1 + b * w2

init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    print(a)  # Tensor("a-name:0", dtype=float32)
    print(b)  # Tensor("b-name:0", dtype=float32)
    print(y)  # Tensor("add:0", dtype=float32)
    print(sess.run(y, feed_dict={a: 10, b: 10}))
    saver.save(sess, "./model/model.ckpt")
In this code, saver.save writes the TensorFlow model to model/model.ckpt. Since the path is given as "model/model.ckpt", the model is saved into the model folder inside the directory the program runs from.
TensorFlow models are saved in files with the .ckpt suffix. After saving, the model folder actually contains three files, because TensorFlow stores the structure of the computation graph separately from the values of the parameters on that graph:
- model.ckpt.meta stores the structure of the TensorFlow computation graph, which can be thought of as the architecture of the network
- model.ckpt stores the value of every variable in the TensorFlow program
- checkpoint lists all the model files in the directory
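As a quick sanity check, the variables stored in a checkpoint can be listed without rebuilding any graph. A minimal sketch, assuming TensorFlow 1.x and the ./model/model.ckpt path from the example above:

import tensorflow as tf

# Open the checkpoint written by saver.save above
reader = tf.train.NewCheckpointReader("./model/model.ckpt")

# Map of variable name -> shape for every saved variable
for name in reader.get_variable_to_shape_map():
    print(name, reader.get_tensor(name))
# Should print something like: w1-name [2.] and w2-name [3.]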

Loading a model: only the variables are loaded, but the graph structure still has to be redefined
import tensorflow as tf

# Declare the variables the same way as in the code that saved the model;
# rw1 and rw2 do not need to be initialized
rw1 = tf.Variable(tf.constant(2.0, shape=[1]), name="w1-name")
rw2 = tf.Variable(tf.constant(3.0, shape=[1]), name="w2-name")

# Redefine the graph structure
result = 10 * rw1 + 10 * rw2

saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess, "./model/model.ckpt")
    print(sess.run(result))
The tf.train.Saver class also supports renaming variables when saving or loading.
import tensorflow as tf

# The declared variable names differ from the names in the saved model
rw1 = tf.Variable(tf.constant(2.0, shape=[1]), name="rw1-name")
rw2 = tf.Variable(tf.constant(3.0, shape=[1]), name="rw2-name")

# Redefine the graph structure
result = 10 * rw1 + 10 * rw2

# Declaring the Saver directly would fail with a "variable not found" error.
# Instead, pass a dict that renames variables: {"name in the saved model": variable to load into}
# Here the variable saved under the name w1-name is loaded into rw1 (whose name is rw1-name)
saver = tf.train.Saver({"w1-name": rw1, "w2-name": rw2})

with tf.Session() as sess:
    saver.restore(sess, "./model/model.ckpt")
    print(sess.run(result))
Loading a model: no need to redefine the graph structure
import tensorflow as tf

saver = tf.train.import_meta_graph("./model/model.ckpt.meta")
graph = tf.get_default_graph()

# Retrieve the variables by their tensor names
a = graph.get_tensor_by_name("a-name:0")
b = graph.get_tensor_by_name("b-name:0")
y = graph.get_tensor_by_name("add:0")

with tf.Session() as sess:
    saver.restore(sess, "./model/model.ckpt")
    print(sess.run(y, feed_dict={a: 10, b: 10}))
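If the exact tensor names are unknown, the operations of the restored graph can be enumerated and the names read off from there; a tensor name is the operation name plus an output index, e.g. "a-name:0". A small sketch:

import tensorflow as tf

tf.train.import_meta_graph("./model/model.ckpt.meta")
graph = tf.get_default_graph()

# Print every operation in the imported graph; append ":0" to an
# operation's name to refer to its first output tensor
for op in graph.get_operations():
    print(op.name)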
convert_variables_to_constants
# convert_variables_to_constants folds the variables in the graph and their
# current values into constants, so the whole model fits in a single file
import tensorflow as tf
from tensorflow.python.framework import graph_util

v1 = tf.Variable(tf.constant(1.0, shape=[1]), name="v1")
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name="v2")
result = v1 + v2

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Export the GraphDef part of the current graph, i.e. the computation
    # that leads from the input layer to the output layer
    graph_def = tf.get_default_graph().as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(sess, graph_def, ['add'])
    with tf.gfile.GFile("Model/combined_model.pb", 'wb') as f:
        f.write(output_graph_def.SerializeToString())

# Load the model that contains the variables and their values
import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session() as sess:
    model_filename = "Model/combined_model.pb"
    with gfile.FastGFile(model_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    result = tf.import_graph_def(graph_def, return_elements=["add:0"])
    print(sess.run(result))  # [array([ 3.], dtype=float32)]
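If the frozen graph contains placeholders, tf.import_graph_def can remap them onto new tensors through its input_map argument. A hedged sketch, assuming a hypothetical Model/frozen_model.pb frozen from the earlier graph whose placeholders were named a-name and b-name:

import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session() as sess:
    # Hypothetical frozen graph containing placeholders a-name and b-name
    with gfile.FastGFile("Model/frozen_model.pb", 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # Substitute constants of our choosing for the graph's placeholders
    a = tf.constant(10.0)
    b = tf.constant(10.0)
    y, = tf.import_graph_def(graph_def,
                             input_map={"a-name:0": a, "b-name:0": b},
                             return_elements=["add:0"])
    print(sess.run(y))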
Two ways to save/load a TensorFlow model
Before putting an algorithmic model into production, we first have to save the trained model. TensorFlow saves models differently from sklearn. sklearn is direct: the dump and load methods of sklearn.externals.joblib are all it takes to save a model and load it for use. TensorFlow, because it has the concepts of graph and operation, makes saving and loading somewhat more involved.
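For comparison, this is roughly all sklearn needs; a sketch assuming an older sklearn where joblib still lives under sklearn.externals (recent versions use import joblib directly):

from sklearn.externals import joblib  # in recent sklearn: import joblib
from sklearn.linear_model import LogisticRegression
import numpy as np

# Fit any estimator
clf = LogisticRegression()
clf.fit(np.array([[0.0], [1.0]]), np.array([0, 1]))

joblib.dump(clf, "model.pkl")    # save
clf2 = joblib.load("model.pkl")  # load and use directly
print(clf2.predict(np.array([[0.5]])))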
1. The basic method
Search online for how to save a TensorFlow model and most of what you find is the basic method:
Saving
- define the variables
- save with saver.save()
Loading
- define the variables
- load with saver.restore()
For example, the saving code looks like this:
import tensorflow as tf
import numpy as np

W = tf.Variable([[1, 1, 1], [2, 2, 2]], dtype=tf.float32, name='w')
b = tf.Variable([[0, 1, 2]], dtype=tf.float32, name='b')

# tf.initialize_all_variables() is deprecated; use the current name
init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    save_path = saver.save(sess, "save/model.ckpt")
The loading code is as follows:
import tensorflow as tf
import numpy as np

W = tf.Variable(tf.truncated_normal(shape=(2, 3)), dtype=tf.float32, name='w')
b = tf.Variable(tf.truncated_normal(shape=(1, 3)), dtype=tf.float32, name='b')

saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess, "save/model.ckpt")
The inconvenience of this method is that, to use the model, you must define its structure all over again and then load the values into the correspondingly named variables. Often we would rather read a single file and use the model directly instead of redefining it, which is what the second method provides.
2. Loading without redefining the network structure
tf.train.import_meta_graph
import_meta_graph(
    meta_graph_or_file,
    clear_devices=False,
    import_scope=None,
    **kwargs
)
This method loads every node of the saved graph from the file into the current default graph and returns a saver. In other words, saving stores not only the variable values but also the nodes of the corresponding graph, so the structure of the model is preserved as well.
For example, if we want to retrieve the tensor y that computes the final prediction, we should add it to a collection during the training phase. The code is as follows:
Saving
### Define the model
input_x = tf.placeholder(tf.float32, shape=(None, in_dim), name='input_x')
input_y = tf.placeholder(tf.float32, shape=(None, out_dim), name='input_y')

w1 = tf.Variable(tf.truncated_normal([in_dim, h1_dim], stddev=0.1), name='w1')
b1 = tf.Variable(tf.zeros([h1_dim]), name='b1')
w2 = tf.Variable(tf.zeros([h1_dim, out_dim]), name='w2')
b2 = tf.Variable(tf.zeros([out_dim]), name='b2')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

hidden1 = tf.nn.relu(tf.matmul(input_x, w1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)

### Define the prediction target
y = tf.nn.softmax(tf.matmul(hidden1_drop, w2) + b2)

# Create the saver
saver = tf.train.Saver(...variables...)

# Add y to a collection so it can be retrieved at prediction time
tf.add_to_collection('pred_network', y)

sess = tf.Session()
for step in range(1000000):  # xrange in Python 2
    sess.run(train_op)
    if step % 1000 == 0:
        # Save a checkpoint; by default this also exports a meta_graph
        # named 'my-model-{global_step}.meta'
        saver.save(sess, 'my-model', global_step=step)
Loading
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')
    new_saver.restore(sess, 'my-save-dir/my-model-10000')
    # tf.get_collection() returns a list; here we only need its first element
    y = tf.get_collection('pred_network')[0]

    graph = tf.get_default_graph()

    # Because y depends on placeholders, sess.run(y) still needs the actual
    # samples to predict on, plus the other parameters, to fill those
    # placeholders in; the handles come from the graph's get_operation_by_name method.
    input_x = graph.get_operation_by_name('input_x').outputs[0]
    keep_prob = graph.get_operation_by_name('keep_prob').outputs[0]

    # Use y for prediction
    sess.run(y, feed_dict={input_x: ...., keep_prob: 1.0})
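An equivalent way to obtain the same handles is graph.get_tensor_by_name, remembering the ":0" output suffix; a minimal sketch:

# Equivalent to the get_operation_by_name(...).outputs[0] calls above
input_x = graph.get_tensor_by_name('input_x:0')
keep_prob = graph.get_tensor_by_name('keep_prob:0')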
A complete example
save.py
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets("data", one_hot=True)

# Parameters
learning_rate = 0.001
batch_size = 100
display_step = 10
model_path = "save/model.ckpt"

# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input], name="input_x")
y = tf.placeholder(tf.float32, [None, n_classes], name="input_y")

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Create model
def multilayer_perceptron(x, weights, biases):
    # layer1
    h1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    h1 = tf.nn.relu(h1)
    # layer2
    h2 = tf.add(tf.matmul(h1, weights['h2']), biases['b2'])
    h2 = tf.nn.relu(h2)
    # out
    out = tf.add(tf.matmul(h2, weights['out']), biases['out'])
    return out

# Construct model
logits = multilayer_perceptron(x, weights, biases)
pred = tf.nn.softmax(logits)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()

# Save the model
saver = tf.train.Saver()
tf.add_to_collection("pred", pred)
tf.add_to_collection('acc', accuracy)

with tf.Session() as sess:
    sess.run(init)
    step = 0
    while step * batch_size < 180000:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        loss, _, acc = sess.run([cost, optimizer, accuracy],
                                feed_dict={x: batch_xs, y: batch_ys})
        if step % display_step == 0:
            # step: 1790 loss: 16.9724 acc: 0.95
            print("step: ", step, "loss: ", loss, "acc: ", acc)
            saver.save(sess, save_path=model_path, global_step=step)
        step += 1
    print("Train Finish!")
checkpoint:
model_checkpoint_path: "model.ckpt-1790"
all_model_checkpoint_paths: "model.ckpt-1750"
all_model_checkpoint_paths: "model.ckpt-1760"
all_model_checkpoint_paths: "model.ckpt-1770"
all_model_checkpoint_paths: "model.ckpt-1780"
all_model_checkpoint_paths: "model.ckpt-1790"
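Rather than hard-coding "model.ckpt-1790", the newest checkpoint recorded in this file can be resolved programmatically; a small sketch:

import tensorflow as tf

# Reads the 'checkpoint' file in the given directory and returns its
# model_checkpoint_path entry, e.g. "save/model.ckpt-1790"
ckpt_path = tf.train.latest_checkpoint("save")
print(ckpt_path)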
restore.py
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# load mnist data
mnist = input_data.read_data_sets("data", one_hot=True)

with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph("save/model.ckpt-1790.meta")
    new_saver.restore(sess, "save/model.ckpt-1790")
    # tf.get_collection() returns a list; here we only need its first element
    pred = tf.get_collection("pred")[0]
    acc = tf.get_collection("acc")[0]

    # Because pred and acc depend on placeholders, sess.run(acc) still needs
    # the actual test samples, plus the other parameters, to fill those
    # placeholders in; the handles come from the graph's get_operation_by_name method.
    graph = tf.get_default_graph()
    x = graph.get_operation_by_name("input_x").outputs[0]
    y = graph.get_operation_by_name("input_y").outputs[0]

    test_xs = mnist.test.images
    test_ys = mnist.test.labels
    # test set acc: [0.91820002]
    print("test set acc: ", sess.run([acc], feed_dict={x: test_xs, y: test_ys}))
Source: https://blog.csdn.net/u011026329/article/details/79190347