TensorFlow 1.0 study notes: saving and restoring models (Saver)
Saving the parameters of a trained model so that it can be validated or tested later is something we need to do all the time. TensorFlow provides the tf.train.Saver() module for this.

To save a model, first create a Saver object, e.g.:

saver=tf.train.Saver()

When creating the Saver object, one argument that comes up a lot is max_to_keep, which sets how many checkpoints to keep. It defaults to 5 (max_to_keep=5, i.e. keep the 5 most recent checkpoints). If you want to save a checkpoint after every training epoch, set max_to_keep to None or 0 to keep them all:

saver=tf.train.Saver(max_to_keep=0)

Beyond using extra disk space, though, this is rarely useful in practice, so it is not recommended.
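The rotation behaviour of max_to_keep can be sketched in plain Python. This is only an illustration of the bookkeeping (which files survive after a series of saves), not TensorFlow's actual implementation:

```python
from collections import deque

def simulate_saves(steps, max_to_keep):
    """Simulate which checkpoint files remain after saving once per step."""
    if max_to_keep in (None, 0):           # keep every checkpoint
        return ['mnist.ckpt-%d' % s for s in steps]
    kept = deque(maxlen=max_to_keep)       # oldest checkpoints fall off the left
    for s in steps:
        kept.append('mnist.ckpt-%d' % s)
    return list(kept)

print(simulate_saves(range(1, 8), 5))      # only the 5 most recent remain
print(simulate_saves(range(1, 8), 1))      # only the last one remains
```

With max_to_keep=5 and 7 saves, checkpoints 1 and 2 are deleted and 3 through 7 remain, matching the "keep the 5 most recent" behaviour described above.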

If you only want to keep the model from the last epoch, set max_to_keep to 1:

saver=tf.train.Saver(max_to_keep=1)

Once the Saver object exists, the trained model can be saved:

saver.save(sess,'ckpt/mnist.ckpt',global_step=step)

The first argument is the session. The second sets the save path and file name. The third, global_step, appends the training step number to the file name as a suffix:

saver.save(sess, 'my-model', global_step=0)    ==> filename: 'my-model-0'
...
saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'

Here is an MNIST example:

# -*- coding: utf-8 -*-
"""
Created on Sun Jun  4 10:29:48 2017

@author: Administrator
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)

x = tf.placeholder(tf.float32, [None, 784])
y_=tf.placeholder(tf.int32,[None,])

dense1 = tf.layers.dense(inputs=x, 
                      units=1024, 
                      activation=tf.nn.relu,
                      kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                      kernel_regularizer=tf.nn.l2_loss)
dense2= tf.layers.dense(inputs=dense1, 
                      units=512, 
                      activation=tf.nn.relu,
                      kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                      kernel_regularizer=tf.nn.l2_loss)
logits= tf.layers.dense(inputs=dense2, 
                        units=10, 
                        activation=None,
                        kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                        kernel_regularizer=tf.nn.l2_loss)

loss=tf.losses.sparse_softmax_cross_entropy(labels=y_,logits=logits)
train_op=tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.int32), y_)    
acc= tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess=tf.InteractiveSession()  
sess.run(tf.global_variables_initializer())

saver=tf.train.Saver(max_to_keep=1)
for i in range(100):
  batch_xs, batch_ys = mnist.train.next_batch(100)
  sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
  val_loss,val_acc=sess.run([loss,acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
  print('epoch:%d, val_loss:%f, val_acc:%f'%(i,val_loss,val_acc))
  saver.save(sess,'ckpt/mnist.ckpt',global_step=i+1)
sess.close()

The saver.save(...) line is the model-saving code. Although a checkpoint is written after every epoch, each save overwrites the previous one (because max_to_keep=1), so only the last one survives. To save time, the save call could therefore be moved outside the loop (this only works with max_to_keep=1; otherwise it has to stay inside the loop).

In practice the last epoch is often not the one with the highest validation accuracy, so rather than always saving the last epoch, we may want to save the epoch with the best validation accuracy. A helper variable and an if statement are all it takes:

saver=tf.train.Saver(max_to_keep=1)
max_acc=0
for i in range(100):
  batch_xs, batch_ys = mnist.train.next_batch(100)
  sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
  val_loss,val_acc=sess.run([loss,acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
  print('epoch:%d, val_loss:%f, val_acc:%f'%(i,val_loss,val_acc))
  if val_acc>max_acc:
      max_acc=val_acc
      saver.save(sess,'ckpt/mnist.ckpt',global_step=i+1)
sess.close()

If we want to keep the three checkpoints with the highest validation accuracy, and also log each validation accuracy as we go, we can write the values to a txt file:

saver=tf.train.Saver(max_to_keep=3)
max_acc=0
f=open('ckpt/acc.txt','w')
for i in range(100):
  batch_xs, batch_ys = mnist.train.next_batch(100)
  sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
  val_loss,val_acc=sess.run([loss,acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
  print('epoch:%d, val_loss:%f, val_acc:%f'%(i,val_loss,val_acc))
  f.write(str(i+1)+', val_acc: '+str(val_acc)+'\n')
  if val_acc>max_acc:
      max_acc=val_acc
      saver.save(sess,'ckpt/mnist.ckpt',global_step=i+1)
f.close()
sess.close()
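The acc.txt written by the loop above uses lines of the form "epoch, val_acc: value". Finding the best epoch from the log afterwards is then a one-pass scan; a small sketch that assumes exactly that line format:

```python
def best_epoch(lines):
    """Return (epoch, val_acc) for the highest logged validation accuracy."""
    best = None
    for line in lines:
        # lines look like: "3, val_acc: 0.92"
        epoch_str, acc_str = line.split(', val_acc: ')
        epoch, acc = int(epoch_str), float(acc_str)
        if best is None or acc > best[1]:
            best = (epoch, acc)
    return best

log = ['1, val_acc: 0.81\n', '2, val_acc: 0.90\n', '3, val_acc: 0.87\n']
print(best_epoch(log))   # → (2, 0.9)
```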

 

A model is restored with the restore() function, which takes two arguments: restore(sess, save_path), where save_path is the path of the saved model. tf.train.latest_checkpoint() can fetch the most recently saved checkpoint automatically:

model_file=tf.train.latest_checkpoint('ckpt/')
saver.restore(sess,model_file)
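Under the hood, tf.train.latest_checkpoint reads the small text file named checkpoint that the Saver maintains in the save directory. A rough sketch of that lookup in plain Python (the real implementation parses a CheckpointState protobuf; the file name and the model_checkpoint_path field below match what Saver writes, but treat this as an illustration only):

```python
import os

def latest_checkpoint_sketch(ckpt_dir):
    """Return the path prefix of the most recent checkpoint, or None."""
    state_file = os.path.join(ckpt_dir, 'checkpoint')
    if not os.path.exists(state_file):
        return None
    with open(state_file) as f:
        for line in f:
            # first line looks like: model_checkpoint_path: "mnist.ckpt-100"
            if line.startswith('model_checkpoint_path:'):
                prefix = line.split(':', 1)[1].strip().strip('"')
                return os.path.join(ckpt_dir, prefix)
    return None
```

If no checkpoint file exists yet, the function returns None, which is also what tf.train.latest_checkpoint does.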

The second half of the program then becomes:

sess=tf.InteractiveSession()  
sess.run(tf.global_variables_initializer())

is_train=False
saver=tf.train.Saver(max_to_keep=3)

# training phase
if is_train:
    max_acc=0
    f=open('ckpt/acc.txt','w')
    for i in range(100):
      batch_xs, batch_ys = mnist.train.next_batch(100)
      sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
      val_loss,val_acc=sess.run([loss,acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
      print('epoch:%d, val_loss:%f, val_acc:%f'%(i,val_loss,val_acc))
      f.write(str(i+1)+', val_acc: '+str(val_acc)+'\n')
      if val_acc>max_acc:
          max_acc=val_acc
          saver.save(sess,'ckpt/mnist.ckpt',global_step=i+1)
    f.close()

# validation phase
else:
    model_file=tf.train.latest_checkpoint('ckpt/')
    saver.restore(sess,model_file)
    val_loss,val_acc=sess.run([loss,acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print('val_loss:%f, val_acc:%f'%(val_loss,val_acc))
sess.close()

The saver-related lines above are the ones that handle saving and restoring the model; a boolean flag is_train switches between the training and validation phases.

The full program:

# -*- coding: utf-8 -*-
"""
Created on Sun Jun  4 10:29:48 2017

@author: Administrator
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)

x = tf.placeholder(tf.float32, [None, 784])
y_=tf.placeholder(tf.int32,[None,])

dense1 = tf.layers.dense(inputs=x, 
                      units=1024, 
                      activation=tf.nn.relu,
                      kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                      kernel_regularizer=tf.nn.l2_loss)
dense2= tf.layers.dense(inputs=dense1, 
                      units=512, 
                      activation=tf.nn.relu,
                      kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                      kernel_regularizer=tf.nn.l2_loss)
logits= tf.layers.dense(inputs=dense2, 
                        units=10, 
                        activation=None,
                        kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                        kernel_regularizer=tf.nn.l2_loss)

loss=tf.losses.sparse_softmax_cross_entropy(labels=y_,logits=logits)
train_op=tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.int32), y_)    
acc= tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess=tf.InteractiveSession()  
sess.run(tf.global_variables_initializer())

is_train=True
saver=tf.train.Saver(max_to_keep=3)

# training phase
if is_train:
    max_acc=0
    f=open('ckpt/acc.txt','w')
    for i in range(100):
      batch_xs, batch_ys = mnist.train.next_batch(100)
      sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
      val_loss,val_acc=sess.run([loss,acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
      print('epoch:%d, val_loss:%f, val_acc:%f'%(i,val_loss,val_acc))
      f.write(str(i+1)+', val_acc: '+str(val_acc)+'\n')
      if val_acc>max_acc:
          max_acc=val_acc
          saver.save(sess,'ckpt/mnist.ckpt',global_step=i+1)
    f.close()

# validation phase
else:
    model_file=tf.train.latest_checkpoint('ckpt/')
    saver.restore(sess,model_file)
    val_loss,val_acc=sess.run([loss,acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print('val_loss:%f, val_acc:%f'%(val_loss,val_acc))
sess.close()

Reference: http://blog.csdn.net/u011500062/article/details/51728830

 

 

TensorFlow 1.0 study notes: image classification with a pre-trained model

 

Google has trained an Inception-v3 model on the large ImageNet image database, and we can use it directly for image classification.

Download: https://storage.googleapis.com/download.tensorflow.org/models/inception_dec_2015.zip

After downloading and unpacking it, you get several files.

classify_image_graph_def.pb is the trained Inception-v3 model.

imagenet_synset_to_human_label_map.txt maps class IDs to human-readable labels.
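Each line of that file maps a WordNet synset ID to a human-readable name. A minimal sketch of parsing one line, assuming the tab-separated format that the regex in the NodeLookup class below is written to handle (the example line is illustrative, not copied from the file):

```python
def parse_synset_line(line):
    """Split 'n02123045<TAB>tabby, tabby cat' into (uid, human_string)."""
    uid, human = line.rstrip('\n').split('\t', 1)
    return uid, human

print(parse_synset_line('n02123045\ttabby, tabby cat'))
```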

Take an arbitrary image, e.g. a cat photo, and classify it to see which class it belongs to.

The code is below. First, a class NodeLookup maps the softmax probability indices to labels.

Then a function create_graph() reads in the model.

Finally, the image is read and classified:

# -*- coding: utf-8 -*-

import tensorflow as tf
import numpy as np
import re
import os

model_dir='D:/tf/model/'
image='d:/cat.jpg'


# Converts class IDs to human-readable labels
class NodeLookup(object):
  def __init__(self,
               label_lookup_path=None,
               uid_lookup_path=None):
    if not label_lookup_path:
      label_lookup_path = os.path.join(
          model_dir, 'imagenet_2012_challenge_label_map_proto.pbtxt')
    if not uid_lookup_path:
      uid_lookup_path = os.path.join(
          model_dir, 'imagenet_synset_to_human_label_map.txt')
    self.node_lookup = self.load(label_lookup_path, uid_lookup_path)

  def load(self, label_lookup_path, uid_lookup_path):
    if not tf.gfile.Exists(uid_lookup_path):
      tf.logging.fatal('File does not exist %s', uid_lookup_path)
    if not tf.gfile.Exists(label_lookup_path):
      tf.logging.fatal('File does not exist %s', label_lookup_path)

    # Loads mapping from string UID to human-readable string
    proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()
    uid_to_human = {}
    p = re.compile(r'[n\d]*[ \S,]*')
    for line in proto_as_ascii_lines:
      parsed_items = p.findall(line)
      uid = parsed_items[0]
      human_string = parsed_items[2]
      uid_to_human[uid] = human_string

    # Loads mapping from string UID to integer node ID.
    node_id_to_uid = {}
    proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
    for line in proto_as_ascii:
      if line.startswith('  target_class:'):
        target_class = int(line.split(': ')[1])
      if line.startswith('  target_class_string:'):
        target_class_string = line.split(': ')[1]
        node_id_to_uid[target_class] = target_class_string[1:-2]

    # Loads the final mapping of integer node ID to human-readable string
    node_id_to_name = {}
    for key, val in node_id_to_uid.items():
      if val not in uid_to_human:
        tf.logging.fatal('Failed to locate: %s', val)
      name = uid_to_human[val]
      node_id_to_name[key] = name

    return node_id_to_name

  def id_to_string(self, node_id):
    if node_id not in self.node_lookup:
      return ''
    return self.node_lookup[node_id]

# Reads the trained Inception-v3 model and creates the graph
def create_graph():
  with tf.gfile.FastGFile(os.path.join(
      model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')


# Read the image
image_data = tf.gfile.FastGFile(image, 'rb').read()

# Create the graph
create_graph()

sess=tf.Session()
# Output tensor of the final softmax layer of the Inception-v3 model
softmax_tensor= sess.graph.get_tensor_by_name('softmax:0')
# Feed in the image data to get the softmax probabilities (a vector of shape (1, 1008))
predictions = sess.run(softmax_tensor,{'DecodeJpeg/contents:0': image_data})
#(1,1008)->(1008,)
predictions = np.squeeze(predictions)

# ID --> English string label.
node_lookup = NodeLookup()
# Take the 5 classes with the highest probability (top-5)
top_5 = predictions.argsort()[-5:][::-1]
for node_id in top_5:
  human_string = node_lookup.id_to_string(node_id)
  score = predictions[node_id]
  print('%s (score = %.5f)' % (human_string, score))
  
sess.close()

Final output:

tiger cat (score = 0.40316)
Egyptian cat (score = 0.21686)
tabby, tabby cat (score = 0.21348)
lynx, catamount (score = 0.01403)
Persian cat (score = 0.00394)
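The top-5 selection in the script is just an argsort over the squeezed probability vector. A minimal NumPy illustration with a toy 6-class vector (the values are made up for the example):

```python
import numpy as np

predictions = np.array([0.05, 0.40, 0.10, 0.02, 0.30, 0.13])

# argsort is ascending, so take the last 5 indices and reverse them
top_5 = predictions.argsort()[-5:][::-1]
print(top_5)               # → [1 4 5 2 0], class IDs with highest probability first
print(predictions[top_5])  # their scores, in descending order
```

Each index in top_5 is then passed through NodeLookup.id_to_string to print the label and score, as in the output above.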

 

Saving a model in TensorFlow and loading it for prediction (without redefining the network structure)

旭旭_哥, 2017-10-11
Below, a linear regression model is used to demonstrate saving a model and then loading it to make predictions.
Reference:
http://blog.csdn.net/thriving_fcl/article/details/71423039


Train a linear regression model and save it:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
money=np.array([[109],[82],[99], [72], [87], [78], [86], [84], [94], [57]]).astype(np.float32)
click=np.array([[11], [8], [8], [6],[ 7], [7], [7], [8], [9], [5]]).astype(np.float32)
x_test=money[0:5].reshape(-1,1)
y_test=click[0:5]
x_train=money[5:].reshape(-1,1)
y_train=click[5:]
x=tf.placeholder(tf.float32,[None,1],name='x') # named so the input can be looked up after loading
w=tf.Variable(tf.zeros([1,1]))
b=tf.Variable(tf.zeros([1]))
y=tf.matmul(x,w)+b
tf.add_to_collection('pred_network', y) # used to retrieve the prediction op after loading the model
y_=tf.placeholder(tf.float32,[None,1])
cost=tf.reduce_sum(tf.pow((y-y_),2))
train_step=tf.train.GradientDescentOptimizer(0.000001).minimize(cost)
init=tf.global_variables_initializer()
sess=tf.Session()
sess.run(init)
cost_history=[]
saver = tf.train.Saver()
for i in range(100):
    feed={x:x_train,y_:y_train}
    sess.run(train_step,feed_dict=feed)
    cost_history.append(sess.run(cost,feed_dict=feed))
# Print the final W, b and cost values
print("Prediction for x=109:", sess.run(y, feed_dict={x: [[109]]}))
print("W_Value: %f" % sess.run(w), "b_Value: %f" % sess.run(b), "cost_Value: %f" % sess.run(cost, feed_dict=feed))
saver_path = saver.save(sess, "/Users/shuubiasahi/Desktop/tensorflow/modelsave/model.ckpt",global_step=100)
print("model saved in file: ", saver_path)
Training output:
2017-10-11 23:05:12.606557: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Prediction for x=109: [[ 9.84855175]]
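As a sanity check (an aside, not part of the original post): the same five training points can be fitted in closed form with NumPy. Gradient descent above starts from zero weights with a tiny learning rate and runs only 100 steps, so its prediction (~9.85) has not yet converged to the least-squares answer (~10.1):

```python
import numpy as np

x_train = np.array([78., 86., 84., 94., 57.])
y_train = np.array([7., 7., 8., 9., 5.])

# degree-1 polynomial fit == ordinary least-squares line
w, b = np.polyfit(x_train, y_train, 1)
print(w, b)           # slope and intercept
print(w * 109 + b)    # least-squares prediction for x=109, about 10.12
```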

Notes on the saved model files:
1. About the file name passed to saver.restore(): each checkpoint written by saver.save() consists of three files, e.g. model.ckpt-100.meta, model.ckpt-100.index, model.ckpt-100.data-00000-of-00001. import_meta_graph takes the .meta file name. The weights live in model.ckpt-100.data-00000-of-00001, but passing that file name to restore() raises an error; restore() expects the common prefix, which can be obtained with tf.train.latest_checkpoint(checkpoint_dir).
2. The model's y depends on a placeholder, so sess.run() must feed it data. The placeholder therefore has to be fetched from the graph by its name, using the get_operation_by_name method.
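The prefix convention in point 1 can be made concrete with a small helper. This is a hypothetical utility written for illustration, not a TensorFlow API:

```python
def checkpoint_prefix(filename):
    """Strip a known checkpoint suffix to recover the prefix restore() expects."""
    for suffix in ('.meta', '.index'):
        if filename.endswith(suffix):
            return filename[:-len(suffix)]
    # data shards look like <prefix>.data-00000-of-00001
    if '.data-' in filename:
        return filename.split('.data-')[0]
    return filename

print(checkpoint_prefix('model.ckpt-100.data-00000-of-00001'))  # → model.ckpt-100
```

All three files of one checkpoint map back to the same prefix, which is exactly the string tf.train.latest_checkpoint returns.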


Load the trained model:
import tensorflow as tf

with tf.Session() as sess:
    new_saver=tf.train.import_meta_graph('/Users/shuubiasahi/Desktop/tensorflow/modelsave/model.ckpt-100.meta')
    new_saver.restore(sess,"/Users/shuubiasahi/Desktop/tensorflow/modelsave/model.ckpt-100")
    graph = tf.get_default_graph()
    x=graph.get_operation_by_name('x').outputs[0]
    y=tf.get_collection("pred_network")[0]
    print("Prediction for x=109:", sess.run(y, feed_dict={x: [[109]]}))

Prediction after loading the model:
2017-10-11 23:07:33.176523: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Prediction for x=109: [[ 9.84855175]]
Process finished with exit code 0
Original article: https://blog.csdn.net/luoyexuge/java/article/details/78209670

