Namespaces and variable naming in TensorFlow


1. Introduction

A comparison of the similarities and differences between tf.Variable / tf.get_variable and tf.name_scope / tf.variable_scope.

2. Summary

  • tf.Variable always creates a new variable; tf.get_variable can both create and retrieve variables
  • tf.Variable detects name collisions and resolves them automatically; tf.get_variable raises an error when reuse is not set
  • tf.name_scope has no reuse mechanism, so tf.get_variable still errors on a name collision inside it; tf.variable_scope supports reuse and, combined with tf.get_variable, enables variable sharing
  • variable names from tf.get_variable are not affected by tf.name_scope; tf.Variable names are affected by both kinds of scope

3. Code Examples

3.1 tf.Variable

tf.Variable resolves name collisions automatically by appending a suffix to the requested name.

import tensorflow as tf
a1 = tf.Variable(tf.constant(1.0, shape=[1]), name="a")
a2 = tf.Variable(tf.constant(1.0, shape=[1]), name="a")  # same name: TF renames it to "a_1"
print(a1)
print(a2)
print(a1 == a2)


###
<tf.Variable 'a:0' shape=(1,) dtype=float32_ref>
<tf.Variable 'a_1:0' shape=(1,) dtype=float32_ref>
False

3.2 tf.get_variable

tf.get_variable raises an error on a name collision when reuse is not set on the enclosing variable scope.

import tensorflow as tf
a3 = tf.get_variable("a", shape=[1], initializer=tf.constant_initializer(1.0))
a4 = tf.get_variable("a", shape=[1], initializer=tf.constant_initializer(1.0))  # raises


###
ValueError: Variable a already exists, disallowed.
Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?

3.3 tf.name_scope

tf.name_scope has no reuse mechanism: tf.get_variable names are not affected by it (and a collision still raises an error), while tf.Variable names are prefixed by it.

import tensorflow as tf
a = tf.Variable(tf.constant(1.0, shape=[1]), name="a")
with tf.name_scope('layer2'):
    a1 = tf.Variable(tf.constant(1.0, shape=[1]), name="a")
    a2 = tf.Variable(tf.constant(1.0, shape=[1]), name="a")
    a3 = tf.get_variable("a", shape=[1], initializer=tf.constant_initializer(1.0))
#     a4 = tf.get_variable("a", shape=[1], initializer=tf.constant_initializer(1.0))   # this line would raise an error
    print(a)
    print(a1)
    print(a2)
    print(a3)
    print(a1 == a2)


###
<tf.Variable 'a:0' shape=(1,) dtype=float32_ref>
<tf.Variable 'layer2/a:0' shape=(1,) dtype=float32_ref>
<tf.Variable 'layer2/a_1:0' shape=(1,) dtype=float32_ref>
<tf.Variable 'a_1:0' shape=(1,) dtype=float32_ref>
False

3.4 tf.variable_scope

tf.variable_scope can be combined with tf.get_variable to share variables. reuse defaults to None; the options are None/False, True, and tf.AUTO_REUSE:

  • reuse = None/False: tf.get_variable creates a new variable and raises an error if it already exists
  • reuse = True: tf.get_variable only retrieves existing variables and raises an error if the variable does not exist
  • reuse = tf.AUTO_REUSE: tf.get_variable reuses the variable if it already exists, otherwise creates it
import tensorflow as tf
with tf.variable_scope('layer1', reuse=tf.AUTO_REUSE):
    a1 = tf.Variable(tf.constant(1.0, shape=[1]), name="a")
    a2 = tf.Variable(tf.constant(1.0, shape=[1]), name="a")
    a3 = tf.get_variable("a", shape=[1], initializer=tf.constant_initializer(1.0))
    a4 = tf.get_variable("a", shape=[1], initializer=tf.constant_initializer(1.0))
    print(a1)
    print(a2)
    print(a1 == a2)
    print(a3)
    print(a4)
    print(a3 == a4)


###
<tf.Variable 'layer1_1/a:0' shape=(1,) dtype=float32_ref>
<tf.Variable 'layer1_1/a_1:0' shape=(1,) dtype=float32_ref>
False
<tf.Variable 'layer1/a_2:0' shape=(1,) dtype=float32_ref>
<tf.Variable 'layer1/a_2:0' shape=(1,) dtype=float32_ref>
True
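The example above uses reuse=tf.AUTO_REUSE; the fetch-only reuse=True case can be sketched as below. This is a minimal sketch: the tf.compat.v1 import and the disable_eager_execution() call are assumptions for running this TF 1.x-style code on TensorFlow 2.x (on TensorFlow 1.x, import tensorflow as tf directly).

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # required on TF 2.x for graph-mode variable scopes

# First scope entry creates the variable.
with tf.variable_scope('layer3'):
    w = tf.get_variable("w", shape=[1], initializer=tf.constant_initializer(1.0))

# reuse=True: only fetch existing variables; requesting a new name would raise.
with tf.variable_scope('layer3', reuse=True):
    w2 = tf.get_variable("w", shape=[1])   # fetches the existing 'layer3/w'
    # tf.get_variable("b", shape=[1])      # would raise: 'layer3/b' does not exist

print(w is w2)    # True: the same Variable object is returned
print(w2.name)    # layer3/w:0

Because reuse=True never creates variables, it is useful for catching typos in shared-weight code: a misspelled name fails loudly instead of silently allocating a second variable.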
