TensorFlow variable scope


An example

In TensorFlow, variables typically hold the model's parameters. Once a model becomes complex, sharing those variables can get very messy.

The official documentation gives an example: when building the filters for two convolutional layers, a fresh set of filter variables is created every time an image is fed through the model, whereas we want all images to share the same filters. There should be only four variables in total: conv1_weights, conv1_biases, conv2_weights, and conv2_biases.
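To make the problem concrete, here is a minimal sketch (dummy inputs and assumed shapes, not code from the original guide) of the naive tf.Variable approach: every call to my_image_filter() creates a fresh set of weights and biases instead of sharing them.

import tensorflow as tf

def my_image_filter(input_images):
    # Every call creates brand-new variables instead of reusing existing ones.
    conv1_weights = tf.Variable(tf.random_normal([5, 5, 32, 32]), name="conv1_weights")
    conv1_biases = tf.Variable(tf.zeros([32]), name="conv1_biases")
    conv1 = tf.nn.conv2d(input_images, conv1_weights,
        strides=[1, 1, 1, 1], padding='SAME')
    relu1 = tf.nn.relu(conv1 + conv1_biases)

    conv2_weights = tf.Variable(tf.random_normal([5, 5, 32, 32]), name="conv2_weights")
    conv2_biases = tf.Variable(tf.zeros([32]), name="conv2_biases")
    conv2 = tf.nn.conv2d(relu1, conv2_weights,
        strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv2 + conv2_biases)

# Two calls create 8 variables in the graph, not 4 shared ones.
image1 = tf.zeros([1, 28, 28, 32])  # dummy inputs; the shapes are assumptions
image2 = tf.zeros([1, 28, 28, 32])
result1 = my_image_filter(image1)
result2 = my_image_filter(image2)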

The usual workaround is to make these variables global: create them in one place and pass them to every piece of code that needs them. The problem is that this breaks encapsulation. The variables have to be documented so that other code files can reference them, and whenever the code that creates them changes, every caller may have to change as well.
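One way this looks in practice (a sketch of the pass-the-variables-around style; variables_dict and the reuse of image1/image2 from above are illustrative) is to create the variables once and thread them through every call:

variables_dict = {
    "conv1_weights": tf.Variable(tf.random_normal([5, 5, 32, 32]), name="conv1_weights"),
    "conv1_biases": tf.Variable(tf.zeros([32]), name="conv1_biases"),
    "conv2_weights": tf.Variable(tf.random_normal([5, 5, 32, 32]), name="conv2_weights"),
    "conv2_biases": tf.Variable(tf.zeros([32]), name="conv2_biases"),
}

def my_image_filter(input_images, variables_dict):
    conv1 = tf.nn.conv2d(input_images, variables_dict["conv1_weights"],
        strides=[1, 1, 1, 1], padding='SAME')
    relu1 = tf.nn.relu(conv1 + variables_dict["conv1_biases"])
    conv2 = tf.nn.conv2d(relu1, variables_dict["conv2_weights"],
        strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv2 + variables_dict["conv2_biases"])

# Both calls now share the same four variables, but every caller has to know
# about variables_dict, which is what breaks encapsulation.
result1 = my_image_filter(image1, variables_dict)
result2 = my_image_filter(image2, variables_dict)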

Another way to preserve encapsulation is to wrap the model in a class.
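For example (a hypothetical sketch, just to illustrate the idea), a class can own its variables, creating them once in the constructor and reusing them on every call:

class ConvReluLayer(object):
    """Owns its weights and biases, so every call reuses the same variables."""
    def __init__(self, kernel_shape, bias_shape, name):
        self.weights = tf.Variable(tf.random_normal(kernel_shape), name=name + "_weights")
        self.biases = tf.Variable(tf.zeros(bias_shape), name=name + "_biases")

    def __call__(self, x):
        conv = tf.nn.conv2d(x, self.weights, strides=[1, 1, 1, 1], padding='SAME')
        return tf.nn.relu(conv + self.biases)

conv1 = ConvReluLayer([5, 5, 32, 32], [32], "conv1")
conv2 = ConvReluLayer([5, 5, 32, 32], [32], "conv2")
result1 = conv2(conv1(image1))  # the same variables...
result2 = conv2(conv1(image2))  # ...are reused for both images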

TensorFlow, however, provides a dedicated mechanism for sharing variables: variable scope. It revolves around two main functions:

tf.get_variable(<name>, <shape>, <initializer>): creates or returns a variable with the given name
tf.variable_scope(<scope_name>): manages the namespace applied to the names passed to tf.get_variable()

  

In the code below, tf.get_variable() creates two variables named weights and biases.

def conv_relu(input, kernel_shape, bias_shape):
    # Create variable named "weights".
    weights = tf.get_variable("weights", kernel_shape,
        initializer=tf.random_normal_initializer())
    # Create variable named "biases".
    biases = tf.get_variable("biases", bias_shape,
        initializer=tf.constant_initializer(0.0))
    conv = tf.nn.conv2d(input, weights,
        strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv + biases)

  

But the model needs two convolutional layers, and tf.variable_scope() lets us tell them apart. The line with tf.variable_scope("conv1") places the first convolutional layer in the scope conv1, and inside that scope live its two variables, weights and biases.

def my_image_filter(input_images):
    with tf.variable_scope("conv1"):
        # Variables created here will be named "conv1/weights", "conv1/biases".
        relu1 = conv_relu(input_images, [5, 5, 32, 32], [32])
    with tf.variable_scope("conv2"):
        # Variables created here will be named "conv2/weights", "conv2/biases".
        return conv_relu(relu1, [5, 5, 32, 32], [32])
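Note that simply calling my_image_filter() on two images still does not work: the second call asks tf.get_variable() to create conv1/weights again, and rather than silently duplicating it, TensorFlow raises an error.

result1 = my_image_filter(image1)
result2 = my_image_filter(image2)
# Raises ValueError: Variable conv1/weights already exists ...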

  

Finally, inside the image_filters scope, the variables created for the first image are shared with the second by calling scope.reuse_variables():

with tf.variable_scope("image_filters") as scope:
    result1 = my_image_filter(image1)
    scope.reuse_variables()
    result2 = my_image_filter(image2)
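As a quick sanity check (a sketch, assuming a fresh TF 1.x default graph), both results above are backed by exactly the same four variables, all living under the image_filters prefix:

shared = [v.name for v in tf.global_variables()
          if v.name.startswith("image_filters")]
print(shared)
# ['image_filters/conv1/weights:0', 'image_filters/conv1/biases:0',
#  'image_filters/conv2/weights:0', 'image_filters/conv2/biases:0']
assert len(shared) == 4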

  

How tf.get_variable() works

tf.get_variable() works like this:

  • When tf.get_variable_scope().reuse == False, calling tf.get_variable() creates a new variable:

      with tf.variable_scope("foo"):
          v = tf.get_variable("v", [1])
      assert v.name == "foo/v:0"
    

      

  • When tf.get_variable_scope().reuse == True, calling tf.get_variable() reuses a variable that has already been created (either mode raises ValueError on a mismatch; see the sketch after this list):

      with tf.variable_scope("foo"):
          v = tf.get_variable("v", [1])
      with tf.variable_scope("foo", reuse=True):
          v1 = tf.get_variable("v", [1])
      assert v1 is v
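Both modes fail loudly on a mismatch: asking to create a name that already exists, or to reuse one that was never created, raises ValueError. A minimal sketch (hypothetical scope names, assuming a fresh TF 1.x graph):

with tf.variable_scope("create_demo"):
    v = tf.get_variable("v", [1])

# reuse == False, but "create_demo/v" already exists -> ValueError.
try:
    with tf.variable_scope("create_demo"):
        tf.get_variable("v", [1])
except ValueError as e:
    print("creation failed:", e)

# reuse == True, but "reuse_demo/w" was never created -> ValueError.
try:
    with tf.variable_scope("reuse_demo", reuse=True):
        tf.get_variable("w", [1])
except ValueError as e:
    print("reuse failed:", e)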
    

      

Variables are identified by scope/name, and as we will see below, scopes can be nested like file-system paths.

Understanding tf.variable_scope

tf.variable_scope() specifies the scope a variable belongs to, which becomes a prefix of the variable name. Scopes can be nested:

with tf.variable_scope("foo"):
    with tf.variable_scope("bar"):
        v = tf.get_variable("v", [1])
assert v.name == "foo/bar/v:0"

  

The currently active scope can be retrieved with tf.get_variable_scope(), and its reuse flag can be switched to True by calling reuse_variables(), which is very useful:

with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])
    tf.get_variable_scope().reuse_variables()
    v1 = tf.get_variable("v", [1])
assert v1 is v

  

The reuse flag of a scope defaults to False and can be set to True by calling reuse_variables(). Once it is True it cannot be switched back to False, and every sub-scope of that scope inherits reuse == True. If you do not want to reuse variables, go back up to the enclosing scope, which amounts to exiting the current one:

with tf.variable_scope("root"):
    # At start, the scope is not reusing.
    assert tf.get_variable_scope().reuse == False
    with tf.variable_scope("foo"):
        # Opened a sub-scope, still not reusing.
        assert tf.get_variable_scope().reuse == False
    with tf.variable_scope("foo", reuse=True):
        # Explicitly opened a reusing scope.
        assert tf.get_variable_scope().reuse == True
        with tf.variable_scope("bar"):
            # Now sub-scope inherits the reuse flag.
            assert tf.get_variable_scope().reuse == True
    # Exited the reusing scope, back to a non-reusing one.
    assert tf.get_variable_scope().reuse == False

  

An existing scope object can be passed as the argument when opening another scope:

with tf.variable_scope("foo") as foo_scope:
    v = tf.get_variable("v", [1])
with tf.variable_scope(foo_scope):
    w = tf.get_variable("w", [1])
with tf.variable_scope(foo_scope, reuse=True):
    v1 = tf.get_variable("v", [1])
    w1 = tf.get_variable("w", [1])
assert v1 is v
assert w1 is w

  

No matter how deeply scopes are nested, opening an existing scope object with tf.variable_scope() jumps straight back to that scope:

with tf.variable_scope("foo") as foo_scope:
    assert foo_scope.name == "foo"
with tf.variable_scope("bar"):
    with tf.variable_scope("baz") as other_scope:
        assert other_scope.name == "bar/baz"
        with tf.variable_scope(foo_scope) as foo_scope2:
            assert foo_scope2.name == "foo"  # Not changed.

  

An initializer set on a variable scope is passed down to its sub-scopes and to every tf.get_variable() call inside it, unless it is overridden somewhere in between:

with tf.variable_scope("foo", initializer=tf.constant_initializer(0.4)):
    v = tf.get_variable("v", [1])
    assert v.eval() == 0.4  # Default initializer as set above.
    w = tf.get_variable("w", [1], initializer=tf.constant_initializer(0.3))
    assert w.eval() == 0.3  # Specific initializer overrides the default.
    with tf.variable_scope("bar"):
        v = tf.get_variable("v", [1])
        assert v.eval() == 0.4  # Inherited default initializer.
    with tf.variable_scope("baz", initializer=tf.constant_initializer(0.2)):
        v = tf.get_variable("v", [1])
        assert v.eval() == 0.2  # Changed default initializer.

  

Ops are affected by the variable scope as well: it implicitly opens a name scope with the same name, so the + op below ends up named foo/add:

with tf.variable_scope("foo"):
    x = 1.0 + tf.get_variable("v", [1])
assert x.op.name == "foo/add"

  

Besides variable scopes, a name scope can be opened explicitly; name scopes affect only the names of ops, not of variables. Also, when tf.variable_scope() is given a string, it implicitly opens a name scope of the same name while creating the variable scope. In the example below, the variable v ends up as foo/v:0, while the op behind x is named foo/bar/add, because the name scope foo was opened implicitly:

with tf.variable_scope("foo"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])
        x = 1.0 + v
assert v.name == "foo/v:0"
assert x.op.name == "foo/bar/add"

  

Note: if tf.variable_scope() is passed a scope object instead of a string, it does not implicitly open a name scope with the same name.



