1. Personal understanding:
1.1. In TensorFlow, building the graph, building ops, etc. only pre-defines operations / placeholders; none of that code actually runs. Nothing executes until session.run is called.
1.2. Of all the graphs/ops we pre-define, not every one necessarily executes; only those reached by session.run actually run. The rest remain isolated nodes in the TensorFlow graph with no data flowing through them.
1.3. sess.run can execute a single op (or read a variable's value), and it can also run the op returned by a function, e.g. the init op from tf.global_variables_initializer(); it also accepts a list of fetches to run several things at once (see the sketch below).
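A minimal sketch of points 1.1-1.3 (my own illustration; assumes tensorflow 1.x): nothing runs at graph-construction time, an op that is never fetched never executes, and sess.run accepts either a single fetch or a list of fetches.

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
total = tf.add(a, b)        # just a node in the graph; not 5.0 yet
unused = tf.multiply(a, b)  # never fetched below, so it never executes

with tf.Session() as sess:
    print(sess.run(total))          # single fetch -> 5.0
    print(sess.run([a, b, total]))  # list of fetches -> [2.0, 3.0, 5.0]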
2. References:
2.1. Baidu search: "sess.run"
tensorflow學習筆記(1):sess.run()_站在巨人的肩膀上coding-CSDN博客 (https://blog.csdn.net/LOVE1055259415/article/details/80011094)
sess.run 會調用哪些方法_百度知道 (https://zhidao.baidu.com/question/1051057979950110419.html)
2.2. Baidu search: "tensor tf.print", "tensor tf.print 返回值"
tensorflow Debugger教程(二)——tf.Print()與tf.print()函數_MIss-Y的博客-CSDN博客 (https://blog.csdn.net/qq_27825451/article/details/96100496)
ZC: the traditional approach (print the return value of sess.run(...)) + tf.Print() + tf.print(); all three are contrasted in the sketch after this list
tensorflow在函數中用tf.Print輸出中間值的方法_sjtuxx_lee的博客-CSDN博客 (https://blog.csdn.net/sjtuxx_lee/article/details/84571377)
ZC: tf.Print(): "if no data flows through it, it is never executed"
tensorflow筆記 tf.Print()_thormas1996的博客-CSDN博客 (https://blog.csdn.net/thormas1996/article/details/81224405)
ZC: "Note that tf.Print() only builds an op; it prints only after being run."
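A hedged sketch (my own; assumes tensorflow 1.x, and the variable names are illustrative, not from the linked posts) contrasting the three approaches noted above: printing the return value of sess.run(...), tf.Print(), and tf.print(). It also shows the two pitfalls quoted above: tf.Print only fires when data flows through it, and both are ops that must be run.

import sys
import tensorflow as tf

x = tf.constant([1.0, 2.0])
y = x * 2.0

# (a) Traditional approach: fetch the tensor and print it in Python.
# (b) tf.Print: an identity op that prints as a side effect, but only when
#     data flows through it, so its return value must be used downstream.
y_printed = tf.Print(y, ["y is:", y])  # deprecated in late 1.x in favor of tf.print
z = y_printed + 1.0                    # wires the printing op into the data path

# (c) tf.print: in graph mode it returns an op that still has to be run.
print_op = tf.print("y:", y, output_stream=sys.stderr)

with tf.Session() as sess:
    print(sess.run(y))   # (a) prints [2. 4.]
    sess.run(z)          # (b) tf.Print fires because z depends on y_printed
    sess.run(print_op)   # (c) fetch the print op explicitly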
'''
# Test code (1)
import tensorflow as tf

state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)

init = tf.initialize_all_variables()  # deprecated; tf.global_variables_initializer() is the modern spelling
with tf.Session() as sess:
    sess.run(init)
    for _ in range(10):
        u, s = sess.run([update, state])
        print(s)
'''
### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###
'''
# Test code (2)
import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)  # returns a tensor whose value is new_val
update2 = tf.assign(state, 10000)   # never fetched, so it never executes

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        print(sess.run(update))
'''
### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###
# Test code (3)
import sys
import numpy

# batches = numpy.zeros((32,1))
batches = numpy.zeros((12, 1))
# print(batches)
# print(type(batches))
batches[0][0] = 1
# print(batches)
print(type(batches))
print("batches.shape : ", batches.shape)
print("batches[0][0].shape : ", batches[0][0].shape)
# sys.exit()

print("\n\n\n")

import tensorflow as tf
# tf.enable_eager_execution()

# Size of the RNN (dimension of the hidden nodes)
rnn_size = 512

tf.reset_default_graph()
train_graph = tf.Graph()
with train_graph.as_default():
    input_text = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    lr = tf.placeholder(tf.float32)
    # tf.print(targets,[targets])

    input_data_shape = tf.shape(input_text)
    # tf.print(input_data_shape)

    # Build the RNN cell and initialize it.
    # Stack one or more BasicLSTMCells in a MultiRNNCell; here we use 2 LSTM layers.
    cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(2)])
    initial_state = cell.zero_state(input_data_shape[0], tf.float32)
    # print("type(initial_state) : ", type(initial_state))
    initial_state = tf.identity(initial_state, name="initial_state")

    # tf.enable_eager_execution()
    # ZC: It seems that whenever tf.Print/tf.print is printing a placeholder's info,
    #     putting this line before them raises "AttributeError: 'Tensor' object has
    #     no attribute '_datatype_enum'". And if tf.Print/tf.print (their return
    #     values differ) is printing a placeholder's info, both still need sess.run!!
    # op = tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)
    op = tf.Print(input_text, ['--> input_text: ', input_text])
    # tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)

    tf.enable_eager_execution()  # ZC: this line is OK here inside the `with`; placed outside the `with`, TF complains that it must be called at program startup
    x = tf.constant([2, 3, 4, 5])
    y = tf.constant([20, 30, 40, 50])
    z = tf.add(x, y)
    tf.print("x:", x, "y:", y, "z:", z, output_stream=sys.stderr)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)
    print("type(batches) : ", type(batches))
    print("batches.shape : ", batches.shape)
    print()

    # state = sess.run(initial_state, {input_text: batches[0][0]})
    # state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches[0][0]})
    # state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches})
    state, inputDataShape, op = sess.run([initial_state, input_data_shape, op], feed_dict={input_text: batches})  # ZC: passing the feeds with or without the feed_dict keyword has the same effect

    print(">>> >>> >>> >>> >>> after sess.run(...) <<< <<< <<< <<< <<<\n")
    print("op : ", op)
    print("state.shape : ", state.shape)
    # print("state[0][0] : ")
    # print(state[0][0])
    print()
    print("inputDataShape : ", inputDataShape)
    print("type(inputDataShape) : ", type(inputDataShape))
    print("len(inputDataShape) : ", len(inputDataShape))
    print("inputDataShape.shape : ", inputDataShape.shape)
    print("inputDataShape[0] : ", inputDataShape[0])
    print()
    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)
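As a follow-up to the tf.enable_eager_execution() experiments in test code (3), a small sketch (my own; assumes TF 1.x with eager enabled at program start, or TF 2.x where eager is the default) showing that with eager execution on, tf.print runs immediately, so no sess.run is needed:

import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x: must be called at program start; in TF 2.x eager is already on

x = tf.constant([2, 3, 4, 5])
y = tf.constant([20, 30, 40, 50])
tf.print("x + y =", tf.add(x, y))  # prints [22 33 44 55] right away, no session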