1. Personal understanding:
1.1. In TensorFlow, building the graph and building ops only predefines operations and placeholders; no code actually runs at that point. Nothing executes until session.run is called.
1.2. Of all the graph nodes / ops we predefine, not every one necessarily executes: only the ones fetched by session.run actually run. The rest remain isolated nodes in the TensorFlow graph with no data flowing through them (see the sketch below).
1.3. sess.run can execute a single op (or a variable), and it can equally execute the op returned by a function call such as tf.assign.
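A minimal sketch of this deferred-execution behavior, assuming the TF 1.x API that the test code below also uses (the names a/b/c/d are illustrative only):

import tensorflow as tf

a = tf.constant(1.0)
b = tf.constant(2.0)
c = tf.add(a, b)       # only adds a node to the graph; nothing is computed yet
d = tf.multiply(a, b)  # also just a node; never fetched below, so it never runs

with tf.Session() as sess:
    print(sess.run(c))  # 3.0 -- only now does data flow through a, b and c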
2. References:
2.1. Baidu search: "sess.run"
tensorflow學習筆記(1):sess.run() (站在巨人的肩膀上coding, CSDN blog): https://blog.csdn.net/LOVE1055259415/article/details/80011094
sess.run 會調用哪些方法 (百度知道): https://zhidao.baidu.com/question/1051057979950110419.html
2.2. Baidu search: "tensor tf.print", "tensor tf.print 返回值"
tensorflow Debugger教程(二)——tf.Print()與tf.print()函數 (MIss-Y, CSDN blog): https://blog.csdn.net/qq_27825451/article/details/96100496
ZC: covers the traditional approach (printing the return value of sess.run(...)) plus tf.Print() and tf.print()
tensorflow在函數中用tf.Print輸出中間值的方法 (sjtuxx_lee, CSDN blog): https://blog.csdn.net/sjtuxx_lee/article/details/84571377
ZC: tf.Print(): "if no data flows through it, it is not executed"
tensorflow筆記 tf.Print() (thormas1996, CSDN blog): https://blog.csdn.net/thormas1996/article/details/81224405
ZC: "Note that tf.Print() only builds an op; nothing is printed until it is run." (see the sketch below)
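A compact sketch of the point the last two notes make, again assuming TF 1.x. tf.Print returns (a copy of) its first argument, so the message only fires when that tensor is actually fetched:

import tensorflow as tf

x = tf.constant([1.0, 2.0])
x = tf.Print(x, ["x is: ", x])  # just builds an op; nothing is printed here

with tf.Session() as sess:
    sess.run(x)  # the message appears only now, when data flows through the op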
3. Test code:
'''
# Test code (1)

import tensorflow as tf
state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)
init = tf.initialize_all_variables()  # deprecated alias of tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for _ in range(10):
        u, s = sess.run([update, state])
        print(s)
'''

### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###

'''
# Test code (2)

import tensorflow as tf
state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)  # returns a tensor whose value is new_val
update2 = tf.assign(state, 10000)  # never fetched, so it never executes
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        print(sess.run(update))
'''

### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###

# Test code (3)

import sys
import numpy
# batches = numpy.zeros((32,1))
batches = numpy.zeros((12, 1))
# print(batches)
# print(type(batches))
batches[0][0] = 1
# print(batches)
print(type(batches))
print("batches.shape : ", batches.shape)
print("batches[0][0].shape : ", batches[0][0].shape)
# sys.exit()
print("\n\n\n")


import tensorflow as tf

# tf.enable_eager_execution()

# RNN size (dimension of the hidden nodes)
rnn_size = 512

tf.reset_default_graph()
train_graph = tf.Graph()


with train_graph.as_default():
    input_text = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    lr = tf.placeholder(tf.float32)
    # tf.print(targets, [targets])

    input_data_shape = tf.shape(input_text)
    # tf.print(input_data_shape)
    # Build the RNN cell and initialize it:
    # stack one or more BasicLSTMCells in a MultiRNNCell; here we use 2 LSTM layers
    cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(2)])
    initial_state = cell.zero_state(input_data_shape[0], tf.float32)
    # print("type(initial_state) : ", type(initial_state))
    initial_state = tf.identity(initial_state, name="initial_state")



    # tf.enable_eager_execution()  # ZC: it seems that whenever tf.Print/tf.print is printing placeholder info, putting this line before them raises "AttributeError: 'Tensor' object has no attribute '_datatype_enum'". Whether you use tf.Print or tf.print (their return values differ), printing placeholder info always requires sess.run!!
    # op = tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)
    op = tf.Print(input_text, ['--> input_text: ', input_text])
    # tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)

    tf.enable_eager_execution()  # ZC: at this position inside the with block it is OK; placed outside the with block, TF complains that the call must go at the very start of the program
    x = tf.constant([2, 3, 4, 5])
    y = tf.constant([20, 30, 40, 50])
    z = tf.add(x, y)

    tf.print("x:", x, "y:", y, "z:", z, output_stream=sys.stderr)


with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)

    print("type(batches) : ", type(batches))
    print("batches.shape : ", batches.shape)

    print()

    # state = sess.run(initial_state, {input_text: batches[0][0]})
    # state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches[0][0]})
    # state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches})
    state, inputDataShape, op = sess.run([initial_state, input_data_shape, op], feed_dict={input_text: batches})  # ZC: passing the dict positionally or via feed_dict= makes no difference here (see the sketch after this listing)

    print(">>> >>> >>> >>> >>> after sess.run(...) <<< <<< <<< <<< <<<\n")
    print("op : ", op)
    print("state.shape : ", state.shape)
    # print("state[0][0] : ")
    # print(state[0][0])
    print()

    print("inputDataShape : ", inputDataShape)
    print("type(inputDataShape) : ", type(inputDataShape))
    print("len(inputDataShape) : ", len(inputDataShape))
    print("inputDataShape.shape : ", inputDataShape.shape)
    print("inputDataShape[0] : ", inputDataShape[0])


    print()
    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)
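On the ZC note about feed_dict: in TF 1.x, Session.run takes the feed dictionary as its second positional argument, so the two calls below are equivalent (ph and doubled are illustrative names, not from the test code):

import tensorflow as tf

ph = tf.placeholder(tf.float32)
doubled = ph * 2.0

with tf.Session() as sess:
    print(sess.run(doubled, {ph: 3.0}))            # positional feed dict -> 6.0
    print(sess.run(doubled, feed_dict={ph: 3.0}))  # keyword form, identical effect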