This post explains how TensorFlow back-propagates errors through the computational graph in order to update variables and minimize the loss function; this is done by declaring an optimizer. Once the optimizer is declared, TensorFlow uses it to work out the backpropagation terms throughout the graph. As we feed in data and minimize the loss, TensorFlow adjusts the variables in the graph accordingly.
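To make that pattern concrete before the full example, here is a minimal sketch of my own (the toy loss (w - 3)^2 and the learning rate are illustrative choices, not from the original post): declaring an optimizer produces a training op, and each run of that op performs one backpropagation update on every trainable variable the loss depends on.

import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

# A toy loss, (w - 3)^2, whose minimum is at w = 3
w = tf.Variable(0.0)
loss = tf.square(w - 3.0)

# Declaring the optimizer adds the backpropagation ops to the graph;
# minimize() returns one training op that updates every trainable
# variable the loss depends on (here, just w)
my_opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_step = my_opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_step)  # one gradient-descent update per run
    print(sess.run(w))        # approaches 3.0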
Let's start with a simple example: draw 100 samples from a normal distribution with mean 1 and standard deviation 0.1, multiply them by a variable A, and use the L2 loss. In other words, we fit the function X * A = target, where X is the 100 random numbers and the target is 10, so the optimal value of A is also 10.
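As a quick sanity check of that claim before the TensorFlow version (this NumPy snippet is my own addition; the seed is arbitrary and only for reproducibility), the closed-form least-squares solution of x * A ≈ y on the sampled data lands near 10, because the x values are centered at 1:

import numpy as np

np.random.seed(0)  # arbitrary seed, for reproducibility only
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)

# Closed-form least-squares solution of x * A ≈ y:
# A* = sum(x * y) / sum(x * x)
A_best = np.sum(x_vals * y_vals) / np.sum(x_vals * x_vals)
print(A_best)  # close to 10, since the x values are centered at 1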
The implementation is as follows:
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create the graph session
sess = tf.Session()

# Generate the data: 100 random inputs x_vals and 100 targets y_vals
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)

# Declare the x_data and y_target placeholders
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)

# Declare the variable A
A = tf.Variable(tf.random_normal(shape=[1]))

# The multiplication operation, i.e. X * A from the example
my_output = tf.multiply(x_data, A)

# Add the L2 loss function
loss = tf.square(my_output - y_target)

# Initialize all variables (global_variables_initializer replaces the
# deprecated initialize_all_variables)
init = tf.global_variables_initializer()
sess.run(init)

# Declare the optimizer; most optimization algorithms need to know the
# step size of each iteration, which is controlled by the learning rate
my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)

# Train, appending each step's loss value to the list loss_batch
loss_batch = []
for i in range(100):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    print('Step #' + str(i + 1) + ' A = ' + str(sess.run(A)))
    temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
    print('Loss = ' + str(temp_loss))
    loss_batch.append(temp_loss)

# Plot the loss recorded at each step (one random sample per step)
plt.plot(loss_batch, 'r--', label='Stochastic Loss')
plt.legend(loc='upper right', prop={'size': 11})
plt.show()

The output (the value of A at each step and the corresponding loss):
Step #1 A = [ 0.08779037]
Loss = [ 98.3597641]
Step #2 A = [ 0.48817557]
Loss = [ 90.38272095]
Step #3 A = [ 0.85985768]
Loss = [ 83.92495728]
Step #4 A = [ 1.289047]
Loss = [ 71.54370117]
.........
Step #98 A = [ 10.10386372]
Loss = [ 0.00271681]
Step #99 A = [ 10.10850525]
Loss = [ 0.01301978]
Step #100 A = [ 10.07686806]
Loss = [ 0.5048126]
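As the output shows, A converges to roughly 10. Note that the loop above feeds a single random sample per step, i.e. stochastic training. A common variant is to average the loss over a small batch of samples per step; below is a sketch of that change (the batch size of 20 is an arbitrary choice of mine, not part of the original example). The key differences are the None-shaped placeholders, which accept a whole batch at once, and tf.reduce_mean, which averages the L2 loss over the batch.

import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

sess = tf.Session()
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)

batch_size = 20  # assumed batch size, an illustrative choice

# None lets the placeholders accept a whole batch at once
x_data = tf.placeholder(shape=[None], dtype=tf.float32)
y_target = tf.placeholder(shape=[None], dtype=tf.float32)
A = tf.Variable(tf.random_normal(shape=[1]))

my_output = tf.multiply(x_data, A)
# Average the L2 loss over the batch
loss = tf.reduce_mean(tf.square(my_output - y_target))

my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)

sess.run(tf.global_variables_initializer())
for i in range(100):
    rand_index = np.random.choice(100, size=batch_size)
    sess.run(train_step, feed_dict={x_data: x_vals[rand_index],
                                    y_target: y_vals[rand_index]})
print(sess.run(A))  # converges to roughly 10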
For more on loss functions, see the earlier posts in this series: tensorflow進階篇-4 (損失函數1), tensorflow進階篇-4 (損失函數2), tensorflow進階篇-4 (損失函數3).
