A tf_upgrade_v2.exe experiment


Before the upgrade

import tensorflow as tf
import numpy as np
# create data
x_data = np.random.rand(100).astype(np.float32)  # training samples
y_data = x_data * 0.1 + 0.3  # targets; the true parameters (0.1, 0.3) and the functional form are hidden from the model. How do we know the samples follow a linear function? If we assumed a quadratic instead, could we still recover the parameters?
### create tensorflow structure start ###
Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))  # random initial parameter
biases = tf.Variable(tf.zeros([1]))
y = Weights * x_data + biases  # with random parameters, the fitted y is initially far from y_data
loss = tf.reduce_mean(tf.square(y - y_data))  # mean squared error
optimizer = tf.train.GradientDescentOptimizer(0.5)  # learning rate 0.5
### create tensorflow structure end ###
train = optimizer.minimize(loss)  # training op
init = tf.initialize_all_variables()  # variable initialization
sess = tf.Session()
sess.run(init)
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))

After the upgrade: Weights and biases start from random values, but as the iterations proceed they converge toward the true values (0.1 and 0.3), because gradient descent drives the loss toward its minimum.
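The convergence itself does not depend on TensorFlow. As a minimal sketch (not part of either script), the same mean-squared loss can be minimized with a hand-written gradient-descent loop in plain NumPy, using the same learning rate of 0.5 and the same 201 steps:

```python
import numpy as np

# Same setup as the TensorFlow scripts: hidden true parameters 0.1 and 0.3.
rng = np.random.default_rng(0)
x_data = rng.random(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Random initial parameters, like tf.random_uniform([1], -1.0, 1.0).
W = rng.uniform(-1.0, 1.0)
b = 0.0
lr = 0.5  # same learning rate as GradientDescentOptimizer(0.5)

for step in range(201):
    err = W * x_data + b - y_data
    # Gradients of the mean-squared loss with respect to W and b.
    grad_W = 2.0 * np.mean(err * x_data)
    grad_b = 2.0 * np.mean(err)
    W -= lr * grad_W
    b -= lr * grad_b

print(W, b)  # both approach the true values 0.1 and 0.3
```

This is exactly what `optimizer.minimize(loss)` does under the hood, with TensorFlow computing the two gradients automatically instead of by hand.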

import tensorflow as tf
import numpy as np

# Note: under TF2, the v1-style Session code below also needs graph mode;
# the upgrade script does not add this line itself.
tf.compat.v1.disable_eager_execution()

# create data
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3
### create tensorflow structure start ###
Weights = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
y = Weights * x_data + biases
loss = tf.reduce_mean(input_tensor=tf.square(y - y_data))
optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.5)
### create tensorflow structure end ###
train = optimizer.minimize(loss)
init = tf.compat.v1.initialize_all_variables()
sess = tf.compat.v1.Session()
sess.run(init)
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))

Comparing the two listings shows exactly what the upgrade script changed: `tf.random_uniform` becomes `tf.random.uniform`, `tf.reduce_mean` gains the explicit `input_tensor=` keyword, and the removed v1 APIs (`GradientDescentOptimizer`, `initialize_all_variables`, `Session`) are rewritten under `tf.compat.v1`.

https://blog.csdn.net/u012223913/article/details/79097297

