Simple Linear Regression with TensorFlow


Idea: pick a line y = Wx + b, randomly generate data points near that line (shown in the figure below), and have TensorFlow build a regression model that learns which W and b best fit those points.
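Written out as an equation (my notation; the original notes express this only in code), the model minimizes the mean squared error over the N generated points:

\min_{W,\,b}\ \frac{1}{N}\sum_{i=1}^{N}\left(W x_i + b - y_i\right)^2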

1) Randomly generate 1000 data points scattered around y = 0.1x + 0.3, i.e. with W = 0.1 and b = 0.3; we can then check whether the model learns these values of W and b.

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)                    # x: Gaussian noise with mean 0, std 0.55
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)   # y: fluctuates slightly around y = 0.1x + 0.3
    vectors_set.append([x1, y1])

x_data = [v[0] for v in vectors_set]   # build the coordinate lists and plot once, after the loop
y_data = [v[1] for v in vectors_set]
plt.scatter(x_data, y_data, c='r')
plt.show()

The generated data is shown in the scatter plot below.
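As a quick sanity check (my addition, not in the original notes), an ordinary least-squares fit on the generated data should already recover values near 0.1 and 0.3; np.polyfit with degree 1 returns [slope, intercept]:

slope, intercept = np.polyfit(x_data, y_data, 1)   # closed-form least-squares line fit
print("least-squares fit: W ≈", slope, ", b ≈", intercept)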

2) Build the linear regression model and learn which W and b fit the data plotted above.

W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')  # 1-D W, initialized uniformly in [-1, 1]
b = tf.Variable(tf.zeros([1]), name='b')                      # 1-D b, initialized to 0
y = W * x_data + b                                            # predicted value
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')     # mean squared error between prediction y and y_data
optimizer = tf.train.GradientDescentOptimizer(0.5)            # gradient descent with learning rate 0.5
train = optimizer.minimize(loss, name='train')                # training minimizes this loss

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))  # initial W, b, and loss
for step in range(20):   # run 20 training steps
    sess.run(train)
    print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))  # W, b, loss after each step

Printing the result of every step (below): as the iterations proceed, the trained W and b get closer and closer to 0.1 and 0.3, showing that the regression model really did learn the rule used to construct the data. The loss starts out large and gradually shrinks, showing that the model fits better and better as training proceeds.

W = [-0.9676645] b = [0.] loss = 0.45196822
W = [-0.6281831] b = [0.29385352] loss = 0.17074569
W = [-0.39535886] b = [0.29584622] loss = 0.07962803
W = [-0.23685378] b = [0.2972129] loss = 0.03739688
W = [-0.12894464] b = [0.2981433] loss = 0.017823622
W = [-0.05548081] b = [0.29877672] loss = 0.008751821
W = [-0.00546716] b = [0.29920793] loss = 0.0045472304
W = [0.02858179] b = [0.2995015] loss = 0.0025984894
W = [0.05176209] b = [0.29970136] loss = 0.0016952885
W = [0.06754307] b = [0.29983744] loss = 0.0012766734
W = [0.07828666] b = [0.29993007] loss = 0.001082654
W = [0.08560082] b = [0.29999313] loss = 0.0009927301
W = [0.09058025] b = [0.30003607] loss = 0.0009510521
W = [0.09397022] b = [0.30006528] loss = 0.00093173544
W = [0.09627808] b = [0.3000852] loss = 0.00092278246
W = [0.09784925] b = [0.30009875] loss = 0.000918633
W = [0.09891889] b = [0.30010796] loss = 0.00091670983
W = [0.0996471] b = [0.30011424] loss = 0.0009158184
W = [0.10014286] b = [0.3001185] loss = 0.00091540517
W = [0.10048037] b = [0.30012143] loss = 0.0009152137
W = [0.10071015] b = [0.3001234] loss = 0.0009151251
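The code above uses the TensorFlow 1.x graph API (tf.Session, tf.train.GradientDescentOptimizer). For reference, here is a minimal sketch of the same model in eager-mode TensorFlow 2.x (my adaptation, assuming TF >= 2.0; not part of the original course notes):

import numpy as np
import tensorflow as tf

# Regenerate the same kind of data as vectorized NumPy arrays.
x = np.random.normal(0.0, 0.55, size=1000).astype(np.float32)
y = x * 0.1 + 0.3 + np.random.normal(0.0, 0.03, size=1000).astype(np.float32)

W = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
opt = tf.keras.optimizers.SGD(learning_rate=0.5)

for step in range(20):
    with tf.GradientTape() as tape:                      # records ops for autodiff
        loss = tf.reduce_mean(tf.square(W * x + b - y))  # same MSE loss as above
    grads = tape.gradient(loss, [W, b])
    opt.apply_gradients(zip(grads, [W, b]))              # one gradient-descent step
    print("W =", W.numpy(), "b =", b.numpy(), "loss =", loss.numpy())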

Note: the above are my notes from studying Tang Yudi's (唐宇迪) TensorFlow course.

