TensorFlow Automatic Differentiation
Related post: 『TensorFlow』Part 2: Linear Fitting & Neural Network Fitting
Three simple experiments are carried out below:
- build the graph directly with TF functions such as gradients and assign, and run gradient descent by hand
- use an optimizer to compute the derivatives, then apply those derivatives to the variables
- use the optimizer directly, without ever obtaining the derivatives explicitly
Updating the parameters has to go through assign, which can also bring control-dependency issues into play; see the sketch right after this paragraph.
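For example, the two assign updates can be tied together with tf.control_dependencies so that a downstream op only runs after both of them have executed. A minimal sketch, assuming the w, b, lr, grad_w and grad_b defined in the script below (train_op is an illustrative name, not part of the original code):

# Sketch: make train_op depend on both parameter updates
update_w = tf.assign(w, w - lr * grad_w)
update_b = tf.assign(b, b - lr * grad_b)
with tf.control_dependencies([update_w, update_b]):
    # tf.no_op does nothing by itself; it only carries the control dependencies,
    # so sess.run(train_op) is guaranteed to execute both assigns first
    train_op = tf.no_op(name='train')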
# Author : Hellcat
# Time : 2/20/2018
import tensorflow as tf
tf.set_random_seed(1000)
def get_fake_data(batch_size=8):
    # synthetic data: y = 3x + 1 plus a little Gaussian noise
    x = 20 * tf.random_uniform([batch_size, 1], dtype=tf.float32)
    y = tf.multiply(x, 3) + 1 + tf.multiply(
        tf.random_normal([batch_size, 1], mean=0, stddev=0.01, dtype=tf.float32), 1)
    return x, y
x, y = get_fake_data()
w = tf.Variable(tf.random_uniform([1,1], dtype=tf.float32), name='w')
b = tf.Variable(tf.random_uniform([1,1], dtype=tf.float32), name='b')
lr = 0.001
y_pred = tf.add(tf.multiply(w,x),b)
loss = tf.reduce_mean(tf.pow(tf.multiply(0.5,(y_pred - y)),2),axis=0)
# Approach 1: compute the gradients explicitly and apply them with assign
grad_w, grad_b = tf.gradients(loss,[w,b])
train_w = tf.assign(w,tf.subtract(w,lr*grad_w))
train_b = tf.assign(b,tf.subtract(b,lr*grad_b))
train = [train_w, train_b]
# Use an optimizer
# optimizer = tf.train.GradientDescentOptimizer(lr)  # choice of optimizer & learning rate
# ## Approach 2: optimizer + explicit gradient handling
# grads_and_vars = optimizer.compute_gradients(loss, [w, b])
# train = optimizer.apply_gradients(grads_and_vars)
## Approach 3: let the optimizer minimize the loss directly
# train = optimizer.minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for ii in range(80000):
        sess.run([train])
        if ii % 1000 == 0:
            print(sess.run(w), sess.run(b))
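As a quick sanity check on the automatic differentiation itself, the result of tf.gradients can be compared against a hand-derived gradient. The following is a standalone sketch (TF 1.x assumed; x_c, y_c, w_c are illustrative names, not part of the script above). For loss = 0.5*(w*x - y)^2 the analytic derivative with respect to w is (w*x - y)*x, so with w = 1.5, x = 2, y = 7 both values should print -8.0.

# Standalone check: tf.gradients vs. the hand-derived gradient
x_c = tf.constant(2.0)
y_c = tf.constant(7.0)
w_c = tf.Variable(1.5)

loss_c = 0.5 * tf.square(w_c * x_c - y_c)
auto_grad = tf.gradients(loss_c, [w_c])[0]    # gradient from automatic differentiation
manual_grad = (w_c * x_c - y_c) * x_c         # gradient derived by hand

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([auto_grad, manual_grad]))  # both print -8.0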
PyTorch Automatic Differentiation
Because gradients accumulate across backward() calls, never forget to zero them out; a small demonstration follows.
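A tiny sketch of that accumulation behaviour (illustrative, separate from the script below): the loss is rebuilt and backward() is called three times without zeroing, so w.grad grows 3. to 6. to 9., and only zero_() resets it.

import torch as t
from torch.autograd import Variable as V

w = V(t.ones(1, 1), requires_grad=True)
for i in range(3):
    loss = (3 * w).sum()   # rebuild the graph each iteration
    loss.backward()
    print(w.grad)          # 3., then 6., then 9.: each backward() adds into .grad

w.grad.data.zero_()        # what the training loop below does at every step
print(w.grad)              # back to 0.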
import torch as t
from torch.autograd import Variable as V
import matplotlib.pyplot as plt
from IPython import display
# fix the random seed
t.manual_seed(1000)
def get_fake_data(batch_size=8):
    # synthetic data: y = 2x + 3 plus Gaussian noise
    x = t.rand(batch_size, 1) * 20
    y = x * 2 + 3 + 3 * t.randn(batch_size, 1)
    return x, y
x, y = get_fake_data()
plt.scatter(x.squeeze(), y.squeeze())
w = V(t.rand(1,1),requires_grad=True)
b = V(t.rand(1,1),requires_grad=True)
lr = 0.001
for ii in range(8000):
    x, y = get_fake_data()
    x, y = V(x), V(y)
    # print(x, y)

    y_pred = x.mm(w) + b.expand_as(x)
    loss = 0.5 * (y_pred - y) ** 2
    loss = loss.sum()              # reduce the per-sample loss vector to a scalar

    loss.backward()                # autograd fills in w.grad and b.grad

    w.data.sub_(lr * w.grad.data)  # in-place gradient descent step
    b.data.sub_(lr * b.grad.data)

    w.grad.data.zero_()            # clear the gradients, otherwise they accumulate
    b.grad.data.zero_()

    if ii % 1000 == 0:
        # plot the current fit against a fresh batch of data
        display.clear_output(wait=True)
        x = t.arange(0, 20).float().view(-1, 1)  # cast to float so mm with the float weights works
        y = x.mm(w.data) + b.data.expand_as(x)
        plt.plot(x.numpy(), y.numpy())
        x2, y2 = get_fake_data(batch_size=20)
        plt.scatter(x2, y2)
        plt.xlim(0, 20)
        plt.ylim(0, 40)
        plt.show()

print(w.data.squeeze(), b.data.squeeze())
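In the same spirit as the optimizer variants on the TensorFlow side, the manual update and the manual zeroing can be handed over to torch.optim. This is a sketch, not the original script: it reuses get_fake_data and the hyperparameters from above, with optimizer.zero_grad() and optimizer.step() replacing the explicit .data manipulation.

t.manual_seed(1000)
w = V(t.rand(1, 1), requires_grad=True)
b = V(t.rand(1, 1), requires_grad=True)
optimizer = t.optim.SGD([w, b], lr=0.001)

for ii in range(8000):
    x, y = get_fake_data()
    x, y = V(x), V(y)
    y_pred = x.mm(w) + b.expand_as(x)
    loss = (0.5 * (y_pred - y) ** 2).sum()

    optimizer.zero_grad()   # replaces w.grad.data.zero_() / b.grad.data.zero_()
    loss.backward()
    optimizer.step()        # replaces the in-place w.data.sub_(...) updates

print(w.data.squeeze(), b.data.squeeze())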
