Introduction to Gradient Descent


Outline

  • What's a Gradient?

  • What does it mean?

  • How to Search

  • AutoGrad

What's a Gradient?

  • Derivative: the rate of change in its abstract, one-dimensional form

  • Partial derivative: the rate of change along one specific axis

  • Gradient: a vector collecting all the partial derivatives

\[\nabla{f} = \left(\frac{\partial{f}}{\partial{x_1}}, \frac{\partial{f}}{\partial{x_2}}, \cdots, \frac{\partial{f}}{\partial{x_n}}\right) \]

19-梯度下降簡介-梯度圖.jpg
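
As a quick concrete check (a hypothetical example, not taken from the figure): for \(f(x_1, x_2) = x_1^2 + x_2^2\), the gradient simply collects the two partial derivatives:

\[\nabla{f} = \left(\frac{\partial{f}}{\partial{x_1}}, \frac{\partial{f}}{\partial{x_2}}\right) = (2x_1, 2x_2) \]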

What does it mean?

  • The direction of each arrow indicates the direction of the gradient
  • The magnitude (length) of each arrow indicates how fast the function increases

19-梯度下降簡介-梯度是什么.jpg
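
Continuing the hypothetical \(f(x_1, x_2) = x_1^2 + x_2^2\) from above: at \((1, 1)\) the gradient is \((2, 2)\), an arrow of length \(2\sqrt{2}\) pointing away from the minimum at the origin; at \((2, 2)\) it is \((4, 4)\) with length \(4\sqrt{2}\), so the arrows grow longer where the function increases faster.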

How to Search

  • Search in the direction opposite to the gradient, i.e., the direction of steepest descent

19-梯度下降簡介-2梯度搜索.jpg

For instance

\[\theta_{t+1}=\theta_t-\alpha_t\nabla{f(\theta_t)} \]

19-梯度下降簡介-二維梯度下降1.gif

19-梯度下降簡介-二維梯度下降2.gif
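
A minimal sketch of this update rule in plain Python, assuming the same bowl-shaped \(f(\theta) = \theta_1^2 + \theta_2^2\) and a fixed step size \(\alpha_t = 0.1\) (both choices are illustrative, not taken from the animations):

# Gradient descent on f(theta) = theta_1^2 + theta_2^2 (illustrative choice)
alpha = 0.1                      # fixed step size alpha_t
theta = [4.0, -3.0]              # arbitrary starting point theta_0
for t in range(50):
    grad = [2 * theta[0], 2 * theta[1]]                    # analytic gradient of f
    theta = [theta[i] - alpha * grad[i] for i in range(2)]
print(theta)                     # approaches the minimum at (0, 0)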

AutoGrad

  • with tf.GradientTape() as tape:

    • Build the computation graph
    • \(loss = f_\theta(x)\)
  • [w_grad] = tape.gradient(loss, [w])

import tensorflow as tf

w = tf.constant(1.)
x = tf.constant(2.)
y = x * w                 # computed before the tape exists: not recorded
with tf.GradientTape() as tape:
    tape.watch([w])       # w is a constant, so it must be watched explicitly
    y2 = x * w            # recorded on the tape
grad1 = tape.gradient(y, [w])   # y was never recorded, so there is no gradient
grad1
[None]
with tf.GradientTape() as tape:
    tape.watch([w])
    y2 = x * w
grad2 = tape.gradient(y2, [w])  # y2 was recorded: dy2/dw = x = 2
grad2
[<tf.Tensor: id=30, shape=(), dtype=float32, numpy=2.0>]
try:
    grad2 = tape.gradient(y2, [w])  # a non-persistent tape is released after one call
except Exception as e:
    print(e)
GradientTape.gradient can only be called once on non-persistent tapes.
  • Use persistent=True to keep the tape around, so gradients can be computed more than once
with tf.GradientTape(persistent=True) as tape:
    tape.watch([w])
    y2 = x * w
grad2 = tape.gradient(y2, [w])  # first call
grad2
[<tf.Tensor: id=35, shape=(), dtype=float32, numpy=2.0>]
grad2 = tape.gradient(y2, [w])  # second call now succeeds
grad2
[<tf.Tensor: id=39, shape=(), dtype=float32, numpy=2.0>]
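
One note on this design choice: a persistent tape keeps its recorded tensors alive until the tape object itself is garbage collected, so once all gradients have been taken it is reasonable to drop the reference explicitly:

del tape  # release the resources held by the persistent tape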

\(2^{nd}\)-order

  • \(y = xw + b\)

  • \(\frac{\partial{y}}{\partial{w}} = x\)

  • \(\frac{\partial^2{y}}{\partial{w^2}} = \frac{\partial{y'}}{\partial{w}} = \frac{\partial{x}}{\partial{w}} = 0\), which TensorFlow reports as None

w = tf.Variable(1.)    # Variables are watched automatically
b = tf.Variable(2.)
x = tf.constant(3.)

with tf.GradientTape() as t1:
    with tf.GradientTape() as t2:
        y = x * w + b
    dy_dw, dy_db = t2.gradient(y, [w, b])   # dy/dw = x, dy/db = 1

d2y_dw2 = t1.gradient(dy_dw, w)             # None: x does not depend on w
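
For contrast, a hypothetical case where the second derivative is genuinely non-zero, using \(y = w^3\) so that \(y'' = 6w\):

w = tf.Variable(2.)
with tf.GradientTape() as t1:
    with tf.GradientTape() as t2:
        y = w ** 3
    dy_dw = t2.gradient(y, w)       # 3 * w**2 = 12.0
d2y_dw2 = t1.gradient(dy_dw, w)     # 6 * w    = 12.0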

