Kingma D P, Ba J. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.

@article{kingma2014adam,
  title={Adam: A Method for Stochastic Optimization},
  author={Kingma, Diederik P and Ba, Jimmy},
  journal={arXiv preprint arXiv:1412.6980},
  year={2014}
}
Overview
A famously well-known paper.
Main Content
Let \(f(\theta)\) denote the objective function. Stochastic optimization usually aims to minimize \(\mathbb{E}[f(\theta)]\), but since at each step we draw a mini-batch, what we actually work with is \(f_1(\theta),\ldots,f_T(\theta)\). Let \(g_t=\nabla_{\theta}f_t(\theta)\) denote the gradient at step \(t\).
Adam estimates the first and second moments of the gradient, \(\mathbb{E}[g_t]\) and \(\mathbb{E}[g_t^2]\) (hence the name Adam: adaptive moment estimation).
Algorithm
Note: all vector operations in the algorithm below are element-wise.
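For reference, the per-step updates of Adam (with \(m_0=v_0=0\)) are

\[
m_t=\beta_1 m_{t-1}+(1-\beta_1)\,g_t,\qquad
v_t=\beta_2 v_{t-1}+(1-\beta_2)\,g_t^2,
\]

then the bias-corrected estimates

\[
\hat{m}_t=\frac{m_t}{1-\beta_1^{t}},\qquad
\hat{v}_t=\frac{v_t}{1-\beta_2^{t}},
\tag{A.1}
\]

and finally the update

\[
\theta_t=\theta_{t-1}-\alpha\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}.
\]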

Choosing Appropriate Parameters
First, let us see why the bias-correction step (A.1) is there. By induction one can show

\[
m_t=(1-\beta_1)\sum_{i=1}^{t}\beta_1^{t-i}g_i,\qquad
v_t=(1-\beta_2)\sum_{i=1}^{t}\beta_2^{t-i}g_i^2.
\]

If the distribution is stationary, \(\mathbb{E}[g_t]=\mathbb{E}[g]\) and \(\mathbb{E}[g_t^2]=\mathbb{E}[g^2]\), then

\[
\mathbb{E}[m_t]=(1-\beta_1^{t})\,\mathbb{E}[g],\qquad
\mathbb{E}[v_t]=(1-\beta_2^{t})\,\mathbb{E}[g^2].
\]

This is exactly why step (A.1) is there: dividing by \(1-\beta_1^{t}\) and \(1-\beta_2^{t}\) removes the initialization bias.
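As a quick sanity check (a minimal sketch of ours, not from the paper), take a constant gradient \(g_t\equiv 1\), so \(\mathbb{E}[g]=1\):

beta1 = 0.9
m = 0.0
for t in range(1, 6):
    m = beta1 * m + (1 - beta1) * 1.0           # EMA of the constant gradient
    print(t, round(m, 4), round(m / (1 - beta1 ** t), 4))
# t = 1 gives 0.1 for the raw estimate but exactly 1.0 after correction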
A major application scenario when Adam was proposed was dropout (that is, the case of sparse gradients); this often requires a relatively large \(\beta_2\) (which can be understood as averaging over more steps to counteract the randomness).
Since \(|\mathbb{E}[g]|/\sqrt{\mathbb{E}[g^2]}\le 1\), we can interpret the step size \(\alpha\) as a trust region (since \(|\Delta_t| \lesssim \alpha\)).
Another important property is scale invariance: if the objective is scaled by a factor of \(c\) (i.e. \(cf\)), the gradient becomes \(cg\), but the corresponding update \(\hat{m}_t/\sqrt{\hat{v}_t}\) is unchanged, since the numerator scales by \(c\) and the denominator by \(\sqrt{c^2}=c\).
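This invariance is easy to check numerically; below is a minimal sketch, where adam_direction is a hypothetical helper of ours with \(\epsilon\) omitted:

import numpy as np

def adam_direction(grads, beta1=0.9, beta2=0.999):
    # run the moment estimates over a gradient sequence and return the
    # bias-corrected direction m_hat / sqrt(v_hat)
    m, v = 0.0, 0.0
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
    return (m / (1 - beta1 ** t)) / np.sqrt(v / (1 - beta2 ** t))

grads = np.random.default_rng(0).normal(size=10)
print(adam_direction(grads), adam_direction(100.0 * grads))  # identical values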
Other Optimization Algorithms

AdaGrad:
\[
\theta_{t+1}=\theta_t-\frac{\alpha}{\sqrt{\sum_{i=1}^{t}g_i^2}+\epsilon}\,g_t.
\]
RMSprop:
\[
v_t=\beta_2 v_{t-1}+(1-\beta_2)g_t^2,\qquad
\theta_{t+1}=\theta_t-\frac{\alpha}{\sqrt{v_t}+\epsilon}\,g_t.
\]
AdaDelta:
\[
v_t=\beta_2 v_{t-1}+(1-\beta_2)g_t^2,\qquad
\Delta_t=-\frac{\sqrt{u_{t-1}+\epsilon}}{\sqrt{v_t+\epsilon}}\,g_t,\qquad
u_t=\beta_2 u_{t-1}+(1-\beta_2)\Delta_t^2,\qquad
\theta_{t+1}=\theta_t+\Delta_t.
\]
Note: all operations are element-wise.
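For concreteness, here is a minimal numpy sketch of one step of each rule (the function names and the state dict are our own illustration, not from any library):

import numpy as np

def adagrad_step(theta, g, state, alpha=0.01, eps=1e-8):
    state["G"] = state.get("G", 0.0) + g ** 2                  # running sum of g^2
    return theta - alpha * g / (np.sqrt(state["G"]) + eps)

def rmsprop_step(theta, g, state, alpha=0.001, beta2=0.9, eps=1e-8):
    state["v"] = beta2 * state.get("v", 0.0) + (1 - beta2) * g ** 2
    return theta - alpha * g / (np.sqrt(state["v"]) + eps)

def adadelta_step(theta, g, state, beta2=0.95, eps=1e-6):
    state["v"] = beta2 * state.get("v", 0.0) + (1 - beta2) * g ** 2
    delta = -np.sqrt(state.get("u", 0.0) + eps) / np.sqrt(state["v"] + eps) * g
    state["u"] = beta2 * state.get("u", 0.0) + (1 - beta2) * delta ** 2
    return theta + delta

theta, state = np.array([1.0, -2.0]), {}
theta = rmsprop_step(theta, 2 * theta, state)  # one step on f(x) = ||x||^2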
AdaMax
The paper also proposes another algorithm, AdaMax, which replaces the \(L^2\)-based second moment \(v_t\) with an \(L^\infty\)-based one:

\[
u_t=\max(\beta_2\,u_{t-1},\,|g_t|),\qquad
\theta_t=\theta_{t-1}-\frac{\alpha}{1-\beta_1^{t}}\,\frac{m_t}{u_t},
\]

where \(m_t\) is as in Adam, the \(\max\) and division are element-wise, and \(u_t\) needs no bias correction.
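A minimal numpy sketch of this update (our own illustration; like the paper's Algorithm 2 it divides by \(u_t\) directly, so it assumes a nonzero first gradient):

import numpy as np

def adamax_step(theta, g, state, alpha=0.002, beta1=0.9, beta2=0.999):
    t = state["t"] = state.get("t", 0) + 1
    state["m"] = beta1 * state.get("m", 0.0) + (1 - beta1) * g
    state["u"] = np.maximum(beta2 * state.get("u", 0.0), np.abs(g))  # L^inf moment
    return theta - alpha / (1 - beta1 ** t) * state["m"] / state["u"]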
Theory
I'd rather not go into it; the proof seems to contain quite a few errors.
Code

import numpy as np
class Adam:

    def __init__(self, instance, alpha=0.001, beta1=0.9, beta2=0.999,
                 epsilon=1e-8, beta_decay=1., alpha_decay=False):
        """Adam in numpy.

        :param instance: the theta in the paper; it should expose a grad
            attribute holding the current gradients and a zero_grad method
            for clearing them
        :param alpha: the same as in the paper, default: 0.001
        :param beta1: the same as in the paper, default: 0.9
        :param beta2: the same as in the paper, default: 0.999
        :param epsilon: the same as in the paper, default: 1e-8
        :param beta_decay: multiplicative decay applied to beta1 and beta2
            at every step, default: 1. (no decay)
        :param alpha_decay: default False; if True, we set alpha = alpha / sqrt(t)
        """
        self.instance = instance
        self.alpha = alpha
        self.beta1 = beta1
        self.beta2 = beta2
        self.epsilon = epsilon
        self.beta_decay = beta_decay
        self.alpha_decay = alpha_decay
        self.initialize_paras()

    def initialize_paras(self):
        self.m = 0.
        self.v = 0.
        self.timestep = 0

    def update_paras(self):
        grads = self.instance.grad
        self.beta1 *= self.beta_decay
        self.beta2 *= self.beta_decay
        # exponential moving averages of the gradient and its square
        self.m = self.beta1 * self.m + (1 - self.beta1) * grads
        self.v = self.beta2 * self.v + (1 - self.beta2) * grads ** 2
        self.timestep += 1
        if self.alpha_decay:
            return self.alpha / np.sqrt(self.timestep)
        return self.alpha

    def zero_grad(self):
        self.instance.zero_grad()

    def step(self):
        alpha = self.update_paras()
        # fold the bias corrections into the step size, as in the efficient
        # version noted in the paper (with beta_decay != 1 this is approximate)
        betat1 = 1 - self.beta1 ** self.timestep
        betat2 = 1 - self.beta2 ** self.timestep
        temp = alpha * np.sqrt(betat2) / betat1
        self.instance.parameters -= temp * self.m / (np.sqrt(self.v) + self.epsilon)
class PPP:
    """A tiny parameter container: holds parameters, accumulates
    gradients via grad_func, and can zero them."""

    def __init__(self, parameters, grad_func):
        self.parameters = parameters
        self.zero_grad()
        self.grad_func = grad_func

    def zero_grad(self):
        self.grad = np.zeros_like(self.parameters)

    def calc_grad(self):
        self.grad += self.grad_func(self.parameters)
def f(x):
    return x[0] ** 2 + 5 * x[1] ** 2


def grad(x):
    # gradient of f above; note the second component is 10 * x[1]
    return np.array([2 * x[0], 10 * x[1]])
if __name__ == "__main__":
    x = np.array([10., 10.])
    x = PPP(x, grad)
    xs = []
    ys = []
    optim = Adam(x, alpha=0.4)
    for i in range(100):
        xs.append(x.parameters.copy())
        y = f(x.parameters)
        ys.append(y)
        optim.zero_grad()
        x.calc_grad()
        optim.step()
    xs = np.array(xs)
    ys = np.array(ys)

    import matplotlib.pyplot as plt
    fig, (ax0, ax1) = plt.subplots(1, 2)
    ax0.plot(xs[:, 0], xs[:, 1])
    ax0.scatter(xs[:, 0], xs[:, 1])
    ax0.set(title="trajectory", xlabel="x", ylabel="y")
    ax1.plot(np.arange(len(ys)), ys)
    ax1.set(title="loss-iterations", xlabel="iterations", ylabel="loss")
    plt.show()
