Building the Neural Network
Let us wrap the prediction computation described above in a class. The class has member variables for the parameters w and b, plus a forward function (for "forward computation") that carries out the mapping from features and parameters to the predicted output.
class Network(object):
    def __init__(self, num_of_weights):
        # Randomly initialize w; fix the random seed so that
        # the program produces the same result on every run.
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.

    def forward(self, x):
        a = np.dot(x, self.w) + self.b
        return a
Based on the Network class defined above, the model's computation can be carried out as follows.
net = Network(13)
x1 = x[0]
y1 = y[0]
z = net.forward(x1)
print(x1)
print(y1)
print(z)
[-0.02146321 0.03767327 -0.28552309 -0.08663366 0.01289726 0.04634817
0.00795597 -0.00765794 -0.25172191 -0.11881188 -0.29002528 0.0519112
-0.17590923]
[-0.00390539]
[-0.63182506]
Here 13 is the number of weight parameters, x1 holds the features of the first sample, y1 is its actual house price, and z is the prediction computed from those features. As the output above shows, there is still a large gap between the predicted value z and the ground truth y1.
The model says that the house described by the factors in x1 should cost z, but the data tell us the actual price is y. We therefore need some metric for the gap between the prediction z and the ground truth y. For regression problems, the most common choice is the squared error, defined for one sample as:
$$Loss = (y - z)^2$$
The quantity Loss above (abbreviated L) is usually called the loss function, a metric of how good the model is. Mean squared error is a common choice for regression problems; classification problems typically use the cross-entropy loss instead, which later chapters cover in more detail. Computing the loss for one sample looks like this:
Loss = (y1 - z)*(y1 - z)
print(y1)
print(z)
print(Loss)
[-0.00390539]
[-0.63182506]
[0.39428312]
Because every sample's loss must be taken into account, we sum the single-sample losses and divide by the total number of samples N:

$$L = \frac{1}{N}\sum_{i=1}^{N}(y_i - z_i)^2$$
Adjusting the code above accordingly, we add the loss computation to the Network class:
class Network(object):
    def __init__(self, num_of_weights):
        # Randomly initialize w; fix the random seed so that
        # the program produces the same result on every run.
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.

    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z

    def loss(self, z, y):
        error = z - y
        cost = error * error
        cost = np.mean(cost)
        return cost
With the Network class defined above, predictions and losses are easy to compute. Note that the variables x, w, b, z, error, and so on inside the class are all arrays. Take x as an example: it has two dimensions, one for the number of features (13) and one for the number of samples, as the following program demonstrates.
net = Network(13)
# Predictions and losses for several samples can be computed at once
x1 = x[0:3]
y1 = y[0:3]
z = net.forward(x1)
print('predict: ', z)
loss = net.loss(z, y1)
print('loss:', loss)
predict: [[-0.63182506]
[-0.55793096]
[-1.00062009]]
loss: 0.7229825055441156
This shows the predictions and loss for the first three samples (0, 1, 2). x and y are still the raw data loaded earlier: the factors influencing house prices, and the actual prices.
Training the Neural Network
The steps above show how to build the network and use it to compute predictions and the loss. Next we look at how to solve for the values of the parameters w and b, a process also known as model training. The goal of training is to make the loss function as small as possible, that is, to find parameter values w and b at which the loss attains a minimum.
Solving for the minimum of the loss function
Basic calculus tells us that a function's derivative is zero at an extremum. So the w and b that minimize the loss should solve the following system of equations:

$$\frac{\partial L}{\partial w_j} = 0, \quad j = 0, 1, \ldots, 12$$

$$\frac{\partial L}{\partial b} = 0$$
Substituting the sample data (x, y) into this system does let us solve for w and b, but the approach only works for cases as simple as linear regression. Once the model contains nonlinear transformations, or the loss is not a simple form like mean squared error, the equations become very hard to solve directly. To avoid this limitation, we next introduce a more universally applicable numerical method.
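For the linear-regression special case, the system above does in fact have a closed-form solution that numpy can compute directly. A minimal sketch, assuming x and y are the training arrays loaded earlier (the variable names X_aug, w_closed, and b_closed are illustrative):

import numpy as np

# Append a column of ones so the bias b becomes the last weight,
# then solve the least-squares problem min ||X_aug @ params - y||^2.
X_aug = np.hstack([x, np.ones((x.shape[0], 1))])
params, residuals, rank, sv = np.linalg.lstsq(X_aug, y, rcond=None)
w_closed, b_closed = params[:-1], params[-1]

This shortcut exists only because the model is linear and the loss quadratic; with nonlinearities in play there is no such formula, which is exactly why the iterative method below matters.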
Gradient Descent
The crux of training is finding a pair (w, b) at which the loss L is minimal. To build intuition about how to search for it, let us first look at the simple case where L varies with only two parameters:

$$L = L(w_5, w_9)$$

Here we fix all of $w_0, w_1, \ldots, w_{12}$ except $w_5$ and $w_9$, along with b, so that $L(w_5, w_9)$ can be plotted.
net = Network(13)
losses = []
# Plot only the region of w5 and w9 in [-160, 160],
# which already contains the minimum of the loss function
w5 = np.arange(-160.0, 160.0, 1.0)
w9 = np.arange(-160.0, 160.0, 1.0)
losses = np.zeros([len(w5), len(w9)])

# Compute the Loss for every parameter pair in the region
for i in range(len(w5)):      # step w5 in increments of 1
    for j in range(len(w9)):  # step w9 in increments of 1
        net.w[5] = w5[i]
        net.w[9] = w9[j]
        z = net.forward(x)
        loss = net.loss(z, y)
        losses[i, j] = loss

# Make a 3D plot of the two variables and the corresponding Loss
# (note: import apparently works anywhere in a program)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure('Loss')
ax = Axes3D(fig)

w5, w9 = np.meshgrid(w5, w9)

ax.plot_surface(w5, w9, losses, rstride=1, cstride=1, cmap='rainbow')
plt.show()

Figure: the simple case, with only the two parameters w5 and w9 varying
For this simple case, the program above plots the loss surface over the two parameters in 3D, and the figure shows regions whose values are clearly lower than the surrounding points. Why did we pick w5 and w9 for the plot? Because with these two parameters, the existence of a minimum is plainly visible on the loss surface; for other parameter pairs, the minimum is much harder to spot visually. (One suspects the author tried many combinations of w before landing on the w5, w9 pair that makes this legible; quite painstaking.)
As noted above, solving the derivative equations directly is difficult in most cases. The underlying reason is that such equations are typically easy to evaluate forward (given X, compute Y) but hard to invert (given Y, recover X). Equations with this property are common in cryptography, and behave like an everyday padlock: given the key, the lock can easily check whether it fits; given only the lock, reverse-engineering the shape of the key is hard.
The situation closely resembles a blind hiker trying to walk from a peak down into the valley. He cannot see where the valley is (we cannot invert the equations to find the parameters where the derivative of Loss is zero), but he can probe the slope under his feet (the derivative at the current point, also called the gradient). So we can minimize the Loss by starting from the current parameter values and stepping downhill, one step at a time, until we reach the bottom. Call it the "blind downhill method" if you like; the formal name is gradient descent.
We now want to find values [w5, w9] that minimize the loss. The gradient descent scheme is:
- Pick a random starting point, for example [w5, w9] = [-100.0, -100.0]
- Pick the next point [w5', w9'] such that L(w5', w9') < L(w5, w9)
- Repeat step 2 until the loss barely decreases any more
Choosing [w5', w9'] well is crucial: first, L must actually decrease; second, it should decrease as quickly as possible. Basic calculus tells us that the direction opposite to the gradient is the direction in which the function value drops fastest. Starting at the point P0 (not drawn in the figure above), where [w5, w9] = [-100.0, -100.0] and L = 686.3, the gradient direction is the arrow at P0; moving a small step along the arrow, we can observe how the loss function changes.
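As a toy illustration of stepping against the gradient (separate from the housing model, purely to show the mechanics), consider minimizing f(w) = (w - 3)^2 in one dimension:

# Minimize f(w) = (w - 3)**2 by gradient descent
w, eta = -5.0, 0.1
for step in range(50):
    grad = 2 * (w - 3)   # f'(w), the gradient at the current point
    w = w - eta * grad   # step opposite to the gradient
print(w)                 # converges toward 3, the minimizer

Each step moves w a little way downhill; a smaller eta makes the descent slower but more stable.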
Computing the Gradient
We described how to compute the loss above; here we rewrite it slightly, introducing a factor of 1/2:

$$L = \frac{1}{2N}\sum_{i=1}^{N}(y_i - z_i)^2$$

where $z_i$ is the network's prediction for the i-th sample:

$$z_i = \sum_{j=0}^{12} x_i^{j} w_j + b$$

(with $x_i^{j}$ denoting the j-th feature of sample i). The partial derivatives of L with respect to w and b work out to:

$$\frac{\partial L}{\partial w_j} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\frac{\partial z_i}{\partial w_j} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\, x_i^{j}$$

$$\frac{\partial L}{\partial b} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\frac{\partial z_i}{\partial b} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)$$

Notice that the factor 1/2 has vanished from the derivatives: differentiating the square produces a factor of 2 that cancels it, which is exactly why we rewrote the loss.
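A quick way to sanity-check these formulas is a finite-difference comparison. A minimal sketch, assuming net, x, and y from earlier; half_loss is a throwaway helper implementing the 1/2-scaled loss used in the derivation:

# Compare the analytic dL/dw_0 against a numerical estimate
eps = 1e-6

def half_loss(w0):
    net.w[0] = w0
    return 0.5 * np.mean((net.forward(x) - y) ** 2)

w0 = net.w[0, 0]
analytic = np.mean((net.forward(x) - y) * x[:, 0:1])  # formula above, j = 0
numeric = (half_loss(w0 + eps) - half_loss(w0 - eps)) / (2 * eps)
net.w[0] = w0  # restore the original weight
print(analytic, numeric)  # the two values should nearly agree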
The parameters we care about here are $w_5$ and $w_9$:

$$\frac{\partial L}{\partial w_5} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\, x_i^{5}$$

$$\frac{\partial L}{\partial w_9} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\, x_i^{9}$$
A gradient function implementing the general formula

$$\frac{\partial L}{\partial w_j} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\, x_i^{j}$$

can then be added to the Network class.
With numpy's matrix operations, we can compute the gradients of all thirteen parameters $w_j$ (j = 0, ..., 12) in one pass. The key term in the formula above, and in the code, is (z - y) * x.

Consider first the case of a single sample, i.e. N = 1 in the formula, so that $\frac{\partial L}{\partial w_j} = (z_1 - y_1)\, x_1^{j}$.

The following program lets us inspect each variable's value and shape:
x1 = x[0]
y1 = y[0]
z1 = net.forward(x1)
print('x1 {}, shape {}'.format(x1, x1.shape))
print('y1 {}, shape {}'.format(y1, y1.shape))
print('z1 {}, shape {}'.format(z1, z1.shape))
x1 [-0.02146321 0.03767327 -0.28552309 -0.08663366 0.01289726 0.04634817
0.00795597 -0.00765794 -0.25172191 -0.11881188 -0.29002528 0.0519112
-0.17590923], shape (13,)
y1 [-0.00390539], shape (1,)
z1 [-12.05947643], shape (1,)
Following the formula above, with a single sample (x1 is the first sample) we can compute the gradient of any one weight, say w0:
gradient_w0 = (z1 - y1) * x1[0]
print('gradient_w0 {}'.format(gradient_w0))
gradient_w0 [0.25875126]
Likewise, we can compute the gradient of w1:
gradient_w1 = (z1 - y1) * x1[1]
print('gradient_w1 {}'.format(gradient_w1))
gradient_w1 [-0.45417275]
And then the gradient of w2; note that x1[1] pairs with w1 and x1[2] with w2.
gradient_w2 = (z1 - y1) * x1[2]
print('gradient_w2 {}'.format(gradient_w2))
gradient_w2 [3.44214394]
Numpy offers a simpler route via matrix operations: writing (z1 - y1) * x1 directly yields a 13-dimensional vector whose components are the per-dimension gradients. This broadcasting ability (operating on vectors and matrices as though they were single scalars) is exactly why we use numpy.
gradient_w = (z1 - y1) * x1
print('gradient_w {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
gradient_w [ 0.25875126 -0.45417275 3.44214394 1.04441828 -0.15548386 -0.55875363
-0.09591377 0.09232085 3.03465138 1.43234507 3.49642036 -0.62581917
2.12068622], gradient.shape (13,)
That yields all 13 gradients of w at once. Returning to the gradient formula,

$$\frac{\partial L}{\partial w_j} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\, x_i^{j}$$

the input here contains multiple samples, each of which contributes to the gradient. The code above computed the gradient using sample 1 alone; the same computation gives the contributions of samples 2 and 3.
x2 = x[1]
y2 = y[1]
z2 = net.forward(x2)
gradient_w = (z2 - y2) * x2
print('gradient_w {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
Note: x1 = x[0] and x2 = x[1].
gradient_w [ 0.7329239 4.91417754 3.33394253 2.9912385 4.45673435 -0.58146277
-5.14623287 -2.4894594 7.19011988 7.99471607 0.83100061 -1.79236081
2.11028056], gradient.shape (13,)
x3 = x[2]
y3 = y[2]
z3 = net.forward(x3)
gradient_w = (z3 - y3) * x3
print('gradient_w {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
gradient_w [ 0.25138584 1.68549775 1.14349809 1.02595515 1.5286008 -1.93302947
0.4058236 -0.85385157 2.46611579 2.74208162 0.28502219 -0.46695229
2.39363651], gradient.shape (13,)
# Note: this takes the first 3 samples at once, not the 3rd sample
x3samples = x[0:3]
y3samples = y[0:3]
z3samples = net.forward(x3samples)

print('x {}, shape {}'.format(x3samples, x3samples.shape))
print('y {}, shape {}'.format(y3samples, y3samples.shape))
print('z {}, shape {}'.format(z3samples, z3samples.shape))
x [[-0.02146321 0.03767327 -0.28552309 -0.08663366 0.01289726 0.04634817
0.00795597 -0.00765794 -0.25172191 -0.11881188 -0.29002528 0.0519112
-0.17590923]
[-0.02122729 -0.14232673 -0.09655922 -0.08663366 -0.12907805 0.0168406
0.14904763 0.0721009 -0.20824365 -0.23154675 -0.02406783 0.0519112
-0.06111894]
[-0.02122751 -0.14232673 -0.09655922 -0.08663366 -0.12907805 0.1632288
-0.03426854 0.0721009 -0.20824365 -0.23154675 -0.02406783 0.03943037
-0.20212336]], shape (3, 13)
y [[-0.00390539]
[-0.05723872]
[ 0.23387239]], shape (3, 1)
z [[-12.05947643]
[-34.58467747]
[-11.60858134]], shape (3, 1)
The first dimension of x3samples, y3samples, and z3samples is 3 in each case (note the two levels of brackets in the printout), meaning there are 3 samples. Next we compute these 3 samples' contributions to the gradient.
gradient_w = (z3samples - y3samples) * x3samples
print('gradient_w {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
gradient_w [[ 0.25875126 -0.45417275 3.44214394 1.04441828 -0.15548386 -0.55875363
-0.09591377 0.09232085 3.03465138 1.43234507 3.49642036 -0.62581917
2.12068622]
[ 0.7329239 4.91417754 3.33394253 2.9912385 4.45673435 -0.58146277
-5.14623287 -2.4894594 7.19011988 7.99471607 0.83100061 -1.79236081
2.11028056]
[ 0.25138584 1.68549775 1.14349809 1.02595515 1.5286008 -1.93302947
0.4058236 -0.85385157 2.46611579 2.74208162 0.28502219 -0.46695229
2.39363651]], gradient.shape (3, 13)
As this shows, gradient_w now has shape 3×13, and its first row matches the gradient computed from sample 1 above, the second row matches sample 2, and the third row matches sample 3: matrix operations conveniently compute each sample's contribution to the gradient separately.

For the general case of N samples, then, all contributions can be computed in a single expression, which is the convenience that numpy broadcasting buys us.
z = net.forward(x)
gradient_w = (z - y) * x
print('gradient_w shape {}'.format(gradient_w.shape))
print(gradient_w)
gradient_w shape (404, 13)
[[ 0.25875126 -0.45417275 3.44214394 ... 3.49642036 -0.62581917
2.12068622]
[ 0.7329239 4.91417754 3.33394253 ... 0.83100061 -1.79236081
2.11028056]
[ 0.25138584 1.68549775 1.14349809 ... 0.28502219 -0.46695229
2.39363651]
...
[ 14.70025543 -15.10890735 36.23258734 ... 24.54882966 5.51071122
26.26098922]
[ 9.29832217 -15.33146159 36.76629344 ... 24.91043398 -1.27564923
26.61808955]
[ 19.55115919 -10.8177237 25.94192351 ... 17.5765494 3.94557661
17.64891012]]
Each row of gradient_w above is one sample's contribution to the gradient. According to the gradient formula, the total gradient is the average of the per-sample contributions. To compute $\partial L/\partial w_0$, for example:

$$\frac{\partial L}{\partial w_0} = (0.25875126 + 0.7329239 + \ldots + 19.55115919) / 404$$

(the sum runs down the first column). Numpy's mean function performs the same computation for all columns at once:
# axis=0 averages over the rows (the samples) for each column
gradient_w = np.mean(gradient_w, axis=0)
print('gradient_w ', gradient_w.shape)
print('w ', net.w.shape)
print(gradient_w)
print(net.w)
The axis=0 argument deserves a brief aside:
>>> X = np.array([[1, 2], [3, 4], [5, 6]])
>>> print(X)
[[1 2]
 [3 4]
 [5 6]]
>>> print(X.mean(axis=0))
[3. 4.]
>>> print(X.mean(axis=1))
[1.5 3.5 5.5]
axis=0 produces a single row, i.e. the mean of each column: (1+3+5)/3 = 3 and (2+4+6)/3 = 4.
axis=1 produces a single column, i.e. the mean of each row: (1+2)/2 = 1.5, (3+4)/2 = 3.5, (5+6)/2 = 5.5.
In effect, axis=0 collapses the first element of shape (the first dimension) to 1, and axis=1 collapses the second. Thinking in terms of shape makes this easy to see.
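Following that shape-based view, a quick check (reusing the X defined just above):

X = np.array([[1, 2], [3, 4], [5, 6]])
print(X.shape)               # (3, 2)
print(X.mean(axis=0).shape)  # (2,): the first dimension was reduced away
print(X.mean(axis=1).shape)  # (3,): the second dimension was reduced away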
Back to the gradient computation; the code above outputs:
gradient_w (13,)
w (13, 1)
[ 1.59697064 -0.92928123 4.72726926 1.65712204 4.96176389 1.18068454
4.55846519 -3.37770889 9.57465893 10.29870662 1.3900257 -0.30152215
1.09276043]
[[ 1.76405235e+00]
[ 4.00157208e-01]
[ 9.78737984e-01]
[ 2.24089320e+00]
[ 1.86755799e+00]
[ 1.59000000e+02]
[ 9.50088418e-01]
[-1.51357208e-01]
[-1.03218852e-01]
[ 1.59000000e+02]
[ 1.44043571e-01]
[ 1.45427351e+00]
[ 7.61037725e-01]]
gradient_w is the averaged gradient, and net.w is the current column vector of weights.
Numpy's matrix operations made computing the gradient easy, but introduced a wrinkle: gradient_w has shape (13,) while w has shape (13, 1). The cause is that np.mean eliminated dimension 0. For convenient arithmetic (addition, subtraction, and so on), gradient_w and w must share the same shape, so we reshape gradient_w to (13, 1) as well:
gradient_w = gradient_w[:, np.newaxis]
print('gradient_w shape', gradient_w.shape)
print(np.newaxis)
gradient_w shape (13, 1)
None
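As a side note, the np.newaxis step could be avoided by asking np.mean to keep the reduced dimension. A minimal alternative sketch (same result, different style):

# keepdims=True keeps the averaged axis as size 1, giving shape (1, 13);
# transposing then yields (13, 1) directly, with no newaxis needed
gradient_w = np.mean((z - y) * x, axis=0, keepdims=True).T
print(gradient_w.shape)  # (13, 1)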
Putting the discussion together, the gradient computation tidies up to:
z = net.forward(x)
gradient_w = (z - y) * x
gradient_w = np.mean(gradient_w, axis=0)
gradient_w = gradient_w[:, np.newaxis]
gradient_w
array([[ 1.59697064],
[-0.92928123],
[ 4.72726926],
[ 1.65712204],
[ 4.96176389],
[ 1.18068454],
[ 4.55846519],
[-3.37770889],
[ 9.57465893],
[10.29870662],
[ 1.3900257 ],
[-0.30152215],
[ 1.09276043]])
This code computes the gradient of w very concisely. The gradient of b is computed on the same principle.
gradient_b = (z - y)
gradient_b = np.mean(gradient_b)
# b is a single number, so np.mean directly yields a scalar
gradient_b
-1.0918438870293816e-13
Wrapping the gradient computations for w and b into a gradient method on the Network class gives:
class Network(object):
    def __init__(self, num_of_weights):
        # Randomly initialize w; fix the random seed so that
        # the program produces the same result on every run.
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.

    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z

    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost

    def gradient(self, x, y):
        z = self.forward(x)
        gradient_w = (z - y) * x
        gradient_w = np.mean(gradient_w, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = (z - y)
        gradient_b = np.mean(gradient_b)

        return gradient_w, gradient_b
# Call the gradient function defined above to compute the gradient
# Initialize the network
net = Network(13)
# Set [w5, w9] = [-100., -100.]
net.w[5] = -100.0
net.w[9] = -100.0

z = net.forward(x)
loss = net.loss(z, y)
gradient_w, gradient_b = net.gradient(x, y)
gradient_w5 = gradient_w[5][0]
gradient_w9 = gradient_w[9][0]
print('point {}, loss {}'.format([net.w[5][0], net.w[9][0]], loss))
print('gradient {}'.format([gradient_w5, gradient_w9]))
point [-100.0, -100.0], loss 686.3005008179159
gradient [-0.850073323995813, -6.138412364807849]
Finding a point with a smaller loss
Next let us see how to use the gradient to update the parameters. First, move a small step against the gradient to the next point P1 and watch how the loss changes.
# In the [w5, w9] plane, move against the gradient to the next point P1
# Define the step size eta
eta = 0.1
# Update the parameters w5 and w9
net.w[5] = net.w[5] - eta * gradient_w5
net.w[9] = net.w[9] - eta * gradient_w9
# Recompute z and loss
z = net.forward(x)
loss = net.loss(z, y)
gradient_w, gradient_b = net.gradient(x, y)
gradient_w5 = gradient_w[5][0]
gradient_w9 = gradient_w[9][0]
print('point {}, loss {}'.format([net.w[5][0], net.w[9][0]], loss))
print('gradient {}'.format([gradient_w5, gradient_w9]))
point [-99.30472235411645, -95.21386756922303], loss 628.0145896171035
gradient [-0.892805532668633, -5.786482593233519]
Running this code shows that one small step against the gradient does indeed reduce the loss at the next point.
- You can re-run the code cell above repeatedly and watch whether the loss keeps shrinking.
Next we wrap this iterative computation into train and update functions, as shown below.
class Network(object):
    def __init__(self, num_of_weights):
        # Randomly initialize w; fix the random seed so that
        # the program produces the same result on every run.
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.w[5] = -100.
        self.w[9] = -100.
        self.b = 0.

    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z

    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost

    def gradient(self, x, y):
        z = self.forward(x)
        gradient_w = (z - y) * x
        gradient_w = np.mean(gradient_w, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = (z - y)
        gradient_b = np.mean(gradient_b)
        return gradient_w, gradient_b

    def update(self, gradient_w5, gradient_w9, eta=0.01):
        self.w[5] = self.w[5] - eta * gradient_w5
        self.w[9] = self.w[9] - eta * gradient_w9

    def train(self, x, y, iterations=100, eta=0.01):
        points = []
        losses = []
        for i in range(iterations):
            points.append([self.w[5][0], self.w[9][0]])
            z = self.forward(x)                            # forward pass
            L = self.loss(z, y)                            # compute the loss
            gradient_w, gradient_b = self.gradient(x, y)   # compute the gradient
            gradient_w5 = gradient_w[5][0]
            gradient_w9 = gradient_w[9][0]
            self.update(gradient_w5, gradient_w9, eta)     # update w5 and w9
            losses.append(L)
            if i % 50 == 0:
                print('iter {}, point {}, loss {}'.format(i, [self.w[5][0], self.w[9][0]], L))
        return points, losses

# Load the data
train_data, test_data = load_data()
x = train_data[:, :-1]
y = train_data[:, -1:]
# Create the network
net = Network(13)
num_iterations = 2000
# Start training
points, losses = net.train(x, y, iterations=num_iterations, eta=0.01)

# Plot the loss curve
plot_x = np.arange(num_iterations)
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()
We iterate 2000 times in total; as shown in train(), the loss is printed every 50 iterations. Our goal is to drive the loss as low as possible.
iter 0, point [-99.99107194467331, -99.93861587635192], loss 686.3005008179159
iter 50, point [-99.54466917833872, -96.92625614479068], loss 649.1881215964214
iter 100, point [-99.09826641200414, -94.02257939568014], loss 614.6274942098021
iter 150, point [-98.65186364566955, -91.223571331213], loss 582.4351927948481
iter 200, point [-98.20546087933496, -88.52536592506404], loss 552.4410512829517
iter 250, point [-97.75905811300038, -85.92423994585761], loss 524.4872034417948
iter 300, point [-97.31265534666579, -83.41660768291554], loss 498.4271924666585
iter 350, point [-96.8662525803312, -80.99901586681396], loss 474.12514508049964
iter 400, point [-96.41984981399662, -78.66813877755351], loss 451.4550054673662
iter 450, point [-95.97344704766203, -76.42077353341368], loss 430.29982470266293
iter 500, point [-95.52704428132745, -74.25383555381723], loss 410.5511016581253
iter 550, point [-95.08064151499286, -72.16435418977726], loss 392.1081716509329
iter 600, point [-94.63423874865828, -70.14946851573782], loss 374.87763937683286
iter 650, point [-94.18783598232369, -68.20642327684624], loss 358.77285291795
iter 700, point [-93.7414332159891, -66.33256498591636], loss 343.7134158486171
iter 750, point [-93.29503044965452, -64.52533816455369], loss 329.62473467831967
iter 800, point [-92.84862768331993, -62.78228172311797], loss 316.43759907099167
iter 850, point [-92.40222491698535, -61.101025474394774], loss 304.08779246551063
iter 900, point [-91.95582215065076, -59.47928677603745], loss 292.51573089440996
iter 950, point [-91.50941938431617, -57.91486729702361], loss 281.666127957512
iter 1000, point [-91.06301661798159, -56.405649903545054], loss 271.48768405528784
iter 1050, point [-90.616613851647, -54.94959565991996], loss 261.93279812411936
iter 1100, point [-90.17021108531242, -53.54474094027905], loss 252.95730024305914
iter 1150, point [-89.72380831897783, -52.18919464693409], loss 244.52020359984573
iter 1200, point [-89.27740555264324, -50.88113553148819], loss 236.5834744135485
iter 1250, point [-88.83100278630866, -49.61880961489314], loss 229.1118185128753
iter 1300, point [-88.38460001997407, -48.40052770279916], loss 222.07248336346697
iter 1350, point [-87.93819725363949, -47.22466299267744], loss 215.4350744249629
iter 1400, point [-87.4917944873049, -46.08964876932534], loss 209.17138479973227
iter 1450, point [-87.04539172097031, -44.99397618549037], loss 203.25523721040955
iter 1500, point [-86.59898895463573, -43.936192124468434], loss 197.66233741314892
iter 1550, point [-86.15258618830114, -42.91489714164897], loss 192.37013821823896
iter 1600, point [-85.70618342196656, -41.928743482090795], loss 187.35771334975172
iter 1650, point [-85.25978065563197, -40.97643317132055], loss 182.60564043157902
iter 1700, point [-84.81337788929739, -40.05671617664895], loss 178.0958924388527
iter 1750, point [-84.3669751229628, -39.16838863640047], loss 173.8117370016488
iter 1800, point [-83.92057235662821, -38.31029115454813], loss 169.7376429923022
iter 1850, point [-83.47416959029363, -37.48130715833693], loss 165.85919386886502
iter 1900, point [-83.02776682395904, -36.6803613165702], loss 162.1630072854668
iter 1950, point [-82.58136405762446, -35.906418016317865], loss 158.63666051578292
Computing gradients for all parameters and updating them
To keep things easy to visualize, the gradient descent demonstration above involved only the two parameters w5 and w9. The full house-price model must solve for all of w and b, which requires modifying Network's update and train functions. Since we no longer restrict which parameters participate (all of them do), the modified code actually comes out simpler.
class Network(object):
    def __init__(self, num_of_weights):
        # Randomly initialize w; fix the random seed so that
        # the program produces the same result on every run.
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.

    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z

    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost

    def gradient(self, x, y):
        z = self.forward(x)
        gradient_w = (z - y) * x
        gradient_w = np.mean(gradient_w, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = (z - y)
        gradient_b = np.mean(gradient_b)
        return gradient_w, gradient_b

    def update(self, gradient_w, gradient_b, eta=0.01):
        self.w = self.w - eta * gradient_w
        self.b = self.b - eta * gradient_b

    def train(self, x, y, iterations=100, eta=0.01):
        losses = []
        for i in range(iterations):
            z = self.forward(x)
            L = self.loss(z, y)
            gradient_w, gradient_b = self.gradient(x, y)
            self.update(gradient_w, gradient_b, eta)
            losses.append(L)
            if (i+1) % 10 == 0:
                print('iter {}, loss {}'.format(i, L))
        return losses

# Load the data
train_data, test_data = load_data()
x = train_data[:, :-1]
y = train_data[:, -1:]
# Create the network
net = Network(13)
num_iterations = 1000
# Start training
losses = net.train(x, y, iterations=num_iterations, eta=0.01)

# Plot the loss curve
plot_x = np.arange(num_iterations)
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()
We iterate 1000 times, printing the loss every 10 iterations.
iter 9, loss 1.8984947314576224
iter 19, loss 1.8031783384598725
iter 29, loss 1.7135517565541092
iter 39, loss 1.6292649416831264
iter 49, loss 1.5499895293373231
iter 59, loss 1.4754174896452612
iter 69, loss 1.4052598659324693
iter 79, loss 1.3392455915676864
iter 89, loss 1.2771203802372915
iter 99, loss 1.218645685090292
iter 109, loss 1.1635977224791534
iter 119, loss 1.111766556287068
iter 129, loss 1.0629552390811503
iter 139, loss 1.0169790065644477
iter 149, loss 0.9736645220185994
iter 159, loss 0.9328491676343147
iter 169, loss 0.8943803798194307
iter 179, loss 0.8581150257549611
iter 189, loss 0.8239188186389669
iter 199, loss 0.7916657692169988
iter 209, loss 0.761237671346902
iter 219, loss 0.7325236194855752
iter 229, loss 0.7054195561163928
iter 239, loss 0.6798278472589763
iter 249, loss 0.6556568843183528
iter 259, loss 0.6328207106387195
iter 269, loss 0.6112386712285092
iter 279, loss 0.59083508421862
iter 289, loss 0.5715389327049418
iter 299, loss 0.5532835757100347
iter 309, loss 0.5360064770773406
iter 319, loss 0.5196489511849665
iter 329, loss 0.5041559244351539
iter 339, loss 0.48947571154034963
iter 349, loss 0.47555980568755696
iter 359, loss 0.46236268171965056
iter 369, loss 0.44984161152579916
iter 379, loss 0.43795649088328303
iter 389, loss 0.4266696770400226
iter 399, loss 0.41594583637124666
iter 409, loss 0.4057518014851036
iter 419, loss 0.3960564371908221
iter 429, loss 0.38683051477942226
iter 439, loss 0.3780465941011246
iter 449, loss 0.3696789129556087
iter 459, loss 0.3617032833413179
iter 469, loss 0.3540969941381648
iter 479, loss 0.3468387198244131
iter 489, loss 0.3399084348532937
iter 499, loss 0.33328733333814486
iter 509, loss 0.3269577537166779
iter 519, loss 0.32090310808539985
iter 529, loss 0.3151078159144129
iter 539, loss 0.30955724187078903
iter 549, loss 0.3042376374955925
iter 559, loss 0.2991360864954391
iter 569, loss 0.2942404534243286
iter 579, loss 0.2895393355454012
iter 589, loss 0.28502201767532415
iter 599, loss 0.28067842982626157
iter 609, loss 0.27649910747186535
iter 619, loss 0.2724751542744919
iter 629, loss 0.2685982071209627
iter 639, loss 0.26486040332365085
iter 649, loss 0.2612543498525749
iter 659, loss 0.2577730944725093
iter 669, loss 0.2544100986669443
iter 679, loss 0.2511592122380609
iter 689, loss 0.2480146494787638
iter 699, loss 0.24497096681926708
iter 709, loss 0.2420230418567802
iter 719, loss 0.23916605368251415
iter 729, loss 0.23639546442555454
iter 739, loss 0.23370700193813704
iter 749, loss 0.2310966435515475
iter 759, loss 0.2285606008362593
iter 769, loss 0.22609530530403904
iter 779, loss 0.22369739499361888
iter 789, loss 0.2213637018851542
iter 799, loss 0.21909124009208833
iter 809, loss 0.21687719478222933
iter 819, loss 0.21471891178284025
iter 829, loss 0.21261388782734392
iter 839, loss 0.2105597614038757
iter 849, loss 0.20855430416838638
iter 859, loss 0.20659541288730932
iter 869, loss 0.20468110187697833
iter 879, loss 0.2028094959090178
iter 889, loss 0.20097882355283644
iter 899, loss 0.19918741092814596
iter 909, loss 0.19743367584210875
iter 919, loss 0.1957161222872899
iter 929, loss 0.19403333527807176
iter 939, loss 0.19238397600456975
iter 949, loss 0.19076677728439412
iter 959, loss 0.1891805392938162
iter 969, loss 0.18762412556104593
iter 979, loss 0.18609645920539716
iter 989, loss 0.18459651940712488
iter 999, loss 0.18312333809366155
Stochastic Gradient Descent
In the program above, every iteration computes over the entire dataset. Real datasets, however, are often very large, and using all the data to compute the loss and gradient at every step is highly inefficient. A sensible fix is to draw a small random subset of the data to represent the whole at each step, compute the gradient and loss on that subset, and update the parameters accordingly. This method is called Stochastic Gradient Descent, or SGD. Each batch of data drawn per iteration is called a mini-batch, and the number of samples a mini-batch contains is the batch_size. As the program iterates, it draws mini-batch after mini-batch; once the entire dataset has been covered, one round of training, called an epoch, is complete. When launching training, the number of epochs num_epochs and the batch_size can be passed in as arguments. (One epoch thus covers roughly number-of-mini-batches × batch_size samples.)
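To make the terminology concrete, here is a small bookkeeping sketch (the 404 assumes the training set used throughout; purely illustrative):

n, batch_size = 404, 10
# mini-batches per epoch: ceil(n / batch_size)
num_batches = (n + batch_size - 1) // batch_size
print(num_batches)  # 41: forty full batches of 10 plus one batch of 4
# One epoch performs num_batches parameter updates;
# num_epochs epochs perform num_epochs * num_batches updates in total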
The implementation is walked through below, step by step with code.
# Load the data
train_data, test_data = load_data()
train_data.shape
(404, 14)
train_data holds 404 records in total. With batch_size=10, samples 0-9 form the first mini-batch, named train_data1.
train_data1 = train_data[0:10]
train_data1.shape
(10, 14)
Use train_data1 (samples 0-9) to compute the gradient and update the network parameters.
net = Network(13)
x = train_data1[:, :-1]
y = train_data1[:, -1:]
loss = net.train(x, y, iterations=1, eta=0.01)
loss
[0.9001866101467375]
Then take samples 10-19 as the second mini-batch, and again compute the gradient and update the network parameters.
train_data2 = train_data[10:20]
x = train_data2[:, :-1]
y = train_data2[:, -1:]
loss = net.train(x, y, iterations=1, eta=0.01)
loss
[0.8903272433979657]
Continuing this way, we keep drawing new mini-batches and gradually updating the network parameters. The program below splits train_data into multiple mini-batches of size batch_size.
batch_size = 10
n = len(train_data)
mini_batches = [train_data[k:k+batch_size] for k in range(0, n, batch_size)]
print('total number of mini_batches is ', len(mini_batches))
print('first mini_batch shape ', mini_batches[0].shape)
print('last mini_batch shape ', mini_batches[-1].shape)
total number of mini_batches is 41
first mini_batch shape (10, 14)
last mini_batch shape (4, 14)
The code splits train_data into ⌈404/10⌉ = 41 mini-batches: the first 40 hold 10 samples each, and the last one holds the remaining 4.
Note also that we took the mini-batches in order here, whereas SGD draws random samples to represent the whole. To get the effect of random sampling, we first shuffle the order of the samples in train_data and then extract mini-batches. Shuffling uses the np.random.shuffle function, whose usage is introduced next.
# Create an array
a = np.array([1,2,3,4,5,6,7,8,9,10,11,12])
print('before shuffle', a)
np.random.shuffle(a)
print('after shuffle', a)
before shuffle [ 1 2 3 4 5 6 7 8 9 10 11 12]
after shuffle [ 3 5 9 10 6 7 2 1 8 11 12 4]
Run this several times, and the order after shuffle differs every time. That was a 1-D array; now let us observe the effect of shuffling a 2-D array.
# Create an array
a = np.array([1,2,3,4,5,6,7,8,9,10,11,12])
a = a.reshape([6, 2])
print('before shuffle\n', a)
np.random.shuffle(a)
print('after shuffle\n', a)
before shuffle
[[ 1 2]
[ 3 4]
[ 5 6]
[ 7 8]
[ 9 10]
[11 12]]
after shuffle
[[ 3 4]
[ 7 8]
[ 1 2]
[11 12]
[ 9 10]
[ 5 6]]
The output shows that the array's elements are shuffled along dimension 0, while the order within dimension 1 is preserved: 2 still sits right after 1 and 8 right after 7, but along the first dimension the row [3, 4] no longer follows [1, 2].
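An equivalent way to randomize the sample order, sketched here as an aside, is to permute row indices with np.random.permutation and use fancy indexing, which leaves the original array untouched (an illustrative alternative, not what the training code below does):

# Shuffle by permuting row indices instead of mutating the array
indices = np.random.permutation(train_data.shape[0])
shuffled = train_data[indices]   # rows reordered, columns untouched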
Combining the shuffling and mini-batch extraction steps, the training process can be rewritten as follows: each randomly drawn mini-batch is fed to the model to train the parameters.
# Load the data
train_data, test_data = load_data()

# Shuffle the sample order
np.random.shuffle(train_data)

# Split train_data into multiple mini-batches
batch_size = 10
n = len(train_data)
mini_batches = [train_data[k:k+batch_size] for k in range(0, n, batch_size)]

# Create the network
net = Network(13)

# Use each mini_batch in turn
for mini_batch in mini_batches:
    x = mini_batch[:, :-1]
    y = mini_batch[:, -1:]
    loss = net.train(x, y, iterations=1)
Integrating this SGD logic into the Network class's train function, the final complete code is as follows.
import numpy as np

class Network(object):
    def __init__(self, num_of_weights):
        # Randomly initialize w. (The fixed seed used earlier for
        # run-to-run reproducibility is commented out here.)
        #np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.

    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z

    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost

    def gradient(self, x, y):
        z = self.forward(x)
        N = x.shape[0]
        gradient_w = 1. / N * np.sum((z-y) * x, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = 1. / N * np.sum(z-y)
        return gradient_w, gradient_b

    def update(self, gradient_w, gradient_b, eta=0.01):
        self.w = self.w - eta * gradient_w
        self.b = self.b - eta * gradient_b


    def train(self, training_data, num_epoches, batch_size=10, eta=0.01):
        n = len(training_data)
        losses = []
        for epoch_id in range(num_epoches):
            # Shuffle the order of the training data at the start of
            # each epoch, then draw batch_size samples at a time
            np.random.shuffle(training_data)
            # Split the training data so that each mini_batch
            # contains batch_size samples
            mini_batches = [training_data[k:k+batch_size] for k in range(0, n, batch_size)]
            for iter_id, mini_batch in enumerate(mini_batches):
                #print(self.w.shape)
                #print(self.b)
                x = mini_batch[:, :-1]
                y = mini_batch[:, -1:]
                a = self.forward(x)
                loss = self.loss(a, y)
                gradient_w, gradient_b = self.gradient(x, y)
                self.update(gradient_w, gradient_b, eta)
                losses.append(loss)
                print('Epoch {:3d} / iter {:3d}, loss = {:.4f}'.
                      format(epoch_id, iter_id, loss))

        return losses

# Load the data
train_data, test_data = load_data()

# Create the network
net = Network(13)
# Start training
losses = net.train(train_data, num_epoches=50, batch_size=100, eta=0.1)

# Plot the loss curve
plot_x = np.arange(len(losses))
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()
Epoch 0 / iter 0, loss = 0.6273
Epoch 0 / iter 1, loss = 0.4835
Epoch 0 / iter 2, loss = 0.5830
Epoch 0 / iter 3, loss = 0.5466
Epoch 0 / iter 4, loss = 0.2147
Epoch 1 / iter 0, loss = 0.6645
Epoch 1 / iter 1, loss = 0.4875
Epoch 1 / iter 2, loss = 0.4707
Epoch 1 / iter 3, loss = 0.4153
Epoch 1 / iter 4, loss = 0.1402
Epoch 2 / iter 0, loss = 0.5897
Epoch 2 / iter 1, loss = 0.4373
Epoch 2 / iter 2, loss = 0.4631
Epoch 2 / iter 3, loss = 0.3960
Epoch 2 / iter 4, loss = 0.2340
Epoch 3 / iter 0, loss = 0.4139
Epoch 3 / iter 1, loss = 0.5635
Epoch 3 / iter 2, loss = 0.3807
Epoch 3 / iter 3, loss = 0.3975
Epoch 3 / iter 4, loss = 0.1207
Epoch 4 / iter 0, loss = 0.3786
Epoch 4 / iter 1, loss = 0.4474
Epoch 4 / iter 2, loss = 0.4019
Epoch 4 / iter 3, loss = 0.4352
Epoch 4 / iter 4, loss = 0.0435
Epoch 5 / iter 0, loss = 0.4387
Epoch 5 / iter 1, loss = 0.3886
Epoch 5 / iter 2, loss = 0.3182
Epoch 5 / iter 3, loss = 0.4189
Epoch 5 / iter 4, loss = 0.1741
Epoch 6 / iter 0, loss = 0.3191
Epoch 6 / iter 1, loss = 0.3601
Epoch 6 / iter 2, loss = 0.4199
Epoch 6 / iter 3, loss = 0.3289
Epoch 6 / iter 4, loss = 1.2691
Epoch 7 / iter 0, loss = 0.3202
Epoch 7 / iter 1, loss = 0.2855
Epoch 7 / iter 2, loss = 0.4129
Epoch 7 / iter 3, loss = 0.3331
Epoch 7 / iter 4, loss = 0.2218
Epoch 8 / iter 0, loss = 0.2368
Epoch 8 / iter 1, loss = 0.3457
Epoch 8 / iter 2, loss = 0.3339
Epoch 8 / iter 3, loss = 0.3812
Epoch 8 / iter 4, loss = 0.0534
Epoch 9 / iter 0, loss = 0.3567
Epoch 9 / iter 1, loss = 0.4033
Epoch 9 / iter 2, loss = 0.1926
Epoch 9 / iter 3, loss = 0.2803
Epoch 9 / iter 4, loss = 0.1557
Epoch 10 / iter 0, loss = 0.3435
Epoch 10 / iter 1, loss = 0.2790
Epoch 10 / iter 2, loss = 0.3456
Epoch 10 / iter 3, loss = 0.2076
Epoch 10 / iter 4, loss = 0.0935
Epoch 11 / iter 0, loss = 0.3024
Epoch 11 / iter 1, loss = 0.2517
Epoch 11 / iter 2, loss = 0.2797
Epoch 11 / iter 3, loss = 0.2989
Epoch 11 / iter 4, loss = 0.0301
Epoch 12 / iter 0, loss = 0.2507
Epoch 12 / iter 1, loss = 0.2563
Epoch 12 / iter 2, loss = 0.2971
Epoch 12 / iter 3, loss = 0.2833
Epoch 12 / iter 4, loss = 0.0597
Epoch 13 / iter 0, loss = 0.2827
Epoch 13 / iter 1, loss = 0.2094
Epoch 13 / iter 2, loss = 0.2417
Epoch 13 / iter 3, loss = 0.2985
Epoch 13 / iter 4, loss = 0.4036
Epoch 14 / iter 0, loss = 0.3085
Epoch 14 / iter 1, loss = 0.2015
Epoch 14 / iter 2, loss = 0.1830
Epoch 14 / iter 3, loss = 0.2978
Epoch 14 / iter 4, loss = 0.0630
Epoch 15 / iter 0, loss = 0.2342
Epoch 15 / iter 1, loss = 0.2780
Epoch 15 / iter 2, loss = 0.2571
Epoch 15 / iter 3, loss = 0.1838
Epoch 15 / iter 4, loss = 0.0627
Epoch 16 / iter 0, loss = 0.1896
Epoch 16 / iter 1, loss = 0.1966
Epoch 16 / iter 2, loss = 0.2018
Epoch 16 / iter 3, loss = 0.3257
Epoch 16 / iter 4, loss = 0.1268
Epoch 17 / iter 0, loss = 0.1990
Epoch 17 / iter 1, loss = 0.2031
Epoch 17 / iter 2, loss = 0.2662
Epoch 17 / iter 3, loss = 0.2128
Epoch 17 / iter 4, loss = 0.0133
Epoch 18 / iter 0, loss = 0.1780
Epoch 18 / iter 1, loss = 0.1575
Epoch 18 / iter 2, loss = 0.2547
Epoch 18 / iter 3, loss = 0.2544
Epoch 18 / iter 4, loss = 0.2007
Epoch 19 / iter 0, loss = 0.1657
Epoch 19 / iter 1, loss = 0.2000
Epoch 19 / iter 2, loss = 0.2045
Epoch 19 / iter 3, loss = 0.2524
Epoch 19 / iter 4, loss = 0.0632
Epoch 20 / iter 0, loss = 0.1629
Epoch 20 / iter 1, loss = 0.1895
Epoch 20 / iter 2, loss = 0.2523
Epoch 20 / iter 3, loss = 0.1896
Epoch 20 / iter 4, loss = 0.0918
Epoch 21 / iter 0, loss = 0.1583
Epoch 21 / iter 1, loss = 0.2322
Epoch 21 / iter 2, loss = 0.1567
Epoch 21 / iter 3, loss = 0.2089
Epoch 21 / iter 4, loss = 0.2035
Epoch 22 / iter 0, loss = 0.2273
Epoch 22 / iter 1, loss = 0.1427
Epoch 22 / iter 2, loss = 0.1712
Epoch 22 / iter 3, loss = 0.1826
Epoch 22 / iter 4, loss = 0.2878
Epoch 23 / iter 0, loss = 0.1685
Epoch 23 / iter 1, loss = 0.1622
Epoch 23 / iter 2, loss = 0.1499
Epoch 23 / iter 3, loss = 0.2329
Epoch 23 / iter 4, loss = 0.1486
Epoch 24 / iter 0, loss = 0.1617
Epoch 24 / iter 1, loss = 0.2083
Epoch 24 / iter 2, loss = 0.1442
Epoch 24 / iter 3, loss = 0.1740
Epoch 24 / iter 4, loss = 0.1641
Epoch 25 / iter 0, loss = 0.1159
Epoch 25 / iter 1, loss = 0.2064
Epoch 25 / iter 2, loss = 0.1690
Epoch 25 / iter 3, loss = 0.1778
Epoch 25 / iter 4, loss = 0.0159
Epoch 26 / iter 0, loss = 0.1730
Epoch 26 / iter 1, loss = 0.1861
Epoch 26 / iter 2, loss = 0.1387
Epoch 26 / iter 3, loss = 0.1486
Epoch 26 / iter 4, loss = 0.1090
Epoch 27 / iter 0, loss = 0.1393
Epoch 27 / iter 1, loss = 0.1775
Epoch 27 / iter 2, loss = 0.1564
Epoch 27 / iter 3, loss = 0.1245
Epoch 27 / iter 4, loss = 0.7611
Epoch 28 / iter 0, loss = 0.1470
Epoch 28 / iter 1, loss = 0.1211
Epoch 28 / iter 2, loss = 0.1285
Epoch 28 / iter 3, loss = 0.1854
Epoch 28 / iter 4, loss = 0.5240
Epoch 29 / iter 0, loss = 0.1740
Epoch 29 / iter 1, loss = 0.0898
Epoch 29 / iter 2, loss = 0.1392
Epoch 29 / iter 3, loss = 0.1842
Epoch 29 / iter 4, loss = 0.0251
Epoch 30 / iter 0, loss = 0.0978
Epoch 30 / iter 1, loss = 0.1529
Epoch 30 / iter 2, loss = 0.1640
Epoch 30 / iter 3, loss = 0.1503
Epoch 30 / iter 4, loss = 0.0975
Epoch 31 / iter 0, loss = 0.1399
Epoch 31 / iter 1, loss = 0.1595
Epoch 31 / iter 2, loss = 0.1209
Epoch 31 / iter 3, loss = 0.1203
Epoch 31 / iter 4, loss = 0.2008
Epoch 32 / iter 0, loss = 0.1501
Epoch 32 / iter 1, loss = 0.1310
Epoch 32 / iter 2, loss = 0.1065
Epoch 32 / iter 3, loss = 0.1489
Epoch 32 / iter 4, loss = 0.0818
Epoch 33 / iter 0, loss = 0.1401
Epoch 33 / iter 1, loss = 0.1367
Epoch 33 / iter 2, loss = 0.0970
Epoch 33 / iter 3, loss = 0.1481
Epoch 33 / iter 4, loss = 0.0711
Epoch 34 / iter 0, loss = 0.1157
Epoch 34 / iter 1, loss = 0.1050
Epoch 34 / iter 2, loss = 0.1378
Epoch 34 / iter 3, loss = 0.1505
Epoch 34 / iter 4, loss = 0.0429
Epoch 35 / iter 0, loss = 0.1096
Epoch 35 / iter 1, loss = 0.1279
Epoch 35 / iter 2, loss = 0.1715
Epoch 35 / iter 3, loss = 0.0888
Epoch 35 / iter 4, loss = 0.0473
Epoch 36 / iter 0, loss = 0.1350
Epoch 36 / iter 1, loss = 0.0781
Epoch 36 / iter 2, loss = 0.1458
Epoch 36 / iter 3, loss = 0.1288
Epoch 36 / iter 4, loss = 0.0421
Epoch 37 / iter 0, loss = 0.1083
Epoch 37 / iter 1, loss = 0.0972
Epoch 37 / iter 2, loss = 0.1513
Epoch 37 / iter 3, loss = 0.1236
Epoch 37 / iter 4, loss = 0.0366
Epoch 38 / iter 0, loss = 0.1204
Epoch 38 / iter 1, loss = 0.1341
Epoch 38 / iter 2, loss = 0.1109
Epoch 38 / iter 3, loss = 0.0905
Epoch 38 / iter 4, loss = 0.3906
Epoch 39 / iter 0, loss = 0.0923
Epoch 39 / iter 1, loss = 0.1094
Epoch 39 / iter 2, loss = 0.1295
Epoch 39 / iter 3, loss = 0.1239
Epoch 39 / iter 4, loss = 0.0684
Epoch 40 / iter 0, loss = 0.1188
Epoch 40 / iter 1, loss = 0.0984
Epoch 40 / iter 2, loss = 0.1067
Epoch 40 / iter 3, loss = 0.1057
Epoch 40 / iter 4, loss = 0.4602
Epoch 41 / iter 0, loss = 0.1478
Epoch 41 / iter 1, loss = 0.0980
Epoch 41 / iter 2, loss = 0.0921
Epoch 41 / iter 3, loss = 0.1020
Epoch 41 / iter 4, loss = 0.0430
Epoch 42 / iter 0, loss = 0.0991
Epoch 42 / iter 1, loss = 0.0994
Epoch 42 / iter 2, loss = 0.1270
Epoch 42 / iter 3, loss = 0.0988
Epoch 42 / iter 4, loss = 0.1176
Epoch 43 / iter 0, loss = 0.1286
Epoch 43 / iter 1, loss = 0.1013
Epoch 43 / iter 2, loss = 0.1066
Epoch 43 / iter 3, loss = 0.0779
Epoch 43 / iter 4, loss = 0.1481
Epoch 44 / iter 0, loss = 0.0840
Epoch 44 / iter 1, loss = 0.0858
Epoch 44 / iter 2, loss = 0.1388
Epoch 44 / iter 3, loss = 0.1000
Epoch 44 / iter 4, loss = 0.0313
Epoch 45 / iter 0, loss = 0.0896
Epoch 45 / iter 1, loss = 0.1173
Epoch 45 / iter 2, loss = 0.0916
Epoch 45 / iter 3, loss = 0.1043
Epoch 45 / iter 4, loss = 0.0074
Epoch 46 / iter 0, loss = 0.1008
Epoch 46 / iter 1, loss = 0.0915
Epoch 46 / iter 2, loss = 0.0877
Epoch 46 / iter 3, loss = 0.1139
Epoch 46 / iter 4, loss = 0.0292
Epoch 47 / iter 0, loss = 0.0679
Epoch 47 / iter 1, loss = 0.0987
Epoch 47 / iter 2, loss = 0.0929
Epoch 47 / iter 3, loss = 0.1098
Epoch 47 / iter 4, loss = 0.4838
Epoch 48 / iter 0, loss = 0.0693
Epoch 48 / iter 1, loss = 0.1095
Epoch 48 / iter 2, loss = 0.1128
Epoch 48 / iter 3, loss = 0.0890
Epoch 48 / iter 4, loss = 0.1008
Epoch 49 / iter 0, loss = 0.0724
Epoch 49 / iter 1, loss = 0.0804
Epoch 49 / iter 2, loss = 0.0919
Epoch 49 / iter 3, loss = 0.1233
Epoch 49 / iter 4, loss = 0.1849
Three key points of modeling house-price prediction with a neural network
- Build the network: initialize the parameters w and b, and define how predictions and the loss function are computed.
- Pick a random starting point, and establish how the gradient is computed and how the parameters are updated.
- Repeatedly draw a mini_batch from the full dataset, compute the gradient, and update the parameters, iterating until the loss almost stops decreasing.