Variable and Tensor
import torch
from torch.autograd import Variable  # the Variable module in torch

tensor = torch.FloatTensor([[1, 2], [3, 4]])
variable = Variable(tensor)
print(tensor)
print(variable)
The output is as follows:
tensor([[1., 2.],
        [3., 4.]])
tensor([[1., 2.],
        [3., 4.]])
Notice that the tensor and the Variable print in exactly the same form; in newer versions of PyTorch you can work with tensors directly and no longer need Variable.
In older versions the difference was that a Variable could participate in error backpropagation, while a plain Tensor could not.
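To see the merge in action, here is a minimal sketch (assuming PyTorch 0.4 or later): a plain Tensor created with requires_grad=True can backpropagate on its own, and wrapping a tensor in Variable simply returns a Tensor.

import torch
from torch.autograd import Variable

# In PyTorch >= 0.4 a Tensor can track gradients directly
t = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
out = t.pow(2).sum()   # a scalar, so backward() needs no extra argument
out.backward()
print(t.grad)          # gradient of sum(t**2) w.r.t. t is 2*t

# Variable is kept only for backward compatibility and now returns a Tensor
v = Variable(torch.ones(2))
print(type(v))         # <class 'torch.Tensor'>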
Next, let's look at how autograd implements history tracking and backpropagation now that Tensor and Variable have been merged. The flag that controls whether autograd is enabled, requires_grad, is now an attribute of Tensor itself: as long as any input Tensor of an operation has requires_grad = True, autograd automatically tracks the computation history and can backpropagate through it.
The concrete example given in the official documentation is as follows:
# by default a Tensor is created with requires_grad=False
x = torch.ones(1)      # create a tensor with requires_grad=False (default)
x.requires_grad
# out: False

# create another Tensor, also with requires_grad=False
y = torch.ones(1)      # another tensor with requires_grad=False

# both inputs have requires_grad=False, so does the output
z = x + y
# because x and y both have requires_grad=False, neither supports automatic differentiation,
# so the result of the operation z = x + y cannot be differentiated either: requires_grad=False
z.requires_grad
# out: False

# then autograd won't track this computation. let's verify!
# autograd is not possible here, so the call raises an error
z.backward()
# out: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

# now create a tensor with requires_grad=True
w = torch.ones(1, requires_grad=True)
w.requires_grad
# out: True

# add to the previous result that has requires_grad=False
# because the input Tensor w of this operation has requires_grad=True,
# the operation supports backpropagation and automatic differentiation
total = w + z
# the total sum now requires grad!
total.requires_grad
# out: True

# autograd can compute the gradients as well
total.backward()
w.grad
# out: tensor([ 1.])

# and no computation is wasted to compute gradients for x, y and z, which don't require grad
# since x, y and z have requires_grad=False, their gradients are never computed
z.grad == x.grad == y.grad == None
# out: True
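The gradient of 1 in w.grad follows directly from total = w + z: the derivative of total with respect to w is 1, and z acts as a constant because it does not require grad. As a quick sanity check (a minimal sketch of my own, not part of the official example), scaling w changes the gradient accordingly:

w = torch.ones(1, requires_grad=True)
z = torch.ones(1)        # requires_grad=False, so z is treated as a constant
total = 3 * w + z        # d(total)/dw = 3
total.backward()
print(w.grad)            # tensor([3.])
print(z.grad)            # None: no gradient is computed for z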