While working through the loss-function section of Chen Yun's tutorial 《深度學習框架PyTorch:入門與實踐》, I ran the code from the book (here t is torch, and net and input are defined earlier in the chapter):
output = net(input)
target = Variable(t.arange(0,10))
criterion = nn.MSELoss()
loss = criterion(output, target)
loss
Output:
RuntimeError Traceback (most recent call last)
<ipython-input-37-e5c73861a53b> in <module>()
2 target = Variable(t.arange(0,10))
3 criterion = nn.MSELoss()
----> 4 loss = criterion(output, target)
5 loss
RuntimeError: Expected object of type torch.FloatTensor but found type torch.LongTensor for argument #2 'target'
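One quick way to confirm the mismatch the traceback points at is to print the tensor types directly. A minimal sanity check, using the same output and target as above:

# Confirm the dtype mismatch that MSELoss complains about
print(output.data.type())   # torch.FloatTensor (the network's output)
print(target.data.type())   # torch.LongTensor  (t.arange produced integers here)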
Following the Stack Overflow question "Pytorch: Convert FloatTensor into DoubleTensor" and the post "PyTorch(總)——PyTorch遇到令人迷人的BUG與記錄", I converted the type of target using the torch.from_numpy(Y).float() pattern:
# Fix: build target with torch.from_numpy(...).float()
import numpy as np

output = net(input)
y = np.arange(0,10).reshape(1,10)           # match output's (1, 10) shape
target = Variable(t.from_numpy(y).float())  # .float() casts the LongTensor to a FloatTensor
criterion = nn.MSELoss()
loss = criterion(output, target)
loss
Output:
tensor(28.5897, grad_fn=<MseLossBackward>)
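The detour through NumPy is not actually required; casting the tensor directly works too. A minimal alternative sketch (same net, input, and criterion as above; note that the dtype t.arange returns varies across PyTorch versions):

# Alternative: cast in torch directly, no NumPy round trip
output = net(input)
target = Variable(t.arange(0,10).float().view(1,10))  # FloatTensor with output's (1, 10) shape
loss = criterion(output, target)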
The same error appears for target later on, in the optimizer (torch.optim) code:
import torch.optim as optim
# Create an optimizer, specifying the parameters to adjust and the learning rate
optimizer = optim.SGD(net.parameters(), lr=0.01)
# During training:
# first zero the gradients (same effect as net.zero_grad())
optimizer.zero_grad()
# compute the loss
output = net(input)
# changing target to Variable(t.from_numpy(y).float()) avoids the error
loss = criterion(output, target)
# backward pass
loss.backward()
# update the parameters
optimizer.step()
Running this raises the same RuntimeError. After changing target to Variable(t.from_numpy(y).float()), the code runs successfully:
import torch.optim as optim
# Create an optimizer, specifying the parameters to adjust and the learning rate
optimizer = optim.SGD(net.parameters(), lr=0.01)
# During training:
# first zero the gradients (same effect as net.zero_grad())
optimizer.zero_grad()
# compute the loss
output = net(input)
# target is now the float version, so criterion no longer raises
loss = criterion(output, Variable(t.from_numpy(y).float()))
# backward pass
loss.backward()
# update the parameters
optimizer.step()
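Putting the pieces together, here is a minimal self-contained sketch of the whole fix. The nn.Linear network and the random input are my own stand-ins (the tutorial defines its own net earlier), so read this as an illustration of the dtype fix rather than the book's exact model:

import numpy as np
import torch as t
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

# Hypothetical stand-in network: 32 input features, 10 output scores
net = nn.Linear(32, 10)

input = Variable(t.randn(1, 32))            # one sample with 32 features
y = np.arange(0, 10).reshape(1, 10)
target = Variable(t.from_numpy(y).float())  # cast to FloatTensor so MSELoss accepts it

criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

optimizer.zero_grad()                       # clear stale gradients
output = net(input)
loss = criterion(output, target)            # both arguments are FloatTensors now
loss.backward()                             # backpropagate
optimizer.step()                            # update the parameters
print(loss.data)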