1.
UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
return F.log_softmax(x)
Fix: change F.log_softmax(x) to F.log_softmax(x, dim=0). I also found that F.log_softmax(x, dim=1) works; which of the two is correct depends on the input shape. For a typical classifier output of shape (batch_size, num_classes), dim=1 (the class dimension) is the right choice.
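To see which dim is appropriate, here is a minimal sketch (the tensor x is made up for illustration): for a classifier output of shape (batch_size, num_classes), dim=1 normalizes over the classes of each sample, so each row becomes a log-probability distribution.

```python
import torch
import torch.nn.functional as F

# A typical classifier output: (batch_size, num_classes)
x = torch.tensor([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0]])

# dim=1 normalizes across the class dimension for each sample
out = F.log_softmax(x, dim=1)

# exp(out) gives one probability distribution per row
row_sums = out.exp().sum(dim=1)
```

With dim=0 the normalization would instead run down each column, mixing different samples, which is almost never what a classifier wants.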
2.
UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number train_loss += loss.data[0]
Fix: change train_loss += loss.data[0] to train_loss += loss.item()
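A minimal sketch of the fix (loss here is a stand-in 0-dim tensor, not a real loss): .item() extracts the value of a 0-dim tensor as a plain Python number, which is the safe way to accumulate it.

```python
import torch

# A 0-dim tensor, as returned by a typical loss function
loss = torch.tensor(0.25)

train_loss = 0.0
train_loss += loss.item()  # .item() converts the 0-dim tensor to a Python float
```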
3.
UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead. label = Variable(label.cuda(), volatile=True)
Fix: change label = Variable(label.cuda(), volatile=True) to label = Variable(label.cuda()). Note that simply dropping volatile=True silences the warning but no longer disables gradient tracking; to keep the old behavior during evaluation, wrap the code in with torch.no_grad(): as the warning suggests.
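A small sketch of the torch.no_grad() replacement the warning suggests (x is a made-up tensor): inside the context manager, results do not track gradients, which is what volatile=True used to achieve.

```python
import torch

x = torch.ones(3, requires_grad=True)

# Inside torch.no_grad(), autograd is disabled for all operations,
# so the result does not require gradients
with torch.no_grad():
    y = x * 2

# Outside the context, the same operation tracks gradients again
z = x * 2
```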
The following problems came up in my own code, and I solved them in the same way as above.
4.
Warning: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead. data, target = Variable(data, volatile=True), Variable(target)
Fix: change data, target = Variable(data, volatile=True), Variable(target) to data, target = Variable(data), Variable(target)
5.
Warning: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number test_loss += F.nll_loss(output, target, size_average=False).data[0]
Fix: change data[0] to item():
test_loss += F.nll_loss(output, target, size_average=False).item()
Re-running raised another warning:
UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead. warnings.warn(warning.format(ret))
The message says that both the size_average and reduce arguments are deprecated and that reduction='sum' should be used instead, so I changed the line to:
test_loss += F.nll_loss(output, target, reduction='sum').item()
After that the code runs with no warnings. If you jump to the definition of nll_loss in PyCharm (right-click, Go to Definition), you will see that the warning message describes the situation precisely. The definition of nll_loss is shown below:
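Before looking at the definition, here is a minimal sketch (with made-up tensors) of the replacement: reduction='sum' adds up the per-sample losses, which is what the deprecated size_average=False did.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
output = F.log_softmax(torch.randn(4, 3), dim=1)  # log-probabilities for 4 samples
target = torch.tensor([0, 2, 1, 1])

# reduction='sum' replaces the deprecated size_average=False
summed = F.nll_loss(output, target, reduction='sum')

# Sanity check: it equals the per-sample losses summed by hand
manual = F.nll_loss(output, target, reduction='none').sum()
```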