1. In 0.4, move models and tensors to a device with `.to(device)`.
2. 0.4 merged `Variable` into `Tensor`, so wrapping in `Variable` is no longer needed; plain tensors work directly and carry `requires_grad` themselves.
3. `with torch.no_grad():` replaces `volatile`; `volatile` is deprecated, so when gradients are not needed (e.g. at test time), wrap the code in `with torch.no_grad():`. Points 1-3 are combined in the first sketch after this list.
4. Use `.detach()` instead of `.data`: `x.detach()` returns a Tensor with `requires_grad=False` that shares data with `x`, and if `x` is needed in the backward pass, in-place changes to the tensor returned by `x.detach()` are still tracked by autograd (and reported as an error). By contrast, changes to the tensor returned by `x.data` are not tracked by autograd, so if the backward pass needs `x`, the gradients will be silently wrong. The second sketch after this list contrasts the two.
5. torchvision
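
A minimal sketch combining points 1-3; the model and input shapes here are made up for illustration:

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(10, 2).to(device)     # point 1: move parameters to the device
x = torch.randn(4, 10, device=device)   # point 2: a plain tensor, no Variable wrapper

model.eval()
with torch.no_grad():                    # point 3: replaces volatile=True
    out = model(x)
print(out.shape)  # torch.Size([4, 2])
```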
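
And a sketch of the `.detach()` vs `.data` difference from point 4, using `exp()` because its backward pass reuses the saved output:

```python
import torch

# .data: the in-place change is NOT tracked, gradients are silently wrong
a = torch.ones(3, requires_grad=True)
out = a.exp()         # exp() saves its output for the backward pass
out.data.zero_()      # autograd does not see this edit
out.sum().backward()  # runs, but the saved output is now zeroed
print(a.grad)         # tensor([0., 0., 0.]) instead of e -- silently wrong

# .detach(): the shared version counter catches the same edit
b = torch.ones(3, requires_grad=True)
out2 = b.exp()
out2.detach().zero_()     # bumps out2's version counter
try:
    out2.sum().backward()
except RuntimeError as e:
    print('caught:', e)   # a tensor needed for backward was modified in-place
```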
- Some interfaces changed in PyTorch 0.4, and saved models are backward compatible but not forward compatible: a newer PyTorch can load models saved by an older one, but not the other way around.
- In PyTorch 0.4, is it recommended to use `reshape` rather than `view` when possible? (see the sketch after this list)
- Question about 'rebuild_tensor_v2'?
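
A quick sketch of why `reshape` is the safer default: `view` requires contiguous memory, while `reshape` falls back to copying when needed:

```python
import torch

x = torch.arange(6).reshape(2, 3)
t = x.t()                      # transpose -> non-contiguous

try:
    t.view(6)                  # fails on a non-contiguous tensor
except RuntimeError as e:
    print('view failed:', e)

print(t.reshape(6))            # works; returns a copy here
print(t.contiguous().view(6))  # the equivalent pre-0.4 idiom
```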
When loading a model saved by PyTorch 0.4 with PyTorch 0.3:
```python
# Monkey-patch because I trained with a newer version.
# This can be removed once PyTorch 0.4.x is out.
# See https://discuss.pytorch.org/t/question-about-rebuild-tensor-v2/14560
import torch._utils
try:
    torch._utils._rebuild_tensor_v2
except AttributeError:
    def _rebuild_tensor_v2(storage, storage_offset, size, stride, requires_grad, backward_hooks):
        tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
        tensor.requires_grad = requires_grad
        tensor._backward_hooks = backward_hooks
        return tensor
    torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2
```
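
With the patch above in place, the 0.4 checkpoint should load normally in 0.3 (`model_04.pth` is a hypothetical path):

```python
import torch

model = torch.load('model_04.pth')
```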
- On copying some weights into a new model: it seems you cannot directly extract a single layer's output from a `Sequential`; you either have to rebuild the model and run `forward` to get that layer's content, or use a hook.
- Getting the output of layers inside a `Sequential` when fine-tuning in PyTorch, using VGG as the example; a sketch of both approaches follows.
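
A sketch of both approaches on torchvision's VGG-16; the layer index 15 is an arbitrary choice for illustration:

```python
import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True)  # downloads weights on first use

# (a) Rebuild: slice the Sequential into a new truncated model.
# This reuses vgg's module objects, so the pretrained weights come along.
truncated = torch.nn.Sequential(*list(vgg.features.children())[:16])

# (b) Hook: register a forward hook on the layer of interest.
activations = {}
def save_output(module, inputs, output):
    activations['feat'] = output

vgg.features[15].register_forward_hook(save_output)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    out_a = truncated(x)
    vgg(x)                       # the hook fills activations['feat']
print(torch.allclose(out_a, activations['feat']))  # True
```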