PyTorch -- converting between CPU and GPU when loading models


When switching from GPU to CPU, you may hit this error: RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU. In that case, change the call to:

torch.load("0.9472_0048.weights",map_location='cpu')

Suppose we saved only the model's parameters (model.state_dict()) to a file named modelparameters.pth, and that the model is created with model = Net(). Then:

1. cpu -> cpu or gpu -> gpu:

checkpoint = torch.load('modelparameters.pth')
model.load_state_dict(checkpoint)

2. cpu -> gpu 1

torch.load('modelparameters.pth', map_location=lambda storage, loc: storage.cuda(1))
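
As a side note (an equivalent form, not from the original), map_location also accepts a plain device string, which is often easier to read than the lambda. A sketch, assuming modelparameters.pth contains a state_dict and GPU 1 exists on the machine:

import torch

# Equivalent to the storage.cuda(1) lambda above: every storage in the
# checkpoint is remapped onto GPU 1.
checkpoint = torch.load('modelparameters.pth', map_location='cuda:1')

# All tensors in the loaded state_dict now live on cuda:1.
print(next(iter(checkpoint.values())).device)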

3. gpu 1 -> gpu 0

torch.load('modelparameters.pth', map_location={'cuda:1':'cuda:0'})

4. gpu -> cpu

torch.load('modelparameters.pth', map_location=lambda storage, loc: storage)
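
Whichever mapping is used, torch.load only returns the saved tensors; you still have to push them into the model. A sketch of the remaining steps, assuming the modelparameters.pth file and the Net class from above, and letting torch.device pick a sensible target at run time:

import torch

# Choose the target device based on what is actually available.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Passing a torch.device as map_location covers both the gpu -> cpu and
# gpu -> gpu cases in one call.
checkpoint = torch.load('modelparameters.pth', map_location=device)

model = Net()                      # Net: the model class assumed earlier
model.load_state_dict(checkpoint)  # copy the loaded weights into the model
model.to(device)                   # then move the whole module to `device`
model.eval()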

