Original error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Fix: change

    model.module.optimizer_G.zero_grad()
    loss_G.backward()
    model.module.optimizer_G.step()

to:

    model.module.optimizer_G.zero_grad()
    loss_G.backward(retain_graph=True)
    model.module.optimizer_G.step()

and the problem is solved.
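For context, a minimal self-contained sketch of why the fix works (the variable names below are illustrative, not from the original post): by default, backward() frees the graph's saved intermediate tensors, so backpropagating through the same graph a second time fails unless the first call passes retain_graph=True.

    import torch

    x = torch.randn(3, requires_grad=True)
    y = (x * x).sum()

    y.backward(retain_graph=True)  # keep the graph's saved tensors alive
    y.backward()                   # second backward now succeeds; gradients accumulate in x.grad

Note that retain_graph=True keeps the whole graph in memory. If the second backward is unintentional (for example, a tensor from a previous iteration leaking into the current loss), detaching that tensor is usually the better fix.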

Related: loss.backward(retain_graph=True) error in PyTorch backpropagation

The backpropagation step in RNN and LSTM models can hit problems at loss.backward(), especially after upgrading the PyTorch version. Problem 1: loss.backward() raises "Trying to backward through the graph a second time ..."
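The excerpt above is truncated, but a common trigger of this error in RNN/LSTM training loops (an assumption going beyond what the excerpt states) is carrying the hidden state across iterations without detaching it, so each new loss stays connected to the already-freed graph of the previous step. A minimal sketch of that failure mode and the usual fix, assuming a plain nn.RNN loop:

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
    opt = torch.optim.SGD(rnn.parameters(), lr=0.01)
    hidden = torch.zeros(1, 2, 8)      # (num_layers, batch, hidden_size)

    for step in range(3):
        x = torch.randn(2, 5, 4)       # (batch, seq_len, input_size)
        out, hidden = rnn(x, hidden)
        loss = out.sum()
        opt.zero_grad()
        loss.backward()                # would raise the error on step 1 without the detach below
        opt.step()
        hidden = hidden.detach()       # cut the graph so the next backward stops here

Detaching the hidden state (truncated backpropagation through time) is usually preferable here to retain_graph=True, which would keep every past step's graph in memory.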
