Original error: "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time"

This problem tends to occur when the model has multiple outputs, as in the following training snippet (reconstructed from the garbled original):

    # zero the parameter gradients
    model.zero_grad()
    # forward + backward + optimize
    outputs, hidden = model(inputs, hidden)
    loss = loss_fn(outputs, session, items)  # loss_fn: name assumed, original line is garbled
    acc_loss += loss.data
    loss.backward()
    ...
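A minimal runnable sketch of how this error arises and the usual fix. The model, shapes, and loss are assumptions for illustration; the point is that a hidden state carried across iterations keeps the previous graph alive, and detaching it before the next step avoids backpropagating through the freed graph a second time:

```python
import torch
import torch.nn as nn

# Hypothetical setup: a small RNN whose hidden state is reused across steps.
model = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
criterion = nn.MSELoss()
hidden = torch.zeros(1, 2, 8)  # (num_layers, batch, hidden_size)

for step in range(2):
    inputs = torch.randn(2, 5, 4)
    target = torch.randn(2, 5, 8)

    model.zero_grad()
    outputs, hidden = model(inputs, hidden)
    loss = criterion(outputs, target)
    # Without the detach below, this raises "Trying to backward through the
    # graph a second time" on step 2, because step 1's graph was freed.
    loss.backward()
    # Fix: cut the autograd graph between iterations.
    hidden = hidden.detach()
```

The alternative, `loss.backward(retain_graph=True)`, keeps the old graph alive; that is only appropriate when you genuinely need a second backward pass through it, and it grows memory use if applied every iteration.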

Posted: 2019-11-29 18:31


Related: loss.backward(retain_graph=True) error in PyTorch backpropagation

Backpropagation in RNN and LSTM models fails at loss.backward(); the problem tends to appear after upgrading the PyTorch version. Problem 1: loss.backward() raises "Trying to backward through the graph a second time ...

Posted: Tue Nov 02 02:18:00 CST 2021
 