Original error: "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time"

This problem easily arises when the model has multiple outputs, as in the following program:

    # zero the parameter gradients
    model.zero_grad()
    # forward + backward + optimize
    outputs, hidden = model(inputs, hidden)
    loss = criterion(outputs, session, items)  # criterion: the loss function
    acc_loss += loss.data
    loss.backward()
    ...
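Here the second output, hidden, is carried into the next forward pass, so the new graph stays connected to the old one that backward() has already freed. Below is a minimal sketch of the failure and the usual fix, assuming a toy nn.RNN training loop (rnn, criterion, and the tensor shapes are illustrative, not from the original post):

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(rnn.parameters(), lr=0.01)

    # initial hidden state: (num_layers, batch, hidden_size)
    hidden = torch.zeros(1, 2, 8)

    for step in range(3):
        inputs = torch.randn(2, 5, 4)   # (batch, seq_len, input_size)
        target = torch.randn(2, 5, 8)

        optimizer.zero_grad()
        outputs, hidden = rnn(inputs, hidden)
        loss = criterion(outputs, target)
        loss.backward()   # without the detach below, step 2 raises the RuntimeError above
        optimizer.step()

        # cut the graph so the next iteration's backward stops at this hidden state
        hidden = hidden.detach()

Detaching the hidden state is usually preferable to retain_graph=True here: retaining the graph makes every later backward re-traverse all earlier steps, so memory and compute grow with each iteration.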



loss.backward(retain_graph=True) error during backpropagation in PyTorch

The backpropagation step in RNN and LSTM models tends to fail at loss.backward(), especially after upgrading the PyTorch version. Problem 1: calling loss.backward() raises "Trying to backward through the graph a second time ..."
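When the graph genuinely must be traversed twice (for example, two backward passes over the same loss, or two losses that share a subgraph), the error message's own suggestion applies. A minimal standalone sketch (the tensors are illustrative):

    import torch

    x = torch.randn(3, requires_grad=True)
    y = (x ** 2).sum()

    y.backward(retain_graph=True)  # keep the graph's buffers for another pass
    y.backward()                   # would raise the RuntimeError without retain_graph above
    print(x.grad)                  # gradients from both passes accumulate here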

 