Original post: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
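
A minimal sketch (not from the original post) that reproduces this error: by default, the first backward() frees the graph's intermediate buffers, so a second backward() through the same graph fails.

    import torch

    x = torch.randn(3, requires_grad=True)
    y = (x * 2).sum()

    y.backward()   # first call frees the graph's intermediate buffers
    y.backward()   # RuntimeError: Trying to backward through the graph a second time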

Fix: change

    model.module.optimizer_G.zero_grad()
    loss_G.backward()
    model.module.optimizer_G.step()

to

    model.module.optimizer_G.zero_grad()
    loss_G.backward(retain_graph=True)
    model.module.optimizer_G.step()

and the problem is resolved.
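
For context, a minimal sketch of why passing retain_graph=True helps (the setup below is an assumption for illustration; the original post's model and losses are not shown): when two losses share part of one computation graph, the first backward() frees the shared buffers unless the graph is retained.

    import torch
    import torch.nn as nn

    net = nn.Linear(4, 1)                       # stand-in for the real model
    optimizer_G = torch.optim.SGD(net.parameters(), lr=0.01)

    out = net(torch.randn(8, 4))                # one forward pass, one graph
    loss_G = out.mean()
    loss_aux = (out ** 2).mean()                # second loss on the same graph

    optimizer_G.zero_grad()
    loss_G.backward(retain_graph=True)          # keep buffers for the next backward
    loss_aux.backward()                         # succeeds because the graph was retained
    optimizer_G.step()

Note the trade-off: retain_graph=True keeps the whole graph in memory, so it should only be used when a later backward pass genuinely needs the same graph.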


Related: error with loss.backward(retain_graph=True) in PyTorch backpropagation

Backpropagation in RNN and LSTM models: a problem at loss.backward() that tends to appear after upgrading the PyTorch version. Problem 1: calling loss.backward() raises "Trying to backward through the graph a second time ..."
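
A minimal sketch of the RNN/LSTM pattern that typically triggers this (the model, shapes, and names below are assumptions, not the original post's code): the hidden state carried across iterations links each step's graph to the previous one, so the second iteration's backward() tries to traverse a graph that has already been freed. Detaching the hidden state between iterations is the usual fix; loss.backward(retain_graph=True) also avoids the error, but keeps every past graph alive and grows memory use.

    import torch
    import torch.nn as nn

    rnn = nn.LSTM(input_size=5, hidden_size=8, batch_first=True)
    optimizer = torch.optim.Adam(rnn.parameters(), lr=1e-3)
    hidden = None

    for step in range(3):
        x = torch.randn(2, 4, 5)               # dummy batch: (batch, seq, features)
        out, hidden = rnn(x, hidden)
        loss = out.pow(2).mean()

        optimizer.zero_grad()
        loss.backward()                        # RuntimeError on step 2 without the detach below
        optimizer.step()

        # Detach so the next iteration's graph does not reach back into this one.
        hidden = tuple(h.detach() for h in hidden)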
