Paper Notes: Sequence to Sequence Learning with Neural Networks


The overall idea is the same as the RNN encoder-decoder framework; the difference is that it is implemented with LSTMs.

The paper highlights three important points:

1) The encoder and decoder LSTMs are two distinct models.

2) A deep LSTM performs better than a shallow one; the authors use a 4-layer LSTM.

3) In practice, reversing the input sentence before training was found to give better results. So for example, instead of mapping the sentence a,b,c to the sentence α,β,γ, the LSTM is asked to map c,b,a to α,β,γ, where α,β,γ is the translation of a,b,c. This way, a is in close proximity to α, b is fairly close to β, and so on, a fact that makes it easy for SGD to "establish communication" between the input and the output. (A minimal sketch of all three points follows below.)
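Here is a minimal PyTorch sketch of the three points above, assuming illustrative names (`Seq2Seq`, `emb_dim`, `hidden`) and the sizes reported in the paper (4 layers, 1000 units); this is not the authors' original code:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Hypothetical seq2seq module illustrating the three points above."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=1000, hidden=1000, layers=4):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # Point 1: the encoder and decoder are two distinct LSTMs.
        # Point 2: both are deep (4 stacked layers), as in the paper.
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=layers)
        self.decoder = nn.LSTM(emb_dim, hidden, num_layers=layers)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Point 3: reverse the source along the time axis, so that the
        # first source word ends up nearest the first target word.
        src = torch.flip(src, dims=[0])                   # (T_src, B)
        _, (h, c) = self.encoder(self.src_emb(src))
        # The encoder's final hidden/cell state initializes the decoder.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), (h, c))
        return self.out(dec_out)                          # (T_tgt, B, tgt_vocab)
```

At training time `tgt` would be the gold translation shifted right (teacher forcing); at inference the decoder is unrolled one token at a time from the encoder's final state.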

