[paper] On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
2019-09-14 22:42
summarization / paper reading / transformer / CoRR-2019