[paper] HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
2019-09-15 21:31
Tags: summarization / ACL-2019 / paper reading / transformer