[paper] HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
2019-09-15 21:31
Tags: summarization / ACL-2019 / paper reading / transformer