[paper] HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
2019-09-15 21:31
Tags: summarization / ACL-2019 / paper reading / transformer