Textual Entailment (Natural Language Inference) - AllenNLP


Natural language inference (NLI) is one of the higher-level tasks in NLP, and it covers a lot of ground: machine reading, question answering, and dialogue are all, at heart, forms of natural language inference. While browsing the AllenNLP package recently, I came across its textual entailment (TE) module. The task is set up as follows:

Given a premise text, the model infers the relationship between a hypothesis text and that premise. The relationship is typically classified as entailment (the hypothesis can be inferred from the premise), contradiction (the hypothesis contradicts the premise), or neutral (neither holds). The output of a textual entailment model is a probability for each of these labels.
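Concretely, the model's output for a sentence pair is one probability per label, and the predicted relation is simply the most likely label. A minimal illustration (the probability values below are made up, not from any real model):

```python
# A TE model outputs a probability distribution over the three labels.
labels = ["entailment", "contradiction", "neutral"]
label_probs = [0.03, 0.91, 0.06]  # hypothetical model output

# The predicted relation is the highest-probability label.
prediction = labels[max(range(len(labels)), key=lambda i: label_probs[i])]
print(prediction)  # contradiction
```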

 

Textual Entailment
Textual Entailment (TE) models take a pair of sentences and predict whether the facts in the first necessarily imply the facts in the second. The AllenNLP TE model is a re-implementation of the decomposable attention model (Parikh et al., 2016), a widely used TE baseline that was state-of-the-art on the SNLI dataset in late 2016. The AllenNLP TE model achieves an accuracy of 86.4% on the SNLI 1.0 test set, roughly a 2% improvement over most publicly available implementations and a score similar to the original paper's. Rather than pre-trained GloVe vectors, this model uses ELMo embeddings, which are entirely character based and account for the 2% improvement.

 

AllenNLP integrates the EMNLP 2016 paper by Google researchers: A Decomposable Attention Model for Natural Language Inference.
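The paper decomposes the problem into three steps: attend (soft-align each token in one sentence with the tokens of the other), compare (combine each token with its aligned summary), and aggregate (pool over tokens and classify). Below is a simplified NumPy sketch of that data flow. The random matrices `G` and `H` stand in for the paper's learned feed-forward networks, and the attention step omits the learned transform `F`, so the output probabilities are meaningless; only the shapes and the alignment mechanism mirror the model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_labels = 8, 3

# Toy token embeddings; the real model uses ELMo/GloVe vectors.
a = rng.normal(size=(5, d))   # premise, 5 tokens
b = rng.normal(size=(7, d))   # hypothesis, 7 tokens

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 1) Attend: unnormalized alignment scores between all token pairs
#    (the paper applies a learned network F before this dot product).
e = a @ b.T                       # shape (5, 7)
beta = softmax(e, axis=1) @ b     # hypothesis summary per premise token
alpha = softmax(e.T, axis=1) @ a  # premise summary per hypothesis token

# 2) Compare: each token concatenated with its aligned summary,
#    passed through a (here random) comparison network G.
G = rng.normal(size=(2 * d, d))
v1 = np.concatenate([a, beta], axis=1) @ G   # (5, d)
v2 = np.concatenate([b, alpha], axis=1) @ G  # (7, d)

# 3) Aggregate: sum over tokens, then classify with a (random) head H.
H = rng.normal(size=(2 * d, num_labels))
logits = np.concatenate([v1.sum(axis=0), v2.sum(axis=0)]) @ H
probs = softmax(logits)
print(probs.shape)  # one probability per label: (3,)
```

Note that the attention here is computed once from the raw embeddings, with no recurrence over the sentence, which is what makes the model cheap and trivially parallelizable across token pairs.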


Putting the Paper into Practice

(1) Test example one:

Premise: Two women are wandering along the shore drinking iced tea.

Hypothesis: Two women are sitting on a blanket near some rocks talking about politics.

The test results are as follows:
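These results can be reproduced with AllenNLP's `Predictor` interface. The model archive URL below is the 2018-era demo model and may have moved since, so treat it as an assumption and check the current AllenNLP model listings; the label order in `label_probs` should likewise be verified against the model's configuration. The snippet degrades gracefully if the model cannot be loaded:

```python
premise = "Two women are wandering along the shore drinking iced tea."
hypothesis = ("Two women are sitting on a blanket near some rocks "
              "talking about politics.")

try:
    from allennlp.predictors.predictor import Predictor

    # NOTE: archive URL is an assumption (the 2018-era demo model);
    # consult the AllenNLP documentation for the current path.
    predictor = Predictor.from_path(
        "https://allennlp.s3.amazonaws.com/models/"
        "decomposable-attention-elmo-2018.02.19.tar.gz"
    )
    result = predictor.predict(premise=premise, hypothesis=hypothesis)
    # Assumed order: [entailment, contradiction, neutral].
    print(result["label_probs"])
except Exception as exc:  # allennlp missing, or model download failed
    print(f"Could not run the predictor: {exc}")
```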


The attention visualization is as follows:


Test example two:

Premise: If you help the needy, God will reward you.

Hypothesis: Giving money to the poor has good consequences.

The test results are as follows:

 

