Most of this content is adapted from: https://github.com/Yuzhen-Li/yuzhenli.github.io/wiki/Stanford-CoreNLP%E5%9C%A8Ubuntu%E4%B8%8B%E7%9A%84%E5%AE%89%E8%A3%85%E4%B8%8E%E4%BD%BF%E7%94%A8
1. Install the Java runtime environment
sudo apt-get install default-jre
sudo apt-get install default-jdk
2. Download the Stanford CoreNLP package
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2018-02-27.zip
unzip stanford-corenlp-full-2018-02-27.zip
cd stanford-corenlp-full-2018-02-27/
3. Configure the environment variable (add every CoreNLP jar to CLASSPATH)
for file in `find . -name "*.jar"`; do export CLASSPATH="$CLASSPATH:`realpath $file`"; done
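Note that this export only affects the current shell session; add it to ~/.bashrc if you want it to persist. To confirm the jars were picked up, a quick check from Python started in that same shell (a sketch; CLASSPATH is only visible if Python inherits the environment of the shell that ran the export):

import os

classpath = os.environ.get('CLASSPATH', '')  # empty if the export did not reach this process
jar_count = len([p for p in classpath.split(':') if p.endswith('.jar')])
print(jar_count, 'jar files on CLASSPATH')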
4. Install the Python wrapper
sudo pip3 install stanfordcorenlp
5. Download the Chinese model jar (place it inside the stanford-corenlp-full-2018-02-27/ directory so the wrapper can find it)
wget http://nlp.stanford.edu/software/stanford-chinese-corenlp-2018-02-27-models.jar
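A quick sanity check that the jar ended up inside the unpacked CoreNLP directory (a sketch, assuming the same install path used in the usage example below):

import os

corenlp_dir = '/mnt/f/CMBNLP/stanford-corenlp-full-2018-02-27/'  # adjust to where you unzipped CoreNLP
jar_name = 'stanford-chinese-corenlp-2018-02-27-models.jar'
print('Chinese models jar in place:', os.path.isfile(os.path.join(corenlp_dir, jar_name)))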
6. Usage
from stanfordcorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP(r'/mnt/f/CMBNLP/stanford-corenlp-full-2018-02-27/', lang='zh')  # path to the CoreNLP directory; drop lang='zh' for English
Usage 1: the wrapper methods
sentence = '中國科學院大學位於北京。'
print(nlp.word_tokenize(sentence))
print(nlp.pos_tag(sentence))
print(nlp.ner(sentence))
print(nlp.parse(sentence))
print(nlp.dependency_parse(sentence))
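For English, the same methods work; as the comment above notes, just construct the pipeline without lang='zh'. A minimal sketch (the nlp_en name and the install path are assumptions; running two pipelines at once costs extra memory):

nlp_en = StanfordCoreNLP(r'/mnt/f/CMBNLP/stanford-corenlp-full-2018-02-27/')  # English pipeline
text = 'UCAS is located in Beijing.'
print(nlp_en.word_tokenize(text))
print(nlp_en.pos_tag(text))
nlp_en.close()  # release the backing Java process when done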
# As far as is currently known, the openie annotator does not support Chinese
import json
text = 'UCAS is located in Beijing.'
output = nlp.annotate(text, properties={
    'annotators': 'tokenize, ssplit, pos, depparse, natlog, openie',
    'outputFormat': 'json',
    'openie.triple.strict': 'true',
    'openie.max_entailments_per_clause': '1'
})
output = json.loads(output)
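To pull the extracted triples out of the parsed JSON, something like the following should work (a sketch based on the server's usual JSON layout; treat the field names as assumptions if your CoreNLP version differs):

# Each sentence in the JSON carries an 'openie' list of relation triples
for sent in output.get('sentences', []):
    for triple in sent.get('openie', []):
        print(triple['subject'], '|', triple['relation'], '|', triple['object'])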
Usage 2: connect to a running server, which is reportedly faster (see the sketch after this example for starting one)
from stanfordcorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP('http://localhost', port=9000)  # example from https://blog.csdn.net/Hallywood/article/details/80154146
sentence = "Kosgi Santosh sent an email to Stanford University. He didn't get a reply"
print('Coref:', nlp.coref(sentence))
nlp.close()
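The example above assumes a CoreNLP server is already listening on localhost:9000. One way to launch it from Python, in the same subprocess style as usage 3 below (a sketch; the CoreNLP path, the -mx4g heap size, and the fixed sleep are assumptions to adapt):

import subprocess
import time
from stanfordcorenlp import StanfordCoreNLP

# Start the server from inside the unpacked CoreNLP directory (assumed path);
# java expands the '*' classpath wildcard itself, no shell needed
server = subprocess.Popen(
    ['java', '-mx4g', '-cp', '*', 'edu.stanford.nlp.pipeline.StanfordCoreNLPServer',
     '-port', '9000', '-timeout', '15000'],
    cwd='/mnt/f/CMBNLP/stanford-corenlp-full-2018-02-27/')
time.sleep(10)  # crude wait for the server to come up

nlp = StanfordCoreNLP('http://localhost', port=9000)
print(nlp.pos_tag('Kosgi Santosh sent an email to Stanford University.'))
nlp.close()
server.terminate()  # shut the server down when finished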
Usage 3: command-line invocation via subprocess
import subprocess  # example from the same source as above
subprocess.call(['java', '-cp', 'F:/Program Files/jars/stanford-corenlp-full-2018-02-27/*', '-Xmx4g',
                 'edu.stanford.nlp.pipeline.StanfordCoreNLP', '-annotators',
                 'tokenize,ssplit,pos,lemma,ner', '-file', 'subprocesstest.txt'])