Pytorch Pretrained Bert Learning Notes


When working on NLP tasks, getting decent accuracy usually requires a pre-trained embedding model.

Reference: the pytorch-pretrained-bert repository on GitHub

Install

pip install pytorch-pretrained-bert

Usage

BertTokenizer

BertTokenizer splits the input sentence into tokens so they can be embedded later.

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenize the input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)
# ['who', 'was', 'jim', 'henson', '?', 'jim', 'henson', 'was', 'a', 'puppet', '##eer']

Words that are not in the vocabulary are split into subword pieces using a greedy longest-match-first strategy (WordPiece).
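
For example, "puppeteer" is not in the bert-base-uncased vocabulary, so the tokenizer loaded above splits it into known subword pieces, marking continuation pieces with a ## prefix:

print(tokenizer.tokenize("puppeteer"))
# ['puppet', '##eer']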

BertModel

# Map each token to its vocabulary id
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)

Convert this list of ids to a tensor and feed it to BertModel:

tokens_tensor = torch.tensor([indexed_tokens])      # shape (1, sequence_length)
segments_tensors = torch.zeros_like(tokens_tensor)  # single sentence, so all segment ids are 0

model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Predict hidden states features for each layer (no gradients needed at inference)
with torch.no_grad():
    encoded_layers, _ = model(tokens_tensor, segments_tensors)
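
With bert-base-uncased, encoded_layers is a list of 12 tensors, one per encoder layer. A minimal sketch of taking the last layer as downstream features (the name last_layer is just illustrative):

last_layer = encoded_layers[-1]  # shape: (1, sequence_length, 768)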

