transformers error: cannot load and run bert-base-chinese because github.com is unreachable


https://blog.csdn.net/weixin_37935970/article/details/123238677

 

pip install transformers==3.0.2

pip install torch==1.3.1

pip install huggingface_hub

tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-chinese')

(torch1.3) root@iZ2zedmeg2gi9atq5khtlgZ:~/online_doctor/bert_server# python bert_chinese_encode.py
Downloading: "https://github.com/huggingface/pytorch-transformers/archive/main.zip" to /root/.cache/torch/hub/main.zip
Traceback (most recent call last):
  File "bert_chinese_encode.py", line 5, in <module>
    tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-chinese')
  File "/root/torch1.3/lib/python3.6/site-packages/torch/hub.py", line 399, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
  File "/root/torch1.3/lib/python3.6/site-packages/torch/hub.py", line 427, in _load_local
    entry = _load_entry_from_hubconf(hub_module, model)
  File "/root/torch1.3/lib/python3.6/site-packages/torch/hub.py", line 230, in _load_entry_from_hubconf
    _check_dependencies(m)
  File "/root/torch1.3/lib/python3.6/site-packages/torch/hub.py", line 219, in _check_dependencies
    raise RuntimeError('Missing dependencies: {}'.format(', '.join(missing_deps)))
RuntimeError: Missing dependencies: huggingface_hub

pip install huggingface_hub

 

Network problem:

Configured entries in /etc/hosts, then deleted them; after countless retries the download finally started (configuring hosts does not always help):

Downloading: "https://github.com/huggingface/pytorch-transformers/archive/main.zip" to /root/.cache/torch/hub/main.zip

Alternatively, download the archive manually on Windows and upload it to the .cache directory on the server.

https://gitee.com/ineo6/hosts
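If you do keep a hosts entry, it looks like the following. The IPs here are illustrative only; GitHub's addresses change over time, which is why configuring hosts "does not always help" — the gitee project above maintains current values:

```text
# /etc/hosts — illustrative entries, not guaranteed current
140.82.112.3     github.com
185.199.108.133  raw.githubusercontent.com
```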

 

(torch1.3) root@iZ2zedmeg2gi9atq5khtlgZ:~/online_doctor/bert_server# python bert_chinese_encode.py
============ huggingface pytorch-transformers
Downloading: "https://github.com/huggingface/pytorch-transformers/archive/main.zip" to /root/.cache/torch/hub/main.zip
============ huggingface pytorch-transformers
Traceback (most recent call last):
  File "bert_chinese_encode.py", line 7, in <module>
    model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-chinese')
  File "/root/torch1.3/lib/python3.6/site-packages/torch/hub.py", line 397, in load
    repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose, skip_validation)
  File "/root/torch1.3/lib/python3.6/site-packages/torch/hub.py", line 165, in _get_cache_or_reload
    repo_owner, repo_name, branch = _parse_repo_info(github)
  File "/root/torch1.3/lib/python3.6/site-packages/torch/hub.py", line 119, in _parse_repo_info
    with urlopen(f"https://github.com/{repo_owner}/{repo_name}/tree/main/"):
  File "/usr/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/usr/lib/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.6/urllib/request.py", line 1392, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/usr/lib/python3.6/urllib/request.py", line 1352, in do_open
    r = h.getresponse()
  File "/usr/lib/python3.6/http/client.py", line 1383, in getresponse
    response.begin()
  File "/usr/lib/python3.6/http/client.py", line 320, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.6/http/client.py", line 289, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
(torch1.3) root@iZ2zedmeg2gi9atq5khtlgZ:~/online_doctor/bert_server# python bert_chinese_encode.py
============ huggingface pytorch-transformers
Using cache found in /root/.cache/torch/hub/huggingface_pytorch-transformers_main
============ huggingface pytorch-transformers
Using cache found in /root/.cache/torch/hub/huggingface_pytorch-transformers_main
Downloading: 66%|████████████████████████████████████████████████████████████████████▏ | 270M/412M [00:21<00:11, 12.1MB/s
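Since torch.hub.load dies mid-run with RemoteDisconnected when github.com drops the connection, it can help to probe reachability first and fail fast with a clear message. This is only a sketch using the standard library; `is_reachable` is a made-up helper name, not part of torch or transformers:

```python
from urllib.request import urlopen
from urllib.error import URLError


def is_reachable(url, timeout=5):
    """Return True if the URL can be opened within `timeout` seconds."""
    try:
        with urlopen(url, timeout=timeout):
            return True
    except (URLError, OSError):
        return False


# Probe before calling torch.hub.load so the script fails fast instead of
# dying mid-download:
# if not is_reachable("https://github.com"):
#     raise SystemExit("github.com unreachable; fix /etc/hosts or use the local cache")
```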

 

The download may still be unstable:

https://blog.csdn.net/zimiao552147572/article/details/105844840

Download the archive and copy it into .cache; unzip it and enter the directory, which contains a setup.py. Run pip install . (this installs 4.0.1), then reinstall with ==3.0.2; after that, rerunning the Python program succeeded. (Along the way the github.com line in /etc/hosts was also being added and removed.)
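After the pip install . / reinstall dance, it is worth confirming which transformers version actually ended up installed before rerunning the script. A minimal stdlib check (importlib.metadata needs Python 3.8+; on the 3.6 box above, pip show transformers gives the same answer):

```python
from importlib.metadata import version, PackageNotFoundError


def installed_version(package):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


print(installed_version("transformers"))  # expect "3.0.2" after the downgrade
```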

 

 

"""
pip install transformers==3.0.2

pip install torch==1.3.1

pip install huggingface_hub
"""

import torch
import torch.nn as nn

# Use torch.hub to load the tokenizer (character mapper) of the Chinese BERT model
tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-chinese')
# Use torch.hub to load the Chinese BERT model itself
model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-chinese')


# Function that produces the BERT encoding of a text pair
def get_bert_encode(text_1, text_2, mark=102, max_len=10):
    '''
    Encode a pair of input texts with the Chinese BERT model.
    text_1: the first sentence
    text_2: the second sentence
    mark: separator id; 102 is the special separator token of the pretrained
          BERT tokenizer, inserted between the two texts of a pair
    max_len: maximum sentence length; sentences longer than max_len are
             truncated, shorter ones are zero-padded
    return: the BERT encoding of the input texts
    '''
    # Step 1: map both texts to token ids with the tokenizer
    indexed_tokens = tokenizer.encode(text_1, text_2)
    # Next, pad or truncate each of the two texts.
    # First locate the separator token.
    k = indexed_tokens.index(mark)

    # Step 2: handle the first sentence, i.e. indexed_tokens[:k]
    if len(indexed_tokens[:k]) >= max_len:
        # Longer than max_len: truncate
        indexed_tokens_1 = indexed_tokens[:max_len]
    else:
        # Shorter than max_len: zero-pad the remainder
        indexed_tokens_1 = indexed_tokens[:k] + (max_len - len(indexed_tokens[:k])) * [0]

    # Step 3: handle the second sentence, i.e. indexed_tokens[k:]
    if len(indexed_tokens[k:]) >= max_len:
        # Longer than max_len: truncate
        indexed_tokens_2 = indexed_tokens[k:k+max_len]
    else:
        # Shorter than max_len: zero-pad the remainder
        indexed_tokens_2 = indexed_tokens[k:] + (max_len - len(indexed_tokens[k:])) * [0]

    # Concatenate the two processed halves
    indexed_tokens = indexed_tokens_1 + indexed_tokens_2

    # An extra segment list tells the model which part is which sentence:
    # 0 marks the first sentence, 1 marks the second.
    # Note: both sentences have been normalized to length max_len.
    segments_ids = [0] * max_len + [1] * max_len

    # Wrap both lists into tensors with torch.tensor
    tokens_tensor = torch.tensor([indexed_tokens])
    segments_tensor = torch.tensor([segments_ids])

    # Encode with the model without tracking gradients
    with torch.no_grad():
        # Feed tokens_tensor and segments_tensor into the BERT model
        # and obtain the output encoded_layers
        encoded_layers, _ = model(tokens_tensor, token_type_ids=segments_tensor)

    return encoded_layers


text_1 = "人生該如何起頭"
text_2 = "改變要如何起手"

encoded_layers = get_bert_encode(text_1, text_2)
print(encoded_layers)
print(encoded_layers.shape)
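The pad/truncate logic inside get_bert_encode can be checked without downloading the model. The sketch below reimplements just that part on hand-made token id lists (102 plays the role of the separator, as above); pad_or_truncate is an illustrative helper, not part of the original script:

```python
def pad_or_truncate(indexed_tokens, mark=102, max_len=10):
    """Split the token ids at the separator `mark`, then zero-pad or truncate
    each half so both end up with length max_len."""
    k = indexed_tokens.index(mark)
    first, second = indexed_tokens[:k], indexed_tokens[k:]
    if len(first) >= max_len:
        first = first[:max_len]
    else:
        first = first + (max_len - len(first)) * [0]
    if len(second) >= max_len:
        second = second[:max_len]
    else:
        second = second + (max_len - len(second)) * [0]
    return first + second


# Short halves are zero-padded, long ones truncated:
print(pad_or_truncate([101, 7, 8, 102, 9, 10], max_len=5))
# [101, 7, 8, 0, 0, 102, 9, 10, 0, 0]
```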














