Required libraries: jieba, json
json ships with Python; jieba only needs a pip install jieba on the command line.
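A quick sanity check that jieba installed correctly (a minimal sketch; the sentence is the canonical example from jieba's own documentation):

import jieba

# jieba.cut returns a generator of tokens.
print(' '.join(jieba.cut('我来到北京清华大学')))
# expected output along the lines of: 我 来到 北京 清华大学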
This code builds the inverted index head-on by brute force, which may be slightly painful to read; use it as you see fit.
The code has three parts: word segmentation, building the forward index, and building the inverted index.
Required files: a corpus and a stopword list (you can find a stopword list online; the code below expects one stopword per line).
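The segmentation code below reads the stopword list from a file named stopwords. A hypothetical excerpt, one word per line:

的
了
是
在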
A screenshot of the corpus is shown below (image omitted here; it is simply one headline per line):
I used a portion of news headlines I crawled myself, covering NetEase, Toutiao, ifeng.com, and a small number of WeChat article titles. Corpus preprocessing is minimal: just put a newline after each sentence, i.e. one headline per line.
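A hypothetical excerpt of such a corpus file (these headlines are made up for illustration; test.txt is the file name the code below assumes):

國慶假期旅遊人數創下新高
人工智能助力醫療診斷獲得新突破
手機廠商發布年度旗艦新品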
Segmentation code:
import jieba

# Load the stopword list: one stopword per line.
stopwords = []
with open('stopwords', 'r', encoding='utf-8') as f:
    for i in f:
        word = i.strip()
        stopwords.append(word)

filename = 'test.txt'       # raw corpus, one headline per line
filename1 = 'test_cws.txt'  # segmented output

# Write out the segmented corpus.
def write_cws():
    num = 0  # document id; a simple counter here, replace it with your own ids if you have them
    writing = open(filename1, 'a+', encoding='utf-8')
    with open(filename, 'r', encoding='utf-8') as f:
        for line in f:
            content = line.strip()
            content = content.replace(' ', '')  # drop spaces so they cannot collide with our separator
            seg = jieba.cut(content)
            text = ''
            for i in seg:
                if i not in stopwords:
                    text += i + ' '
            # One output line per document: "<id> <token> <token> ...".
            writing.write(str(num) + ' ' + text.strip() + '\n')
            num += 1
    writing.close()
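After write_cws() runs, every line of test_cws.txt is a document id followed by the space-separated tokens. With the made-up headlines above, the output would look roughly like:

0 國慶 假期 旅遊 人數 創下 新高
1 人工智能 助力 醫療 診斷 獲得 新 突破
2 手機 廠商 發布 年度 旗艦 新品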
Forward index code (this step collects, for each word, the list of ids of the documents it appears in):
filename2 = 'zhengxiang.txt'

def zhengxiang():
    all_words = []
    word_docs = {}  # word -> list of ids of the documents containing it
    file2 = open(filename2, 'a+', encoding='utf-8')
    with open(filename1, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            parts = line.split(' ', 1)  # "<id> <token> <token> ..."
            if len(parts) < 2:  # skip documents whose tokens were all stopwords
                continue
            num, content = parts
            words = content.split(' ')
            for word in words:
                if word not in all_words:
                    all_words.append(word)
                    word_docs[word] = [num]
                elif num not in word_docs[word]:
                    word_docs[word].append(num)
    # One output line per word: "<word> <id>,<id>,...".
    for word, nums in word_docs.items():
        file2.write(word + ' ')
        for i in range(len(nums)):
            if i == 0:
                file2.write(nums[i])
            else:
                file2.write(',' + nums[i])
        file2.write('\n')
    file2.close()
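Each line of zhengxiang.txt then maps a word to the comma-separated ids of the documents containing it. With the sample above it would contain lines such as:

新高 0
人工智能 1
手機 2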
Inverted index code:
import json

# Inverted index
filename3 = 'daopai.txt'

def daopai():
    with open(filename2, 'r', encoding='utf-8') as f:
        for line in f:
            # The try/except is here because my own data had a few bad lines; if your corpus
            # matches the screenshot above, it should not trigger.
            try:
                word_dict = {}  # maps the word to its posting list, handy for storage and lookup
                word_list = []  # the posting list for this word
                syc = []        # the word plus its document frequency: +1 per document it
                                # appears in, no matter how many times it occurs there
                Aword = line.strip()  # one forward-index line: "<word> <id>,<id>,..."
                word = Aword.split(' ')[0]
                print(word)
                nums = Aword.split(' ')[1]
                count = len(nums.split(','))  # document frequency
                syc.append(word + ' ' + str(count))
                word_list.append(syc)
                with open(filename1, 'r', encoding='utf-8') as r:
                    for line1 in r:
                        acount = 0  # occurrences of the word in this document
                        parts = line1.strip().split(' ', 1)
                        if len(parts) < 2:
                            continue
                        num, rest = parts
                        words = rest.split(' ')
                        if word in words:  # is the word in this sentence at all?
                            for aword in words:
                                if word == aword:
                                    acount += 1
                            temp1 = [num, acount]  # where the word occurs and how often
                            word_list.append(temp1)
                word_dict[word] = word_list
                # 'out' rather than reusing 'f', so the forward-index handle is not shadowed.
                with open(filename3, 'a', encoding='utf-8') as out:
                    json.dump(word_dict, out, ensure_ascii=False)
                    out.write(',')
                    out.write('\n')
            except Exception as e:
                print(line)
                print(e)
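Each output line of daopai.txt is one JSON object followed by a trailing comma. The value's first element is the word with its document frequency, and every following element is a [document_id, occurrence_count] pair. For a word that appears once each in documents 0 and 7, the line would look like:

{"新高": [["新高 2"], ["0", 1], ["7", 1]]},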
The pipeline feeds each stage into the next: the raw corpus is segmented, the segmented file is used to build the forward index, and the forward index is used to build the inverted index, so run the three steps in that order.
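A minimal driver for the three stages, assuming the function and file names above:

if __name__ == '__main__':
    write_cws()   # test.txt       -> test_cws.txt   (segmentation)
    zhengxiang()  # test_cws.txt   -> zhengxiang.txt (forward index)
    daopai()      # zhengxiang.txt -> daopai.txt     (inverted index)

Note that all three functions open their output files in append mode, so delete any old output files before re-running.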
If this was of some help, please give it a like. Thank you!