Sentiment Analysis with an LSTM in Keras


The training data file, train.txt, has one example per line: a label, a tab, and a sentence. A few sample lines:
1    I either LOVE Brokeback Mountain or think it’s great that homosexuality is becoming more acceptable!:
1    Anyway, thats why I love ” Brokeback Mountain.
1    Brokeback mountain was beautiful…
0    da vinci code was a terrible movie.
0    Then again, the Da Vinci code is super shitty movie, and it made like 700 million.
0    The Da Vinci Code comes out tomorrow, which sucks.
Each sentence carries a label, 1 or 0, marking it as positive or negative.
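Before building anything, it can be worth checking how balanced the two classes are. A small optional sketch, standalone and assuming the same tab-separated train.txt used throughout this post:
import collections
label_counts = collections.Counter()
with open('./train.txt') as f:
    for line in f:
        label, _ = line.strip().split("\t")
        label_counts[label] += 1
print(label_counts)   # e.g. Counter({'1': ..., '0': ...})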


 

    First, import all the packages we will use:
"language-python hljs">from keras.layers.core import Activation, Dense
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
import nltk #用來分詞
import collections #用來統計詞頻
import numpy as np

 

     Before we start, let's take a quick look at the data. In particular, we need to know how many distinct words it contains and how many words each sentence has.
"language-pyhon hljs livecodeserver">maxlen = 0 #句子最大長度
word_freqs = collections.Counter() #詞頻
num_recs = 0 # 樣本數
with open('./train.txt','r+') as f:
for line in f:
label, sentence = line.strip().split("\t")
words = nltk.word_tokenize(sentence.lower())
if len(words) > maxlen:
maxlen = len(words)
for word in words:
word_freqs[word] += 1
num_recs += 1
print('max_len ',maxlen)
print('nb_words ', len(word_freqs))

     max_len 42
     nb_words 2324
      So there are 2,324 distinct words in total, punctuation included, and the longest sentence has 42 words.
      Based on the number of distinct words (nb_words), we can fix the vocabulary at a constant size and map any word outside it to a pseudo-word UNK. Based on the maximum sentence length (max_len), we can normalize all sentences to the same length, padding shorter ones with 0.
      Accordingly, the vocabulary size works out to 2002: the 2,000 most frequent words in the training data, plus the pseudo-word UNK and the padding word PAD (index 0). The maximum sentence length MAX_SENTENCE_LENGTH is set to 40.
MAX_FEATURES = 2000
MAX_SENTENCE_LENGTH = 40

      Next we build two lookup tables, word2index and index2word, to convert between words and integer indices.
"language-python hljs">vocab_size = min(MAX_FEATURES, len(word_freqs)) + 2
word2index = {x[0]: i+2 for i, x in enumerate(word_freqs.most_common(MAX_FEATURES))}
word2index["PAD"] = 0
word2index["UNK"] = 1
index2word = {v:k for k, v in word2index.items()}
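      As a quick sanity check, the two tables should be inverses of each other. A minimal sketch (the exact index for a given word depends on its frequency in your data, so treat the values below as illustrative):
idx = word2index.get("movie", word2index["UNK"])   # assumes "movie" made it into the top 2000 words
print(idx, index2word[idx])                        # some index >= 2, followed by "movie"
print(word2index.get("xyzzy", word2index["UNK"]))  # unseen words fall back to the UNK index, 1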

 

      Next, we use the lookup tables to convert each sentence into a sequence of indices and normalize its length to MAX_SENTENCE_LENGTH, padding short sentences with 0 and truncating long ones.
"language-python hljs">X = np.empty(num_recs,dtype=list)
y = np.zeros(num_recs)
i=0
with open('./train.txt','r+') as f:
for line in f:
label, sentence = line.strip().split("\t")
words = nltk.word_tokenize(sentence.lower())
seqs = []
for word in words:
if word in word2index:
seqs.append(word2index[word])
else:
seqs.append(word2index["UNK"])
X[i] = seqs
y[i] = int(label)
i += 1
X = sequence.pad_sequences(X, maxlen=MAX_SENTENCE_LENGTH)
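      By default pad_sequences pads short sequences with 0 on the left and truncates long ones from the front. A tiny standalone sketch with toy indices (not from the dataset) to illustrate:
demo = [[5, 9, 3], list(range(1, 50))]              # one short and one overly long sequence
padded = sequence.pad_sequences(demo, maxlen=MAX_SENTENCE_LENGTH)
print(padded.shape)                                 # (2, 40)
print(padded[0])                                    # 37 leading zeros, then 5 9 3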

 

      Finally, split the data: 80% for training and 20% for testing.
"language-python hljs">Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=42)

 

      With the data ready, we can move on to the model. We use binary_crossentropy as the loss and adam as the optimizer. Hyperparameters such as EMBEDDING_SIZE, HIDDEN_LAYER_SIZE, and the BATCH_SIZE and NUM_EPOCHS used during training are tuned empirically over a few runs.
EMBEDDING_SIZE = 128
HIDDEN_LAYER_SIZE = 64
model = Sequential()
model.add(Embedding(vocab_size, EMBEDDING_SIZE,input_length=MAX_SENTENCE_LENGTH))
model.add(LSTM(HIDDEN_LAYER_SIZE, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
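      Before training, it can be useful to confirm the layer shapes and parameter counts with Keras's built-in summary (optional):
model.summary()   # Embedding -> LSTM -> Dense -> Activation, with output shapes and parameter counts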

 

      With the network built, we train it on the data: 10 epochs with a batch size of 32, using the test set as the validation set in every epoch.
BATCH_SIZE = 32
NUM_EPOCHS = 10
model.fit(Xtrain, ytrain, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS,validation_data=(Xtest, ytest))
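      If you want to reuse the trained network later without retraining, Keras can also save it to disk; the file name below is just a placeholder, not from the original article:
model.save('sentiment_lstm.h5')   # hypothetical path; reload later with keras.models.load_model('sentiment_lstm.h5')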

 

      Train on 5668 samples, validate on 1418 samples
      Epoch 1/10
      5668/5668 [==============================] - 12s - loss: 0.2464 - acc: 0.8897 - val_loss: 0.0672 - val_acc: 0.9697
      Epoch 2/10
      5668/5668 [==============================] - 11s - loss: 0.0290 - acc: 0.9896 - val_loss: 0.0407 - val_acc: 0.9838
      Epoch 3/10
      5668/5668 [==============================] - 11s - loss: 0.0078 - acc: 0.9975 - val_loss: 0.0506 - val_acc: 0.9866
      Epoch 4/10
      5668/5668 [==============================] - 11s - loss: 0.0084 - acc: 0.9970 - val_loss: 0.0772 - val_acc: 0.9732
      Epoch 5/10
      5668/5668 [==============================] - 11s - loss: 0.0046 - acc: 0.9989 - val_loss: 0.0415 - val_acc: 0.9880
      Epoch 6/10
      5668/5668 [==============================] - 11s - loss: 0.0012 - acc: 0.9998 - val_loss: 0.0401 - val_acc: 0.9901
      Epoch 7/10
      5668/5668 [==============================] - 11s - loss: 0.0020 - acc: 0.9996 - val_loss: 0.0406 - val_acc: 0.9894
      Epoch 8/10
      5668/5668 [==============================] - 11s - loss: 7.7990e-04 - acc: 0.9998 - val_loss: 0.0444 - val_acc: 0.9887
      Epoch 9/10
      5668/5668 [==============================] - 11s - loss: 5.3168e-04 - acc: 0.9998 - val_loss: 0.0550 - val_acc: 0.9908
      Epoch 10/10
      5668/5668 [==============================] - 11s - loss: 7.8728e-04 - acc: 0.9996 - val_loss: 0.0523 - val_acc: 0.9901

 

      After 10 epochs, the accuracy on the validation set has reached roughly 99%.

 

      Next, we use the trained LSTM to predict on the held-out test set and inspect the results. We sample 5 sentences, print the predicted and true labels, and print the original sentence.
"language-python hljs">score, acc = model.evaluate(Xtest, ytest, batch_size=BATCH_SIZE)
print("\nTest score: %.3f, accuracy: %.3f" % (score, acc))
print('{} {} {}'.format('預測','真實','句子'))
for i in range(5):
idx = np.random.randint(len(Xtest))
xtest = Xtest[idx].reshape(1,40)
ylabel = ytest[idx]
ypred = model.predict(xtest)[0][0]
sent = " ".join([index2word[x] for x in xtest[0] if x != 0])
print(' {} {} {}'.format(int(round(ypred)), int(ylabel), sent))

      Test score: 0.052, accuracy: 0.990
      Predicted Actual Sentence
       0       0      oh , and brokeback mountain is a terrible movie …
       1       1      the last stand and mission impossible 3 both were awesome movies .
       1       1      i love harry potter .
       1       1      mission impossible 2 rocks ! ! … .
       1       1      harry potter is awesome i do n’t care if anyone says differently ! ..

 

      The accuracy on the test set is likewise about 99%.

 

      We can also feed the network sentences of our own and let it predict their sentiment. Suppose we enter the two sentences "I love reading." and "You are so boring." and see whether the trained network gets the sentiment right.
"language-python hljs">INPUT_SENTENCES = ['I love reading.','You are so boring.']
XX = np.empty(len(INPUT_SENTENCES),dtype=list)
i=0
for sentence in INPUT_SENTENCES:
words = nltk.word_tokenize(sentence.lower())
seq = []
for word in words:
if word in word2index:
seq.append(word2index[word])
else:
seq.append(word2index['UNK'])
XX[i] = seq
i+=1
XX = sequence.pad_sequences(XX, maxlen=MAX_SENTENCE_LENGTH)
labels = [int(round(x[0])) for x in model.predict(XX) ]
label2word = {1:'積極', 0:'消極'}
for i in range(len(INPUT_SENTENCES)):
print('{} {}'.format(label2word[labels[i]], INPUT_SENTENCES[i]))

      positive    I love reading.
      negative    You are so boring.

 

  Yes, both predictions are correct.

 

     The complete code:
# -*- coding: gbk -*-
from keras.layers.core import Activation, Dense
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
import collections
import nltk
import numpy as np

## EDA
maxlen = 0
word_freqs = collections.Counter()
num_recs = 0
with open('./train.txt', 'r+') as f:
    for line in f:
        label, sentence = line.strip().split("\t")
        words = nltk.word_tokenize(sentence.lower())
        if len(words) > maxlen:
            maxlen = len(words)
        for word in words:
            word_freqs[word] += 1
        num_recs += 1
print('max_len ', maxlen)
print('nb_words ', len(word_freqs))

## Prepare the data
MAX_FEATURES = 2000
MAX_SENTENCE_LENGTH = 40
vocab_size = min(MAX_FEATURES, len(word_freqs)) + 2
word2index = {x[0]: i+2 for i, x in enumerate(word_freqs.most_common(MAX_FEATURES))}
word2index["PAD"] = 0
word2index["UNK"] = 1
index2word = {v: k for k, v in word2index.items()}
X = np.empty(num_recs, dtype=list)
y = np.zeros(num_recs)
i = 0
with open('./train.txt', 'r+') as f:
    for line in f:
        label, sentence = line.strip().split("\t")
        words = nltk.word_tokenize(sentence.lower())
        seqs = []
        for word in words:
            if word in word2index:
                seqs.append(word2index[word])
            else:
                seqs.append(word2index["UNK"])
        X[i] = seqs
        y[i] = int(label)
        i += 1
X = sequence.pad_sequences(X, maxlen=MAX_SENTENCE_LENGTH)

## Train/test split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=42)

## Build the network
EMBEDDING_SIZE = 128
HIDDEN_LAYER_SIZE = 64
BATCH_SIZE = 32
NUM_EPOCHS = 10
model = Sequential()
model.add(Embedding(vocab_size, EMBEDDING_SIZE, input_length=MAX_SENTENCE_LENGTH))
model.add(LSTM(HIDDEN_LAYER_SIZE, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

## Train the network
model.fit(Xtrain, ytrain, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS, validation_data=(Xtest, ytest))

## Predict
score, acc = model.evaluate(Xtest, ytest, batch_size=BATCH_SIZE)
print("\nTest score: %.3f, accuracy: %.3f" % (score, acc))
print('{} {} {}'.format('Predicted', 'Actual', 'Sentence'))
for i in range(5):
    idx = np.random.randint(len(Xtest))
    xtest = Xtest[idx].reshape(1, 40)
    ylabel = ytest[idx]
    ypred = model.predict(xtest)[0][0]
    sent = " ".join([index2word[x] for x in xtest[0] if x != 0])
    print(' {} {} {}'.format(int(round(ypred)), int(ylabel), sent))

##### Custom input
INPUT_SENTENCES = ['I love reading.', 'You are so boring.']
XX = np.empty(len(INPUT_SENTENCES), dtype=list)
i = 0
for sentence in INPUT_SENTENCES:
    words = nltk.word_tokenize(sentence.lower())
    seq = []
    for word in words:
        if word in word2index:
            seq.append(word2index[word])
        else:
            seq.append(word2index['UNK'])
    XX[i] = seq
    i += 1
XX = sequence.pad_sequences(XX, maxlen=MAX_SENTENCE_LENGTH)
labels = [int(round(x[0])) for x in model.predict(XX)]
label2word = {1: 'positive', 0: 'negative'}
for i in range(len(INPUT_SENTENCES)):
    print('{} {}'.format(label2word[labels[i]], INPUT_SENTENCES[i]))

