1. nn.LSTM parameter walkthrough
import torch
import torch.nn as nn

# Build the model --- input_size: number of input features, hidden_size: number of
# hidden features, num_layers: number of stacked LSTM layers
inputs = torch.randn(5, 3, 10)   # (seq_len, batch_size, input_size)
rnn = nn.LSTM(10, 20, 2)         # (input_size, hidden_size, num_layers)
h0 = torch.randn(2, 3, 20)       # (num_layers * num_directions, batch_size, hidden_size)
c0 = torch.randn(2, 3, 20)       # (num_layers * num_directions, batch_size, hidden_size)
# num_directions = 1 here because the LSTM is unidirectional

# Outputs: output, (h_n, c_n)
output, (hn, cn) = rnn(inputs, (h0, c0))
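As a quick sanity check, the shapes of the three return values line up with the formulas listed in section 4 below. A minimal standalone sketch (initial states omitted, so they default to zeros):

import torch
import torch.nn as nn

rnn = nn.LSTM(10, 20, 2)          # input_size=10, hidden_size=20, num_layers=2
inputs = torch.randn(5, 3, 10)    # (seq_len, batch_size, input_size)
output, (hn, cn) = rnn(inputs)    # h0/c0 default to zeros when omitted

print(output.shape)  # torch.Size([5, 3, 20]) -> (seq_len, batch_size, num_directions * hidden_size)
print(hn.shape)      # torch.Size([2, 3, 20]) -> (num_layers * num_directions, batch_size, hidden_size)
print(cn.shape)      # torch.Size([2, 3, 20]) -> (num_layers * num_directions, batch_size, hidden_size)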
2. Handling variable-length sequences with LSTM
import torch
from torch import nn
import torch.nn.utils.rnn as rnn_utils
from torch.utils.data import DataLoader
import torch.utils.data as data

# Two toy sequences of different lengths (3 steps and 1 step), each step has 2 features
x1 = [
    torch.tensor([[6, 6], [6, 6], [6, 6]]).float(),
    torch.tensor([[7, 7]]).float()
]
y = [
    torch.tensor([1]),
    torch.tensor([0])
]

class MyData(data.Dataset):
    def __init__(self, data_seq, y):
        self.data_seq = data_seq
        self.y = y

    def __len__(self):
        return len(self.data_seq)

    def __getitem__(self, idx):
        tuple_ = (self.data_seq[idx], self.y[idx])
        return tuple_

def collate_fn(data_tuple):
    # Sort the batch by sequence length, longest first (pack_padded_sequence expects this by default)
    data_tuple.sort(key=lambda x: len(x[0]), reverse=True)
    data = [sq[0] for sq in data_tuple]
    label = [sq[1] for sq in data_tuple]
    data_length = [len(q) for q in data]
    # Pad every sequence to the length of the longest one in the batch
    data = rnn_utils.pad_sequence(data, batch_first=True, padding_value=0.0)
    label = rnn_utils.pad_sequence(label, batch_first=True, padding_value=0.0)
    return data, label, data_length

if __name__ == '__main__':
    learning_rate = 0.001
    dataset = MyData(x1, y)
    data_loader = DataLoader(dataset, batch_size=2, shuffle=True, collate_fn=collate_fn)
    batch_x, y, batch_x_len = next(iter(data_loader))
    print(batch_x)
    print(batch_x.shape)
    print(batch_x_len)
    print(y)
    print(y.shape)
    # Pack the padded batch so the LSTM skips the padded positions
    batch_x_pack = rnn_utils.pack_padded_sequence(batch_x, batch_x_len, batch_first=True)
    net = nn.LSTM(input_size=2, hidden_size=10, num_layers=4, batch_first=True)
    criteria = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
    print(batch_x_pack)
    out, (h1, c1) = net(batch_x_pack)
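Note that out here is a PackedSequence rather than an ordinary tensor. A common follow-up step is rnn_utils.pad_packed_sequence, which converts it back to a padded tensor plus the true lengths. A minimal sketch with the same toy data, which also confirms that the last valid (non-padding) step of each sequence equals h_n[-1]:

import torch
from torch import nn
import torch.nn.utils.rnn as rnn_utils

# Two sequences of unequal length, already sorted by length (descending)
seqs = [torch.tensor([[6., 6.], [6., 6.], [6., 6.]]), torch.tensor([[7., 7.]])]
lengths = [len(s) for s in seqs]                                  # [3, 1]
padded = rnn_utils.pad_sequence(seqs, batch_first=True)           # (batch, max_len, 2)
packed = rnn_utils.pack_padded_sequence(padded, lengths, batch_first=True)

net = nn.LSTM(input_size=2, hidden_size=10, num_layers=4, batch_first=True)
out_packed, (h_n, c_n) = net(packed)

# pad_packed_sequence turns the PackedSequence back into a padded tensor + lengths
out_padded, out_lengths = rnn_utils.pad_packed_sequence(out_packed, batch_first=True)
print(out_padded.shape)   # torch.Size([2, 3, 10])
print(out_lengths)        # tensor([3, 1])

# Gather the last *valid* time step of each sequence; it equals h_n[-1]
idx = (out_lengths - 1).view(-1, 1, 1).expand(-1, 1, out_padded.size(2))
last_steps = out_padded.gather(1, idx).squeeze(1)                 # (batch, hidden_size)
print(torch.allclose(last_steps, h_n[-1]))                        # True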
3. Siamese LSTM
import torch
from torch import nn
from torch.utils.data import DataLoader
import torch.utils.data as data

# Two pairs of toy sequences; each pair gets one similarity label
x1 = [
    torch.tensor([[7, 7]]).float(),
    torch.tensor([[6, 6], [6, 6], [6, 6]]).float(),
]
x2 = [
    torch.tensor([[6, 3]]).float(),
    torch.tensor([[6, 3], [3, 6], [6, 6]]).float(),
]
y = [
    torch.tensor([1]),
    torch.tensor([0]),
]

class MyData(data.Dataset):
    def __init__(self, data1, data2, y):
        self.data1 = data1
        self.data2 = data2
        self.y = y

    def __len__(self):
        return len(self.data1)

    def __getitem__(self, idx):
        tuple_ = (self.data1[idx], self.data2[idx], self.y[idx])
        return tuple_

class SiameseLSTM(nn.Module):
    def __init__(self, input_size):
        super(SiameseLSTM, self).__init__()
        # The two branches share the same LSTM weights
        self.lstm = nn.LSTM(input_size=input_size, hidden_size=10, num_layers=4, batch_first=True)
        self.fc = nn.Linear(10, 1)

    def forward(self, data1, data2):
        out1, (h1, c1) = self.lstm(data1)
        out2, (h2, c2) = self.lstm(data2)
        # Take the hidden state at the last time step of each branch
        pre1 = out1[:, -1, :]
        pre2 = out2[:, -1, :]
        # Element-wise absolute difference, mapped to a single similarity logit
        dis = torch.abs(pre1 - pre2)
        out = self.fc(dis)
        return out

if __name__ == '__main__':
    learning_rate = 0.001
    dataset = MyData(x1, x2, y)
    data_loader = DataLoader(dataset, batch_size=1, shuffle=True)
    net = SiameseLSTM(2)
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
    for epoch in range(100):
        for batch_id, (data1, data2, label) in enumerate(data_loader):
            distance = net(data1, data2)
            print(distance)
            print(label)
            loss = criterion(distance, label.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            print(loss)
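Because training uses BCEWithLogitsLoss (which applies the sigmoid internally), the model's raw output is a logit; at inference time it has to be passed through torch.sigmoid to read it as a similarity score. A minimal sketch, where predict_similarity is a hypothetical helper written for illustration and net is the trained SiameseLSTM from the snippet above:

import torch

def predict_similarity(net, seq_a, seq_b):
    # seq_a, seq_b: (seq_len, 2) tensors; unsqueeze adds the batch dimension
    net.eval()
    with torch.no_grad():
        logit = net(seq_a.unsqueeze(0), seq_b.unsqueeze(0))
        return torch.sigmoid(logit).item()   # similarity score in (0, 1)

# Example usage with the toy tensors from above:
# score = predict_similarity(net, torch.tensor([[7., 7.]]), torch.tensor([[6., 3.]]))

The example trains with batch_size=1 because the default collate function cannot stack sequences of different lengths; for larger batches the padding / packing approach from section 2 would be needed.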
4. Parameter details
1. The constructor takes the following parameters:
input_size: the feature dimension of the input, usually embedding_dim (the dimension of the word vectors)
hidden_size: the dimension of the LSTM hidden state
num_layers: the number of stacked recurrent layers
bias: whether to use bias terms, default=True
batch_first: pay attention to this one. Our input data usually has shape = (batch_size, seq_length, embedding_dim), but batch_first defaults to False, so before feeding the data into the LSTM we should either swap the batch_size and seq_length dimensions or set batch_first=True (see the sketch after this list)
dropout: defaults to 0, meaning no dropout
bidirectional: defaults to False, meaning a unidirectional LSTM
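To make the batch_first point concrete, here is a minimal sketch comparing the two input layouts (the sizes 100 / 64 / 35 / 8 are arbitrary illustration values):

import torch
import torch.nn as nn

# Default layout: batch_first=False expects (seq_length, batch_size, input_size)
lstm_seq_first = nn.LSTM(input_size=100, hidden_size=64, num_layers=2,
                         bias=True, dropout=0.0, bidirectional=False)
x_seq_first = torch.randn(35, 8, 100)            # (seq_length, batch_size, embedding_dim)
out1, _ = lstm_seq_first(x_seq_first)
print(out1.shape)                                # torch.Size([35, 8, 64])

# With batch_first=True the data can stay as (batch_size, seq_length, input_size)
lstm_batch_first = nn.LSTM(input_size=100, hidden_size=64, num_layers=2, batch_first=True)
x_batch_first = x_seq_first.transpose(0, 1)      # (batch_size, seq_length, embedding_dim)
out2, _ = lstm_batch_first(x_batch_first)
print(out2.shape)                                # torch.Size([8, 35, 64])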
2. The forward inputs are input, (h_0, c_0):
input: a tensor of shape = [seq_length, batch_size, input_size]
h_0: a tensor of shape = [num_layers * num_directions, batch_size, hidden_size]. It contains the initial hidden state for every sequence in the current batch; num_layers is the number of LSTM layers, and num_directions = 2 if bidirectional = True, otherwise 1, meaning a single direction
c_0: the same shape as h_0; it contains the initial cell state for every sequence in the current batch. If h_0 and c_0 are not provided, they default to all zeros
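A minimal sketch confirming the zero-initialization default (bidirectional=True is used here so that num_directions = 2, and the shapes follow the formulas above):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
x = torch.randn(5, 3, 10)                        # (seq_length, batch_size, input_size)

num_directions = 2                               # because bidirectional=True
h_0 = torch.zeros(2 * num_directions, 3, 20)     # (num_layers * num_directions, batch, hidden_size)
c_0 = torch.zeros(2 * num_directions, 3, 20)

out_explicit, _ = lstm(x, (h_0, c_0))            # explicit zero initial states
out_default, _ = lstm(x)                         # omitted -> defaults to zeros
print(torch.allclose(out_explicit, out_default)) # True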
3. The outputs are output, (h_n, c_n):
output.shape = [seq_length, batch_size, num_directions * hidden_size]
It contains the output features (h_t) from the last layer of the LSTM at every time step t.
h_n.shape = [num_layers * num_directions, batch_size, hidden_size]
c_n.shape = h_n.shape
h_n contains the hidden state at the last word of each sequence, and c_n contains the cell state at the last word of each sequence, so neither of them depends on the sequence length seq_length.
output[-1] is equal to h_n[-1] (the final hidden state of the last layer), because output[-1] holds exactly the hidden state at the last word of each of the batch_size sequences. Note that in an LSTM the hidden state is the output, while the cell state is what actually stays hidden inside the LSTM and carries information along; this relationship between output and h_n is the point this post wants to make. A quick numerical check follows below.
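A minimal sketch verifying the claim with a 2-layer unidirectional LSTM and no padding:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)   # unidirectional
x = torch.randn(5, 3, 10)                                     # (seq_length, batch_size, input_size)
output, (h_n, c_n) = lstm(x)

# output[-1]: the last time step of the last layer, for every sequence in the batch
# h_n[-1]:    the final hidden state of the last layer
print(torch.allclose(output[-1], h_n[-1]))                    # True

# c_n holds the cell states, which are never exposed through output
print(output.shape, h_n.shape, c_n.shape)
# torch.Size([5, 3, 20]) torch.Size([2, 3, 20]) torch.Size([2, 3, 20])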