Linear-Chain Conditional Random Fields (CRF): Principles and Implementation


Basic Principles

Loss Function

A (linear-chain) CRF is commonly used for sequence labeling. For an input sequence \(x\) and a tag sequence \(y\), define the matching score:

\[s(x,y) = \sum_{i=0}^l T(y_i, y_{i+1}) + \sum_{i=1}^l U(x_i, y_i) \]

Here \(l\) is the sequence length, and \(T\) and \(U\) are learnable parameters: \(T(y_i, y_{i+1})\) is the transition score for the tag being \(y_i\) at step \(i\) and \(y_{i+1}\) at step \(i+1\), and \(U(x_i, y_i)\) is the emission score for input \(x_i\) at step \(i\) being assigned tag \(y_i\). Note that when computing the transition scores \(T\), the state-transition chain is \(y_0\rightarrow y_1 \rightarrow \dots \rightarrow y_l \rightarrow y_{l+1}\), because the artificial START_TAG and STOP_TAG labels are added at the two ends.
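
To make the formula concrete, here is a minimal sketch (not part of the original tutorial) that scores one path with made-up transition and emission tensors; the tag indices, tensor shapes, and the "from i to j" convention for T are assumptions for illustration only:

import torch

torch.manual_seed(0)

# Toy setup: 3 real tags (say B, I, O) plus START (index 3) and STOP (index 4)
NUM_TAGS, SEQ_LEN = 5, 4
START, STOP = 3, 4

# T[i, j]: score of transitioning from tag i to tag j (illustrative convention)
# U[k, t]: emission score of tag t at position k (in the tutorial this comes from an LSTM)
T = torch.randn(NUM_TAGS, NUM_TAGS)
U = torch.randn(SEQ_LEN, NUM_TAGS)

def path_score(tags):
    """s(x, y) for one tag sequence, with START and STOP wrapped around it."""
    full = [START] + list(tags) + [STOP]
    trans = sum(T[full[i], full[i + 1]] for i in range(len(full) - 1))
    emit = sum(U[k, t] for k, t in enumerate(tags))
    return trans + emit

print(path_score([0, 1, 2, 0]))  # score of one particular labelling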

To address the label bias problem, the CRF performs global normalization. Concretely, the probability that input \(x\) is assigned tag sequence \(y\) is defined as:

\[P(y|x)=\frac{e^{s(x,y)}}{Z(x)} = \frac{e^{s(x,y)}}{\sum_{\tilde{y}\in Y_x}e^{s(x,\tilde{y})}} \]

The troublesome part is therefore the partition function \(Z(x)\), because it has to sum over all possible paths.
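
Computing \(Z(x)\) by brute force means summing over all \(N^l\) tag sequences, which is only feasible at toy sizes. Continuing the toy sketch above (illustrative only):

import itertools

real_tags = [0, 1, 2]  # START and STOP are never emitted at real positions

# Enumerate every length-SEQ_LEN tag sequence: N^l paths in total
all_scores = torch.stack([path_score(p)
                          for p in itertools.product(real_tags, repeat=SEQ_LEN)])
log_Z_brute = torch.logsumexp(all_scores, dim=0)
print(log_Z_brute)  # log Z(x), obtained the expensive way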

During training, we want to maximize the log-probability of the correct tag sequence, i.e.:

\[\log P(y|x)=\log \frac{e^{s(x,y)}}{Z(x)} = s(x,y) - \log Z(x) = s(x,y) - \log \Big(\sum_{\tilde{y}\in Y_x}e^{s(x,\tilde{y})}\Big) \]

This is equivalent to minimizing the negative log-likelihood, so the loss function is:

\[-\log P(y|x) = \log Z(x) - s(x,y) = \log \Big(\sum_{\tilde{y}\in Y_x}e^{s(x,\tilde{y})}\Big) - s(x,y) \]
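
As a quick sanity check on the derivation (still using the toy tensors and log_Z_brute above), the loss computed as \(\log Z(x) - s(x,y)\) equals the negative log of the normalized path probability:

gold = [0, 1, 2, 0]                               # pretend gold tag sequence
nll = log_Z_brute - path_score(gold)              # loss as derived above
prob = torch.exp(path_score(gold) - log_Z_brute)  # P(y|x)
print(nll, -torch.log(prob))                      # identical up to floating-point error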

Computing the Partition Function

Next we discuss how to compute \(Z(x)\). We use the forward algorithm; the pseudocode is given below (a standalone vectorized sketch follows the list):

1. Initialization: for every possible value \(y_2^*\) of \(y_2\), define

\[\alpha_1(y_2^*) = \sum_{y_1^*} \exp(U(x_1, y_1^*) + T(y_1^*, y_2^*)) \]

Here \(y_k\) denotes the tag at step \(k\); its values range over the tag space, e.g. B, I, O, and a particular value is written \(y_k^*\). \(\alpha_k(y_{k+1}^*)\) can be viewed as an unnormalized probability at step \(k\). Note that although a single value \(y_{k+1}^*\) is written here, we actually iterate over the whole tag space and compute this for every possible value of \(y_{k+1}\).
2. For \(k = 2, 3, \dots, l-1\) and for every possible value \(y_{k+1}^*\) of \(y_{k+1}\):

\[\log (\alpha_k(y_{k+1}^*)) = \log \sum_{y_k^*}\exp \left(U(x_k, y_k^*)+T(y_k^*, y_{k+1}^*) + \log(\alpha_{k-1}(y_k^*)) \right) \]

Here \(y_k^*\) and \(y_{k+1}^*\) are each a concrete value, which means this step has complexity \(O(N^2)\), where \(N\) is the number of tags.
3. Finally:

\[Z(x) = \sum_{y_l^*} \exp \left(U(x_l, y_l^*) + \log(\alpha_{l-1}(y_l^*)) \right) \]
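
The forward algorithm brings the cost down to \(O(l \cdot N^2)\). Below is a minimal vectorized sketch of the three steps, entirely in log space, operating on the toy T and U from the earlier sketches and including the START/STOP transitions that the tutorial code also adds; its result should match log_Z_brute from the brute-force sketch:

def forward_log_Z(T, U):
    """Compute log Z(x) with the forward algorithm, entirely in log space."""
    # Step 1: transition from START into the first position, plus its emission
    alpha = T[START, :3] + U[0, :3]                 # log alpha over the 3 real tags
    # Step 2: for each later position, logsumexp over the previous tag
    for k in range(1, U.size(0)):
        # entry [i, j] = alpha[i] + T[i, j] + U[k, j]; reduce over i for every j
        alpha = torch.logsumexp(alpha.unsqueeze(1) + T[:3, :3] + U[k, :3], dim=0)
    # Step 3: transition from the last position into STOP
    return torch.logsumexp(alpha + T[:3, STOP], dim=0)

print(forward_log_Z(T, U))  # should equal log_Z_brute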

Note that step 2 of the pseudocode is the so-called logsumexp, which can be problematic: if the exponents are very large, the computation may overflow. A small trick makes it numerically stable:

\[\log \sum_k \exp(z_k) = \max (\mathbf{z}) + \log \sum_k \exp(z_k - \max(\mathbf{z})) \]

The proof is as follows:

\[\log \sum_k \exp(z_k) = \log \sum_k (\exp(z_k -c) \cdot \exp(c)) = \log\Big[\exp(c) \cdot \sum_k \exp(z_k -c)\Big] = c + \log \sum_k \exp(z_k -c), \qquad \text{letting } c = \max(\mathbf{z}) \]
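
A two-line illustration of the trick (illustrative values only): exponentiating large scores directly overflows float32, while shifting by the maximum first gives the correct answer:

z = torch.tensor([1000., 1001., 1002.])
print(torch.log(torch.sum(torch.exp(z))))                      # inf: exp(1000) overflows
print(z.max() + torch.log(torch.sum(torch.exp(z - z.max()))))  # ~1002.41, numerically stable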

Code Implementation

The following code is adapted from the PyTorch tutorial on Bi-LSTM+CRF. First, import the required modules:

import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(1)

To make the model easier to read, define a few helper functions:

def argmax(vec):
    """return the argmax as a python int"""
    _, idx = torch.max(vec, 1)
    return idx.item()


def prepare_sequence(seq, to_ix):
    """word2id"""
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)


def log_sum_exp(vec):
    """Compute log sum exp in a numerically stable way for the forward algorithm
    PyTorch and TensorFlow both ship an equivalent function; it is reimplemented here for illustration.
    """
    max_score = vec[0, argmax(vec)]
    max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])
    return max_score + \
        torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))
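
For reference, PyTorch's built-in torch.logsumexp(vec, dim=1) computes the same quantity over the tag dimension; the hand-rolled version is kept here because it mirrors the max-shift derivation above.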

Next, define the whole model:

class BiLSTM_CRF(nn.Module):

    def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim):
        super(BiLSTM_CRF, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.tag_to_ix = tag_to_ix
        self.tagset_size = len(tag_to_ix)

        self.word_embeds = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
                            num_layers=1, bidirectional=True)

        # Map the LSTM output to the tag space;
        # this plays the role of the emission matrix U in the formulas
        self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size)

        # Transition matrix: entry (i, j) is the score of transitioning to tag i from tag j
        # tagset_size includes the artificially added START_TAG and STOP_TAG
        self.transitions = nn.Parameter(
            torch.randn(self.tagset_size, self.tagset_size))

        # These two constraints: never transition into START_TAG, and never transition out of STOP_TAG
        self.transitions.data[tag_to_ix[START_TAG], :] = -10000
        self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000

        self.hidden = self.init_hidden()

    def init_hidden(self):
        """初始化LSTM"""
        return (torch.randn(2, 1, self.hidden_dim // 2),
                torch.randn(2, 1, self.hidden_dim // 2))

    def _forward_alg(self, feats):
        """計算配分函數Z(x)"""

        # Corresponds to step 1 of the pseudocode
        init_alphas = torch.full((1, self.tagset_size), -10000.)
        # START_TAG has all of the score.
        init_alphas[0][self.tag_to_ix[START_TAG]] = 0.

        # Wrap in a variable so that we will get automatic backprop
        forward_var = init_alphas

        # The loop corresponding to step 2 of the pseudocode: iterate over the sentence
        for feat in feats:
            alphas_t = []  # The forward tensors at this timestep
            for next_tag in range(self.tagset_size):
                # broadcast the emission score: it is the same regardless of
                # the previous tag
                emit_score = feat[next_tag].view(
                    1, -1).expand(1, self.tagset_size)
                # the ith entry of trans_score is the score of transitioning to
                # next_tag from i
                trans_score = self.transitions[next_tag].view(1, -1)
                # The ith entry of next_tag_var is the value for the
                # edge (i -> next_tag) before we do log-sum-exp
                # This corresponds to summing the three terms in step 2 of the pseudocode
                next_tag_var = forward_var + trans_score + emit_score
                # The forward variable for this tag is log-sum-exp of all the scores.
                alphas_t.append(log_sum_exp(next_tag_var).view(1))
            forward_var = torch.cat(alphas_t).view(1, -1)
        # Corresponds to step 3 of the pseudocode; the loss needs log Z(x), so this is another logsumexp
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        alpha = log_sum_exp(terminal_var)
        return alpha

    def _get_lstm_features(self, sentence):
        """調用LSTM獲得每個token的隱狀態,這里可以替換為任意的特征函數,
        LSTM返回的特征就是公式中的x
        """
        self.hidden = self.init_hidden()
        embeds = self.word_embeds(sentence).view(len(sentence), 1, -1)
        lstm_out, self.hidden = self.lstm(embeds, self.hidden)
        lstm_out = lstm_out.view(len(sentence), self.hidden_dim)
        lstm_feats = self.hidden2tag(lstm_out)
        return lstm_feats

    def _score_sentence(self, feats, tags):
        """計算給定輸入序列和標簽序列的匹配函數,即公式中的s函數"""
        score = torch.zeros(1)
        tags = torch.cat([torch.tensor([self.tag_to_ix[START_TAG]], dtype=torch.long), tags])
        for i, feat in enumerate(feats):
            score = score + \
                self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]]
        score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]]
        return score

    def _viterbi_decode(self, feats):
        """維特比解碼,給定輸入x和相關參數(發射矩陣和轉移矩陣),或者概率最大的標簽序列
        """
        backpointers = []

        # Initialize the viterbi variables in log space
        init_vvars = torch.full((1, self.tagset_size), -10000.)
        init_vvars[0][self.tag_to_ix[START_TAG]] = 0

        # forward_var at step i holds the viterbi variables for step i-1
        forward_var = init_vvars
        for feat in feats:
            bptrs_t = []  # holds the backpointers for this step
            viterbivars_t = []  # holds the viterbi variables for this step

            for next_tag in range(self.tagset_size):
                # next_tag_var[i] holds the viterbi variable for tag i at the
                # previous step, plus the score of transitioning
                # from tag i to next_tag.
                # We don't include the emission scores here because the max
                # does not depend on them (we add them in below)
                next_tag_var = forward_var + self.transitions[next_tag]
                best_tag_id = argmax(next_tag_var)
                bptrs_t.append(best_tag_id)
                viterbivars_t.append(next_tag_var[0][best_tag_id].view(1))
            # Now add in the emission scores, and assign forward_var to the set
            # of viterbi variables we just computed
            forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1)
            backpointers.append(bptrs_t)

        # Transition to STOP_TAG
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        best_tag_id = argmax(terminal_var)
        path_score = terminal_var[0][best_tag_id]

        # Follow the back pointers to decode the best path.
        best_path = [best_tag_id]
        for bptrs_t in reversed(backpointers):
            best_tag_id = bptrs_t[best_tag_id]
            best_path.append(best_tag_id)
        # Pop off the start tag (we dont want to return that to the caller)
        start = best_path.pop()
        assert start == self.tag_to_ix[START_TAG]  # Sanity check
        best_path.reverse()
        return path_score, best_path

    def neg_log_likelihood(self, sentence, tags):
        """損失函數 = Z(x) - s(x,y)
        """
        feats = self._get_lstm_features(sentence)
        forward_score = self._forward_alg(feats)
        gold_score = self._score_sentence(feats, tags)
        return forward_score - gold_score

    def forward(self, sentence):
        """預測函數,注意這個函數和_forward_alg不一樣
        這里給定一個句子,預測最有可能的標簽序列
        """
        # Get the emission scores from the BiLSTM
        lstm_feats = self._get_lstm_features(sentence)

        # Find the best path, given the features.
        score, tag_seq = self._viterbi_decode(lstm_feats)
        return score, tag_seq

Finally, assemble the pieces above into a complete runnable example; it is listed here without further commentary:

START_TAG = "<START>"
STOP_TAG = "<STOP>"
EMBEDDING_DIM = 5
HIDDEN_DIM = 4

# Make up some training data
training_data = [(
    "the wall street journal reported today that apple corporation made money".split(),
    "B I I I O O O B I O O".split()
), (
    "georgia tech is a university in georgia".split(),
    "B I O O O O B".split()
)]

word_to_ix = {}
for sentence, tags in training_data:
    for word in sentence:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)

tag_to_ix = {"B": 0, "I": 1, "O": 2, START_TAG: 3, STOP_TAG: 4}

model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# Check predictions before training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    precheck_tags = torch.tensor([tag_to_ix[t] for t in training_data[0][1]], dtype=torch.long)
    print(model(precheck_sent))

# Make sure prepare_sequence from earlier in the LSTM section is loaded
for epoch in range(
        300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Step 2. Get our inputs ready for the network, that is,
        # turn them into Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = torch.tensor([tag_to_ix[t] for t in tags], dtype=torch.long)

        # Step 3. Run our forward pass.
        loss = model.neg_log_likelihood(sentence_in, targets)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss.backward()
        optimizer.step()

# Check predictions after training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    print(model(precheck_sent))
# We got it!
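
Running the script prints the Viterbi score and the predicted tag-index sequence before and after training; after the 300 toy epochs, the prediction for the first sentence should recover the gold labels B I I I O O O B I O O (expressed as indices under tag_to_ix).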


