GCN: Principles and Code Implementation in PyTorch


In 2017, Thomas N. Kipf et al. published a paper titled "Semi-Supervised Classification with Graph Convolutional Networks", proposing an algorithm that performs convolution directly on graphs. It achieved state-of-the-art results on citation-network and knowledge-graph datasets and set off a wave of research on graph neural networks. GCN and its variants have since been applied successfully in natural language processing, computer vision, and recommender systems, setting new performance records on many tasks, and GCN is widely regarded as one of the most valuable research directions of recent years. This post gives a brief account of how GCN works and, based on my own understanding and code found online, implements a two-layer GCN, as a way of getting started with GCN.

The Traditional Approach to Node Classification

Consider a graph such as a citation network: every node is a paper, every edge is a citation between papers, and a small number of papers carry labels indicating their research area. The node classification task is to assign a category to every paper. The traditional approach rests on the assumption that nodes connected by an edge should have similar attributes and are likely to belong to the same class, which naturally leads to the following loss function:

\[L=L_{0} +\lambda L_{reg} \]

\[L_{reg}=\sum_{i,j}A_{ij}\left \| f(X_{i})-f(X_{j}) \right \|^{2}=f(X)^{T}\Delta f(X) \]

where \(L_{0}\) is the loss on the labeled nodes and \(f(\cdot)\) is some mapping, e.g. a neural network. \(\Delta = D - A\), the difference between the degree matrix and the adjacency matrix, is called the graph Laplacian. The assumption that "connected nodes should have similar attributes" is too strict and limits the expressiveness of the model.
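As a concrete illustration (my own toy example, not from the paper), here is how \(\Delta\) looks for a small three-node path graph:

import numpy as np

# A made-up 3-node path graph with edges 0-1 and 1-2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix: degrees are 1, 2, 1
delta = D - A                # graph Laplacian
print(delta)
# [[ 1. -1.  0.]
#  [-1.  2. -1.]
#  [ 0. -1.  1.]]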

The GCN Approach

GCN models the whole graph directly with a neural network, written \(f(X, A)\), and trains it on the labeled portion of the graph. The paper's contribution is a layer-wise propagation rule that handles higher-order neighborhoods conveniently; compared with the traditional approach it drops the explicit graph regularization term, making the model more flexible and more expressive. Here is the graph convolution rule, stated directly:

\[H^{(l+1)}=\sigma(\tilde D^{-\frac{1}{2}} \tilde A \tilde D^{-\frac{1}{2}} H^{(l)}W^{(l)}) \]

where:

  • \(\tilde A = A + I\), where \(A\) is the adjacency matrix of dimension \(N \times N\), \(N\) is the number of nodes, and \(I\) is the identity matrix of the same dimension as \(A\).
  • \(\tilde D = D + I\), where \(D\) is the degree matrix, of dimension \(N \times N\).
  • \(H^{(l)}\) is the matrix of node features at layer \(l\); \(H^{(0)}\) is the matrix of raw node features.
  • \(W^{(l)}\) is the learnable weight matrix of layer \(l\).
  • \(\tilde D^{-\frac{1}{2}} \tilde A \tilde D^{-\frac{1}{2}}\) normalizes \(\tilde A\): it prevents the convolution output from growing with the number of neighbors, and it takes the degrees of the neighbors into account (see the sketch after this list).
  • \(\sigma(\cdot)\) is the activation function.
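To make the normalization concrete, the following numpy sketch (my addition, reusing the hypothetical 3-node graph from above) computes \(\tilde D^{-\frac{1}{2}} \tilde A \tilde D^{-\frac{1}{2}}\) step by step:

import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_tilde = A + np.eye(3)                           # add self-loops
d_tilde = A_tilde.sum(axis=1)                     # degrees including self-loops: 2, 3, 2
D_inv_sqrt = np.diag(d_tilde ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt         # entry (i, j) is A_tilde[i, j] / sqrt(d_i * d_j)
print(A_hat)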

Semi-Supervised Learning Framework

Suppose there are four input nodes \(X_{1}\), \(X_{2}\), \(X_{3}\), \(X_{4}\), each with a feature vector of dimension \(C\). Nodes \(X_{1}\) and \(X_{4}\) are labeled, while \(X_{2}\) and \(X_{3}\) are not. After several convolution layers, every node's feature vector has dimension \(F\). The loss is computed on the labeled nodes and used to train the GCN parameters.

Two-Layer GCN Example

\[H^{(0)}=X \]

First graph convolution layer:

\[\hat A = \tilde D^{-\frac{1}{2}} \tilde A \tilde D^{-\frac{1}{2}} \]

\[H^{(1)}=\mathrm{ReLU}(\hat A H^{(0)} W^{(0)}) \]

Second graph convolution layer (the paper applies a softmax here so that \(Z\) contains class distributions):

\[Z=H^{(2)}=\mathrm{softmax}(\hat A H^{(1)} W^{(1)}) \]
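Putting both layers together, the forward pass is just two matrix products per layer. The following numpy sketch (my own shape check, with random weights and an identity matrix standing in for \(\hat A\)) shows the data flow:

import numpy as np

N, C, H_dim, F_cls = 4, 8, 16, 7   # made-up sizes: nodes, input dim, hidden dim, classes
X = np.random.randn(N, C)          # H^(0) = X
W0 = np.random.randn(C, H_dim)
W1 = np.random.randn(H_dim, F_cls)
A_hat = np.eye(N)                  # stand-in for the normalized adjacency

def relu(m):
    return np.maximum(m, 0)

def softmax(m):
    e = np.exp(m - m.max(axis=1, keepdims=True))  # numerically stable row-wise softmax
    return e / e.sum(axis=1, keepdims=True)

H1 = relu(A_hat @ X @ W0)          # first graph convolution
Z = softmax(A_hat @ H1 @ W1)       # second graph convolution + softmax
print(Z.shape)                     # (4, 7): one class distribution per node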

Training the parameters:
Some of the nodes in the graph carry labels, so we can use their ground-truth classes to compute a loss between the true and predicted classes and optimize the GCN parameters. The loss function is defined as:

\[L=-\sum_{l\in \mathcal{Y}_{L}}\sum_{f=1}^{F}Y_{lf}\ln Z_{lf} \]

where \(\mathcal{Y}_{L}\) is the set of labeled nodes and \(F\) is the dimension of the final node feature vectors, i.e. the number of classes.
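In code, this masked cross-entropy simply restricts the sum to the labeled rows. A minimal numpy sketch (my addition; `labeled_idx` is a hypothetical index array):

import numpy as np

# Z: (N, F) predicted class distributions; Y: (N, F) one-hot labels;
# labeled_idx: indices of the labeled nodes (hypothetical names)
def masked_cross_entropy(Z, Y, labeled_idx, eps=1e-12):
    Zl, Yl = Z[labeled_idx], Y[labeled_idx]   # keep only labeled rows
    return -(Yl * np.log(Zl + eps)).sum()     # -sum_l sum_f Y_lf * ln Z_lf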

The Cora Dataset

1 Download: https://linqs.soe.ucsc.edu/data
2 File structure
The Cora dataset contains machine-learning papers grouped into seven classes:

  • Case_Based
  • Genetic_Algorithms
  • Neural_Networks
  • Probabilistic_Methods
  • Reinforcement_Learning
  • Rule_Learning
  • Theory

The Cora dataset consists of two files. Each line of the .content file describes one paper and is organized as: <paper_id> <word_attributes> <class_label>. word_attributes is a 1433-dimensional binary (0/1) word-indicator vector, and class_label is one of the seven classes listed above.

(Figure: excerpt from the .content file)
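For illustration, one .content row can be split like this (my sketch; the sample line is made up, not copied from the dataset):

# hypothetical row: <paper_id> <1433 binary word attributes> <class_label>
line = "12345 " + "0 " * 1432 + "1 Neural_Networks"
parts = line.split()
paper_id = parts[0]
word_attributes = parts[1:-1]   # len(word_attributes) == 1433
class_label = parts[-1]         # one of the seven classes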

The .cites file records the citation relations between papers. Each line contains two paper IDs; in the standard Cora release, the first is the ID of the cited paper and the second is the ID of the citing paper.

(Figure: excerpt from the .cites file)

Code

import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

def encode_onehot(labels):
    classes = set(labels)
    # map each class name to a row of the identity matrix (its one-hot encoding)
    class_dict = {c: np.identity(len(classes))[i, :] for i, c in enumerate(classes)}
    label_onehot = np.array(list(map(class_dict.get, labels)),
                            dtype=np.int32)
    
    return label_onehot

def normalize(mx):
    """Row-normalize a sparse matrix: mx <- D^{-1} mx."""
    rowsum = np.array(mx.sum(1))
    r_inv = np.power(rowsum, -1).flatten()
    r_inv[np.isinf(r_inv)] = 0  # zero-sum rows would produce inf; zero them out
    r_mat_inv = sp.diags(r_inv)
    mx = r_mat_inv.dot(mx)
    
    return mx
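Note that this function implements row normalization \(D^{-1}M\) rather than the symmetric \(\tilde D^{-\frac{1}{2}} \tilde A \tilde D^{-\frac{1}{2}}\) from the paper; Kipf's pygcn reference code uses the same simplification and it works well in practice. If you want to match the paper exactly, a symmetric variant could look like this (my sketch, not wired into the rest of the script):

def normalize_sym(mx):
    """Symmetric normalization D^{-1/2} M D^{-1/2}, matching the paper's formula."""
    rowsum = np.array(mx.sum(1))
    d_inv_sqrt = np.power(rowsum, -0.5).flatten()
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.   # guard against zero-degree rows
    d_mat_inv_sqrt = sp.diags(d_inv_sqrt)
    return d_mat_inv_sqrt.dot(mx).dot(d_mat_inv_sqrt)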

def accuracy(output, labels):
    pred = output.max(1)[1].type_as(labels)
    correct = pred.eq(labels).double()
    correct = correct.sum()
    
    return correct / len(labels)

def sparse_mx_to_torch_sparse_tensor(sparse_mx):
    sparse_mx = sparse_mx.tocoo().astype(np.float32)
    indices = torch.from_numpy(
        np.vstack((sparse_mx.row, sparse_mx.col)).astype(np.int64))
    values = torch.from_numpy(sparse_mx.data)
    shape = torch.Size(sparse_mx.shape)
    
    return torch.sparse.FloatTensor(indices, values, shape)

def load_data(path="C:/Users/DZL102/Downloads/cora/", dataset="cora"):
    print("Loading data...")
    
    idx_features_labels = np.genfromtxt("{}{}.content".format(path, dataset),
                                        dtype=np.dtype(str))
    features = sp.csr_matrix(idx_features_labels[:, 1:-1], dtype=np.float32)
    labels = encode_onehot(idx_features_labels[:, -1])
    
    # build graph
    idx = np.array(idx_features_labels[:, 0], dtype=np.int32)
    idx_map = {j: i for i, j in enumerate(idx)}
    edges_unordered = np.genfromtxt("{}{}.cites".format(path, dataset),
                                    dtype=np.int32)
    edges = np.array(list(map(idx_map.get, edges_unordered.flatten())), dtype=np.int32,
                    ).reshape(edges_unordered.shape)
    adj = sp.coo_matrix((np.ones(edges.shape[0]), (edges[:, 0], edges[:, 1])),
                       shape=(labels.shape[0], labels.shape[0]),
                       dtype=np.float32)
    
    # symmetrize the adjacency matrix: keep edge (i, j) if a citation exists in either direction
    adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)
    
    features = normalize(features)
    adj = normalize(adj + sp.eye(adj.shape[0]))  # add self-loops, then normalize
    
    idx_train = range(140)
    idx_val = range(200, 500)
    idx_test = range(500, 1500)
    
    features = torch.FloatTensor(np.array(features.todense()))
    labels = torch.LongTensor(np.where(labels)[1])
    adj = sparse_mx_to_torch_sparse_tensor(adj)
    
    idx_train = torch.LongTensor(idx_train)
    idx_val = torch.LongTensor(idx_val)
    idx_test = torch.LongTensor(idx_test)
    print("數據加載成功...")
    
    return adj, features, labels, idx_train, idx_val, idx_test

class GraphConvolution(nn.Module):
    def __init__(self, in_features, out_features, bias=True):
        super(GraphConvolution, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = nn.Parameter(torch.FloatTensor(in_features, out_features))
        self.use_bias = bias
        if self.use_bias:
            self.bias = nn.Parameter(torch.FloatTensor(out_features))
        self.reset_parameters()
        
    def reset_parameters(self):
        nn.init.kaiming_uniform_(self.weight)
        if self.use_bias:
            nn.init.zeros_(self.bias)
    
    def forward(self, input_features, adj):
        support = torch.mm(input_features, self.weight)  # XW
        output = torch.spmm(adj, support)                # sparse matmul: A_hat @ (XW)
        if self.use_bias:
            return output + self.bias
        else:
            return output
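A quick way to sanity-check the layer before wiring up the full model (my own smoke test; all sizes are made up):

layer = GraphConvolution(in_features=8, out_features=4)
x = torch.randn(5, 8)                       # 5 nodes with 8-dim features
eye_idx = torch.stack([torch.arange(5), torch.arange(5)])
adj_eye = torch.sparse_coo_tensor(eye_idx, torch.ones(5), (5, 5))  # self-loops only
out = layer(x, adj_eye)
print(out.shape)                            # torch.Size([5, 4])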

class GCN(nn.Module):
    def __init__(self, input_dim=1433):
        super(GCN, self).__init__()
        self.gcn1 = GraphConvolution(input_dim, 16)  # input features -> 16-dim hidden
        self.gcn2 = GraphConvolution(16, 7)          # hidden -> 7 classes
    
    def forward(self, X, adj):
        X = F.relu(self.gcn1(X, adj))
        X = self.gcn2(X, adj)
        
        return F.log_softmax(X, dim=1)

adj, features, labels, idx_train, idx_val, idx_test = load_data()
model = GCN(features.shape[1])
optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

def train(epochs):
    for epoch in range(epochs):
        optimizer.zero_grad()
        output = model(features, adj)
        loss_train = F.nll_loss(output[idx_train], labels[idx_train])
        acc_train = accuracy(output[idx_train], labels[idx_train])
        loss_train.backward()
        optimizer.step()

        loss_val = F.nll_loss(output[idx_val], labels[idx_val])
        acc_val = accuracy(output[idx_val], labels[idx_val])
        
        if (epoch % 10 == 0):
            print("Epoch: {}".format(epoch + 1),
                 "loss_train: {:.4f}".format(loss_train.item()),
                 "acc_train: {:.4f}".format(acc_train.item()),
                 "loss_val: {:.4f}".format(loss_val.item()),
                 "acc_val: {:.4f}".format(acc_val.item()))


if __name__ == "__main__":
    train(200)
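The script above never touches idx_test. If you also want a held-out test score, an evaluation routine along these lines (my sketch, not part of the original run) could be defined and called after train(200):

def test():
    model.eval()                       # switch to eval mode (matters if dropout is added)
    with torch.no_grad():              # no gradients needed for evaluation
        output = model(features, adj)
        loss_test = F.nll_loss(output[idx_test], labels[idx_test])
        acc_test = accuracy(output[idx_test], labels[idx_test])
    print("loss_test: {:.4f}".format(loss_test.item()),
          "acc_test: {:.4f}".format(acc_test.item()))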

Results

Epoch: 1 loss_train: 1.9277 acc_train: 0.1429 loss_val: 1.9348 acc_val: 0.1100
Epoch: 11 loss_train: 1.7104 acc_train: 0.4286 loss_val: 1.7408 acc_val: 0.4267
Epoch: 21 loss_train: 1.4866 acc_train: 0.5714 loss_val: 1.5790 acc_val: 0.5267
Epoch: 31 loss_train: 1.2659 acc_train: 0.6071 loss_val: 1.4243 acc_val: 0.5700
Epoch: 41 loss_train: 1.0554 acc_train: 0.7286 loss_val: 1.2715 acc_val: 0.6233
Epoch: 51 loss_train: 0.8724 acc_train: 0.8357 loss_val: 1.1299 acc_val: 0.7000
Epoch: 61 loss_train: 0.7206 acc_train: 0.8786 loss_val: 1.0145 acc_val: 0.7567
Epoch: 71 loss_train: 0.6001 acc_train: 0.9286 loss_val: 0.9232 acc_val: 0.7767
Epoch: 81 loss_train: 0.5080 acc_train: 0.9429 loss_val: 0.8550 acc_val: 0.7933
Epoch: 91 loss_train: 0.4396 acc_train: 0.9643 loss_val: 0.8056 acc_val: 0.8033
Epoch: 101 loss_train: 0.3885 acc_train: 0.9714 loss_val: 0.7692 acc_val: 0.8067
Epoch: 111 loss_train: 0.3500 acc_train: 0.9714 loss_val: 0.7428 acc_val: 0.8100
Epoch: 121 loss_train: 0.3203 acc_train: 0.9714 loss_val: 0.7234 acc_val: 0.8100
Epoch: 131 loss_train: 0.2968 acc_train: 0.9714 loss_val: 0.7085 acc_val: 0.8133
Epoch: 141 loss_train: 0.2778 acc_train: 0.9714 loss_val: 0.6971 acc_val: 0.8033
Epoch: 151 loss_train: 0.2621 acc_train: 0.9786 loss_val: 0.6881 acc_val: 0.8067
Epoch: 161 loss_train: 0.2489 acc_train: 0.9786 loss_val: 0.6808 acc_val: 0.8167
Epoch: 171 loss_train: 0.2376 acc_train: 0.9786 loss_val: 0.6748 acc_val: 0.8133
Epoch: 181 loss_train: 0.2278 acc_train: 0.9857 loss_val: 0.6700 acc_val: 0.8133
Epoch: 191 loss_train: 0.2193 acc_train: 0.9929 loss_val: 0.6659 acc_val: 0.8100

