Graph-related (graph learning | graph neural networks | graph optimization, etc.) (6 papers)
[1] Space-Time Graph Neural Networks
Title: Space-Time Graph Neural Networks
Link: https://arxiv.org/abs/2110.02880
Authors: Samar Hadou, Charilaos I. Kanatsoulis, Alejandro Ribeiro
Affiliation: Department of Electrical and Systems Engineering, University of Pennsylvania
Abstract: We introduce the space-time graph neural network (ST-GNN), a novel GNN architecture tailored to jointly process the underlying space-time topology of time-varying network data. The cornerstone of the proposed architecture is the composition of time and graph convolutional filters followed by pointwise nonlinear activation functions. We introduce a generic definition of convolution operators that mimic the diffusion process of signals over their underlying support. On top of this definition, we propose space-time graph convolutions that are built upon a composition of time and graph shift operators. We prove that ST-GNNs with multivariate integral Lipschitz filters are stable to small perturbations of the underlying graph as well as small perturbations in the time domain caused by time warping. Our analysis shows that small variations in the network topology and in the time evolution of a system do not significantly affect the performance of ST-GNNs. Numerical experiments with decentralized control systems showcase the effectiveness and stability of the proposed ST-GNNs.
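To make the filter construction concrete, here is a minimal numpy sketch of a space-time graph filter built as a polynomial in compositions of a graph shift operator S and a one-step time delay, followed by a pointwise nonlinearity. The function names (st_graph_filter, st_gnn_layer) and the tap layout h[k, t] are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a space-time graph filter + nonlinearity (not the paper's code).
import numpy as np

def time_shift(X):
    """Delay the signal by one time step (zero padding at the start).
    X has shape (n_samples, n_nodes)."""
    Xs = np.zeros_like(X)
    Xs[1:] = X[:-1]
    return Xs

def st_graph_filter(X, S, h):
    """Space-time graph filter: sum_{k,t} h[k, t] * TimeShift^t GraphShift^k X,
    i.e. a polynomial in the composition of the two shift operators."""
    n_graph_taps, n_time_taps = h.shape
    Y = np.zeros_like(X)
    Z_graph = X.copy()
    for k in range(n_graph_taps):
        Z = Z_graph.copy()
        for t in range(n_time_taps):
            Y += h[k, t] * Z
            Z = time_shift(Z)        # apply one more time shift
        Z_graph = Z_graph @ S.T      # apply one more graph shift
    return Y

def st_gnn_layer(X, S, h, sigma=np.tanh):
    """One ST-GNN-style layer: space-time graph filter + pointwise nonlinearity."""
    return sigma(st_graph_filter(X, S, h))
```

Stacking a few such layers (each with its own taps h) gives the composition of filters and pointwise nonlinearities described in the abstract.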
[2] Semi-relaxed Gromov Wasserstein divergence with applications on graphs
Title: Semi-relaxed Gromov-Wasserstein Divergence with Applications on Graphs
Link: https://arxiv.org/abs/2110.02753
Authors: Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty
Affiliations: Université Côte d'Azur, Inria, Maasai, CNRS, LJAD, MSI, Nice, France; Université de Lyon, Inria, CNRS, ENS de Lyon, UCB Lyon, LIP UMR, Lyon, France; Université Bretagne-Sud, CNRS, IRISA, Vannes, France
Note: preprint under review
Abstract: Comparing structured objects such as graphs is a fundamental operation involved in many learning tasks. To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven successful in handling the specific nature of the associated objects. More specifically, through the nodes' connectivity relations, GW operates on graphs seen as probability measures over specific spaces. At the core of OT is the idea of conservation of mass, which imposes a coupling between all the nodes of the two considered graphs. We argue in this paper that this property can be detrimental for tasks such as graph dictionary or partition learning, and we relax it by proposing a new semi-relaxed Gromov-Wasserstein divergence. Aside from immediate computational benefits, we discuss its properties and show that it can lead to an efficient graph dictionary learning algorithm. We empirically demonstrate its relevance for complex tasks on graphs such as partitioning, clustering, and completion.
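As a rough illustration of what "semi-relaxed" means, the sketch below writes the GW objective with a squared loss and enforces only the source marginal T @ 1 = p, leaving the target marginal free. The naive projected-gradient loop and the assumption of symmetric cost matrices are simplifications for illustration, not the authors' solver.

```python
# Hedged numpy sketch of a semi-relaxed GW objective (squared loss) and a naive solver.
import numpy as np

def srgw_objective(T, C1, C2):
    """sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l], expanded into matrix form."""
    q = T.sum(axis=1)   # source marginal (constrained to p)
    r = T.sum(axis=0)   # target marginal (left free: this is the relaxation)
    cross = C1 @ T @ C2.T
    return float(q @ (C1 ** 2) @ q + r @ (C2 ** 2) @ r - 2.0 * np.sum(T * cross))

def srgw_naive(C1, C2, p, lr=0.01, n_iter=500):
    """Naive projected-gradient illustration (symmetric C1, C2 assumed):
    only the row-sum constraint T @ 1 = p is re-imposed after each step."""
    n2 = C2.shape[0]
    T = np.outer(p, np.full(n2, 1.0 / n2))          # feasible starting coupling
    for _ in range(n_iter):
        q, r = T.sum(axis=1), T.sum(axis=0)
        grad = (2 * ((C1 ** 2) @ q)[:, None]
                + 2 * ((C2 ** 2) @ r)[None, :]
                - 4 * (C1 @ T @ C2.T))
        T = np.maximum(T - lr * grad, 1e-12)         # keep the coupling nonnegative
        T *= (p / T.sum(axis=1))[:, None]            # rows must still sum to p
    return T, srgw_objective(T, C1, C2)
```

Because the columns of T are unconstrained, mass from the source graph can concentrate on a subset of target nodes, which is exactly the flexibility the paper exploits for dictionary and partition learning.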
[3] An Analysis of Attentive Walk-Aggregating Graph Neural Networks
Title: An Analysis of Attentive Walk-Aggregating Graph Neural Networks
Link: https://arxiv.org/abs/2110.02667
Authors: Mehmet F. Demirel, Shengchao Liu, Siddhant Garg, Yingyu Liang
Affiliations: University of Wisconsin–Madison; Quebec AI Institute (Mila); Amazon Alexa AI
Note: Preprint (36 pages)
Abstract: Graph neural networks (GNNs) have been shown to possess strong representation power, which can be exploited for downstream prediction tasks on graph-structured data such as molecules and social networks. They typically learn representations by aggregating information from the K-hop neighborhood of individual vertices or from the enumerated walks in the graph. Prior studies have demonstrated the effectiveness of incorporating weighting schemes into GNNs; however, this has so far been primarily limited to K-hop neighborhood GNNs. In this paper, we aim to extensively analyze the effect of incorporating weighting schemes into walk-aggregating GNNs. Towards this objective, we propose a novel GNN model, called AWARE, that aggregates information about the walks in the graph using attention schemes in a principled way to obtain an end-to-end supervised learning method for graph-level prediction tasks. We perform theoretical, empirical, and interpretability analyses of AWARE. Our theoretical analysis provides the first provable guarantees for weighted GNNs, demonstrating how the graph information is encoded in the representation and how the weighting schemes in AWARE affect the representation and learning performance. We empirically demonstrate the superiority of AWARE over prior baselines in the domains of molecular property prediction (61 tasks) and social networks (4 tasks). Our interpretation study illustrates that AWARE can successfully learn to capture the important substructures of the input graph.
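The following sketch conveys the walk-aggregation-with-attention idea in a simplified form: enumerate short walks, embed each walk from projected node features, and pool walk embeddings with softmax attention scores. The helper names and the single attention vector a are assumptions for illustration; AWARE's actual weighting scheme is more refined than this.

```python
# Hedged sketch of attention-weighted walk aggregation (illustrative, not the AWARE release).
import numpy as np

def enumerate_walks(A, length):
    """All walks with `length` edges in the graph with adjacency matrix A."""
    n = A.shape[0]
    walks = [[v] for v in range(n)]
    for _ in range(length):
        walks = [w + [u] for w in walks for u in range(n) if A[w[-1], u] > 0]
    return walks

def attentive_walk_readout(A, X, W, a, length=2):
    """Embed every walk by averaging projected node features along it, then pool
    the walk embeddings with softmax attention scores a^T h_w into one graph vector."""
    H = X @ W                                                   # projected node features
    walk_emb = np.stack([H[w].mean(axis=0) for w in enumerate_walks(A, length)])
    scores = walk_emb @ a
    att = np.exp(scores - scores.max())
    att /= att.sum()                                            # softmax over walks
    return att @ walk_emb                                       # graph-level representation
```

Exhaustive walk enumeration is only practical on small graphs; it is used here purely to make the aggregation step explicit.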
[4] Inference Attacks Against Graph Neural Networks
Title: Inference Attacks Against Graph Neural Networks
Link: https://arxiv.org/abs/2110.02631
Authors: Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang
Affiliations: CISPA Helmholtz Center for Information Security; Norton Research Group
Note: 19 pages, 18 figures. To appear in the 31st USENIX Security Symposium
Abstract: Graphs are an important data representation ubiquitous in the real world. However, analyzing graph data is computationally difficult due to its non-Euclidean nature. Graph embedding is a powerful tool to solve graph analytics problems by transforming graph data into low-dimensional vectors. These vectors can also be shared with third parties to gain additional insights into what is behind the data. While sharing graph embeddings is intriguing, the associated privacy risks are unexplored. In this paper, we systematically investigate the information leakage of graph embeddings by mounting three inference attacks. First, we can successfully infer basic graph properties, such as the number of nodes, the number of edges, and graph density, of the target graph with up to 0.89 accuracy. Second, given a subgraph of interest and the graph embedding, we can determine with high confidence whether the subgraph is contained in the target graph. For instance, we achieve a 0.98 attack AUC on the DD dataset. Third, we propose a novel graph reconstruction attack that can reconstruct a graph whose structural statistics are similar to those of the target graph. We further propose an effective defense mechanism based on graph embedding perturbation to mitigate the inference attacks without noticeable performance degradation for graph classification tasks. Our code is available at https://github.com/Zhangzhk0819/GNN-Embedding-Leaks.
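For intuition on the first attack, here is a hedged sketch of property inference on shared embeddings: an attacker fits a classifier on shadow-graph embeddings labeled with a bucketized node count and then queries it on the released target embeddings. The shadow/target split and helper names are assumptions for illustration; the paper's attacks cover more properties and settings.

```python
# Hedged sketch of a property-inference attack on released graph embeddings.
import numpy as np
from sklearn.neural_network import MLPClassifier

def property_inference_attack(shadow_emb, shadow_num_nodes, target_emb, n_buckets=5):
    """shadow_emb: (m, d) embeddings of shadow graphs with known node counts;
    target_emb: (k, d) embeddings released by the victim. Returns inferred size buckets."""
    cut_points = np.quantile(shadow_num_nodes, np.linspace(0, 1, n_buckets + 1)[1:-1])
    labels = np.digitize(shadow_num_nodes, cut_points)      # bucketize the node counts
    attacker = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    attacker.fit(shadow_emb, labels)                          # train on shadow graphs
    return attacker.predict(target_emb)                       # infer property of targets
```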
[5] A Regularized Wasserstein Framework for Graph Kernels
Title: A Regularized Wasserstein Framework for Graph Kernels
Link: https://arxiv.org/abs/2110.02554
Authors: Asiri Wijesinghe, Qing Wang, Stephen Gould
Affiliation: School of Computing, Australian National University, Canberra, Australia
Note: 21st IEEE International Conference on Data Mining (ICDM 2021)
Abstract: We propose a learning framework for graph kernels that is theoretically grounded in regularized optimal transport. This framework provides a novel optimal transport distance metric, namely the Regularized Wasserstein (RW) discrepancy, which can preserve both the features and the structure of graphs via Wasserstein distances on features and their local variations, local barycenters, and global connectivity. Two strongly convex regularization terms are introduced to improve the learning ability. One relaxes an optimal alignment between graphs to be a cluster-to-cluster mapping between their locally connected vertices, thereby preserving the local clustering structure of graphs. The other takes node degree distributions into account in order to better preserve the global structure of graphs. We also design an efficient algorithm to enable a fast approximation for solving the optimization problem. Theoretically, our framework is robust and can guarantee convergence and numerical stability in optimization. We have empirically validated our method on 12 datasets against 16 state-of-the-art baselines. The experimental results show that our method consistently outperforms all state-of-the-art methods on all benchmark databases, for both graphs with discrete attributes and graphs with continuous attributes.
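As background for the feature term of such a Wasserstein discrepancy, the sketch below computes an entropically regularized OT cost between the node-feature clouds of two graphs using plain Sinkhorn iterations. It deliberately omits the paper's structure-preserving regularizers (local variations, barycenters, degree distributions) and is not the RW solver; the function name and defaults are assumptions.

```python
# Hedged Sinkhorn sketch: entropic OT between node-feature clouds of two graphs.
import numpy as np

def sinkhorn_discrepancy(X1, X2, reg=0.1, n_iter=200):
    """X1: (n1, d), X2: (n2, d) node-feature matrices; returns the transport cost."""
    a = np.full(X1.shape[0], 1.0 / X1.shape[0])               # uniform node weights
    b = np.full(X2.shape[0], 1.0 / X2.shape[0])
    M = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)      # pairwise squared costs
    K = np.exp(-M / reg)                                       # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):                                    # Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]                            # regularized transport plan
    return float((T * M).sum())
```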
[6] A Topological View of Rule Learning in Knowledge Graphs
Title: A Topological View of Rule Learning in Knowledge Graphs
Link: https://arxiv.org/abs/2110.02510
Authors: Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Chao Chen
Affiliations: Wangxuan Institute of Computer Technology, Peking University; IBM Thomas J. Watson Research Center; Department of Biomedical Informatics, Stony Brook University
Abstract: Inductive relation prediction is an important learning task for knowledge graph completion. One can use the existence of rules, namely sequences of relations, to predict the relation between two entities. Previous works view rules as paths and primarily focus on searching for paths between entities. The space of paths is huge, and one has to sacrifice either efficiency or accuracy. In this paper, we consider rules in knowledge graphs as cycles and show that the space of cycles has a unique structure based on the theory of algebraic topology. By exploring the linear structure of the cycle space, we can improve the search efficiency for rules. We propose to collect cycle bases that span the space of cycles. We build a novel GNN framework on the collected cycles to learn representations of cycles and to predict the existence or non-existence of a relation. Our method achieves state-of-the-art performance on benchmarks.
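The basis-collection step can be illustrated directly with networkx: a spanning-tree cycle basis spans the whole cycle space, and its size is |E| - |V| + (number of connected components). The snippet below shows only this collection step, not the GNN that the paper builds on top of the collected cycles.

```python
# Minimal sketch: collect a cycle basis that spans the cycle space of an undirected graph.
import networkx as nx

def collect_cycle_basis(edges):
    """edges: iterable of (u, v) pairs. Returns a list of basis cycles (as node lists)
    whose linear span is the whole cycle space."""
    G = nx.Graph(edges)
    return nx.cycle_basis(G)

# Example: a triangle sharing node 0 and node 2 with a quadrilateral has a
# cycle space of dimension |E| - |V| + 1 = 6 - 5 + 1 = 2, so the basis has 2 cycles.
basis = collect_cycle_basis([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)])
print(basis)
```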