Please use this identifier to cite or link to this item:
http://elar.urfu.ru/handle/10995/130549
Title: | Convolutional Neural Network Outperforms Graph Neural Network on the Spatially Variant Graph Data |
Authors: | Boronina, A.; Maksimenko, V.; Hramov, A. E. |
Issue Date: | 2023 |
Publisher: | MDPI |
Citation: | Boronina, A, Maksimenko, V & Hramov, AE 2023, 'Convolutional Neural Network Outperforms Graph Neural Network on the Spatially Variant Graph Data', Mathematics, Vol. 11, no. 11, 2515. https://doi.org/10.3390/math11112515 Boronina, A., Maksimenko, V., & Hramov, A. E. (2023). Convolutional Neural Network Outperforms Graph Neural Network on the Spatially Variant Graph Data. Mathematics, 11(11), [2515]. https://doi.org/10.3390/math11112515 |
Abstract: | Applying machine learning algorithms to graph-structured data has garnered significant attention in recent years due to the prevalence of inherent graph structures in real-life datasets. However, the direct application of traditional deep learning algorithms, such as Convolutional Neural Networks (CNNs), is limited as they are designed for regular Euclidean data like 2D grids and 1D sequences. In contrast, graph-structured data are in a non-Euclidean form. Graph Neural Networks (GNNs) are specifically designed to handle non-Euclidean data and make predictions based on connectivity rather than spatial structure. Real-life graph data can be broadly categorized into two types: spatially-invariant graphs, where the link structure between nodes is independent of their spatial positions, and spatially-variant graphs, where node positions provide additional information about the graph’s properties. However, there is limited understanding of the effect of spatial variance on the performance of Graph Neural Networks. In this study, we aim to address this issue by comparing the performance of GNNs and CNNs on spatially-variant and spatially-invariant graph data. Spatially-variant graphs, when represented as adjacency matrices, can exhibit a Euclidean-like spatial structure. Based on this distinction, we hypothesize that CNNs may outperform GNNs when working with spatially-variant graphs, while GNNs may excel on spatially-invariant graphs. To test this hypothesis, we compared the performance of CNNs and GNNs under two scenarios: (i) graphs in the training and test sets had the same connectivity pattern and spatial structure, and (ii) graphs in the training and test sets had the same connectivity pattern but different spatial structures. Our results confirmed that the presence of spatial structure in a graph allows for the effective use of CNNs, which may even outperform GNNs. Thus, our study contributes to the understanding of the effect of spatial graph structure on the performance of machine learning methods and allows for the selection of an appropriate algorithm based on the spatial properties of the real-life graph dataset. © 2023 by the authors. |
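To make the contrast described in the abstract concrete, below is a minimal sketch (not taken from the paper) of the two modelling routes it compares: a CNN that reads a graph's adjacency matrix as a 2D grid, and a single hand-rolled graph-convolution step that propagates node features along edges. The toy two-cluster graph, the layer widths, and the normalisation H' = ReLU(D^-1 (A + I) X W) are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: two ways to classify the same graph (illustrative, not the authors' code).
import torch
import torch.nn as nn

# Toy graph: 6 nodes in two fully connected 3-node clusters (a block-structured adjacency matrix).
A = torch.zeros(6, 6)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
A.fill_diagonal_(0.0)

# Route 1 (CNN): treat the adjacency matrix as a 1-channel "image".
cnn = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1),   # convolve over the N x N grid
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                      # pool to a fixed-size descriptor
    nn.Flatten(),
    nn.Linear(4, 2),                              # 2-class output
)
cnn_logits = cnn(A.unsqueeze(0).unsqueeze(0))     # shape (1, 1, 6, 6) -> (1, 2)

# Route 2 (GNN): one graph-convolution step, H = ReLU(D^-1 (A + I) X W),
# followed by mean pooling over nodes; the prediction depends on connectivity only.
X = torch.ones(6, 1)                              # trivial node features
A_hat = A + torch.eye(6)                          # add self-loops
D_inv = torch.diag(1.0 / A_hat.sum(dim=1))        # row normalisation
W = nn.Linear(1, 4, bias=False)                   # learnable feature transform
H = torch.relu(D_inv @ A_hat @ W(X))              # message passing + transform
readout = nn.Linear(4, 2)
gnn_logits = readout(H.mean(dim=0, keepdim=True)) # graph-level prediction, shape (1, 2)

print(cnn_logits.shape, gnn_logits.shape)
```

Because the CNN sees the adjacency matrix as a grid, it is sensitive to how nodes are ordered and positioned, which is exactly the spatial structure the paper exploits for spatially-variant graphs; the graph-convolution route aggregates only along edges and is indifferent to node ordering.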
Keywords: | ADJACENCY MATRIX; CLASSIFICATION; CLUSTERING; CONVOLUTIONAL NEURAL NETWORK (CNN); GRAPH NEURAL NETWORK (GNN); GRAPH STRUCTURES; MODULARITY; SEGREGATION; SPATIAL INVARIANCE |
URI: | http://elar.urfu.ru/handle/10995/130549 |
Access: | info:eu-repo/semantics/openAccess cc-by |
License text: | https://creativecommons.org/licenses/by/4.0/ |
SCOPUS ID: | 85161461992 |
WOS ID: | 001006288200001 |
PURE ID: | 40606100 |
ISSN: | 2227-7390 |
DOI: | 10.3390/math11112515 |
Sponsorship: | Ministry of Education and Science of the Russian Federation, Minobrnauka: NSH-589.2022.1.2. The research funding from the Ministry of Science and Higher Education of the Russian Federation (Ural Federal University Program of Development within the Priority-2030 Program) is gratefully acknowledged. A.E.H. also thanks the President's Program for Leading Scientific School Support (grant NSH-589.2022.1.2). |
Appears in Collections: | Scientific publications of UrFU scholars indexed in SCOPUS and WoS CC |
Files in This Item:
File | Description | Size | Format
---|---|---|---
2-s2.0-85161461992.pdf | | 3.22 MB | Adobe PDF
This item is licensed under a Creative Commons License