Article: Graph over-parameterization: Why the graph helps the training of deep graph convolutional network

Title: Graph over-parameterization: Why the graph helps the training of deep graph convolutional network
Authors: Lin, Yucong; Li, Silu; Xu, Jiaxing; Xu, Jiawei; Huang, Dong; Zheng, Wendi; Cao, Yuan; Lu, Junwei
Keywords: Graph convolutional neural network; Over-parameterization
Issue Date: 10-Mar-2023
Publisher: Elsevier
Citation: Neurocomputing, 2023, v. 534, p. 77-85
Abstract

Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected neural networks and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data, but there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees of the high-probability convergence of gradient descent for training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the “graph disparity coefficient”, and show that the convergence of GCNs is faster when the graph disparity coefficient is smaller. Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and offers useful guidance for GCN training in practice.
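The graph convolution operation discussed in the abstract propagates node features over a degree-normalized adjacency matrix before the usual linear map and nonlinearity. As a rough illustration only (the paper's exact architecture is not given here; the Kipf-Welling-style layer form, the helper name `gcn_layer`, and the toy path graph below are all assumptions), a single graph convolution layer can be sketched as:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One illustrative GCN layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees after self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)   # ReLU activation

# Toy example: a 3-node path graph, 2 input features, 4 hidden units.
np.random.seed(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.random.randn(3, 2)
W = np.random.randn(2, 4)
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 4): one hidden vector per node
```

The normalized propagation step `A_norm @ X` mixes each node's features with those of its neighbors, which is where the graph topology enters training; the paper's graph disparity coefficient quantifies how well this mixing matches the data.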


Persistent Identifier: http://hdl.handle.net/10722/338369
ISSN: 0925-2312
2021 Impact Factor: 5.779
2020 SCImago Journal Rankings: 1.085
ISI Accession Number ID: WOS:000951824200001

 

DC Field: Value
dc.contributor.author: Lin, Yucong
dc.contributor.author: Li, Silu
dc.contributor.author: Xu, Jiaxing
dc.contributor.author: Xu, Jiawei
dc.contributor.author: Huang, Dong
dc.contributor.author: Zheng, Wendi
dc.contributor.author: Cao, Yuan
dc.contributor.author: Lu, Junwei
dc.date.accessioned: 2024-03-11T10:28:21Z
dc.date.available: 2024-03-11T10:28:21Z
dc.date.issued: 2023-03-10
dc.identifier.citation: Neurocomputing, 2023, v. 534, p. 77-85
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/10722/338369
dc.description.abstract: Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected neural networks and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data, but there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees of the high-probability convergence of gradient descent for training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the “graph disparity coefficient”, and show that the convergence of GCNs is faster when the graph disparity coefficient is smaller. Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and offers useful guidance for GCN training in practice.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Neurocomputing
dc.subject: Graph convolutional neural network
dc.subject: Over-parameterization
dc.title: Graph over-parameterization: Why the graph helps the training of deep graph convolutional network
dc.type: Article
dc.identifier.doi: 10.1016/j.neucom.2023.02.054
dc.identifier.scopus: eid_2-s2.0-85149840724
dc.identifier.volume: 534
dc.identifier.spage: 77
dc.identifier.epage: 85
dc.identifier.eissn: 1872-8286
dc.identifier.isi: WOS:000951824200001
dc.identifier.issnl: 0925-2312
