
Conference Paper: Benign Overfitting in Two-layer Convolutional Neural Networks

Title: Benign Overfitting in Two-layer Convolutional Neural Networks
Authors: Cao, Yuan; Chen, Zixiang; Belkin, Mikhail; Gu, Quanquan
Issue Date: 9-Dec-2022
Abstract

Modern neural networks often have great expressive power and can be trained to overfit the training data while still achieving good test performance. This phenomenon is referred to as "benign overfitting". Recently, a line of work has emerged that studies benign overfitting from a theoretical perspective. However, these works are limited to linear models or kernel/random feature models, and there is still a lack of theoretical understanding of when and how benign overfitting occurs in neural networks. In this paper, we study the benign overfitting phenomenon in training a two-layer convolutional neural network (CNN). We show that when the signal-to-noise ratio satisfies a certain condition, a two-layer CNN trained by gradient descent can achieve arbitrarily small training and test loss. On the other hand, when this condition does not hold, overfitting becomes harmful and the obtained CNN can only achieve a constant-level test loss. Together, these results demonstrate a sharp phase transition between benign and harmful overfitting, driven by the signal-to-noise ratio. To the best of our knowledge, this is the first work that precisely characterizes the conditions under which benign overfitting can occur in training convolutional neural networks.
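The setting the abstract describes can be made concrete with a small simulation. The sketch below is a rough illustration rather than the paper's exact construction: it draws data from a two-patch signal-plus-noise model and trains a two-layer CNN-style classifier by full-batch gradient descent. The cubic activation, filter counts, step size, and the snr knob are all assumptions introduced for illustration.

    import numpy as np

    # Minimal sketch of a signal-plus-noise data model: each example has a
    # signal patch y * mu and a pure-noise patch, and a small two-layer
    # CNN-style model is trained by full-batch gradient descent.
    # All concrete choices here are illustrative assumptions.

    rng = np.random.default_rng(0)
    n, d, m = 50, 200, 10          # samples, patch dimension, filters per class
    snr = 2.0                      # signal strength relative to unit noise
    mu = np.zeros(d)
    mu[0] = snr                    # fixed signal direction

    y = rng.choice([-1.0, 1.0], size=n)
    noise = rng.normal(0.0, 1.0, size=(n, d))
    X = np.stack([y[:, None] * mu, noise], axis=1)      # shape (n, 2, d)

    # Shared filters applied to both patches; the second layer is fixed to
    # +1 for one group of filters and -1 for the other.
    W_pos = 0.01 * rng.normal(size=(m, d))
    W_neg = 0.01 * rng.normal(size=(m, d))

    def act(z):                    # cubic "polynomial ReLU" (an assumption)
        return np.maximum(z, 0.0) ** 3

    def act_grad(z):
        return 3.0 * np.maximum(z, 0.0) ** 2

    def forward(X):
        s_pos = act(X @ W_pos.T).sum(axis=(1, 2))       # responses voting +1
        s_neg = act(X @ W_neg.T).sum(axis=(1, 2))       # responses voting -1
        return (s_pos - s_neg) / m

    lr = 0.1
    for step in range(500):
        f = forward(X)
        margin = np.clip(y * f, -30.0, 30.0)
        g = -y / (1.0 + np.exp(margin))                 # d(logistic loss)/df
        pre_pos, pre_neg = X @ W_pos.T, X @ W_neg.T     # (n, 2, m) pre-activations
        W_pos -= lr * np.einsum('npm,npd->md',
                                g[:, None, None] * act_grad(pre_pos), X) / (n * m)
        W_neg -= lr * np.einsum('npm,npd->md',
                                -g[:, None, None] * act_grad(pre_neg), X) / (n * m)

    print("training error:", np.mean(np.sign(forward(X)) != y))

    # Fresh draw from the same distribution to gauge generalization.
    y_te = rng.choice([-1.0, 1.0], size=n)
    X_te = np.stack([y_te[:, None] * mu,
                     rng.normal(0.0, 1.0, size=(n, d))], axis=1)
    print("test error:", np.mean(np.sign(forward(X_te)) != y_te))

Sweeping snr with n and d held fixed is one crude way to probe the transition the abstract describes; the paper's actual condition couples sample size, dimension, and signal strength more precisely.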


Persistent Identifier: http://hdl.handle.net/10722/338368

 

DC Field                 Value                                                           Language
dc.contributor.author    Cao, Yuan                                                       -
dc.contributor.author    Chen, Zixiang                                                   -
dc.contributor.author    Belkin, Mikhail                                                 -
dc.contributor.author    Gu, Quanquan                                                    -
dc.date.accessioned      2024-03-11T10:28:20Z                                            -
dc.date.available        2024-03-11T10:28:20Z                                            -
dc.date.issued           2022-12-09                                                      -
dc.identifier.uri        http://hdl.handle.net/10722/338368                              -
dc.language              eng                                                             -
dc.relation.ispartof     NeurIPS 2022 (28/11/2022-09/12/2022, New Orleans)               -
dc.title                 Benign Overfitting in Two-layer Convolutional Neural Networks   -
dc.type                  Conference_Paper                                                -
