Conference Paper: Norm-Based Generalisation Bounds for Deep Multi-Class Convolutional Neural Networks

Title: Norm-Based Generalisation Bounds for Deep Multi-Class Convolutional Neural Networks
Authors: Ledent, Antoine; Mustafa, Waleed; Lei, Yunwen; Kloft, Marius
Issue Date: 2021
Citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 9B, p. 8279-8287
Abstract: We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the Frobenius-norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes. (2) We adapt the classic Rademacher analysis of DNNs to incorporate weight sharing—a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Further improvements exploiting pooling and sparse connections are provided. The presented bounds scale as the norms of the parameter matrices, rather than the number of parameters. In particular, contrary to bounds based on parameter counting, they are asymptotically tight (up to log factors) when the weights approach initialisation, making them suitable as a basic ingredient in bounds sensitive to the optimisation procedure. We also show how to adapt the recent technique of loss function augmentation to replace spectral norms by empirical analogues whilst maintaining the advantages of our approach.
Persistent Identifier: http://hdl.handle.net/10722/329780
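As a rough, illustrative sketch only (not the theorem proved in the paper), bounds of the family described in the abstract relate the population misclassification risk to the empirical margin loss plus a complexity term that scales with norms of the weight matrices rather than with parameter counts, and carries at most logarithmic dependence on the number of classes. All symbols below (L layers, weight matrices A_l, reference matrices M_l, margin gamma, sample size n) are generic placeholders assumed for illustration.

% Illustrative shape of a norm-based generalisation bound; a schematic only,
% not the exact statement from this paper.  All symbols are placeholders:
% f_A is an L-layer network with weight matrices A_1,...,A_L, the M_l are fixed
% reference (e.g. initialisation) matrices, gamma is the margin, n the sample
% size; \widehat{R}_\gamma denotes the empirical margin loss and \widetilde{O}
% hides logarithmic factors (including in the number of classes).
\[
  \Pr\Bigl[\arg\max_{y'} f_A(x)_{y'} \neq y\Bigr]
  \;\le\;
  \widehat{R}_\gamma(f_A)
  \;+\;
  \widetilde{O}\!\left(
    \frac{\prod_{l=1}^{L}\lVert A_l\rVert_{\sigma}
          \left(\sum_{l=1}^{L}
            \frac{\lVert A_l - M_l\rVert_F^{2/3}}{\lVert A_l\rVert_{\sigma}^{2/3}}
          \right)^{3/2}}
         {\gamma\sqrt{n}}
  \right).
\]

In this schematic, the complexity term vanishes as the weights A_l approach the reference matrices M_l, which is the sense in which such bounds are tight near initialisation; the paper's contributions additionally ensure that each convolutional filter enters the norms only once, independent of the number of spatial positions at which it is applied.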

 

DC Field: Value
dc.contributor.author: Ledent, Antoine
dc.contributor.author: Mustafa, Waleed
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Kloft, Marius
dc.date.accessioned: 2023-08-09T03:35:17Z
dc.date.available: 2023-08-09T03:35:17Z
dc.date.issued: 2021
dc.identifier.citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 9B, p. 8279-8287
dc.identifier.uri: http://hdl.handle.net/10722/329780
dc.description.abstract: We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the Frobenius-norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes. (2) We adapt the classic Rademacher analysis of DNNs to incorporate weight sharing—a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Further improvements exploiting pooling and sparse connections are provided. The presented bounds scale as the norms of the parameter matrices, rather than the number of parameters. In particular, contrary to bounds based on parameter counting, they are asymptotically tight (up to log factors) when the weights approach initialisation, making them suitable as a basic ingredient in bounds sensitive to the optimisation procedure. We also show how to adapt the recent technique of loss function augmentation to replace spectral norms by empirical analogues whilst maintaining the advantages of our approach.
dc.language: eng
dc.relation.ispartof: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
dc.title: Norm-Based Generalisation Bounds for Deep Multi-Class Convolutional Neural Networks
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85125029441
dc.identifier.volume: 9B
dc.identifier.spage: 8279
dc.identifier.epage: 8287
