Conference Paper: GRADIENTS AS FEATURES FOR DEEP REPRESENTATION LEARNING

Title: GRADIENTS AS FEATURES FOR DEEP REPRESENTATION LEARNING
Authors: Mu, Fangzhou; Liang, Yingyu; Li, Yin
Issue Date: 2020
Citation: 8th International Conference on Learning Representations, ICLR 2020, 2020
Abstract: We address the challenging problem of deep representation learning - the efficient adaption of a pre-trained deep network to different tasks. Specifically, we propose to explore gradient-based features. These features are gradients of the model parameters with respect to a task-specific loss given an input sample. Our key innovation is the design of a linear model that incorporates both gradient and activation of the pre-trained network. We demonstrate that our model provides a local linear approximation to an underlying deep model, and discuss important theoretical insights. Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients. Our method is evaluated across a number of representation-learning tasks on several datasets and using different network architectures. Strong results are obtained in all settings, and are well-aligned with our theoretical insights.
Persistent Identifier: http://hdl.handle.net/10722/341298
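
The abstract's central idea admits a compact illustration: treat the per-sample gradient of a task-specific loss with respect to (a subset of) the pre-trained parameters as a feature vector, concatenate it with the network's activation, and fit a linear model on top. The sketch below is not the authors' code; the toy network, the choice of differentiating only the head's parameters, and the placeholder label used to define the per-sample loss are all illustrative assumptions.

```python
# Minimal sketch of the "gradients as features" idea (illustrative only;
# not the authors' implementation).
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Stand-in for a pre-trained network with a task head."""
    def __init__(self, in_dim=32, hid_dim=16, out_dim=10):
        super().__init__()
        self.backbone = nn.Linear(in_dim, hid_dim)   # pre-trained, kept frozen
        self.head = nn.Linear(hid_dim, out_dim)      # task-specific output layer

    def forward(self, x):
        act = torch.relu(self.backbone(x))           # activation feature
        return act, self.head(act)

def gradient_feature(net, x, y, loss_fn=nn.CrossEntropyLoss()):
    """Feature = [activation ; gradient of the per-sample loss w.r.t. head params]."""
    act, logits = net(x.unsqueeze(0))
    loss = loss_fn(logits, y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(net.head.parameters()))
    grad_vec = torch.cat([g.flatten() for g in grads])
    return torch.cat([act.flatten(), grad_vec]).detach()

net = TinyNet()
x, y = torch.randn(32), torch.tensor(3)          # y is a placeholder label here
feat = gradient_feature(net, x, y)

# The new task is then solved by a linear model on the combined feature:
linear_model = nn.Linear(feat.numel(), 10)
print(linear_model(feat).shape)                  # torch.Size([10])
```

As the abstract notes, a linear model built on the combined activation and gradient features acts as a local linear approximation of the underlying deep model around its pre-trained weights, which is what motivates keeping both terms in the feature.
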

 

Dublin Core record (DC Field: Value; the Language column is empty for every row and is omitted)

dc.contributor.author: Mu, Fangzhou
dc.contributor.author: Liang, Yingyu
dc.contributor.author: Li, Yin
dc.date.accessioned: 2024-03-13T08:41:43Z
dc.date.available: 2024-03-13T08:41:43Z
dc.date.issued: 2020
dc.identifier.citation: 8th International Conference on Learning Representations, ICLR 2020, 2020
dc.identifier.uri: http://hdl.handle.net/10722/341298
dc.description.abstract: We address the challenging problem of deep representation learning - the efficient adaption of a pre-trained deep network to different tasks. Specifically, we propose to explore gradient-based features. These features are gradients of the model parameters with respect to a task-specific loss given an input sample. Our key innovation is the design of a linear model that incorporates both gradient and activation of the pre-trained network. We demonstrate that our model provides a local linear approximation to an underlying deep model, and discuss important theoretical insights. Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients. Our method is evaluated across a number of representation-learning tasks on several datasets and using different network architectures. Strong results are obtained in all settings, and are well-aligned with our theoretical insights.
dc.language: eng
dc.relation.ispartof: 8th International Conference on Learning Representations, ICLR 2020
dc.title: GRADIENTS AS FEATURES FOR DEEP REPRESENTATION LEARNING
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85101113031
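
The abstract also claims that training and inference can be carried out without computing the actual gradients. One way to see why this is plausible (a hedged reading, not necessarily the paper's algorithm): a linear model over gradient features only ever needs inner products of the form w · ∇θL(x), and such an inner product is a directional derivative that a single forward-mode Jacobian-vector product can evaluate without materializing the per-sample gradient. The sketch below checks that identity with torch.func; the network and the direction w are illustrative assumptions.

```python
# Hedged sketch: the inner product <w, grad_theta L(x)> equals a directional
# derivative, so one JVP evaluates it without storing the per-sample gradient.
# (Illustration only; the paper's actual gradient-free algorithm may differ.)
import torch
import torch.nn as nn
from torch.func import functional_call, grad, jvp

net = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
params = {k: v.detach() for k, v in net.named_parameters()}
loss_fn = nn.CrossEntropyLoss()

def loss_of_params(p, x, y):
    # Loss as a function of the parameters, evaluated on one sample.
    return loss_fn(functional_call(net, p, (x,)), y)

x, y = torch.randn(1, 32), torch.tensor([3])
w = {k: torch.randn_like(v) for k, v in params.items()}   # direction in parameter space

# Explicit route: materialize the gradient, then take the inner product.
g = grad(loss_of_params)(params, x, y)
explicit = sum((g[k] * w[k]).sum() for k in params)

# Implicit route: a single forward-mode Jacobian-vector product.
_, via_jvp = jvp(lambda p: loss_of_params(p, x, y), (params,), (w,))

print(torch.allclose(explicit, via_jvp, atol=1e-5))        # True
```
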
