Conference Paper: Multi-bias non-linear activation in deep neural networks

Title: Multi-bias non-linear activation in deep neural networks
Authors: Li, Hongyang; Ouyang, Wanli; Wang, Xiaogang
Issue Date: 2016
Citation: 33rd International Conference on Machine Learning, ICML 2016, 2016, v. 1, p. 365-373
Abstract: As a widely used non-linear activation, Rectified Linear Unit (ReLU) separates noise and signal in a feature map by learning a threshold or bias. However, we argue that the classification of noise and signal depends not only on the magnitude of responses, but also on the context of how the feature responses would be used to detect more abstract patterns in higher layers. In order to output multiple response maps with magnitude in different ranges for a particular visual pattern, existing networks employing ReLU and its variants have to learn a large number of redundant filters. In this paper, we propose a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses. It is placed after the convolution layer to decouple the responses to a convolution kernel into multiple maps by multi-thresholding magnitudes, thus generating more patterns in the feature space at a low computational cost. It provides great flexibility of selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. Such a simple and yet effective scheme achieves the state-of-the-art performance on several benchmarks.
Persistent Identifier: http://hdl.handle.net/10722/351365
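
The abstract describes the MBA layer as a bank of learnable biases applied to each convolution response, with every biased copy thresholded separately so that one kernel yields several maps covering different magnitude ranges. Below is a minimal sketch of that idea, assuming a PyTorch-style module; the class name MultiBiasActivation, the default of four biases, and the zero initialisation are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class MultiBiasActivation(nn.Module):
    """Sketch of a multi-bias non-linear activation (MBA) layer.

    Each input feature map is shifted by K learnable biases and each
    shifted copy is thresholded by ReLU, so one convolution response
    yields K output maps covering different magnitude ranges.
    """

    def __init__(self, in_channels: int, num_biases: int = 4):
        super().__init__()
        # One learnable bias per (channel, copy); zero init is an assumption.
        self.bias = nn.Parameter(torch.zeros(in_channels, num_biases))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> (N, C * K, H, W)
        n, c, h, w = x.shape
        k = self.bias.shape[1]
        shifted = x.unsqueeze(2) + self.bias.view(1, c, k, 1, 1)
        return torch.relu(shifted).reshape(n, c * k, h, w)


# Usage: place after a convolution; the following layer then sees C * K maps.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
mba = MultiBiasActivation(in_channels=16, num_biases=4)
y = mba(conv(torch.randn(1, 3, 32, 32)))   # y.shape == (1, 64, 32, 32)
```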


DC Field | Value | Language
dc.contributor.author | Li, Hongyang | -
dc.contributor.author | Ouyang, Wanli | -
dc.contributor.author | Wang, Xiaogang | -
dc.date.accessioned | 2024-11-20T03:55:51Z | -
dc.date.available | 2024-11-20T03:55:51Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | 33rd International Conference on Machine Learning, ICML 2016, 2016, v. 1, p. 365-373 | -
dc.identifier.uri | http://hdl.handle.net/10722/351365 | -
dc.description.abstract | As a widely used non-linear activation, Rectified Linear Unit (ReLU) separates noise and signal in a feature map by learning a threshold or bias. However, we argue that the classification of noise and signal depends not only on the magnitude of responses, but also on the context of how the feature responses would be used to detect more abstract patterns in higher layers. In order to output multiple response maps with magnitude in different ranges for a particular visual pattern, existing networks employing ReLU and its variants have to learn a large number of redundant filters. In this paper, we propose a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses. It is placed after the convolution layer to decouple the responses to a convolution kernel into multiple maps by multi-thresholding magnitudes, thus generating more patterns in the feature space at a low computational cost. It provides great flexibility of selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. Such a simple and yet effective scheme achieves the state-of-the-art performance on several benchmarks. | -
dc.language | eng | -
dc.relation.ispartof | 33rd International Conference on Machine Learning, ICML 2016 | -
dc.title | Multi-bias non-linear activation in deep neural networks | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-84997638302 | -
dc.identifier.volume | 1 | -
dc.identifier.spage | 365 | -
dc.identifier.epage | 373 | -
