Conference Paper: Multi-bias non-linear activation in deep neural networks
Field | Value |
---|---|
Title | Multi-bias non-linear activation in deep neural networks |
Authors | Li, Hongyang; Ouyang, Wanli; Wang, Xiaogang |
Issue Date | 2016 |
Citation | 33rd International Conference on Machine Learning, ICML 2016, 2016, v. 1, p. 365-373 |
Abstract | As a widely used non-linear activation, the Rectified Linear Unit (ReLU) separates noise and signal in a feature map by learning a threshold or bias. However, we argue that the classification of noise and signal not only depends on the magnitude of responses, but also on the context of how the feature responses would be used to detect more abstract patterns in higher layers. In order to output multiple response maps with magnitudes in different ranges for a particular visual pattern, existing networks employing ReLU and its variants have to learn a large number of redundant filters. In this paper, we propose a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses. It is placed after the convolution layer to decouple the responses to a convolution kernel into multiple maps by multi-thresholding magnitudes, thus generating more patterns in the feature space at a low computational cost. It provides great flexibility in selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. Such a simple yet effective scheme achieves state-of-the-art performance on several benchmarks. |
Persistent Identifier | http://hdl.handle.net/10722/351365 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Hongyang | - |
dc.contributor.author | Ouyang, Wanli | - |
dc.contributor.author | Wang, Xiaogang | - |
dc.date.accessioned | 2024-11-20T03:55:51Z | - |
dc.date.available | 2024-11-20T03:55:51Z | - |
dc.date.issued | 2016 | - |
dc.identifier.citation | 33rd International Conference on Machine Learning, ICML 2016, 2016, v. 1, p. 365-373 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351365 | - |
dc.description.abstract | As a widely used non-linear activation, the Rectified Linear Unit (ReLU) separates noise and signal in a feature map by learning a threshold or bias. However, we argue that the classification of noise and signal not only depends on the magnitude of responses, but also on the context of how the feature responses would be used to detect more abstract patterns in higher layers. In order to output multiple response maps with magnitudes in different ranges for a particular visual pattern, existing networks employing ReLU and its variants have to learn a large number of redundant filters. In this paper, we propose a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses. It is placed after the convolution layer to decouple the responses to a convolution kernel into multiple maps by multi-thresholding magnitudes, thus generating more patterns in the feature space at a low computational cost. It provides great flexibility in selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. Such a simple yet effective scheme achieves state-of-the-art performance on several benchmarks. | -
dc.language | eng | - |
dc.relation.ispartof | 33rd International Conference on Machine Learning, ICML 2016 | - |
dc.title | Multi-bias non-linear activation in deep neural networks | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-84997638302 | - |
dc.identifier.volume | 1 | - |
dc.identifier.spage | 365 | - |
dc.identifier.epage | 373 | - |
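
The abstract describes the MBA layer as a module placed after a convolution layer that decouples each response map into several maps by applying multiple learned biases (thresholds) before the non-linearity. The snippet below is a minimal PyTorch sketch of that reading, not the authors' implementation; the class name `MultiBiasActivation`, the default of four biases per channel, and the zero initialisation are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiBiasActivation(nn.Module):
    """Illustrative sketch of a multi-bias non-linear activation (MBA) layer.

    Each of the C input feature maps is shifted by K learnable biases and
    passed through ReLU, yielding C * K output maps. This is an
    assumption-based reading of the abstract, not the paper's reference code.
    """

    def __init__(self, num_channels: int, num_biases: int = 4):
        super().__init__()
        # one learnable bias per (channel, copy), initialised to zero
        self.bias = nn.Parameter(torch.zeros(num_channels, num_biases))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) responses from the preceding convolution layer
        n, c, h, w = x.shape
        k = self.bias.shape[1]
        # replicate each map K times, add a distinct bias, then threshold
        shifted = x.unsqueeze(2) + self.bias.view(1, c, k, 1, 1)
        return torch.relu(shifted).reshape(n, c * k, h, w)


# usage: decouple a 64-channel convolution output into 64 * 4 = 256 maps
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
mba = MultiBiasActivation(num_channels=64, num_biases=4)
features = mba(conv(torch.randn(1, 3, 32, 32)))  # -> (1, 256, 32, 32)
```

Because the layer multiplies the channel count by the number of biases, any convolution that follows it would need its input channel count scaled accordingly.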