Links for fulltext (May Require Subscription)
- Publisher Website: DOI 10.1109/TIP.2021.3124317
- Scopus: eid_2-s2.0-85118666282
- PMID: 34739378
- WOS: WOS:000717767800003
Article: Deep unsupervised active learning via matrix sketching
Title | Deep unsupervised active learning via matrix sketching |
---|---|
Authors | Li, Changsheng; Li, Rongqing; Yuan, Ye; Wang, Guoren; Xu, Dong |
Keywords | Data reconstruction; Matrix sketching; Self-supervised learning; Unsupervised active learning |
Issue Date | 2021 |
Citation | IEEE Transactions on Image Processing, 2021, v. 30, p. 9280-9293 |
Abstract | Most existing unsupervised active learning methods aim at minimizing the data reconstruction loss by using linear models to choose representative samples for manual labeling in an unsupervised setting. Thus, these methods often fail to model data with complex non-linear structures. To address this issue, we propose a new deep unsupervised Active Learning method for classification tasks, inspired by the idea of Matrix Sketching, called ALMS. Specifically, ALMS leverages a deep auto-encoder to embed data into a latent space, and then describes all the embedded data with a small-sized sketch that summarizes the major characteristics of the data. In contrast to previous approaches that reconstruct the whole data matrix when selecting representative samples, ALMS aims to select a representative subset of samples that well approximates the sketch, which preserves the major information of the data while significantly reducing the number of network parameters. This enables our algorithm to alleviate model overfitting and readily cope with large datasets. In effect, the sketch provides a type of self-supervised signal to guide the learning of the model. Moreover, we propose to construct an auxiliary self-supervised task of classifying real/fake samples in order to further improve the representation ability of the encoder. We thoroughly evaluate the performance of ALMS on both single-label and multi-label classification tasks, and the results demonstrate its superior performance against state-of-the-art methods. The code can be found at https://github.com/lrq99/ALMS. |
Persistent Identifier | http://hdl.handle.net/10722/321968 |
ISSN | 1057-7149 (2023 Impact Factor: 10.8; 2023 SCImago Journal Rankings: 3.556) |
ISI Accession Number ID | WOS:000717767800003 |
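The matrix-sketching step described in the abstract — compressing a large embedded data matrix into a small sketch that preserves its major characteristics — can be illustrated with the Frequent Directions algorithm, a standard streaming sketch. This is a minimal illustrative example under that assumption; the paper's own sketch construction and the `frequent_directions` function name here are not taken from the source:

```python
import numpy as np

def frequent_directions(X, ell):
    """Compress an n x d matrix X into an ell x d sketch B such that
    B^T B approximates X^T X (Frequent Directions streaming sketch)."""
    n, d = X.shape
    B = np.zeros((ell, d))
    for row in X:
        zero_rows = np.where(~B.any(axis=1))[0]
        if len(zero_rows) == 0:
            # Sketch is full: shrink all rows by subtracting the squared
            # median singular value, which zeroes out the bottom half.
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2
            s_shrunk = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s_shrunk[:, None] * Vt
            zero_rows = np.where(~B.any(axis=1))[0]
        # Insert the incoming row into the first empty slot.
        B[zero_rows[0]] = row
    return B
```

The sketch `B` has only `ell` rows, so downstream selection can operate on it instead of the full data matrix, which is what makes the parameter and memory savings described in the abstract possible.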
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Changsheng | - |
dc.contributor.author | Li, Rongqing | - |
dc.contributor.author | Yuan, Ye | - |
dc.contributor.author | Wang, Guoren | - |
dc.contributor.author | Xu, Dong | - |
dc.date.accessioned | 2022-11-03T02:22:42Z | - |
dc.date.available | 2022-11-03T02:22:42Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE Transactions on Image Processing, 2021, v. 30, p. 9280-9293 | - |
dc.identifier.issn | 1057-7149 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321968 | - |
dc.description.abstract | Most existing unsupervised active learning methods aim at minimizing the data reconstruction loss by using linear models to choose representative samples for manual labeling in an unsupervised setting. Thus, these methods often fail to model data with complex non-linear structures. To address this issue, we propose a new deep unsupervised Active Learning method for classification tasks, inspired by the idea of Matrix Sketching, called ALMS. Specifically, ALMS leverages a deep auto-encoder to embed data into a latent space, and then describes all the embedded data with a small-sized sketch that summarizes the major characteristics of the data. In contrast to previous approaches that reconstruct the whole data matrix when selecting representative samples, ALMS aims to select a representative subset of samples that well approximates the sketch, which preserves the major information of the data while significantly reducing the number of network parameters. This enables our algorithm to alleviate model overfitting and readily cope with large datasets. In effect, the sketch provides a type of self-supervised signal to guide the learning of the model. Moreover, we propose to construct an auxiliary self-supervised task of classifying real/fake samples in order to further improve the representation ability of the encoder. We thoroughly evaluate the performance of ALMS on both single-label and multi-label classification tasks, and the results demonstrate its superior performance against state-of-the-art methods. The code can be found at https://github.com/lrq99/ALMS. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Image Processing | - |
dc.subject | Data reconstruction | - |
dc.subject | Matrix sketching | - |
dc.subject | Self-supervised learning | - |
dc.subject | Unsupervised active learning | - |
dc.title | Deep unsupervised active learning via matrix sketching | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TIP.2021.3124317 | - |
dc.identifier.pmid | 34739378 | - |
dc.identifier.scopus | eid_2-s2.0-85118666282 | - |
dc.identifier.volume | 30 | - |
dc.identifier.spage | 9280 | - |
dc.identifier.epage | 9293 | - |
dc.identifier.eissn | 1941-0042 | - |
dc.identifier.isi | WOS:000717767800003 | - |