Article: Multi-component transfer metric learning for handling unrelated source domain samples

Title: Multi-component transfer metric learning for handling unrelated source domain samples
Authors: Yi, C; Xu, Y; Yu, H; Yan, Y; Liu, Y
Keywords: Transfer learning; Metric learning; Component; Mahalanobis distance; Weight matrix
Issue Date: 2020
Publisher: Elsevier BV. The Journal's web site is located at http://www.elsevier.com/locate/knosys
Citation: Knowledge-Based Systems, 2020, v. 203, p. article no. 106132
Abstract: Transfer learning (TL) is a machine learning paradigm designed for problems where the training and test data come from different domains. Existing TL approaches mostly assume that training data from the source domain are collected from multiple views or devices. However, in practical applications, a sample in a target domain often corresponds to only a specific view or device. Without the ability to mitigate the influence of the many unrelated samples, the performance of existing TL approaches may deteriorate for such learning tasks. This problem is exacerbated if the intrinsic relationships among the source domain samples are unclear. Currently, there is no mechanism for determining the intrinsic characteristics of samples so that they can be treated differently during TL. Source domain samples that are not related to the test data not only incur computational overhead, but may also result in negative transfer. We propose the multi-component transfer metric learning (MCTML) method to address this challenging research problem. Unlike previous metric-based transfer learning approaches, which use a single metric to transform all the samples, MCTML automatically extracts distinct components from the source domain and learns one metric for each component. For each component, MCTML learns its importance in terms of predictive power based on the Mahalanobis distance metric. The optimized combination of components is then used to predict the test data collaboratively. Extensive experiments on public datasets demonstrate its effectiveness in knowledge transfer under this challenging condition.
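The abstract's core idea — a separate Mahalanobis metric per source-domain component, with components weighted by predictive importance when predicting test data — can be illustrated with a minimal sketch. This is not the paper's MCTML optimization (which learns the metrics and weights jointly); here the per-component metric is approximated by a regularized inverse covariance, the weights are supplied by the caller, and all function names (`fit_component_metrics`, `predict`) are hypothetical stand-ins:

```python
import numpy as np

def mahalanobis(x, y, M):
    # Mahalanobis distance between x and y under metric matrix M.
    d = x - y
    return float(np.sqrt(d @ M @ d))

def fit_component_metrics(X_src, comp_labels, reg=1e-3):
    # One metric per source component: the inverse of the regularized
    # within-component covariance (a simple stand-in for a learned metric).
    metrics = {}
    for c in np.unique(comp_labels):
        Xc = X_src[comp_labels == c]
        cov = np.cov(Xc, rowvar=False) + reg * np.eye(X_src.shape[1])
        metrics[c] = np.linalg.inv(cov)
    return metrics

def predict(x, X_src, y_src, comp_labels, metrics, weights):
    # Weighted nearest-neighbour vote: each component proposes the label of
    # its closest source sample under its own metric; votes are combined
    # with the per-component importance weights.
    votes = {}
    for c, M in metrics.items():
        idx = np.where(comp_labels == c)[0]
        dists = [mahalanobis(x, X_src[i], M) for i in idx]
        label = y_src[idx[int(np.argmin(dists))]]
        votes[label] = votes.get(label, 0.0) + weights[c]
    return max(votes, key=votes.get)
```

With two well-separated toy components, down-weighting the unrelated component lets the relevant one dominate the vote — the intuition behind suppressing unrelated source samples.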
Persistent Identifier: http://hdl.handle.net/10722/294308
ISSN: 0950-7051
2021 Impact Factor: 8.139
2020 SCImago Journal Rankings: 1.587
ISI Accession Number ID: WOS:000552126200022

 

DC Field | Value | Language
dc.contributor.author | Yi, C | -
dc.contributor.author | Xu, Y | -
dc.contributor.author | Yu, H | -
dc.contributor.author | Yan, Y | -
dc.contributor.author | Liu, Y | -
dc.date.accessioned | 2020-11-23T08:29:31Z | -
dc.date.available | 2020-11-23T08:29:31Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Knowledge-Based Systems, 2020, v. 203, p. article no. 106132 | -
dc.identifier.issn | 0950-7051 | -
dc.identifier.uri | http://hdl.handle.net/10722/294308 | -
dc.description.abstract | Transfer learning (TL) is a machine learning paradigm designed for problems where the training and test data come from different domains. Existing TL approaches mostly assume that training data from the source domain are collected from multiple views or devices. However, in practical applications, a sample in a target domain often corresponds to only a specific view or device. Without the ability to mitigate the influence of the many unrelated samples, the performance of existing TL approaches may deteriorate for such learning tasks. This problem is exacerbated if the intrinsic relationships among the source domain samples are unclear. Currently, there is no mechanism for determining the intrinsic characteristics of samples so that they can be treated differently during TL. Source domain samples that are not related to the test data not only incur computational overhead, but may also result in negative transfer. We propose the multi-component transfer metric learning (MCTML) method to address this challenging research problem. Unlike previous metric-based transfer learning approaches, which use a single metric to transform all the samples, MCTML automatically extracts distinct components from the source domain and learns one metric for each component. For each component, MCTML learns its importance in terms of predictive power based on the Mahalanobis distance metric. The optimized combination of components is then used to predict the test data collaboratively. Extensive experiments on public datasets demonstrate its effectiveness in knowledge transfer under this challenging condition. | -
dc.language | eng | -
dc.publisher | Elsevier BV. The Journal's web site is located at http://www.elsevier.com/locate/knosys | -
dc.relation.ispartof | Knowledge-Based Systems | -
dc.subject | Transfer learning | -
dc.subject | Metric learning | -
dc.subject | Component | -
dc.subject | Mahalanobis distance | -
dc.subject | Weight matrix | -
dc.title | Multi-component transfer metric learning for handling unrelated source domain samples | -
dc.type | Article | -
dc.identifier.email | Yan, Y: ygyan@hku.hk | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1016/j.knosys.2020.106132 | -
dc.identifier.scopus | eid_2-s2.0-85086630326 | -
dc.identifier.hkuros | 319011 | -
dc.identifier.volume | 203 | -
dc.identifier.spage | article no. 106132 | -
dc.identifier.epage | article no. 106132 | -
dc.identifier.isi | WOS:000552126200022 | -
dc.publisher.place | Netherlands | -
dc.identifier.issnl | 0950-7051 | -
