Article: Domain Adversarial Reinforcement Learning for Partial Domain Adaptation

Title: Domain Adversarial Reinforcement Learning for Partial Domain Adaptation
Authors: Chen, Jin; Wu, Xinxiao; Duan, Lixin; Gao, Shenghua
Keywords: Adversarial learning; partial domain adaptation; reinforcement learning
Issue Date: 2022
Citation: IEEE Transactions on Neural Networks and Learning Systems, 2022, v. 33, n. 2, p. 539-553
Abstract: Partial domain adaptation aims to transfer knowledge from a label-rich source domain to a label-scarce target domain (i.e., the target categories are a subset of the source ones), which relaxes the common assumption in traditional domain adaptation that the label space is fully shared across different domains. In this more general and practical scenario of partial domain adaptation, a major challenge is how to select source instances from the shared categories to ensure positive transfer for the target domain. To address this problem, we propose a domain adversarial reinforcement learning (DARL) framework that progressively selects source instances to learn transferable features between domains by reducing the domain shift. Specifically, we employ deep Q-learning to learn policies for an agent to make selection decisions by approximating the action-value function. Moreover, domain adversarial learning is introduced to learn a common feature subspace for the selected source instances and the target instances, and also to contribute to the reward calculation for the agent, which is based on the relevance of the selected source instances with respect to the target domain. Extensive experiments on several benchmark data sets clearly demonstrate the superior performance of our proposed DARL over existing state-of-the-art methods for partial domain adaptation.
Persistent Identifier: http://hdl.handle.net/10722/345165
ISSN: 2162-237X
2023 Impact Factor: 10.2
2023 SCImago Journal Rankings: 4.170
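
The abstract above outlines the DARL pipeline: a deep Q-learning agent selects source instances, a domain discriminator is trained adversarially on the selected source features versus target features, and the agent's reward reflects how target-relevant the selected instances are. The following is a minimal, hypothetical PyTorch sketch of that selection-plus-reward idea only; all names (QNet, Discriminator, select_source_batch, relevance_reward) and hyperparameters are illustrative assumptions, not the authors' released code, and the full method (replay buffer, target network, adversarial feature training, classifier) is omitted.

```python
# Hedged sketch of the DARL selection/reward idea described in the abstract.
# Assumptions: features have already been extracted by a backbone network;
# the agent chooses, per source instance, action 0 = skip or 1 = select.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Deep Q-network: maps a source feature to Q-values for {skip, select}."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Domain discriminator: probability that a feature comes from the source domain."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def select_source_batch(q_net, src_feats, epsilon=0.1):
    """Epsilon-greedy action selection over a batch of source features."""
    with torch.no_grad():
        q_values = q_net(src_feats)                  # [N, 2]
        greedy = q_values.argmax(dim=1)              # greedy select/skip decisions
        explore = torch.randint(0, 2, greedy.shape)  # random actions for exploration
        mask = torch.rand(greedy.shape) < epsilon
        actions = torch.where(mask, explore, greedy)
    return actions.bool()                            # True = instance selected

def relevance_reward(disc, selected_src_feats):
    """Reward for the agent: selected source instances that the discriminator
    finds target-like (low source probability) are treated as more relevant."""
    with torch.no_grad():
        p_src = disc(selected_src_feats).squeeze(1)
    return (1.0 - p_src).mean()

# Example usage with random features standing in for backbone outputs.
feat_dim = 64
q_net, disc = QNet(feat_dim), Discriminator(feat_dim)
src_feats = torch.randn(32, feat_dim)
selected = select_source_batch(q_net, src_feats)
reward = relevance_reward(disc, src_feats[selected])
```

In this sketch the discriminator plays two roles, mirroring the abstract: it would be trained adversarially against the feature extractor on selected-source versus target features, and its output is reused to score how relevant each selected source instance is to the target domain.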


DC metadata record (field: value)
dc.contributor.author: Chen, Jin
dc.contributor.author: Wu, Xinxiao
dc.contributor.author: Duan, Lixin
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2024-08-15T09:25:39Z
dc.date.available: 2024-08-15T09:25:39Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Neural Networks and Learning Systems, 2022, v. 33, n. 2, p. 539-553
dc.identifier.issn: 2162-237X
dc.identifier.uri: http://hdl.handle.net/10722/345165
dc.description.abstract: Partial domain adaptation aims to transfer knowledge from a label-rich source domain to a label-scarce target domain (i.e., the target categories are a subset of the source ones), which relaxes the common assumption in traditional domain adaptation that the label space is fully shared across different domains. In this more general and practical scenario of partial domain adaptation, a major challenge is how to select source instances from the shared categories to ensure positive transfer for the target domain. To address this problem, we propose a domain adversarial reinforcement learning (DARL) framework that progressively selects source instances to learn transferable features between domains by reducing the domain shift. Specifically, we employ deep Q-learning to learn policies for an agent to make selection decisions by approximating the action-value function. Moreover, domain adversarial learning is introduced to learn a common feature subspace for the selected source instances and the target instances, and also to contribute to the reward calculation for the agent, which is based on the relevance of the selected source instances with respect to the target domain. Extensive experiments on several benchmark data sets clearly demonstrate the superior performance of our proposed DARL over existing state-of-the-art methods for partial domain adaptation.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Neural Networks and Learning Systems
dc.subject: Adversarial learning
dc.subject: partial domain adaptation
dc.subject: reinforcement learning
dc.title: Domain Adversarial Reinforcement Learning for Partial Domain Adaptation
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TNNLS.2020.3028078
dc.identifier.pmid: 33064659
dc.identifier.scopus: eid_2-s2.0-85124053103
dc.identifier.volume: 33
dc.identifier.issue: 2
dc.identifier.spage: 539
dc.identifier.epage: 553
dc.identifier.eissn: 2162-2388
