Conference Paper: Dynamic scholarly collaborator recommendation via competitive multi-agent reinforcement learning

Title: Dynamic scholarly collaborator recommendation via competitive multi-agent reinforcement learning
Authors: Zhang, Yang; Zhang, Chenwei; Liu, Xiaozhong
Keywords: Multi-agent
Competition
Collaborator recommendation
Reinforcement learning
Dynamic
Issue Date: 2017
Citation: RecSys 2017 - Proceedings of the 11th ACM Conference on Recommender Systems, 2017, p. 331-335
Abstract: © 2017 ACM. In an interdisciplinary environment, scientific collaboration is becoming increasingly important. Helping scholars make the right choice of potential collaborators is essential to achieving scientific success. Intuitively, the formation of collaboration relationships is a dynamic process. For instance, a scholar may first choose to work with Scholar A, and then work with Scholar B after accumulating additional academic credit. To address this property, we propose a novel dynamic collaboration recommendation method that adapts multi-agent reinforcement learning to coauthor network analysis. Collaborator selection is optimized over several different scholar similarity measurements. Unlike prior studies, the proposed method models scholarly competition, i.e., different scholars compete for potential collaborators at each iteration. An evaluation on ACM data shows that multi-agent reinforcement learning combined with scholarly competition modeling can significantly benefit collaboration recommendation.
Persistent Identifier: http://hdl.handle.net/10722/285797
ISI Accession Number: WOS:000426967000050
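
The abstract above describes the approach only at a high level. For intuition, here is a minimal, self-contained sketch of competitive multi-agent reinforcement learning for collaborator recommendation: each scholar is an agent that repeatedly selects a candidate collaborator, and agents contending for the same candidate compete, with only the winner rewarded. Everything in the sketch is an illustrative assumption (the toy coauthor graph, the Jaccard similarity measure, the coauthor-count "academic credit" tie-breaking, and the bandit-style Q update); it is not the authors' published method.

```python
# Minimal sketch of competitive multi-agent reinforcement learning for
# collaborator recommendation. All data, rewards, and the similarity
# measure are illustrative assumptions, not the paper's implementation.
import random
from collections import defaultdict

# Toy coauthor graph: scholar -> set of current coauthors (hypothetical data).
coauthors = {
    "a": {"b"}, "b": {"a", "c"}, "c": {"b"}, "d": set(),
}
scholars = list(coauthors)

def similarity(u, v):
    """Jaccard similarity over coauthor sets (one possible scholar measure)."""
    cu, cv = coauthors[u], coauthors[v]
    union = cu | cv
    return len(cu & cv) / len(union) if union else 0.0

# Independent Q-tables: Q[agent][candidate] -> estimated value of collaborating.
Q = {u: defaultdict(float) for u in scholars}
alpha, epsilon, episodes = 0.1, 0.2, 500

for _ in range(episodes):
    # 1. Each scholar-agent picks a candidate collaborator (epsilon-greedy).
    choices = {}
    for u in scholars:
        candidates = [v for v in scholars if v != u]
        if random.random() < epsilon:
            choices[u] = random.choice(candidates)
        else:
            choices[u] = max(candidates, key=lambda v: Q[u][v])

    # 2. Competition: agents contending for the same candidate are ranked by
    #    "academic credit" (here, current coauthor count); only the winner
    #    receives the similarity-based reward, the losers get nothing.
    contenders = defaultdict(list)
    for u, v in choices.items():
        contenders[v].append(u)
    for v, us in contenders.items():
        winner = max(us, key=lambda u: len(coauthors[u]))
        for u in us:
            reward = similarity(u, v) if u == winner else 0.0
            Q[u][v] += alpha * (reward - Q[u][v])  # one-step (bandit-style) update

# Recommend the highest-valued non-coauthor for each scholar.
for u in scholars:
    ranked = sorted((v for v in scholars if v != u and v not in coauthors[u]),
                    key=lambda v: Q[u][v], reverse=True)
    print(u, "->", ranked[:1])
```

The competition step is what distinguishes this from independent per-scholar bandits: an agent's estimate of a popular candidate is discounted whenever a better-credited rival wins the same choice, which is the dynamic the paper's title refers to.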

 

DC Field: Value
dc.contributor.author: Zhang, Yang
dc.contributor.author: Zhang, Chenwei
dc.contributor.author: Liu, Xiaozhong
dc.date.accessioned: 2020-08-18T04:56:40Z
dc.date.available: 2020-08-18T04:56:40Z
dc.date.issued: 2017
dc.identifier.citation: RecSys 2017 - Proceedings of the 11th ACM Conference on Recommender Systems, 2017, p. 331-335
dc.identifier.uri: http://hdl.handle.net/10722/285797
dc.description.abstract: © 2017 ACM. In an interdisciplinary environment, scientific collaboration is becoming increasingly important. Helping scholars make the right choice of potential collaborators is essential to achieving scientific success. Intuitively, the formation of collaboration relationships is a dynamic process. For instance, a scholar may first choose to work with Scholar A, and then work with Scholar B after accumulating additional academic credit. To address this property, we propose a novel dynamic collaboration recommendation method that adapts multi-agent reinforcement learning to coauthor network analysis. Collaborator selection is optimized over several different scholar similarity measurements. Unlike prior studies, the proposed method models scholarly competition, i.e., different scholars compete for potential collaborators at each iteration. An evaluation on ACM data shows that multi-agent reinforcement learning combined with scholarly competition modeling can significantly benefit collaboration recommendation.
dc.language: eng
dc.relation.ispartof: RecSys 2017 - Proceedings of the 11th ACM Conference on Recommender Systems
dc.subject: Multi-agent
dc.subject: Competition
dc.subject: Collaborator recommendation
dc.subject: Reinforcement learning
dc.subject: Dynamic
dc.title: Dynamic scholarly collaborator recommendation via competitive multi-agent reinforcement learning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3109859.3109914
dc.identifier.scopus: eid_2-s2.0-85030448681
dc.identifier.spage: 331
dc.identifier.epage: 335
dc.identifier.isi: WOS:000426967000050
