Conference Paper: An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms

Title: An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms
Authors: Shan, C; Mamoulis, N; Cheng, CKR; Li, G; Li, X; Qian, Y
Keywords: crowdsourcing platform; task arrangement; reinforcement learning; deep Q-Network
Issue Date: 2020
Publisher: IEEE Computer Society. The proceedings' web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000178
Citation: Proceedings of 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20-24 April 2020, p. 49-60
Abstract: In this paper, we propose a Deep Reinforcement Learning (RL) framework for task arrangement, which is a critical problem for the success of crowdsourcing platforms. Previous works conduct the personalized recommendation of tasks to workers via supervised learning methods. However, the majority of them only consider the benefit of either workers or requesters independently. In addition, they do not consider the real dynamic environments (e.g., dynamic tasks, dynamic workers), so they may produce sub-optimal results. To address these issues, we utilize Deep Q-Network (DQN), an RL-based method combined with a neural network to estimate the expected long-term return of recommending a task. DQN inherently considers the immediate and the future rewards and can be updated quickly to deal with evolving data and dynamic changes. Furthermore, we design two DQNs that capture the benefit of both workers and requesters and maximize the profit of the platform. To learn value functions in DQN effectively, we also propose novel state representations, carefully design the computation of Q values, and predict transition probabilities and future states. Experiments on synthetic and real datasets demonstrate the superior performance of our framework.
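The core DQN idea described in the abstract, estimating the expected long-term return Q(state, task) of recommending a task and bootstrapping toward the discounted best next value, can be sketched minimally as follows. This is an illustrative sketch only: the linear stand-in for the Q-network, the feature layout, and the hyperparameters are assumptions, not the authors' architecture.

```python
import random

GAMMA = 0.9   # discount factor: weight of future vs. immediate reward
ALPHA = 0.01  # learning rate (assumed value, for illustration)

def q_value(weights, state, task):
    """Linear stand-in for the Q-network: Q(s, a) = w . [s; a]."""
    features = state + task
    return sum(w * f for w, f in zip(weights, features))

def select_task(weights, state, tasks, epsilon=0.1):
    """Epsilon-greedy recommendation over the candidate task pool."""
    if random.random() < epsilon:
        return random.choice(tasks)
    return max(tasks, key=lambda t: q_value(weights, state, t))

def td_update(weights, state, task, reward, next_state, next_tasks):
    """One DQN-style update toward the target r + gamma * max_a' Q(s', a')."""
    target = reward + GAMMA * max(
        q_value(weights, next_state, t) for t in next_tasks
    )
    error = target - q_value(weights, state, task)
    features = state + task
    return [w + ALPHA * error * f for w, f in zip(weights, features)]

# Toy usage: 2 worker-state features + 2 task features (hypothetical).
random.seed(0)
weights = [0.0] * 4
state, tasks = [1.0, 0.5], [[1.0, 0.0], [0.0, 1.0]]
task = select_task(weights, state, tasks)
weights = td_update(weights, state, task, reward=1.0,
                    next_state=[0.5, 1.0], next_tasks=tasks)
```

In the paper's setting, two such value functions (one per side of the market) would be learned, with a neural network in place of the linear scorer; the sketch only shows the Q-learning target that makes the method account for both immediate and future rewards.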
Persistent Identifier: http://hdl.handle.net/10722/291189
ISSN: 1084-4627

DC Field / Value
dc.contributor.author: Shan, C
dc.contributor.author: Mamoulis, N
dc.contributor.author: Cheng, CKR
dc.contributor.author: Li, G
dc.contributor.author: Li, X
dc.contributor.author: Qian, Y
dc.date.accessioned: 2020-11-07T13:53:30Z
dc.date.available: 2020-11-07T13:53:30Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20-24 April 2020, p. 49-60
dc.identifier.issn: 1084-4627
dc.identifier.uri: http://hdl.handle.net/10722/291189
dc.description.abstract: In this paper, we propose a Deep Reinforcement Learning (RL) framework for task arrangement, which is a critical problem for the success of crowdsourcing platforms. Previous works conduct the personalized recommendation of tasks to workers via supervised learning methods. However, the majority of them only consider the benefit of either workers or requesters independently. In addition, they do not consider the real dynamic environments (e.g., dynamic tasks, dynamic workers), so they may produce sub-optimal results. To address these issues, we utilize Deep Q-Network (DQN), an RL-based method combined with a neural network to estimate the expected long-term return of recommending a task. DQN inherently considers the immediate and the future rewards and can be updated quickly to deal with evolving data and dynamic changes. Furthermore, we design two DQNs that capture the benefit of both workers and requesters and maximize the profit of the platform. To learn value functions in DQN effectively, we also propose novel state representations, carefully design the computation of Q values, and predict transition probabilities and future states. Experiments on synthetic and real datasets demonstrate the superior performance of our framework.
dc.language: eng
dc.publisher: IEEE Computer Society. The proceedings' web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000178
dc.relation.ispartof: IEEE 36th International Conference on Data Engineering (ICDE)
dc.rights: International Conference on Data Engineering. Proceedings. Copyright © IEEE Computer Society.
dc.rights: ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: crowdsourcing platform
dc.subject: task arrangement
dc.subject: reinforcement learning
dc.subject: deep Q-Network
dc.title: An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms
dc.type: Conference_Paper
dc.identifier.email: Shan, C: sxdtgg@hku.hk
dc.identifier.email: Mamoulis, N: nikos@cs.hku.hk
dc.identifier.email: Cheng, CKR: ckcheng@cs.hku.hk
dc.identifier.email: Li, X: xli2@hku.hk
dc.identifier.authority: Mamoulis, N=rp00155
dc.identifier.authority: Cheng, CKR=rp00074
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICDE48307.2020.00012
dc.identifier.scopus: eid_2-s2.0-85085856670
dc.identifier.hkuros: 318668
dc.identifier.spage: 49
dc.identifier.epage: 60
dc.publisher.place: United States
dc.identifier.issnl: 1084-4627
