Conference Paper: Crowdsourced POI Labelling: Location-Aware Result Inference and Task Assignment

Title: Crowdsourced POI Labelling: Location-Aware Result Inference and Task Assignment
Authors: Hu, H; Zheng, Y; Bao, Z; Li, G; Feng, J; Cheng, RCK
Issue Date: 2016
Publisher: IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000178
Citation: The 32nd IEEE International Conference on Data Engineering (ICDE 2016), Helsinki, Finland, 16-20 May 2016. In Conference Proceedings, 2016, p. 1-12
Abstract: Identifying the labels of points of interest (POIs), aka POI labelling, provides significant benefits in location-based services. However, the quality of raw labels manually added by users or generated by automatic algorithms cannot be guaranteed. Such low-quality labels decrease usability and result in bad user experiences. In this paper, observing that crowdsourcing is a good fit for computer-hard tasks, we leverage crowdsourcing to improve the quality of POI labelling. To the best of our knowledge, this is the first work on crowdsourced POI labelling tasks. In particular, there are two sub-problems: (1) how to infer the correct labels for each POI based on workers' answers, and (2) how to effectively assign proper tasks to workers in order to make more accurate inferences for the next available workers. To address these two problems, we propose a framework consisting of an inference model and an online task assigner. The inference model measures the quality of a worker on a POI by carefully exploiting (i) the worker's inherent quality, (ii) the spatial distance between the worker and the POI, and (iii) the POI's influence; this allows the model to provide reliable inference results as soon as a worker submits an answer. As workers arrive dynamically, the online task assigner judiciously assigns proper tasks to them so as to benefit the inference. The inference model and task assigner work alternately to continuously improve the overall quality. We conduct extensive experiments on a real crowdsourcing platform, and the results on two real datasets show that our method significantly outperforms state-of-the-art approaches. © 2016 IEEE.
Persistent Identifier: http://hdl.handle.net/10722/232184
ISBN: 978-150902019-5
ISSN: 1084-4627
2023 SCImago Journal Rankings: 1.306
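
To make the abstract's location-aware inference idea concrete, below is a minimal, hypothetical Python sketch of weighted label aggregation: each worker's vote is weighted by their inherent quality and by an assumed exponential decay in their distance to the POI. Every name, the haversine helper, and the decay form are illustrative assumptions, and the paper's POI-influence and online task-assignment components are omitted; this is not the authors' actual model.

# Hypothetical sketch of location-aware answer aggregation in the spirit of
# the abstract above; all names and the exponential decay are assumptions.
import math
from collections import defaultdict

def distance_km(a, b):
    """Great-circle (haversine) distance in km between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def infer_label(answers, decay_km=5.0):
    """Weighted vote over workers' answers for one POI.

    answers: iterable of (label, worker_quality in [0, 1], worker_pos, poi_pos).
    A vote counts more when the worker's inherent quality is high and the
    worker is spatially close to the POI (assumed exponential decay).
    """
    scores = defaultdict(float)
    for label, quality, worker_pos, poi_pos in answers:
        proximity = math.exp(-distance_km(worker_pos, poi_pos) / decay_km)
        scores[label] += quality * proximity
    return max(scores, key=scores.get) if scores else None

# Example: two nearby, reliable workers outvote one distant, weaker worker.
poi = (22.283, 114.137)
votes = [("cafe", 0.9, (22.284, 114.138), poi),
         ("cafe", 0.8, (22.282, 114.136), poi),
         ("bar",  0.6, (22.400, 114.300), poi)]
print(infer_label(votes))  # -> cafe

The design point this illustrates is claim (ii) of the abstract: spatial proximity is treated as evidence of answer reliability, so distant workers' answers are discounted rather than ignored.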

 

DC Field | Value | Language
dc.contributor.author | Hu, H | -
dc.contributor.author | Zheng, Y | -
dc.contributor.author | Bao, Z | -
dc.contributor.author | Li, G | -
dc.contributor.author | Feng, J | -
dc.contributor.author | Cheng, RCK | -
dc.date.accessioned | 2016-09-20T05:28:18Z | -
dc.date.available | 2016-09-20T05:28:18Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | The 32nd IEEE International Conference on Data Engineering (ICDE 2016), Helsinki, Finland, 16-20 May 2016. In Conference Proceedings, 2016, p. 1-12 | -
dc.identifier.isbn | 978-150902019-5 | -
dc.identifier.issn | 1084-4627 | -
dc.identifier.uri | http://hdl.handle.net/10722/232184 | -
dc.description.abstract | Identifying the labels of points of interest (POIs), aka POI labelling, provides significant benefits in location-based services. However, the quality of raw labels manually added by users or generated by automatic algorithms cannot be guaranteed. Such low-quality labels decrease usability and result in bad user experiences. In this paper, observing that crowdsourcing is a good fit for computer-hard tasks, we leverage crowdsourcing to improve the quality of POI labelling. To the best of our knowledge, this is the first work on crowdsourced POI labelling tasks. In particular, there are two sub-problems: (1) how to infer the correct labels for each POI based on workers' answers, and (2) how to effectively assign proper tasks to workers in order to make more accurate inferences for the next available workers. To address these two problems, we propose a framework consisting of an inference model and an online task assigner. The inference model measures the quality of a worker on a POI by carefully exploiting (i) the worker's inherent quality, (ii) the spatial distance between the worker and the POI, and (iii) the POI's influence; this allows the model to provide reliable inference results as soon as a worker submits an answer. As workers arrive dynamically, the online task assigner judiciously assigns proper tasks to them so as to benefit the inference. The inference model and task assigner work alternately to continuously improve the overall quality. We conduct extensive experiments on a real crowdsourcing platform, and the results on two real datasets show that our method significantly outperforms state-of-the-art approaches. © 2016 IEEE. | -
dc.language | eng | -
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000178 | -
dc.relation.ispartof | International Conference on Data Engineering Proceedings | -
dc.rights | ©2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | -
dc.title | Crowdsourced POI Labelling: Location-Aware Result Inference and Task Assignment | -
dc.type | Conference_Paper | -
dc.identifier.email | Cheng, RCK: ckcheng@cs.hku.hk | -
dc.identifier.authority | Cheng, RCK=rp00074 | -
dc.description.nature | postprint | -
dc.identifier.doi | 10.1109/ICDE.2016.7498229 | -
dc.identifier.scopus | eid_2-s2.0-84980390225 | -
dc.identifier.hkuros | 265278 | -
dc.identifier.spage | 1 | -
dc.identifier.epage | 12 | -
dc.publisher.place | United States | -
dc.customcontrol.immutable | sml 161004 | -
dc.identifier.issnl | 1084-4627 | -
