Conference Paper: Fatigue-Aware Bandits for Dependent Click Models

TitleFatigue-Aware Bandits for Dependent Click Models
AuthorsCao, J; Sun, W; Shen, ZM; Ettl, M
Issue Date2020
PublisherAAAI Press. The publisher's web site is located at https://aaai.org/Library/AAAI/aaai-library.php
Citation
Proceedings of the 34th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI-20), 32nd Conference on Innovative Applications of Artificial Intelligence & the 10th Symposium on Educational Advances in Artificial Intelligence, New York, NY, USA, 7-12 February 2020, v. 34 n. 4, p. 3341-3348
AbstractAs recommender systems send a massive amount of content to keep users engaged, users may experience fatigue, which is caused by 1) overexposure to irrelevant content and 2) boredom from seeing too many similar recommendations. To address this problem, we consider an online learning setting in which a platform learns a policy to recommend content while taking user fatigue into account. We propose an extension of the Dependent Click Model (DCM) to describe users' behavior. We stipulate that, for each piece of content, its attractiveness to a user depends on its intrinsic relevance and a discount factor that measures how many similar items have been shown. Users view the recommended content sequentially and click on the items they find attractive. Users may leave the platform at any time, and the probability of exiting is higher when they do not like the content. Based on users' feedback, the platform learns the relevance of the underlying content as well as the discounting effect due to content fatigue. We refer to this learning task as the “fatigue-aware DCM Bandit” problem. We consider two learning scenarios, depending on whether the discounting effect is known. For each scenario, we propose a learning algorithm that simultaneously explores and exploits, and we characterize its regret bound.
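The user model described in the abstract (intrinsic relevance, a fatigue discount over similar items already shown, sequential viewing, and probabilistic exit) can be sketched as a small simulation. This is an illustrative reading of the abstract, not the authors' implementation; every name here (`simulate_session`, `discount`, the two exit probabilities) is an assumption:

```python
import random

def simulate_session(relevance, topics, discount,
                     exit_prob_no_click, exit_prob_click, rng):
    """Simulate one user session under a fatigue-aware DCM.

    relevance[i]        - intrinsic relevance (base click prob.) of item i
    topics[i]           - topic label of item i; similar items share a topic
    discount(k)         - multiplicative fatigue discount after k similar
                          items have already been shown
    exit_prob_no_click  - prob. of leaving after unattractive content
    exit_prob_click     - prob. of leaving after a click
    """
    clicks = []
    shown_per_topic = {}
    for i, (rel, topic) in enumerate(zip(relevance, topics)):
        k = shown_per_topic.get(topic, 0)
        # attractiveness = intrinsic relevance discounted by fatigue
        attractiveness = rel * discount(k)
        shown_per_topic[topic] = k + 1
        if rng.random() < attractiveness:
            clicks.append(i)
            if rng.random() < exit_prob_click:
                break
        else:
            # the user is more likely to leave after disliked content,
            # so exit_prob_no_click would typically exceed exit_prob_click
            if rng.random() < exit_prob_no_click:
                break
    return clicks
```

In the bandit problem, the platform observes only the click/exit feedback from such sessions and must estimate both the relevance values and the discount curve while choosing which sequence of items to show.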
DescriptionAAAI-20 Technical Tracks 4 - Section: AAAI Technical Track: Machine Learning
Persistent Identifierhttp://hdl.handle.net/10722/310146
ISSN2159-5399

 

DC FieldValueLanguage
dc.contributor.authorCao, J-
dc.contributor.authorSun, W-
dc.contributor.authorShen, ZM-
dc.contributor.authorEttl, M-
dc.date.accessioned2022-01-24T02:24:32Z-
dc.date.available2022-01-24T02:24:32Z-
dc.date.issued2020-
dc.identifier.citationProceedings of the 34th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI-20), 32nd Conference on Innovative Applications of Artificial Intelligence & the 10th Symposium on Educational Advances in Artificial Intelligence, New York, NY, USA, 7-12 February 2020, v. 34 n. 4, p. 3341-3348-
dc.identifier.issn2159-5399-
dc.identifier.urihttp://hdl.handle.net/10722/310146-
dc.descriptionAAAI-20 Technical Tracks 4 - Section: AAAI Technical Track: Machine Learning-
dc.description.abstractAs recommender systems send a massive amount of content to keep users engaged, users may experience fatigue, which is caused by 1) overexposure to irrelevant content and 2) boredom from seeing too many similar recommendations. To address this problem, we consider an online learning setting in which a platform learns a policy to recommend content while taking user fatigue into account. We propose an extension of the Dependent Click Model (DCM) to describe users' behavior. We stipulate that, for each piece of content, its attractiveness to a user depends on its intrinsic relevance and a discount factor that measures how many similar items have been shown. Users view the recommended content sequentially and click on the items they find attractive. Users may leave the platform at any time, and the probability of exiting is higher when they do not like the content. Based on users' feedback, the platform learns the relevance of the underlying content as well as the discounting effect due to content fatigue. We refer to this learning task as the “fatigue-aware DCM Bandit” problem. We consider two learning scenarios, depending on whether the discounting effect is known. For each scenario, we propose a learning algorithm that simultaneously explores and exploits, and we characterize its regret bound.-
dc.languageeng-
dc.publisherAAAI Press. The Journal's web site is located at https://aaai.org/Library/AAAI/aaai-library.php-
dc.relation.ispartofProceedings of the AAAI Conference on Artificial Intelligence-
dc.titleFatigue-Aware Bandits for Dependent Click Models-
dc.typeConference_Paper-
dc.identifier.emailShen, ZM: maxshen@hku.hk-
dc.identifier.authorityShen, ZM=rp02779-
dc.identifier.doi10.1609/aaai.v34i04.5735-
dc.identifier.hkuros331473-
dc.identifier.volume34-
dc.identifier.issue4-
dc.identifier.spage3341-
dc.identifier.epage3348-
dc.publisher.placeUnited States-
