Article: Reviewing Experts' Restraint from Extremes and Its Impact on Service Providers

Title: Reviewing Experts' Restraint from Extremes and Its Impact on Service Providers
Authors: Nguyen, Peter; Wang, Xin Shane; Li, Xi; Cotte, June
Keywords: user rating average; expertise; sentiment analysis; platform strategy; online word-of-mouth; text analysis
Issue Date: 2021
Citation: Journal of Consumer Research, 2021, v. 47, n. 5, p. 654-674
Abstract: This research investigates reviewing experts on online review platforms. The main hypothesis is that greater expertise in generating reviews leads to greater restraint from extreme summary evaluations. The authors argue that greater experience generating reviews facilitates processing and elaboration and enhances the number of attributes implicitly considered in evaluations, which reduces the likelihood of assigning extreme summary ratings. This restraint-of-expertise hypothesis is tested across different review platforms (TripAdvisor, Qunar, and Yelp), shown for both assigned ratings and review text sentiment, and demonstrated both between (experts vs. novices) and within reviewers (expert vs. pre-expert). Two experiments replicate the main effect and provide support for the attribute-based explanation. Field studies demonstrate two major consequences of the restraint-of-expertise effect. (i) Reviewing experts (vs. novices), as a whole, have less impact on the aggregate valence metric, which is known to affect page-rank and consumer consideration. (ii) Experts systematically benefit and harm service providers with their ratings. For service providers that generally provide mediocre (excellent) experiences, reviewing experts assign significantly higher (lower) ratings than novices. This research provides important caveats to the existing marketing practice of service providers incentivizing reviewing experts and provides strategic implications for how platforms should adopt rating scales and aggregate ratings.
Description: Bronze open access
Persistent Identifier: http://hdl.handle.net/10722/302283
ISSN: 0093-5301
2021 Impact Factor: 8.612
2020 SCImago Journal Rankings: 8.916
ISI Accession Number ID: WOS:000637288400003
Errata: doi:10.1093/jcr/ucab008
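The second consequence described in the abstract is arithmetic at heart: because experts avoid the endpoints of the rating scale, their ratings sit above novices' for providers that mostly earn low ratings and below novices' for providers that mostly earn high ratings. A toy sketch in Python with purely illustrative numbers, none of which come from the paper:

```python
# Toy illustration of the restraint-of-expertise effect on 5-point ratings.
# All rating values are invented for illustration; none come from the paper.
def mean(xs):
    return sum(xs) / len(xs)

# A "mediocre" provider: novices reach for the extreme low end of the
# scale, while experts restrain toward the middle.
novice_mediocre = [1, 1, 2, 1, 2]   # hypothetical novice ratings
expert_mediocre = [2, 3, 2, 3, 2]   # hypothetical expert ratings

# An "excellent" provider: novices reach for the extreme high end,
# while experts again pull toward the middle.
novice_excellent = [5, 5, 5, 4, 5]
expert_excellent = [4, 4, 5, 4, 4]

print(f"mediocre provider:  novices {mean(novice_mediocre):.1f}, "
      f"experts {mean(expert_mediocre):.1f}")   # experts rate higher (2.4 vs 1.4)
print(f"excellent provider: novices {mean(novice_excellent):.1f}, "
      f"experts {mean(expert_excellent):.1f}")  # experts rate lower (4.2 vs 4.8)
```

The same restraint explains consequence (i): ratings clustered near the midpoint move an aggregate average less than ratings at the endpoints do.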


DC Field | Value | Language
dc.contributor.author | Nguyen, Peter | -
dc.contributor.author | Wang, Xin Shane | -
dc.contributor.author | Li, Xi | -
dc.contributor.author | Cotte, June | -
dc.date.accessioned | 2021-08-30T13:58:10Z | -
dc.date.available | 2021-08-30T13:58:10Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Journal of Consumer Research, 2021, v. 47, n. 5, p. 654-674 | -
dc.identifier.issn | 0093-5301 | -
dc.identifier.uri | http://hdl.handle.net/10722/302283 | -
dc.description | Bronze open access | -
dc.description.abstract | This research investigates reviewing experts on online review platforms. The main hypothesis is that greater expertise in generating reviews leads to greater restraint from extreme summary evaluations. The authors argue that greater experience generating reviews facilitates processing and elaboration and enhances the number of attributes implicitly considered in evaluations, which reduces the likelihood of assigning extreme summary ratings. This restraint-of-expertise hypothesis is tested across different review platforms (TripAdvisor, Qunar, and Yelp), shown for both assigned ratings and review text sentiment, and demonstrated both between (experts vs. novices) and within reviewers (expert vs. pre-expert). Two experiments replicate the main effect and provide support for the attribute-based explanation. Field studies demonstrate two major consequences of the restraint-of-expertise effect. (i) Reviewing experts (vs. novices), as a whole, have less impact on the aggregate valence metric, which is known to affect page-rank and consumer consideration. (ii) Experts systematically benefit and harm service providers with their ratings. For service providers that generally provide mediocre (excellent) experiences, reviewing experts assign significantly higher (lower) ratings than novices. This research provides important caveats to the existing marketing practice of service providers incentivizing reviewing experts and provides strategic implications for how platforms should adopt rating scales and aggregate ratings. | -
dc.language | eng | -
dc.relation.ispartof | Journal of Consumer Research | -
dc.subject | user rating average | -
dc.subject | expertise | -
dc.subject | sentiment analysis | -
dc.subject | platform strategy | -
dc.subject | online word-of-mouth | -
dc.subject | text analysis | -
dc.title | Reviewing Experts' Restraint from Extremes and Its Impact on Service Providers | -
dc.type | Article | -
dc.description.nature | link_to_OA_fulltext | -
dc.identifier.doi | 10.1093/jcr/ucaa037 | -
dc.identifier.scopus | eid_2-s2.0-85100845405 | -
dc.identifier.hkuros | 325106 | -
dc.identifier.volume | 47 | -
dc.identifier.issue | 5 | -
dc.identifier.spage | 654 | -
dc.identifier.epage | 674 | -
dc.identifier.isi | WOS:000637288400003 | -
dc.relation.erratum | doi:10.1093/jcr/ucab008 | -

Export via OAI-PMH Interface in XML Formats or to Other Non-XML Formats
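
The OAI-PMH export advertised above uses the protocol's standard GetRecord verb with the oai_dc metadata prefix, which every OAI-PMH repository must support. A minimal harvesting sketch in Python using only the standard library; the endpoint URL and OAI identifier below are illustrative assumptions, not values confirmed by this page:

```python
# Minimal OAI-PMH GetRecord sketch using only the Python standard library.
# ASSUMPTIONS: the endpoint URL and record identifier are illustrative
# guesses; consult the repository's OAI-PMH documentation for real values.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://hub.hku.hk/oai/request"  # hypothetical endpoint
OAI_IDENTIFIER = "oai:hub.hku.hk:10722/302283"   # hypothetical identifier

query = urllib.parse.urlencode({
    "verb": "GetRecord",         # standard OAI-PMH verb
    "metadataPrefix": "oai_dc",  # unqualified Dublin Core (mandatory format)
    "identifier": OAI_IDENTIFIER,
})

with urllib.request.urlopen(f"{OAI_ENDPOINT}?{query}") as response:
    tree = ET.parse(response)

# Print a few Dublin Core fields mirrored in the table above.
for elem in tree.iter():
    local_tag = elem.tag.rsplit("}", 1)[-1]  # strip the XML namespace
    if local_tag in {"title", "creator", "date", "identifier"} and elem.text:
        print(f"{local_tag}: {elem.text.strip()}")
```

Swapping GetRecord for ListRecords (with a set or date range instead of a single identifier) would harvest many records at once; the per-record XML structure is the same.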