Article: A Unified Explanation of Variability and Bias in Human Probability Judgments: How Computational Noise Explains the Mean–Variance Signature

Title: A Unified Explanation of Variability and Bias in Human Probability Judgments: How Computational Noise Explains the Mean–Variance Signature
Authors: Sundh, Joakim; Zhu, Jian Qiao; Chater, Nick; Sanborn, Adam
Keywords: Bayes; biases; noise; probability; sampling
Issue Date: 2023
Citation: Journal of Experimental Psychology: General, 2023, v. 152, n. 10, p. 2842-2860
Abstract: Human probability judgments are both variable and subject to systematic biases. Most probability judgment models treat variability and bias separately: a deterministic model explains the origin of bias, to which a noise process is added to generate variability. But these accounts do not explain the characteristic inverse U-shaped signature linking mean and variance in probability judgments. By contrast, models based on sampling generate the mean and variance of judgments in a unified way: the variability in the response is an inevitable consequence of basing probability judgments on a small sample of remembered or simulated instances of events. We consider two recent sampling models, in which biases are explained either by the sample accumulation being further corrupted by retrieval noise (the Probability Theory + Noise account) or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). While the mean predictions of these accounts closely mimic one another, they differ regarding the predicted relationship between mean and variance. We show that these models can be distinguished by a novel linear regression method that analyses this crucial mean–variance signature. First, the efficacy of the method is established using model recovery, demonstrating that it more accurately recovers parameters than complex approaches. Second, the method is applied to the mean and variance of both existing and new probability judgment data, confirming that judgments are based on a small number of samples that are adjusted by a prior, as predicted by the Bayesian sampler.
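The Bayesian-sampler account summarized in the abstract can be sketched in a few lines. This is a minimal simulation, not the authors' code: it assumes the standard formulation of the model from prior work, in which a judgment is formed from a small binomial sample of size N that is shrunk toward 1/2 by a symmetric Beta(beta, beta) prior. The function name `bayesian_sampler` and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_sampler(p, N=10, beta=1.0, trials=100_000):
    """Simulate Bayesian-sampler probability judgments for a true
    probability p: draw N mental samples, then adjust the observed
    proportion toward 1/2 via a symmetric Beta(beta, beta) prior."""
    k = rng.binomial(N, p, size=trials)       # successes among N samples
    judgments = (k + beta) / (N + 2 * beta)   # posterior-mean adjustment
    return judgments.mean(), judgments.var()

for p in (0.1, 0.5, 0.9):
    m, v = bayesian_sampler(p)
    print(f"p={p:.1f}  mean={m:.3f}  var={v:.5f}")
```

With these illustrative settings, simulated means are pulled toward 0.5 (conservatism) while the variance peaks near p = 0.5 and shrinks toward the extremes: the inverse U-shaped mean–variance signature that the article's regression method exploits to distinguish this account from Probability Theory + Noise.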
Persistent Identifier: http://hdl.handle.net/10722/367553
ISSN: 0096-3445
2023 Impact Factor: 3.7
2023 SCImago Journal Rankings: 1.868


DC Field: Value
dc.contributor.author: Sundh, Joakim
dc.contributor.author: Zhu, Jian Qiao
dc.contributor.author: Chater, Nick
dc.contributor.author: Sanborn, Adam
dc.date.accessioned: 2025-12-19T07:57:40Z
dc.date.available: 2025-12-19T07:57:40Z
dc.date.issued: 2023
dc.identifier.citation: Journal of Experimental Psychology General, 2023, v. 152, n. 10, p. 2842-2860
dc.identifier.issn: 0096-3445
dc.identifier.uri: http://hdl.handle.net/10722/367553
dc.description.abstract: Human probability judgments are both variable and subject to systematic biases. Most probability judgment models treat variability and bias separately: a deterministic model explains the origin of bias, to which a noise process is added to generate variability. But these accounts do not explain the characteristic inverse U-shaped signature linking mean and variance in probability judgments. By contrast, models based on sampling generate the mean and variance of judgments in a unified way: the variability in the response is an inevitable consequence of basing probability judgments on a small sample of remembered or simulated instances of events. We consider two recent sampling models, in which biases are explained either by the sample accumulation being further corrupted by retrieval noise (the Probability Theory + Noise account) or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). While the mean predictions of these accounts closely mimic one another, they differ regarding the predicted relationship between mean and variance. We show that these models can be distinguished by a novel linear regression method that analyses this crucial mean–variance signature. First, the efficacy of the method is established using model recovery, demonstrating that it more accurately recovers parameters than complex approaches. Second, the method is applied to the mean and variance of both existing and new probability judgment data, confirming that judgments are based on a small number of samples that are adjusted by a prior, as predicted by the Bayesian sampler.
dc.language: eng
dc.relation.ispartof: Journal of Experimental Psychology General
dc.subject: Bayes
dc.subject: biases
dc.subject: noise
dc.subject: probability
dc.subject: sampling
dc.title: A Unified Explanation of Variability and Bias in Human Probability Judgments: How Computational Noise Explains the Mean–Variance Signature
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1037/xge0001414
dc.identifier.pmid: 37199970
dc.identifier.scopus: eid_2-s2.0-85166960626
dc.identifier.volume: 152
dc.identifier.issue: 10
dc.identifier.spage: 2842
dc.identifier.epage: 2860
