Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1037/xge0001414
- Scopus: eid_2-s2.0-85166960626
- PMID: 37199970
Article: A Unified Explanation of Variability and Bias in Human Probability Judgments: How Computational Noise Explains the Mean–Variance Signature
| Title | A Unified Explanation of Variability and Bias in Human Probability Judgments: How Computational Noise Explains the Mean–Variance Signature |
|---|---|
| Authors | Sundh, Joakim; Zhu, Jian Qiao; Chater, Nick; Sanborn, Adam |
| Keywords | Bayes; biases; noise; probability; sampling |
| Issue Date | 2023 |
| Citation | Journal of Experimental Psychology General, 2023, v. 152, n. 10, p. 2842-2860 |
| Abstract | Human probability judgments are both variable and subject to systematic biases. Most probability judgment models treat variability and bias separately: a deterministic model explains the origin of bias, to which a noise process is added to generate variability. But these accounts do not explain the characteristic inverse U-shaped signature linking mean and variance in probability judgments. By contrast, models based on sampling generate the mean and variance of judgments in a unified way: the variability in the response is an inevitable consequence of basing probability judgments on a small sample of remembered or simulated instances of events. We consider two recent sampling models, in which biases are explained either by the sample accumulation being further corrupted by retrieval noise (the Probability Theory + Noise account) or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). While the mean predictions of these accounts closely mimic one another, they differ regarding the predicted relationship between mean and variance. We show that these models can be distinguished by a novel linear regression method that analyses this crucial mean–variance signature. First, the efficacy of the method is established using model recovery, demonstrating that it more accurately recovers parameters than complex approaches. Second, the method is applied to the mean and variance of both existing and new probability judgment data, confirming that judgments are based on a small number of samples that are adjusted by a prior, as predicted by the Bayesian sampler. |
| Persistent Identifier | http://hdl.handle.net/10722/367553 |
| ISSN | 0096-3445 (2023 Impact Factor: 3.7; 2023 SCImago Journal Rankings: 1.868) |
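The logic of the abstract's mean–variance signature can be illustrated with a short simulation. The sketch below is an illustration of the general idea, not the authors' exact estimator: it assumes the standard formulations of the two models, in which the Probability Theory + Noise account reads each of N mental samples incorrectly with probability d, while the Bayesian sampler shrinks the raw sample proportion toward 0.5 via a symmetric Beta(β, β) prior. Regressing judgment variance on mean × (1 − mean) then separates the models by their intercepts. All parameter values (N = 10, d = 0.1, β = 1) are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 10, 5000                     # samples per judgment; judgments per query
ps = np.linspace(0.05, 0.95, 19)         # true event probabilities queried

def ptn_judgments(p, d=0.1):
    # Probability Theory + Noise: each of the N samples is misread with
    # probability d, so the judged proportion is Binomial(N, p') / N
    p_noisy = p * (1 - d) + (1 - p) * d
    return rng.binomial(N, p_noisy, size=trials) / N

def bs_judgments(p, beta=1.0):
    # Bayesian sampler: the raw count is shrunk toward 0.5 by a
    # symmetric Beta(beta, beta) prior over the queried probability
    k = rng.binomial(N, p, size=trials)
    return (k + beta) / (N + 2 * beta)

def mean_var_regression(judge):
    # Regress judgment variance on mean * (1 - mean) across queries
    means = np.array([judge(p).mean() for p in ps])
    variances = np.array([judge(p).var() for p in ps])
    slope, intercept = np.polyfit(means * (1 - means), variances, 1)
    return slope, intercept

ptn_slope, ptn_icept = mean_var_regression(ptn_judgments)
bs_slope, bs_icept = mean_var_regression(bs_judgments)
```

Both models produce the inverse U-shaped variance profile with slope near 1/N, but under these assumptions only the Bayesian sampler yields a clearly negative intercept (here roughly −βN/(N + 2β)², the prior's shrinkage of sample variability), which is the kind of signature the regression method exploits.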
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Sundh, Joakim | - |
| dc.contributor.author | Zhu, Jian Qiao | - |
| dc.contributor.author | Chater, Nick | - |
| dc.contributor.author | Sanborn, Adam | - |
| dc.date.accessioned | 2025-12-19T07:57:40Z | - |
| dc.date.available | 2025-12-19T07:57:40Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | Journal of Experimental Psychology General, 2023, v. 152, n. 10, p. 2842-2860 | - |
| dc.identifier.issn | 0096-3445 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/367553 | - |
| dc.description.abstract | Human probability judgments are both variable and subject to systematic biases. Most probability judgment models treat variability and bias separately: a deterministic model explains the origin of bias, to which a noise process is added to generate variability. But these accounts do not explain the characteristic inverse U-shaped signature linking mean and variance in probability judgments. By contrast, models based on sampling generate the mean and variance of judgments in a unified way: the variability in the response is an inevitable consequence of basing probability judgments on a small sample of remembered or simulated instances of events. We consider two recent sampling models, in which biases are explained either by the sample accumulation being further corrupted by retrieval noise (the Probability Theory + Noise account) or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). While the mean predictions of these accounts closely mimic one another, they differ regarding the predicted relationship between mean and variance. We show that these models can be distinguished by a novel linear regression method that analyses this crucial mean–variance signature. First, the efficacy of the method is established using model recovery, demonstrating that it more accurately recovers parameters than complex approaches. Second, the method is applied to the mean and variance of both existing and new probability judgment data, confirming that judgments are based on a small number of samples that are adjusted by a prior, as predicted by the Bayesian sampler. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Journal of Experimental Psychology General | - |
| dc.subject | Bayes | - |
| dc.subject | biases | - |
| dc.subject | noise | - |
| dc.subject | probability | - |
| dc.subject | sampling | - |
| dc.title | A Unified Explanation of Variability and Bias in Human Probability Judgments: How Computational Noise Explains the Mean–Variance Signature | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1037/xge0001414 | - |
| dc.identifier.pmid | 37199970 | - |
| dc.identifier.scopus | eid_2-s2.0-85166960626 | - |
| dc.identifier.volume | 152 | - |
| dc.identifier.issue | 10 | - |
| dc.identifier.spage | 2842 | - |
| dc.identifier.epage | 2860 | - |
