Abstract

Bayesian models of cognition posit that people compute probability distributions over hypotheses, possibly by constructing a sample-based approximation. Since people encounter many closely related distributions, a computationally efficient strategy is to selectively reuse computations -- either the samples themselves or some summary statistic. We refer to these reuse strategies as amortized inference. In two experiments, we present evidence consistent with amortization. When sequentially answering two related queries about natural scenes, we show that answers to the second query vary systematically depending on the structure of the first query. Using a cognitive load manipulation, we find evidence that people cache summary statistics rather than raw sample sets. These results enrich our notions of how the brain approximates probabilistic inference.
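The two reuse strategies contrasted in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' model: a costly sampler is stood in for by draws from a fixed Gaussian, and a second, related query is answered either from the cached raw samples or from cached summary statistics (mean and standard deviation) under a Gaussian approximation.

```python
import math
import random

def sample_posterior(n, rng):
    # Stand-in for a costly sampler over scene hypotheses
    # (hypothetical: we simply draw from a fixed Gaussian).
    return [rng.gauss(0.5, 0.1) for _ in range(n)]

def mc_answer(samples, predicate):
    # Monte Carlo estimate of P(predicate) from raw samples.
    return sum(predicate(s) for s in samples) / len(samples)

def gaussian_tail(mean, sd, threshold):
    # Answer P(X > threshold) from cached summary statistics
    # under a Gaussian approximation.
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2)))

rng = random.Random(0)
samples = sample_posterior(10_000, rng)

# Query 1 is answered from fresh samples.
p1 = mc_answer(samples, lambda x: x > 0.4)

# Related query 2, answered without re-running the sampler:
# Strategy A -- reuse the raw sample set.
p2_samples = mc_answer(samples, lambda x: x > 0.6)

# Strategy B -- cache only summary statistics (mean, sd).
mean = sum(samples) / len(samples)
sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
p2_summary = gaussian_tail(mean, sd, 0.6)
```

Both strategies avoid fresh sampling for the second query; Strategy B stores far less (two numbers instead of 10,000 samples), at the cost of committing to a parametric summary, which is the trade-off the cognitive load manipulation probes.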

Details

Title
Amortized Hypothesis Generation
Author
Dasgupta, Ishita; Schulz, Eric; Goodman, Noah; Gershman, Samuel
University/institution
Cold Spring Harbor Laboratory Press
Section
New Results
Publication year
2017
Publication date
May 12, 2017
Publisher
Cold Spring Harbor Laboratory Press
ISSN
2692-8205
Source type
Working Paper
Language of publication
English
ProQuest document ID
2071165241
Copyright
© 2017. This article is published under http://creativecommons.org/licenses/by-nd/4.0/ ("the License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.