Authors: Felix A. Sosa, Sam J. Gershman, Tomer D. Ullman
How are people able to understand everyday physical events with such ease? One hypothesis is that people use an approximate probabilistic simulation of the world. A contrasting hypothesis is that people use a collection of abstractions or features. The two hypotheses explain complementary aspects of physical reasoning. We develop a "blended model" that synthesizes the two hypotheses: under certain conditions, simulation is replaced by a visuo-spatial abstraction (linear path projection). This abstraction purchases efficiency at the cost of fidelity. As a consequence, the blended model predicts that people will make systematic errors whenever the conditions for applying the abstraction are met. We tested this prediction in two experiments where participants made judgments about whether a falling ball will contact a target. First, we show that response times are longer when straight-line paths are unavailable, even when simulation time is held fixed, arguing against a pure-simulation model (Experiment 1). Second, we show that people incorrectly judge the trajectory of the ball in a manner consistent with linear path projection (Experiment 2). We conclude that people have access to a flexible mental physics engine, but adaptively invoke more efficient abstractions when they are useful.
All Python modules used for this project.
- `abstraction.py` contains the major utilities for the path projection abstraction.
- `combine_csv.py` contains the utility used to zip all CSV files containing participant data into one file usable for analyses.
- `graphics.py` contains the graphics engine utilities.
- `handlers.py` contains collision handler utilities.
- `json_utilities.py` contains all JSON config file utilities.
- `model_utilities.py` contains all model utilities.
- `mp.py` contains the model fitting utilities.
- `objects.py` contains the scene object classes used for modeling objects in scenes.
- `physics.py` contains the physics engine.
- `scene.py` contains the scene utilities.
- `stimuli_generation.py` contains examples of the stimuli generation methods used to generate the stimuli for Experiment 1 and Experiment 2.
- `submit_grid_job.py` contains a method for submitting grid search jobs on SLURM.
- `utiltiy.py` contains miscellaneous utilities.
- `video.py` contains examples of methods for converting scenes into usable video stimuli.
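To give a feel for what the path projection abstraction computes, here is a minimal, hypothetical sketch (not the actual `abstraction.py` API; all names and the geometric simplifications are ours): instead of simulating physics, project a straight-line path from the ball and check whether it reaches the target before crossing any obstacle.

```python
# Hypothetical sketch of a linear path projection check. Assumptions (not
# from the repo): 2D scenes, objects represented as line segments, and a
# straight vertical drop from the ball to the ground at y = 0.

def _orientation(a, b, c):
    """Twice the signed area of triangle abc: >0 CCW, <0 CW, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1 = _orientation(q1, q2, p1)
    d2 = _orientation(q1, q2, p2)
    d3 = _orientation(p1, p2, q1)
    d4 = _orientation(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def linear_path_contacts(ball, target_segment, obstacle_segments=()):
    """Project a straight-line path from the ball downward and ask
    whether it crosses the target without hitting any obstacle."""
    path = (ball, (ball[0], 0.0))  # straight vertical drop to the ground
    if not segments_intersect(*path, *target_segment):
        return False
    return not any(segments_intersect(*path, *obs) for obs in obstacle_segments)
```

For example, a ball at `(5, 10)` above a target spanning `(4, 1)`–`(6, 1)` yields `True`, but adding an intervening obstacle at `(4, 5)`–`(6, 5)` flips the judgment to `False`: the cheap straight-line check ignores how the ball would actually bounce, which is exactly the kind of systematic error the blended model predicts.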
All figures used in the paper (PDFs).
All data collected and generated for the project.
- `stimuli` contains the stimuli shown in the main trials for Experiment 1 and Experiment 2.
- `empirical` contains the raw anonymized data from Experiment 1 and Experiment 2, the cleaned data used to generate the figures, and the anonymized demographics data for each experiment.
- `json` contains the JSON files that define the scenes used to create the stimuli.
- `model_fits` contains a subset of the model fits performed during model fitting.
All jsPsych code used to create and run Experiments 1 and 2.
Jupyter notebooks for data analyses used in the paper.
- `data_cleaning` details the methods by which the data were cleaned for analysis.
- `experiment1_analysis` details the analyses reported in Experiment 1.
- `experiment2_analysis` details the analyses reported in Experiment 2.