Project Details
Description
Causal explanations provide answers to why an event happened. Beliefs in causal explanations (for example, that investing in new technologies will increase the money available for retirement) guide which behaviors people engage in for the future (e.g., investing in cryptocurrency). But people can, and often do, believe causal explanations of the world that are wrong, and holding incorrect causal explanations can have critically damaging effects, such as pursuing health treatments that are ineffective or investing in financial strategies that do not pay out. Understanding which features of an incorrect causal explanation make it believable is therefore critically important for teaching people to reject incorrect explanations of events and for deploying strategies to counteract their adoption.

In this work, the PIs investigate what makes incorrect causal explanations of events appealing to people and what encourages the adoption of these misplaced beliefs. They conduct a series of studies, exploring many different causal elements of explanations, to build a strong understanding of which features of a causal explanation make it appealing. Using their results, the PIs then make a preliminary attempt to reduce endorsement of incorrect causal explanations.

This research has a broader impact on science by involving students in research with a strong translational component. Such research helps students connect science to the real world, growing their interest in science and in critical thinking at large. The work also has broader impacts on science literacy by isolating which features of scientific explanations make them more or less likely to be believed.

The PIs use psychological methods from the causal explanation literature to study perceptions of a wide range of misplaced and incorrect causal explanations. Participants read explanations and rate how compelling, satisfying, and believable the explanations are. They also make judgments about the causal structure of the explanations, such as how many causal factors an explanation includes, how complex it is, and how many events it can explain. The PIs use large samples of online participants to ensure that people with many different beliefs are included in the studies.

In each study, participants rate incorrect causal explanations (e.g., “eating sugar is the main cause of type 2 diabetes”) as well as fact-based causal explanations of the same events (e.g., “type 2 diabetes has multiple causes, including being overweight and having a genetic predisposition”). This comparison allows the PIs to isolate what is unique about misplaced explanations.

Using machine learning, the researchers investigate the degree to which factual and misinformation explanations have characteristic structures beyond how they are perceived (e.g., the complexity of their causal structure), structures that may allow the two explanation types to be differentiated automatically.
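To make the machine-learning step concrete, a minimal sketch of an explanation classifier follows. It is an illustration under stated assumptions, not the PIs' actual pipeline: the TF-IDF features, the logistic-regression model, and all but the diabetes example texts are hypothetical stand-ins.

```python
# Minimal sketch: can a text classifier separate fact-based from
# incorrect causal explanations? The model choice (TF-IDF features +
# logistic regression) and the toy corpus are illustrative assumptions,
# not the project's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus of matched explanations for the same events;
# label 1 = incorrect/misinformation, label 0 = fact-based.
# Real studies would use a much larger set of items.
explanations = [
    "Eating sugar is the main cause of type 2 diabetes.",
    "Type 2 diabetes has multiple causes, including being overweight "
    "and having a genetic predisposition.",
    "Stress is the single cause of stomach ulcers.",  # hypothetical item
    "Stomach ulcers arise from several factors, including bacterial "
    "infection and long-term use of certain medications.",  # hypothetical item
]
labels = [1, 0, 1, 0]

# Word and bigram frequencies stand in for the "characteristic structures"
# under investigation; hand-coded causal features (e.g., the number of
# causal factors an explanation mentions) could be added alongside them.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(explanations, labels)

# Probability that an unseen explanation reads like misinformation.
new_item = "Vaccinations are the one true cause of rising allergy rates."
print(model.predict_proba([new_item])[0, 1])
```

With a realistically sized corpus, above-chance cross-validated accuracy from a model like this would suggest the two explanation types differ in automatically detectable ways.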
Finally, the PIs use their findings to create a set of behavioral studies in which they alter how explanations are presented, exploring the degree to which presentation impacts endorsement. Specifically, the PIs create new causal explanations that manipulate the causal elements that were most predictive of endorsement of incorrect causal beliefs in the earlier experiments (e.g., complexity, number of causal factors). The goal is to see whether changing these important causal elements can reduce endorsement of incorrect beliefs. Through these studies, we can learn more generally how to prevent the uptake of incorrect information and encourage fact-based explanations in its place.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
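As a concrete, hypothetical illustration of the paired comparisons these studies rely on (the same participants rating matched explanations or presentation formats), a minimal analysis sketch follows. The paired t-test, the 1-7 endorsement scale, and the numbers are all assumptions introduced here, not the PIs' reported analysis.

```python
# Sketch of a paired comparison: the same participants rate an incorrect
# explanation in its original form and in a form whose causal elements
# (e.g., complexity, number of causal factors) have been altered.
# The test choice and the toy 1-7 endorsement ratings are assumptions.
from scipy import stats

original_form = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]  # one rating per participant
altered_form = [4, 5, 5, 4, 3, 5, 4, 4, 5, 3]   # same participants, altered form

t_stat, p_value = stats.ttest_rel(original_form, altered_form)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A reliable drop in endorsement for the altered form would indicate that
# the manipulated causal elements help drive the appeal of the explanation.
```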
Status | Active |
---|---|
Effective start/end date | 1/06/22 → 31/05/25 |
Funding
- National Science Foundation