Cognitive Constraints for the Possibility of Useable Causal Knowledge presented by Professor Patricia Cheng (UCLA) | June 8, 2023

This talk first defines the nature of the causal induction problem—the challenge of inducing causal knowledge in the mind’s representation-dependent reality—and describes two cognitive constraints that are essential components of a solution to that challenge. These constraints enable the possibility of causal knowledge that is “useable,” in the minimalist sense that the learned causal knowledge remains valid when applied. One constraint is causal invariance—the unchanging operation of a causal relation from a learning context to an application context. The other is coherence—a preference for explanations of a phenomenon or an event that are logically consistent and require the fewest assumptions. These constraints are grounded in two premises: 1) all understandings of reality concern representations of it; in particular, cause-and-effect relations are never “observable”; and 2) causal invariance in the world does not depend on which episodes of experience inform the learning and which inform the application of the acquired knowledge. From these premises, it follows that if the goal is to attain useable causal knowledge, for reasoners on this or any other planet, causal invariance must be the assumed function for decomposing an observed outcome into contributions from the candidate causes and background causes under the simplest explanation of the outcome. The essentiality of the constraints implies that even preschool children would succeed in incorporating causal invariance and coherence in their induction process, even though the current standard statistical method for analyzing data with binary outcomes fails to do so. It also implies that materials designed to evoke the two constraints should effectively foster causal belief formation or revision. The talk will showcase two experiments, one involving preschool children and the other targeting individuals from the ten conservative US states with the highest percentages of climate deniers. The latter experiment showed that participants who underwent a 32-minute online intervention were, when assessed two years later, 1.5 times more likely than those in the control condition to report engaging in political climate action or to express intent to do so, suggesting a lasting impact. Moreover, this effect was observed equally among self-reported conservatives and liberals, as would be expected if the constraints are embedded across humanity.
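
For binary outcomes, the decomposition the abstract refers to corresponds to the noisy-OR integration function of Cheng’s power-PC theory of causal induction. The sketch below is an illustration under that reading rather than material from the talk (the function names and numbers are hypothetical): the strength a reasoner recovers for a cause under the noisy-OR decomposition stays constant across learning contexts with different background rates, whereas the additive contrast implicitly assumed by standard linear statistics for binary outcomes does not.

```python
# Sketch only: the causal-invariance decomposition for binary outcomes,
# following Cheng's power-PC theory. A candidate cause c and the background b
# are assumed to produce the effect e independently, so their contributions
# combine by noisy-OR:  P(e|c) = q_c + q_b - q_c * q_b,  with q_b = P(e|~c).

def causal_power(p_e_given_c, p_e_given_not_c):
    """Generative causal power of c under the noisy-OR decomposition:
    q_c = [P(e|c) - P(e|~c)] / [1 - P(e|~c)]."""
    return (p_e_given_c - p_e_given_not_c) / (1.0 - p_e_given_not_c)

def delta_p(p_e_given_c, p_e_given_not_c):
    """Additive contrast, the decomposition implicit in linear statistics."""
    return p_e_given_c - p_e_given_not_c

# One cause with a fixed true power, observed in two learning contexts that
# differ only in the background rate of the effect. Noisy-OR recovers the same
# strength in both contexts; the additive contrast does not.
TRUE_POWER = 0.5
for p_background in (0.2, 0.6):
    p_with_c = TRUE_POWER + p_background - TRUE_POWER * p_background  # noisy-OR
    print(f"background={p_background:.1f}  "
          f"noisyOR_power={causal_power(p_with_c, p_background):.2f}  "
          f"deltaP={delta_p(p_with_c, p_background):.2f}")
```

That cross-context constancy is the minimalist sense of “useable” above: the quantity learned in one context remains valid when applied in another.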

4:00 p.m. - 5:00 p.m.
2112 Social Science Plaza A

Is Intransigence Good for Group Learning? presented by Professor Cailin O'Connor | May 26, 2023

A long history of thought holds that stubbornness can be good for science. If individual scientists stick to their theories, even when they are not the most promising, the entire community will consider a wide set of options, engage in debate over alternatives, and, ultimately, develop a better understanding of the world. This talk looks to network modeling to address the question: is intransigence good for group learning? The answer will be nuanced. A diverse set of models show how some intransigence can improve group consensus formation. But another set of results suggests that too much intransigence, or intransigence of a stronger form, can lead to polarization and poor outcomes.
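
The models in question are in the network-epistemology tradition, in which agents test competing options, share their evidence, and update their beliefs. The toy model below is a rough illustration of the trade-off, not one of the talk’s models, and all parameters are hypothetical: here an “intransigent” agent updates only on its own evidence. A few such agents can keep the better option under test after the rest of the group has prematurely abandoned it, while a community made up entirely of such agents may never reach consensus at all.

```python
# Toy model (hypothetical parameters): agents choose between a familiar action
# A (success rate P_BAD) and a novel action B (P_GOOD > P_BAD). Each round,
# agents who currently favor B test it; everyone updates on the pooled results
# by Bayes, except that intransigent agents ignore evidence gathered by others.
import math
import random

N_AGENTS, N_ROUNDS, TRIALS = 10, 200, 10
P_GOOD, P_BAD = 0.55, 0.50  # true success rates of B and A

def run(n_stubborn, seed):
    """Return the fraction of agents who end up favoring the better action B."""
    rng = random.Random(seed)
    belief = [rng.uniform(-1.0, 1.0) for _ in range(N_AGENTS)]  # log-odds for B
    for _ in range(N_ROUNDS):
        results = {i: sum(rng.random() < P_GOOD for _ in range(TRIALS))
                   for i in range(N_AGENTS) if belief[i] > 0}
        if not results:
            break  # everyone has abandoned B; no further evidence arrives
        for i in range(N_AGENTS):
            for j, s in results.items():
                if i < n_stubborn and j != i:
                    continue  # intransigent agents discard others' evidence
                # Log-likelihood ratio for "B's rate is P_GOOD" vs "it is P_BAD"
                belief[i] += (s * math.log(P_GOOD / P_BAD)
                              + (TRIALS - s) * math.log((1 - P_GOOD) / (1 - P_BAD)))
    return sum(b > 0 for b in belief) / N_AGENTS

for k in (0, 3, 10):  # no, some, and only intransigent agents
    avg = sum(run(k, seed) for seed in range(50)) / 50
    print(f"{k} intransigent agents: mean fraction favoring B = {avg:.2f}")
```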

4:00 p.m. - 5:00 p.m.
2112 Social Science Plaza A

Rethinking Language Production using Information Theory presented by Professor Futrell | April 20, 2023

Talking is a complex task in which a speaker must transform a communicative intent into a series of motor actions in real time. Empirical research on language production has revealed that it is a largely incremental process: speakers do not start with a full plan for an utterance, but instead plan it as they go. I present a theory of incremental language production based on a combination of information theory and control theory, where the choice to produce a particular word in context is determined by a policy that maximizes communicative reward subject to a channel capacity constraint on cognitive control. I show that the theory captures human data on errors in word choice, word order preferences based on the “accessibility” of words, and disfluencies.
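
One standard way to formalize such a policy (a sketch under my own assumptions; the talk’s formulation may differ in detail) is to maximize expected communicative reward minus a cost for diverging from a default, automatic word distribution. The exact solution is a softmax reweighting of the default, with the capacity parameter controlling how far the speaker can push production away from habit. In the hypothetical example below, a tight capacity keeps the speaker on the more “accessible” word even though a rival word better fits the intent.

```python
# Sketch: capacity-limited word choice as KL-regularized reward maximization.
# The policy pi maximizing  E[reward] - (1/beta) * KL(pi || default)
# is  pi(word) proportional to  default(word) * exp(beta * reward(word)).
import math

def capacity_limited_policy(default, reward, beta):
    """Softmax reweighting of a default word distribution by communicative
    reward; small beta = little control capacity = stay near the default."""
    weights = {w: p * math.exp(beta * reward[w]) for w, p in default.items()}
    z = sum(weights.values())
    return {w: v / z for w, v in weights.items()}

# Hypothetical numbers: "sofa" is the most accessible word (high default
# probability), but "couch" best conveys the intended meaning (high reward).
default = {"couch": 0.2, "sofa": 0.7, "chair": 0.1}
reward = {"couch": 1.0, "sofa": 0.6, "chair": 0.0}
for beta in (0.5, 2.0, 8.0):
    pi = capacity_limited_policy(default, reward, beta)
    print(f"beta={beta}: " + ", ".join(f"{w}={p:.2f}" for w, p in pi.items()))
```

At low beta the accessible word dominates, reproducing accessibility-driven choices (and, with noise, substitution errors); at high beta the policy converges on the word that best serves the communicative intent.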

Theories in 3D visual perception presented by Professor Pizlo | March 9, 2023


© UC Irvine Center for Theoretical Behavioral Sciences - 2187 Social Science Plaza A, Irvine, CA 92697-5100 - 949.824.7569