Speaker: Jian-Qiao Zhu, postdoctoral researcher, Princeton University

Time: 10:00-11:30 AM, Thursday, Apr 17th

Venue: Room 1113, Wangkezhen Building

Host: Lusha Zhu

Abstract

Bayesian theories have successfully explained a wide array of phenomena in human cognition, spanning perception, intuitive physics, motor control, social reasoning, decision-making, and more. Recently, with rapid advances in training autoregressive language models at scale, a new type of intelligence, distinct from human intelligence, has emerged. Can Bayesian frameworks, which have effectively captured natural intelligence, also provide valuable insights into artificial intelligence? In this talk, I revisit the Bayesian theories that have significantly deepened our understanding of human cognition, highlighting recent extensions that account for non-Bayesian behaviors by relaxing the assumption of exact Bayesian inference. I will then discuss how autoregressive language models can be interpreted through a Bayesian lens, focusing on the implicit priors these models acquire during training. Finally, building on this new viewpoint, I will present early examples of how small language models can inspire novel theoretical insights into human cognition.

Bio

Jian-Qiao Zhu is a postdoctoral researcher in the Computational Cognitive Science Lab at Princeton University, working with Tom Griffiths. He received his PhD in Psychology from the University of Warwick, supervised by Adam Sanborn and Nick Chater. Zhu's research explores the rational and computational foundations of human and machine cognition, with a particular focus on decision-making. He develops Bayesian and neural network models to investigate how people make probability judgments and risky choices, including the Bayesian Sampler (Zhu et al., 2020; 2024, Psychological Review) and Arithmetic-GPT (Zhu et al., 2025, ICLR).