Speaker: Dr. Xing Tian, NYU Shanghai (https://slangscience.github.io/slang/xingtian_cn.html)
Time: 13:00-14:30, June 12th, 2018
Location: #1113, Wangkezhen Building, Peking University
Abstract: We need to link action and perception to interact efficiently with the external world. One such linking mechanism that has been proposed is motor-to-sensory transformation – signals from the motor system are transmitted to the sensory systems within the brain. Motor-to-sensory transformation has been hypothesized as a key mechanism underlying many human cognitive functions, such as speech production and control. Specifically, the auditory consequences of speech production can be predicted via motor-to-sensory transformation, and the predicted speech can be compared with feedback to constrain and update production. In a series of studies, we tested the crucial components of this motor-to-sensory transformation model in speech production and control, including processing dynamics, neural representations and pathways, and interaction with perception. Evidence from behavioral, electrophysiological (EEG/MEG), and neuroimaging (fMRI) studies using novel imagined speech paradigms suggests that motor-to-sensory transformation via a dynamic representational conversion can generate precise auditory predictions at the phonological level, as well as at acoustic levels such as attributes of pitch and even loudness. Moreover, such multi-level prediction can modulate behavioral and neural perceptual responses at the corresponding levels of speech perception. These consistent results suggest that the neural representation induced by motor-to-sensory transformation during speech production converges to the same multi-level representational format as the neural representation established during perception. Such a coordinate transformation between motor and perceptual systems in a top-down predictive process forms the neurocomputational foundation that enables interaction with bottom-up processes to shape our cognition.
Host: Dr. Huan Luo