Project Details
A comparison of top-down effects on phonological processing induced by lexical context and by situational language context: An investigation using electrocorticography and magnetoencephalography
Applicant
Dr. Yulia Oganian
Subject Area
Human Cognitive and Systems Neuroscience
General, Cognitive and Mathematical Psychology
Cognitive, Systems and Behavioural Neurobiology
Term
from 2016 to 2018
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 301795112
A word can sound different depending on the speaker or background noise. Yet, we are highly skilled at extracting stable percepts from noisy and variable acoustic signals. It is well established that this ability rests on the recognition of phonemes, the smallest speech units that alter meaning (consonants and vowels), localized to the left superior temporal gyrus (STG). A fundamental principle of phoneme recognition is that acoustically varying inputs are mapped rapidly and effortlessly onto a finite number of phonemic categories. However, predictions originating from proximate inputs or from situational information, such as the input language, can alter this mapping. An example of such predictive coding is the lexical top-down effect: the same phonetic token can be categorized as /d/ when preceded by /woo/ but as /t/ when preceded by /boo/, resulting in /wood/ and /boot/, respectively. Similarly, due to acoustic differences between these phonemes across languages, the same token might be perceived as /d/ in an English context but as /t/ in a Spanish context. It is currently unknown which neural mechanisms mediate such predictive effects on phonemic representations in the left STG, although describing these mechanisms is pertinent to neural models of speech recognition. Until recently, such endeavors were hindered by the limited resolution of available methods in human neuroscience, which lack sufficient spatial (e.g., magnetoencephalography, MEG) or temporal (magnetic resonance imaging) resolution. This changed with the development of human electrocorticography (ECoG), in which neural activity is recorded subdurally from a confined cortical area with high spatiotemporal resolution.
I propose to employ whole-brain MEG and ECoG of the posterior STG in five studies designed to investigate the neural mechanisms mediating the effects of lexical and situational language context on phoneme recognition. Study 1 will employ MEG to describe the timing of integration between acoustic input and lexical expectations in phonemic categorization. Study 2 will employ ECoG to further identify the spatiotemporal changes underlying these lexical top-down effects. Studies 3 and 4 will elucidate differences between the neural encoding of first (L1) and second (L2) language phonology and phonemic sequence probabilities; for this, ECoG signals will be recorded from Spanish-English bilinguals listening to sentences in their L1 and L2. A fifth MEG study will compare the effects of lexical and language contexts on phonemic categorization. Subjects will categorize phonetic tokens embedded either in lexical context or in language context alone, in both L1 and L2, thus making these two sources of predictions independent. The crucial contrast, between conditions in which lexical context and language context induce the same percept, will directly pit their underlying mechanisms against each other. Overall, this research program will lay the foundations for neural models of predictive effects on speech perception.
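As an illustration of how the lexical top-down effect described above is commonly quantified behaviorally, the sketch below fits a logistic psychometric function to /t/-response rates along a voice-onset-time (VOT) continuum and compares the category boundary across lexical contexts. This is a minimal, hypothetical example: the VOT values, response rates, and analysis choices are assumptions for illustration only and are not taken from the project itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Psychometric function: probability of a /t/ response at a given VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical /t/-response rates for the same VOT continuum heard
# after /woo/ (bias toward "wood", i.e. /d/) vs. after /boo/ (bias toward "boot", i.e. /t/).
vot_steps = np.linspace(0, 60, 7)
p_t_after_woo = np.array([0.02, 0.05, 0.10, 0.30, 0.70, 0.92, 0.98])
p_t_after_boo = np.array([0.05, 0.12, 0.35, 0.72, 0.90, 0.97, 0.99])

# Fit one psychometric function per context; the fitted boundary is the VOT
# at which /d/ and /t/ responses are equally likely.
(boundary_woo, _), _ = curve_fit(logistic, vot_steps, p_t_after_woo, p0=[30.0, 0.2])
(boundary_boo, _), _ = curve_fit(logistic, vot_steps, p_t_after_boo, p0=[30.0, 0.2])

print(f"Category boundary after /woo/: {boundary_woo:.1f} ms VOT")
print(f"Category boundary after /boo/: {boundary_boo:.1f} ms VOT")
print(f"Lexically induced boundary shift: {boundary_woo - boundary_boo:.1f} ms")
```

In such an analysis, a boundary shift between contexts (here, a later /d/-/t/ boundary after /woo/ than after /boo/) is the behavioral signature of the lexical top-down effect that the proposed MEG and ECoG studies would relate to neural activity in the STG.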
DFG Programme
Research Fellowships
International Connection
USA