Abstract
A fundamental property of spoken language comprehension is the rapid recognition of words and their integration into the prior discourse, which in turn constrains the upcoming speech. Beyond the incremental interpretation of adjacent words, the challenge is to understand how discontinuous words are integrated, as in garden-path sentences (e.g., "The dog walked in the park was brown"). To discover the timing (when) and neural location (where) of the key computations (what) involved in processing discontinuous dependencies, we combined time-resolved, source-localised EEG/MEG signals with probabilistic language models quantifying different aspects of incremental processing, built from corpora, NLP models, and human behavioural data, using brain–model correlation techniques (representational similarity analysis, RSA). We show that the initial semantic–syntactic integration of "The dog walked", with the noun interpreted as the subject of the verb, engages bilateral fronto-temporal regions and constrains the subsequent integration of the final verb "was", which recruits left-lateralised language-relevant and domain-general regions.
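As a rough illustration of the brain–model correlation (RSA) approach mentioned above, the following Python sketch builds a model-based and a neural representational dissimilarity matrix (RDM) over a set of items and rank-correlates them at a single time point. All array shapes and variable names are hypothetical; this is a minimal sketch of the general technique, not the authors' analysis pipeline.

```python
# Minimal sketch of RSA-style brain-model correlation (hypothetical data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_items = 40        # hypothetical number of sentence epochs (items)
n_model_dims = 50   # hypothetical dimensionality of a language-model measure
n_sources = 64      # hypothetical number of source/sensor channels at one time point

# Hypothetical model-derived and neural response patterns, one row per item.
model_features = rng.standard_normal((n_items, n_model_dims))
neural_patterns = rng.standard_normal((n_items, n_sources))

# RDMs as condensed vectors of pairwise correlation distances between items.
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# Rank-correlate the two RDMs: the RSA statistic for this single time point;
# repeating this over time points/regions yields the "when" and "where" maps.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RSA correlation: rho={rho:.3f}, p={p:.3f}")
```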