
Language & Cognition (LangCog)
Hosted by the Psychology Language Development Labs (Snedeker Lab & Bergelson Lab) and the Meaning and Modality Lab (Davidson Lab)
Tuesdays, 5:30 – 7:00 pm ET in William James Hall 1550 (Harvard University)
Organizers: Briony Waite (bwaite@g.harvard.edu) and Yiqian Wang (yiqian_wang@g.harvard.edu)
Supported by the Mind Brain Behavior Interfaculty Initiative
Spring 2026 Schedule
| Date | Speaker and Talk Title | Location | Abstract |
|---|---|---|---|
| Jan 27 | No meeting | | |
| Feb 3 | Jingyi Wu: Is Language a Window to Human Mind? Evidence from Linguistic and Visual Categorization | WJH 1550 | See below ↓ |
| Feb 10 | Aditya Yedetore: Classical Computation and Connectionist Models | WJH 1550 | See below ↓ |
| Feb 17 | Elizabeth Coppock: Unifying dependent-indefinite and independent-universal reduplicated numerals in Newar | WJH 1550 | See below ↓ |
| Feb 24 | Tom McCoy: Using neural networks to test hypotheses about language acquisition (canceled due to weather) | | See below ↓ |
| March 3 | Johanna Alstott: A cautionary note on word learning tasks and presupposition triggering | WJH 1050 (different room) | See below ↓ |
| March 10 | Daria Bikina: Uniqueness without articles: what Russian bare singulars tell us about (in)definiteness | WJH 1550 | See below ↓ |
| March 17 | Spring break, no meeting | | |
| March 24 | Anna Papafragou: Pragmatics and the acquisition of taxonomic nouns | WJH 1550 | See below ↓ |
| March 31 | Andrea de Varda: Modeling human language(s) and higher-level cognition with large language models | WJH 1050 (different room) | See below ↓ |
| April 7 | Hope Kean: The Neural Architecture of Human Reasoning | WJH 1550 | See below ↓ |
| April 14 | Rhiannon Luyster: Curiosity, creativity, and language in autism | WJH 1550 | See below ↓ |
| April 21 | Mélissa Berthet: Bonobo vocal communication and the evolution of compositionality | WJH 1050 (different room) | See below ↓ |
| April 28 | Moshe Poliak | WJH 1550 | |
Talks in Spring 2026
Apr 21
Bonobo vocal communication and the evolution of compositionality
Speaker: Mélissa Berthet, University of Milan, Postdoc in the Department of Philosophy
Abstract:
Compositionality – the capacity to combine meaningful elements into larger structures whose meaning depends on the meanings of the parts and the way they are combined – is a hallmark of human language. In this talk, I will present new findings on the compositional capacities of wild bonobos. Specifically, by conducting a comprehensive investigation of meaning and adapting methods from linguistics, I will show that bonobo vocal communication extensively relies on compositionality. This suggests that the ability to construct complex meanings from smaller vocal units was already present in our ancestors at least 7 million years ago.
Apr 14
Curiosity, creativity, and language in autism
Speaker: Rhiannon Luyster, Emerson College, Professor in the Department of Communication Sciences and Disorders
Abstract:
The emergence of curiosity, creativity, and language is a foundational hallmark of early development. In the context of autism research, language is commonly considered, but creativity and curiosity are not. This presentation will explore examples of curiosity and creativity in autism, particularly in the context of language development, with an eye toward how we might adopt a strength-based approach to communication and cognitive development in autism.
Apr 7
The Neural Architecture of Human Reasoning
Speaker: Hope Kean, MIT, Postdoc in the Department of Brain & Cognitive Sciences
Abstract:
Humans possess a remarkable ability for abstract logical reasoning, such as drawing conclusions from premises and generalizations from examples. The neural basis of this reasoning remains poorly understood, particularly its relationship to language. We addressed this question using three complementary approaches: fMRI in healthy adults performing inductive and deductive reasoning tasks, behavioral testing in individuals with severe language impairment (aphasia), and precise neuroimaging in individuals with atypical brain anatomy.
Our findings reveal a surprising neural architecture for logical thought. First, we identify a network of anterior frontal brain regions specialized for abstract formal reasoning that robustly dissociates from the language network, the domain-general Multiple Demand network, and other high-level cognitive systems. Second, we demonstrate that the language network is not engaged during logical reasoning, and that individuals with profound aphasia show intact performance on logical reasoning tasks, providing further evidence that linguistic representations are neither utilized nor required for inductive or deductive reasoning. Finally, we show that this functional specificity, including that of the logic network, is preserved even in individuals with severe cortical deformation from brain cysts. This suggests that functional modularity is a fundamental organizing principle of neural architecture, robust even to extreme anatomical variation.
Mar 31
Modeling human language(s) and higher-level cognition with large language models
Speaker: Andrea de Varda, MIT, Postdoctoral fellow at the McGovern Institute for Brain Research
Abstract:
Large language models (LLMs) have recently emerged as powerful candidates for modeling several domains of human cognition. Because they operate over natural language, they provide flexible representations that can be evaluated against human behavior and brain activity. In this talk, I will present a set of studies that use LLMs to test how far this modeling approach can go—first in the domain of language, and then in the domain of reasoning.
In the first part, I ask whether multilingual language models can explain how the human brain processes the extraordinary diversity of the world’s languages. Using fMRI data from native speakers of 21 languages spanning 7 language families, we show that model embeddings reliably predict brain responses within languages and, crucially, transfer zero-shot across languages and language families. These results point to a shared representational component in the human language network, largely driven by semantic content, that aligns with the representations learned by multilingual models.
In the second part, I ask whether LLMs can also serve as models of the broader organization of human higher-level cognitive systems, including reasoning. First, analyzing large reasoning models (LLMs further trained to solve problems by generating chains of thought), we show that the number of reasoning tokens they use predicts human reaction times across seven diverse reasoning tasks. Second, we show that the internal organization of those models mirrors well-known observations about how the human brain allocates resources to cognitive functions, including a separation among purely linguistic processes, domain-general reasoning, and domain-specific reasoning.
Together, these studies show that models optimized on language can capture human brain responses to linguistic input across diverse languages, and reasoning-trained variants of these models can mirror the costs of higher-order cognition and the functional organization of those systems in the human brain.
Mar 24
Pragmatics and the acquisition of taxonomic nouns
Speaker: Anna Papafragou, University of Pennsylvania, Professor of Linguistics
Abstract:
A long-standing idea in language acquisition is that the “basic” level of taxonomic nouns (“dog”) is privileged over both the broader, superordinate level (“animal”) and the more specific, subordinate level (“dalmatian”). This asymmetry has often been attributed to the perceptual naturalness of basic-level categories. In this talk, I suggest that children’s use and comprehension of taxonomic nouns do not necessarily reflect conceptual factors but often reveal pragmatic pressures that are also active in adult communicators. From this perspective, what learners find easy or hard to learn is determined not (only) by what is easy or hard to conceptualize but by what sorts of distinctions are more or less likely to be pragmatically useful (e.g., informative). I show that asymmetries in the acquisition of noun taxonomies can be understood in terms of children’s developing ability to take into account pragmatic principles. This approach leads to a rethinking of foundational assumptions in word learning, and reveals the pervasive role of pragmatics in early semantic development.
Mar 10
Uniqueness without articles: what Russian bare singulars tell us about (in)definiteness
Speaker: Daria Bikina, Harvard University, PhD candidate in the Department of Linguistics
Abstract:
Languages like English mark (in)definiteness overtly with articles, but many languages, including Russian, lack articles altogether. A natural assumption would be that in such languages, bare nouns freely alternate between definite and indefinite readings. A closer look shows that the situation is more nuanced: even in the absence of articles, bare nouns do not uniformly correspond to both definites and indefinites. Theoretical accounts diverge sharply. Some claim that bare singulars encode uniqueness and thus are definites (or kind-denoting expressions). Others argue that they are always existential, and that definite readings arise through some pragmatic mechanism. A third class of approaches treats them as ambiguous, deriving (in)definiteness effects from information structure.
In this talk, I argue that Russian bare singulars systematically show sensitivity to uniqueness, a component of meaning considered central for definite readings. Across three experiments manipulating uniqueness, familiarity, and word order, I show that bare singulars are sharply degraded under forced non-uniqueness when domain restriction is blocked. Apparent null effects are only observed in contexts that allow the domain of restriction to be pragmatically narrowed. These findings challenge fully existential accounts and suggest that bare singulars encode a uniqueness constraint whose empirical visibility depends on how the contextual domain is constructed.
Mar 3
A cautionary note on word learning tasks and presupposition triggering
Speaker: Johanna Alstott, MIT, PhD student in Linguistics
Abstract:
A central topic in linguistic semantics is presupposition: certain words can be felicitously used only if a certain piece of information (the presupposition) is already known to the interlocutors. Cross-linguistically, some kinds of meanings tend to be presupposed more often than others, but the reason why remains an open question. For example, predicates with initial-state and change-of-state components tend to encode them as presupposition and assertion, respectively, rather than the reverse. Recently, Bade, Schlenker, and Chemla (2024), henceforth BSC24, have argued on the basis of a series of artificial word learning experiments that this cross-linguistic asymmetry in presupposition triggering reflects conceptual biases privileging changes of state over initial states. In their experiments, they gauged how participants encoded the initial-state and change-of-state entailments of a nonce verb wug, and they interpret their results as suggesting that participants treated the initial state as presupposed and the change of state as asserted. This finding, they argue, favors their conceptual-bias hypothesis over competing accounts. In this work, we further test the validity of BSC24’s paradigm by trying to replicate their original effect and, in parallel, by ascertaining whether their results generalize to another nonce predicate. We not only fail to extend BSC24’s results to our new nonce word but also fail to replicate their original effect: our participants treated BSC24’s wug and our new nonce word as non-presuppositional. Furthermore, a closer look at BSC24’s original studies suggests that non-presuppositional construals were common there, too. We discuss several reasons why this could have been the case. All told, our outlook is pessimistic: adult artificial word-learning paradigms do not, in fact, illuminate the mechanisms of presupposition triggering.
Feb 24
This talk was canceled due to weather-related travel disruptions.
Using neural networks to test hypotheses about language acquisition
Speaker: Tom McCoy, Yale University, Assistant Professor in the Department of Linguistics
Abstract:
A central challenge in linguistics is understanding how children acquire their first language. A variety of influential hypotheses have been put forth about the learning strategies that children might use (e.g., the syntactic bootstrapping hypothesis: Gleitman 1990). Such hypotheses have received important empirical support in controlled experimental settings (e.g., Naigles 1990), but it is challenging to test whether these hypotheses continue to hold for children’s large-scale, naturalistic input because it would be unethical to perform extensive interventions on a child’s primary linguistic data. In this talk, I will discuss how neural network language models – the type of system powering ChatGPT – can be used to test hypotheses about which strategies are effective for learning from naturalistic child-directed language, providing a source of evidence that complements the controlled experiments that can be run with actual children. I will discuss three case studies using this paradigm, one focusing on the syntactic bootstrapping hypothesis, one focusing on the poverty of the stimulus argument as it pertains to English yes/no questions, and one focusing on syllable structure in Optimality Theory. Taken together, these projects illustrate one way in which neural network models can contribute to linguistic theory. (Work done in collaboration with Xiaomeng Miranda Zhu, Aditya Yedetore, Erin Grant, Robert Frank, Tal Linzen, and Tom Griffiths).
Feb 17
Unifying dependent-indefinite and independent-universal reduplicated numerals in Newar
Speaker: Elizabeth Coppock, Boston University, Associate Professor of Linguistics
Abstract:
Newar (also known as Nepal Bhasa) is a Sino-Tibetan language spoken in the Kathmandu Valley region of Nepal with a rich classifier system. Classifier-affixed numerals can be reduplicated to produce a distributive reading, in a manner familiar from the literature on reduplicated numerals in Telugu, Hungarian, and Kaqchikel. For example, “My sons caught three-CLF.ANIM three-CLF.ANIM fish” (where CLF.ANIM = animate classifier) means that the sons caught three fish each. In this way, reduplicated numerals produce what is known as “dependent indefinites”, that is, indefinites that depend on the presence of a higher-scoping quantificational operator. But in addition, Newar reduplicated numerals have universal uses, which do not depend on having a quantificational element elsewhere in the sentence, as in “One one letter is correct”, meaning “every letter is correct”. Thus reduplicated numerals in Newar have both ‘dependent-indefinite’ and what we might call ‘independent-universal’ uses. I offer a way of unifying these two uses in a semantics that relies on sequences. This sequence-based analysis offers an iconic treatment of the semantics of reduplication, where the repetition in form is reflected by repetition in the meaning. This is the main point, but there is also a side point: in the course of developing the analysis, we are forced to confront the question of whether to give a Chierchia-like or Krifka-like constituency for the classifier construction; I advocate a structure where the classifier and the numeral form a unit to the exclusion of the noun, à la Krifka.
Feb 10
Classical Computation and Connectionist Models
Speaker: Aditya Yedetore, Boston University, PhD student in Linguistics
Abstract:
We hypothesize that Connectionist models that succeed at symbol manipulation tasks do so by employing Classical representations and processing mechanisms. To evaluate this hypothesis, we pursue two complementary lines of inquiry, one theoretical and one empirical. On the theoretical side, we develop a precise account of Classical computation together with a formal language for specifying and reasoning about Classical representations and processes. Using this framework, we define Compositionality, Systematicity, and Productivity, and show how Compositional, Systematic, and Productive patterns of generalization entail a Classical computational architecture. On the empirical side, we train Connectionist models on a simple symbol manipulation task and test whether their behavior exhibits signatures of Classical computation. Consistent with the theoretical predictions, we find evidence that successful models instantiate core aspects of Classical systems.
Feb 3
Is Language a Window to Human Mind? Evidence from Linguistic and Visual Categorization
Speaker: Jingyi Wu, PhD candidate in Linguistics at Zhejiang University; Visiting Scholar at the UCLA Visual Intelligence Lab
Abstract:
Do linguistic structures reflect the organization of human conceptual representations? While language allows speakers to describe the same event in multiple ways, prior work shows that verb meanings and argument realization are systematically constrained, raising the possibility that linguistic form mirrors underlying event cognition. Here, we investigate whether event categories are consistently represented across linguistic and visual modalities. Across two experiments, participants were asked to categorize event types using either linguistic descriptions or short video clips depicting the same classes of events. We find that humans reliably and consistently categorize events in both modalities, exhibiting parallel patterns of categorization across language and vision. By contrast, state-of-the-art computational vision models fail to reliably recover these event categories from the same visual inputs. These results suggest that linguistic and visual event processing draw on a shared underlying system of event representation, providing converging evidence that language offers a window into the structure of human event cognition.