I'm a second-year graduate student in the Linguistics department at the University of Maryland, College Park, where I'm also part of the Language Science Center. Broadly, I'm interested in meaning, its acquisition, and the relationship between linguistic and conceptual structure. Using tools from formal semantics, psycholinguistics, and psychophysics, I'm looking into the lexical specifications and acquisition of quantifiers, as well as how they interface with extralinguistic cognition. I'm advised by Jeff Lidz and Paul Pietroski.
First- & Second-order Quantifiers: Universal quantifiers like each, every, and all are expressible with the tools of either first- or second-order logic. So how are they in fact represented in speakers' minds? Put another way, does the meaning of every highlight sets or individuals? With Jeff Lidz, Paul Pietroski, and Justin Halberda, I'm developing a set of experimental diagnostics that try to answer this question. The idea is that, all else being equal (task, participants, truth conditions), preferences for individual- or set-based verification strategies reflect underlying first- and second-order representations, respectively. For example, participants use an individual-based strategy to evaluate a statement like "each of the big dots is blue" but switch to a set-based strategy when evaluating a statement like "all of the big dots are blue". We think a change in representational format (first- vs. second-order) is to blame.
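As a toy illustration (my own sketch, not the experimental task code), the two hypothesized verification procedures can be written out for a sentence like "every big dot is blue": a first-order procedure checks individuals one at a time, while a second-order procedure builds sets and tests subset-hood. The scene representation and function names here are invented for exposition:

```python
def verify_first_order(dots):
    """Individual-based strategy: inspect each big dot in turn."""
    for dot in dots:
        if dot["big"] and not dot["blue"]:
            return False
    return True

def verify_second_order(dots):
    """Set-based strategy: build the set of big dots and the set of
    blue dots, then test whether the first is a subset of the second."""
    big = {d["id"] for d in dots if d["big"]}
    blue = {d["id"] for d in dots if d["blue"]}
    return big <= blue  # subset test: big ⊆ blue

# A hypothetical display: two big blue dots and one small non-blue dot.
scene = [
    {"id": 1, "big": True, "blue": True},
    {"id": 2, "big": True, "blue": True},
    {"id": 3, "big": False, "blue": False},
]
```

On any scene the two procedures deliver the same verdict, since the truth conditions are identical; the experimental question is which procedure speakers actually deploy.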
More & Most: Relatedly, we've been looking at how the meanings of more and most bias different visual search and memory encoding strategies. One upshot is that when evaluating statements like "more of the dots are blue", adults and kids represent the focused (blue) and non-focused (non-blue) sets and perform a direct comparison. When evaluating statements like "most of the dots are blue", on the other hand, people attend to and represent the focused set (blue dots) and the superset (all dots) and perform a proportional comparison. In displays with only two colors, this is a suboptimal strategy, since it introduces more noise into the number estimates than the simple direct comparison would! With Athena Wong, we've begun to extend these predictions to Cantonese quantifiers. We take this to be good evidence (1) for a specific meaning specification of proportional quantifiers like most and (2) for the idea that meaning carries some weight in deciding which verification strategy gets deployed. People don't always take the cognitively easiest or otherwise superior route.
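To make the noise argument concrete, here is a hypothetical simulation (my own sketch; the Weber fraction and the Gaussian noise model are illustrative assumptions in the spirit of Approximate Number System models, not the actual experimental parameters). The direct strategy compares two noisy cardinality estimates; the proportional strategy also needs an estimate of the superset plus a subtraction, which compounds the noise:

```python
import random

WEBER = 0.2  # assumed Weber fraction (illustrative, not fit to data)

def estimate(n):
    """ANS-style noisy cardinality estimate: Gaussian, sd grows with n."""
    return random.gauss(n, WEBER * n)

def direct(n_blue, n_nonblue):
    """'more of the dots are blue': compare blue against non-blue
    directly, using two noisy estimates."""
    return estimate(n_blue) > estimate(n_nonblue)

def proportional(n_blue, n_nonblue):
    """'most of the dots are blue': compare blue against (all dots
    minus blue); estimating the superset and subtracting adds noise."""
    blue = estimate(n_blue)
    total = estimate(n_blue + n_nonblue)
    return blue > total - blue

def accuracy(strategy, n_blue=10, n_nonblue=8, trials=5000):
    """Proportion of simulated trials answered correctly (blue really
    is the majority set in this display)."""
    return sum(strategy(n_blue, n_nonblue) for _ in range(trials)) / trials

random.seed(1)
print(accuracy(direct), accuracy(proportional))  # direct comes out more accurate
```

Under these toy assumptions the direct comparison is reliably more accurate on two-color displays, which is why sticking with the proportional strategy for most is informative about the underlying meaning rather than about task demands.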
Event Concepts & Verb Learning: I'm working with Laurel Perkins, Mina Hirzel, Alexander Williams, and Jeff Lidz to identify events -- like x taking y from z -- that infants view under a 3-participant concept but that adults often describe with transitive clauses like "The girl took the truck". The ultimate goal is to better understand how learners relate the arguments in a given clause to the participants in the event that clause describes, and eventually to give an account of how they use this information to acquire verb meanings with the help of syntactic bootstrapping.
Pre-UMD: Before coming to Maryland I studied Cognitive Science at Johns Hopkins. I was fortunate enough to work with Justin Halberda on a number of projects, some of which were related to the Approximate Number System and its interface with language. I also had the opportunity to work with Akira Omaki and Emily Atkinson on a project investigating the relationship between working memory and parsing.
Click the icons for PDFs of abstracts and posters.