Welcome
I am an assistant professor of Computer Science at Wellesley College.
Research
My research focuses on the intersection of language, cognition, and computation.
Within computer science, much of my work focuses on evaluating large language models for natural language and code generation. What are the abilities and limitations of LLMs? Can LLMs help non-expert programmers? I study these questions from a variety of angles, including model development, benchmarking and evaluation, and human-computer interaction studies.
Within linguistics, I study how context-sensitive meaning is encoded in natural language. I build computational models of how conversation participants use knowledge about each other's mental states, and I use psycholinguistic methods to understand how people select context-sensitive expressions.
News
- I'm on junior research leave for Fall 2024 through Spring 2025
- New preprint: Substance Beats Style: Why Beginning Students Fail to Code with LLMs
- October 2024: MultiPL-T presented at OOPSLA 2024
- October 2024: Can It Edit? presented at COLM 2024
- September 2024: AustenAlike accepted to NLP4DH at EMNLP 2024
- August 2024: StudentEval presented at ACL 2024
- August 2024: Two papers presented at the TeachNLP workshop at ACL 2024
- New preprint: GlyphPattern, a benchmark for abstract pattern recognition in VLMs
- June 2024: Non-Expert Programmers in the Generative AI Future presented at CHIWORK 2024
- June 2024: I co-presented Manuscript Connections at the Computer Vision and Art History Today convening
- June 2024: Parenthesized Modifiers in English and Korean: What They (May) Mean presented at ELM 3
- May 2024: What Parenthesized Modifiers (May) Mean presented at HSP 2024
- May 2024: How Beginning Programmers and Code LLMs (Mis)read Each Other presented at CHI 2024