Since coming to JHU, I have become increasingly involved in computational cognitive science, with a focus on interpretation. This is to some degree a return to my roots: I have an undergraduate degree in Computer Science. I’m interested in three main questions in computational semantics and Natural Language Understanding:

  • What does a computationally implemented theory of linguistic semantics/pragmatics look like?

By “computationally implemented”, I mean that the theory is expressed not just mathematically but as running code. To address this first question, I have worked on building implemented versions of such models and on providing tools for others to do so; a minimal sketch of the idea appears below. See The Lambda Notebook project for more details.
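To make “running code” concrete, here is a small illustrative sketch in plain Python of one piece of such a theory: model-theoretic denotations represented as ordinary values and functions, composed by Functional Application. This is purely a sketch under my own assumptions, not the Lambda Notebook’s actual API; all names in it are hypothetical.

    # Illustrative sketch only; not the Lambda Notebook's API (names are hypothetical).

    # Denotations: type e as strings (individuals), type <e,t> as Python functions
    # from individuals to booleans, relative to a toy model.
    alfonso = "alfonso"                      # [[Alfonso]], type e

    def dance(x):                            # [[dance]], type <e,t>
        """True of the dancers in the toy model."""
        return x in {"alfonso", "joanna"}

    def function_application(f, a):
        """Compose two denotations by Functional Application."""
        return f(a)

    # [[Alfonso dances]] = [[dance]]([[Alfonso]])
    print(function_application(dance, alfonso))   # -> True

A full implementation would add a type system that checks which composition rules apply, and symbolic (rather than purely model-theoretic) lambda terms, but even this much shows the target: every step of a semantic derivation is something you can execute and inspect.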

  • Can converging evidence from computational models and NLP-oriented data sets contribute to linguistic theory, and if so, how?
  • Can linguistic theory contribute to applied Natural Language Understanding projects, and if so, how?

I have addressed these questions in collaboration, especially with Ben Van Durme and Aaron Steven White. See also The Decompositional Semantics Initiative for more details on the overarching project, as well as my work on Lexical Semantics and the semantics of Clause-embedding predicates (Reisinger et al. 2015; White et al. 2016; White & Rawlins 2016; White, Rawlins & Van Durme 2017; White & Rawlins 2018; White et al. 2018; White et al. 2019; White & Rawlins 2020). As part of this work, I have been a co-PI on collaborative grants funded by DARPA (under the LORELEI, AIDA, and KAIROS programs) and by the NSF.

Related teaching: I have occasionally taught seminars within Cognitive Science on these topics, and every few years Ben Van Durme and I offer a graduate seminar called Event Semantics in Theory and Practice, which brings together linguistic event semantics with NLU annotation and models. At ESSLLI 2017, Aaron Steven White and I co-taught an advanced course on computational lexical semantics. At NASSLLI 2020, I will teach a 3-day mini-course on implementing semantic compositionality.

  1. Reisinger, D., Frank Ferraro, Craig Harman, Rachel Rudinger, Kyle Rawlins & Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the ACL 3. 475–488. DOI: 10.1162/tacl_a_00152
  2. White, Aaron Steven & Kyle Rawlins. 2016. A computational model of S-selection. In Mary Moroney, Carol-Rose Little, Jacob Collard & Dan Burgdorf (eds.), Proceedings of SALT 26, 641–663. DOI: 10.3765/salt.v26i0.3819
  3. White, Aaron Steven & Kyle Rawlins. 2018. The role of veridicality and factivity in clause selection. In Sherry Hucklebridge & Max Nelson (eds.), Proceedings of NELS 48. Download: https://ling.auf.net/lingbuzz/004012
  4. White, Aaron Steven & Kyle Rawlins. 2020. Frequency, acceptability, and selection: a case study of clause-embedding. Glossa 5(1). 1–41. DOI: 10.5334/gjgl.1001
  5. White, Aaron Steven, Kyle Rawlins & Benjamin Van Durme. 2017. The semantic proto-role linking model. In Proceedings of the European Chapter of the Association for Computational Linguistics, 92–98. ACL. Download: https://www.aclweb.org/anthology/E17-2015/
  6. White, Aaron Steven, D. Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins & Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 1713–1723. ACL. DOI: 10.18653/v1/D16-1177
  7. White, Aaron Steven, Rachel Rudinger, Kyle Rawlins & Benjamin Van Durme. 2018. Lexicosyntactic Inference in Neural Models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4717–4724. Association for Computational Linguistics. Download: https://aclweb.org/anthology/D18-1501/
  8. White, Aaron Steven, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, Kyle Rawlins & Benjamin Van Durme. 2019. The Universal Decompositional Semantics Dataset and Decomp Toolkit. Download: https://arxiv.org/abs/1909.13851