This page attempts to give an overview of what it is that I do, given no or little background in linguistics. The page is aimed at roughly three groups of people: (i) family members and people who knew me before I started doing this, (ii) undergraduates who are considering taking a course in linguistics or semantics and are wondering what it might involve, and (iii) prospective graduate students wondering how semantics might fit into cognitive science.
What is linguistics?
Natural languages are full of patterns. Many of these patterns are specific to human languages; if you were designing an “ideal” communication system, you wouldn’t necessarily include them. We also find great systematicity in these patterns: many recur again and again across a range of unrelated languages, and many recur again and again in novel utterances by speakers of a particular language. (By “novel” I mean the utterance of a sentence that the speaker has never heard or produced before in their life.)
An example of the first kind of systematicity is that all languages (as far as I know) have the part of speech that is standardly called a verb. An example of the second kind of systematicity is that even when uttering a novel sentence (for example, something improbable like The auditorium contains seventeen large elephants), you will always find yourself putting any determiners (the, seventeen) before their corresponding nouns. You will never utter a sentence like auditorium the contains large elephants seventeen. There are of course other languages where this ordering doesn’t hold (e.g. Thai, where various determiner-like elements appear following the noun).
One major goal of linguistics is to find, model, and explain the systematicity of such patterns. In doing so, we aim at models of a speaker’s natural language competence and performance: their ability to understand and produce utterances, and especially novel utterances. Another way of putting this is that we aim to model the grammars of natural languages.
My specific interest is in interpretation, i.e. semantics/pragmatics. Given the “auditorium” example above, an interpreter of English could look at a room and immediately judge whether such a sentence is true or false (or perhaps, not appropriate) relative to that room. In fact, even without a room, you can estimate the probability of such a sentence being true, and if you heard someone seriously utter such a sentence in context, you would be able to draw some inferences about that person and their goals. (Without context, for example when overhearing, we can even draw inferences about what the context might be: perhaps they are doing a logic puzzle or playing a game?) What goes into these interpretive abilities? How do you hear (or read) the sequence of sounds/words in novel combinations and come out with inferences about other agents and the world? These are the core questions that semantics/pragmatics addresses.
Why do it?
Language is central to much human behavior. Not only that, it is highly human-specific. While there are instances of animal communication, most researchers agree that these do not have the properties of a human language (e.g. they lack recursion, compositionality, and so on). Consequently, it is commonly held that humans have a specialized cognitive “module” dedicated to acquiring the ability to speak a natural language. By studying language we are studying a core aspect of human cognition, and can also gain insight into the structure of cognition in general (e.g. issues of how cognitive modules interact, and what it means for cognition to be modular).
On the more practical side, models of the human ability to produce and understand language can inform computational models used in natural language understanding, question answering, machine translation, and related tasks. Purely statistical or machine-learning-based approaches in these domains can go far, but I do not believe they can achieve human-level competence without empirically adequate models of human competence itself.
Why cognitive science?
Or, why am I, with degrees in linguistics, working in a cognitive science department? Linguistics as a discipline more typically ends up in the humanities or in the social sciences. Certain sub-fields are probably correctly still classified this way. But mainstream theoretical linguistics is not a humanities discipline; I think it falls squarely within the empirical/natural sciences. Its domain of data is the communicative activities of human beings. Moreover, I think linguistics is best conceived of as a part of cognitive science, where the goal is to understand the mind and cognition through a range of methodologies (theorizing, computational/mathematical modeling, and experimentation). While many linguists approach grammars as more abstract mathematical objects, I prefer to keep in mind that there must be a close correspondence between any abstract grammar and an implementation of that grammar in the mind/brain at multiple levels of abstraction. The cognitive science department at JHU has researchers working on topics such as language, writing systems, and vision with a diverse array of perspectives and approaches; theoretical linguistics is an important part of this array.