What is the structure of language, meaning, and thought? Many scientists believe that the three are closely related: As arbitrary as grammar often seems, much of it is dictated by the meaning of the sentence, and the way meaning and grammar interact tells us a great deal about how the mind is structured.
Through the VerbCorner Project, professional scientists and amateurs are working together to determine how grammar and meaning interact and, through that, to open a window onto the workings of the human mind.
Verbs are at the heart of both meaning and grammar, so we have started with them. The project is divided into a series of tasks, each focused on a particular component of verb meaning. Each task has a fanciful backstory -- which we hope you enjoy! -- but at its heart, each probes a component of meaning that scientists believe is particularly important in the structure of language and thought. You can read about the components currently under investigation here.
Why can't we just look up the meanings of verbs in a dictionary? Dictionaries are a useful tool for humans, but as with any tool, much of the work is done by the person wielding it. When you look up a word, all you get is a definition, which itself consists of more words. To find out what those mean, you would have to look them up too, getting only more words, and so on. In any case, most scientists do not believe that linguistic meaning is structured like a dictionary, with words having definitions. There are a number of hypotheses as to how meaning is structured, and distinguishing among them is part of what the VerbCorner Project is investigating.
Ultimately, we hope to probe dozens of aspects of the meaning of thousands of verbs. This is a massive project, which is why we need your help! We will be sharing the results of this project freely with scientists and the public alike, and we expect it to make a valuable contribution to linguistics, psychology, and computer science.
You can learn more about the project in the FAQ below, on the blog, and in the forum.
This research was approved by the MIT Committee on the Use of Human Subjects. You may contact them with questions that are not addressed by the research team (see here).
Two reasons. One is to make the tasks more interesting, but that is not the most important reason. What you are doing in these tasks is providing subtle linguistic judgments about abstract aspects of meaning. This is actually very hard for humans to do (there is science to show this), and it is something linguists have to train to do. Yet we are recruiting amateur volunteers, many of whom have no training.
It turns out (again, there is science to show this) that when you embed an abstract question in a "real life" scenario, people do a much better job of answering it. Each of the tasks you see is the result of extensive testing: We tried many different stories until we found one that helped people do the task successfully.
The purpose for each of the current tasks is listed here.
The VerbCorner Project is very large, so we have divided it into a series of phases. For instance, the first phase consisted of checking six different components of meaning for each of 641 verbs in all the different sentence contexts in which those verbs can appear. On average, 3-4 people checked each component in each case. You can read about the results here. For Phase 2, we added 427 verbs and an additional task.
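To get a rough sense of the scale, here is a back-of-the-envelope calculation in Python. The figure of eight sentence frames per verb is purely an illustrative assumption, not a project statistic.

    # Rough estimate of the number of individual judgments in Phase 1.
    verbs = 641               # verbs covered in Phase 1
    components = 6            # components of meaning checked for each verb
    frames_per_verb = 8       # ASSUMED average; the real figure varies by verb
    judgments_per_item = 3.5  # "on average, 3-4 people checked each component"

    total = verbs * components * frames_per_verb * judgments_per_item
    print(f"roughly {total:,.0f} judgments")  # about 108,000 under these assumptions

Even under modest assumptions, the count runs well into six figures, which is why volunteers are essential.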
The exact meaning of a verb depends on the sentence it is in, so we need to test each verb in all the types of sentences in which it can appear. That is far too many sentences for anyone to have written out by hand (it would take forever), and science has yet to work out the rules that dictate which verbs can appear in which sentence types. We generated our sentences from a linguistic database that represents the current state of the art, but it is imperfect.
When you mark a sentence as ungrammatical, though, it's not a waste! We will be using that information to update the linguistic database.
They are generated from VerbNet, a state-of-the-art linguistic database covering the syntax of verbs. VerbNet was based on Beth Levin's English Verb Classes and Alternations, has received many subsequent updates, and is linked to a number of other linguistic resources. One purpose of the VerbCorner Project is to expand VerbNet so that it covers verb meaning as extensively as it covers syntax.
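For the curious, VerbNet can be explored directly. Here is a minimal sketch using the copy of VerbNet bundled with the Python NLTK library; the lemma 'give' is only an illustration, and the exact class IDs and frames returned depend on the VerbNet version NLTK ships.

    # A minimal sketch of querying VerbNet via NLTK.
    # Requires: pip install nltk, then a one-time nltk.download('verbnet').
    from nltk.corpus import verbnet

    lemma = 'give'  # any verb lemma; 'give' is just an example
    for classid in verbnet.classids(lemma):            # e.g. 'give-13.1'
        vnclass = verbnet.vnclass(classid)             # the class as an XML (ElementTree) element
        for frame in vnclass.findall('FRAMES/FRAME'):
            # Each frame pairs a syntactic pattern with semantic predicates.
            pattern = frame.find('DESCRIPTION').get('primary')   # e.g. 'NP V NP PP.recipient'
            example_el = frame.find('EXAMPLES/EXAMPLE')          # an example sentence, if present
            example = example_el.text if example_el is not None else ''
            predicates = [p.get('value') for p in frame.findall('SEMANTICS/PRED')]
            print(classid, '|', pattern, '|', example, '|', predicates)

Those SEMANTICS predicates are exactly the kind of meaning annotation that VerbCorner aims to check and extend empirically.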
Below are some suggested resources for those who want to learn more about the science behind the VerbCorner Project (in addition to what is available on the blog and in the forum).
The first scientific publication about the VerbCorner Project, describing the basic goals and the (very) initial results. Written for a scientific audience.
The Stuff of Thought by Steven Pinker
A thorough discussion of how the structure of language informs our understanding of thought. Clearly written and fun to read, requiring very little background.
English Verb Classes and Alternations by Beth Levin
This classic, influential reference laid out (and provided key evidence for) the basic vision adopted by the VerbCorner Project: When verbs are classified according to the syntactic rules they follow, the resulting classes have a clear basis in meaning. (See this blog post for a brief overview.)
Learnability and Cognition by Steven Pinker
How a theory of verb meaning could explain children's acquisition of verb grammar. Possibly the clearest description of the theory of language meaning being explored by the VerbCorner Project.
Semantic Structures by Ray Jackendoff
This is the primary source for the theory of semantics adopted both in VerbCorner and in Pinker's Learnability and Cognition.
Semantic Role Labeling by Martha Palmer, Daniel Gildea and Nianwen Xue
Synthesis Lectures are monographs that provide material suitable for teaching modules on new techniques and approaches in computational linguistics. The Synthesis Lecture on Semantic Role Labeling covers the linguistic background for PropBank, VerbNet, and FrameNet, and then describes the process of developing automatic semantic role labeling systems using machine learning. The background chapters on linguistics provide a 30-page review of Fillmore, Jackendoff, Dowty, and Levin, intended for students without a strong linguistics background.
Joshua Hartshorne (MIT)
Claire Bonial (CU-Boulder)
Martha Palmer (CU-Boulder)
Daniel Peterson (CU-Boulder)
Gabriel Frattallone (MIT) (website design)