Richard Montague died in March of 1971 at the age of 40. He was found in his shower, apparently murdered, in an event that remains shrouded in mystery to this day. A professor of philosophy at UCLA, Montague left behind another kind of mystery: he had written a short paper that was not to be published until two years after his death. Montague had devoted most of his academic career to the arcane field of mathematical logic, and he had published several important articles on the subject as well as a respected textbook. But this new, unpublished paper was different. It was full of logical notation, but it reflected a recently developed interest of Montague's, namely linguistics.

The problem with Montague's paper was that linguists simply could not understand it. Barely over ten pages long, it bristled with mathematical formulas and logical notation, and in those days linguists were generally not trained in logic. Even those familiar with logic had great difficulty with Montague's paper, as it employed advanced logical techniques in new ways. These complex innovations were in the service of a simple but revolutionary goal: Montague sought to show that language, at its essence, is logical.

This might seem like a strange claim. We're all familiar with ways in which language seems quite illogical. George Carlin points out that we drive on a parkway but park on a driveway. We apparently have a spelling rule in English, "I before E except after C," but the rule is violated by many ordinary words, like either, being, and ancient. The relation between sound and spelling in English seems so illogical that George Bernard Shaw showed how the word fish could actually be written G-H-O-T-I.

But these peculiarities have nothing to do with Montague's point. He was focusing on the meaning of English sentences, what linguists call semantics. Now, intuitively it seems clear that logic is operating at that level. After all, we clearly understand when two sentences are paraphrases of one another, or when one sentence is a logical consequence of another. How could this not be the case? In our legal systems, for example, we're willing to imprison or even kill people based on the meanings of particular sentences that are on the books. And think about it: suppose I'm giving testimony in a murder trial and state, "Smith did not kill Jones." But then a recording is produced of me saying, "Jones was killed by Smith." At this point, I need to either argue that the recorded claim was a lie, or I have to give up my defense of Smith. And this is because there's a clear logical relation between sentence A, "Jones was killed by Smith," and sentence B, "Smith did not kill Jones": if A is true, B must be false.

We simply take it for granted that sentences have logical relations. If someone utters the sentence "Jones was killed by a large polar bear" and then goes on to claim there was no such bear, then something has gone wrong. In logical terms, we say that the sentence entails that a polar bear exists.

So this is the first part of Montague's claim: language is logical. By itself this is not particularly revolutionary. In fact, I hope to have convinced you that it is pretty much taken for granted in ordinary life. We all have strong intuitions about the logic of sentences. But what Montague wanted to do is prove that language has this logical structure. He wanted a system of rules that could automatically determine the logic of sentences without relying on human intuition; in principle, a computer ought to be able to do it.
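To make that idea concrete, here is a minimal sketch of how such rules might pair these sentences with logical formulas. The notation is my own first-order illustration; PTQ itself uses a richer intensional logic.

$$\text{Jones was killed by Smith} \;\rightsquigarrow\; \mathrm{kill}(\mathrm{smith}, \mathrm{jones})$$

$$\text{Smith did not kill Jones} \;\rightsquigarrow\; \neg\,\mathrm{kill}(\mathrm{smith}, \mathrm{jones})$$

$$\text{Jones was killed by a large polar bear} \;\rightsquigarrow\; \exists x\,[\mathrm{polar\text{-}bear}(x) \wedge \mathrm{large}(x) \wedge \mathrm{kill}(x, \mathrm{jones})]$$

The first two formulas contradict one another by the rules of logic alone, and the third entails $\exists x\,\mathrm{polar\text{-}bear}(x)$, that is, that a polar bear exists. Once sentences are paired with formulas like these, a computer can check such relations mechanically, with no appeal to intuition.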
This was indeed a revolutionary goal, something that linguists had utterly failed to do. In fact, it was widely held that it was not possible: human language, it was felt, was too vague and obscure, and it wouldn't make sense to try to impose a logical system on it. But in the years just before his violent and mysterious death, Montague had not only convinced himself that it was possible, he had been developing new techniques to show how to do it. And this is what is explained in the short paper that was not to be published until after he died, a paper with the rather forbidding title "The Proper Treatment of Quantification in Ordinary English," or PTQ, as it has become universally known.

For all its complexity, PTQ is addressing a simple question: what exists? This is something that everyone wonders about. For example, René Descartes, in his famous Meditations, was attempting to answer two specific versions of this question, namely: does God exist? And does the soul exist? Those are really big versions of the question, but of course there are many more mundane versions. Does Bigfoot exist? Do unicorns exist? Now, PTQ won't provide any help in figuring out whether God, Bigfoot, or unicorns actually exist. Instead, what it seeks to show is what a given sentence says about existence. Remember the sentence "Jones was killed by a large polar bear." If you uttered that sentence, it would be bizarre to then express doubt about whether there are any large polar bears. The sentence you uttered involves a commitment to the existence of polar bears. But how exactly does it do that? This is a key question that PTQ seeks to answer.

To see how it does so, recall that PTQ is about the proper treatment of quantification. As used in linguistics, the term quantification refers to the function of a word like three in the sentence "I saw three polar bears": it tells you the quantity of polar bears you saw. Quantifiers don't have to be very specific. Most, some, and every function as quantifiers in the sentences "I saw most polar bears," "I saw some polar bears," and "I saw every polar bear." Maybe the simplest quantifier is a, as in "I saw a polar bear." Words like a and some are termed existential quantifiers, and they are in fact used to express an existential commitment. Consider these different sentences: "John was killed by a polar bear." "A polar bear visited our cabin." "We saw the tree where a polar bear was hiding." As long as the existentially quantified noun phrase a polar bear appears somewhere, the sentence expresses a commitment to the existence of a polar bear.

An existentially quantified noun phrase in general takes the form Q NP, where Q is any existential quantifier, such as a or some, and NP is any noun phrase: a single noun like bear, a multi-word noun like polar bear, or a modified noun like large polar bear. So it might seem that we already have an answer to our question about the existential commitments of sentences. The rule seems to be that we look for all the existentially quantified noun phrases in a sentence, and that determines the existential commitments of the whole sentence. Thus "A polar bear attacked a bird" expresses two commitments: to the existence of a bird and to the existence of a polar bear. But this simple rule doesn't always work. This is illustrated by an innocent-sounding sentence, undoubtedly the most famous sentence from PTQ, the paper that completely transformed the modern study of meaning. This is the sentence: John seeks a unicorn.
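Before going on, here is a toy sketch of the simple rule just described, written in Python. It is my own illustration, not anything from PTQ, and the word lists exist purely for the demo.

```python
# A toy version of the "simple rule": scan a sentence for existentially
# quantified noun phrases of the form Q NP (with Q drawn from a small list
# of existential quantifiers) and report one existence commitment per phrase.

EXISTENTIAL_QUANTIFIERS = {"a", "an", "some"}

# Crude noun-phrase boundary: stop collecting NP words at a verb.
# A real system would use a parser; this word list is only for the demo.
VERBS = {"attacked", "visited", "saw", "seeks", "sees", "was", "killed"}

def existential_commitments(sentence):
    words = sentence.lower().rstrip(".").split()
    commitments = []
    i = 0
    while i < len(words):
        if words[i] in EXISTENTIAL_QUANTIFIERS:
            noun_phrase = []
            i += 1
            while i < len(words) and words[i] not in VERBS:
                noun_phrase.append(words[i])
                i += 1
            if noun_phrase:
                commitments.append("there exists a " + " ".join(noun_phrase))
        else:
            i += 1
    return commitments

print(existential_commitments("A polar bear attacked a bird"))
# -> ['there exists a polar bear', 'there exists a bird']
print(existential_commitments("John seeks a unicorn"))
# -> ['there exists a unicorn']
```

For the first sentence, the rule delivers exactly the right commitments. For the second, it reports a commitment to the existence of a unicorn, and that is precisely where it goes wrong.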
Uttering this sentence makes no commitment to the existence of unicorns. It could perfectly well be true that John is seeking a unicorn even if no unicorns in fact exist. So our simple rule doesn't always work. Many times a sentence containing an existentially quantified NP like a unicorn involves a commitment to the existence of a unicorn, but other times it doesn't. This is bad for Montague's project of producing a simple set of rules that show the existential commitments of sentences, as well as other aspects of their meaning. After all, human languages are infinite sets of sentences, so we probably can't just make a list of all the exceptions to our rule.

So what is going on? It's pretty clear what the culprit is in our famous sentence: it's the verb seek. If we replace it with a different verb, as in "John sees a unicorn," then the existential commitment returns. Montague's solution is based on a general observation about verbs like seek: they introduce hypothetical states of affairs, and in this way they are different from ordinary verbs like sees. In Montague's terminology, verbs like seek talk about sets of possible worlds, where the actual world may or may not be one of those possible worlds. Ordinary verbs like see don't normally have this ability; they restrict themselves to the actual world.

Montague appeals to a large, perhaps infinite, set of possible worlds. One way to think about this set is in terms of all the ways in which the actual world could have been different. So there's a possible world in which it's raining today, in contrast to the actual world, where it's sunny. In another possible world, Donald Trump was not elected president, and in yet another I'm an A-list Hollywood actor who is also able to slam-dunk a basketball. Unicorns roam freely in many of these possible worlds, and it is to these worlds that Montague appeals to explain his famous sentence. The verb seeks talks about a set of possible worlds. The verb phrase seeks a unicorn applies the existential commitment of a unicorn to each of those possible worlds: what John seeks is that the actual world be one of the possible worlds in which a unicorn is to be found. The verb sees, on the other hand, is only talking about the actual world. When we combine sees with a unicorn, we again impose the existential commitment of a unicorn on the worlds being talked about, but in this case the only world being talked about is the actual world.

Appealing to possible worlds in this way is one of Montague's great insights. It's not just the observation that certain words like seek invoke sets of possible worlds. What Montague shows is that the use of possible worlds makes it possible to simplify the relation between the structure of sentences and the meanings they express. By appealing to possible worlds, Montague is able to avoid having to list exceptions concerning the commitments of existential NPs: they always express the same commitment to existence, in whatever world is being talked about. Instead, we just need to know which verbs talk about the actual world and which ones invoke sets of possible worlds. This is a crucial feature of Montague's proposal: a certain type of expression always makes the same contribution to the overall meaning of a sentence. The resulting vision is a simple, transparent model in which the meaning of a sentence emerges directly from its surface representation.
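Here is a simplified possible-worlds rendering of the contrast, in my own paraphrase rather than Montague's actual notation (PTQ states the analysis in a higher-order intensional logic, and relates seek to try to find by a meaning postulate):

$$[\![\text{John sees a unicorn}]\!]^{w_0} = 1 \;\iff\; \exists x\,[\mathrm{unicorn}_{w_0}(x) \wedge \mathrm{see}_{w_0}(j, x)]$$

$$[\![\text{John seeks a unicorn}]\!]^{w_0} = 1 \;\iff\; \text{for every world } w \text{ in which John's search succeeds: } \exists x\,[\mathrm{unicorn}_{w}(x) \wedge \mathrm{find}_{w}(j, x)]$$

The existential quantifier contributed by a unicorn is the same in both formulas; what differs is the world at which it is evaluated. With sees it is the actual world $w_0$, so a real unicorn is required; with seeks it is only the hypothetical worlds $w$, so no actual unicorn need exist.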
We'll probably never know who it was that brutally murdered Richard Montague in his shower. But in the few short years before his life was cut short, Montague had developed a system of ideas that would completely transform the way linguists look at meaning. Before Montague, our apprehension of meanings was shrouded in mystery, thought to be forever beyond the realm of logic and science. And because meanings were mysterious in this way, it was hard to see how linguistic meaning could be something that a computer might grasp. The publication of PTQ in 1973, nearly a half century ago, revolutionized the theory of meaning. What Montague claimed is that there is a fully logical system of meaning hidden beneath the surface of all human languages. In fact, Montague said, it had been hiding in plain sight all along: in his theory, the surface forms of English themselves constitute a rigorous logical language. We just never noticed it before.

Up to this point, though, the revolution in meaning has remained purely theoretical. Since Montague's death, there have been no computational systems that showed any hint of actual understanding of meaning; no substantial computer system has worked with the kinds of logical meanings that Montague described. But there are tantalizing indications that this is changing. It may be that some current computer systems are actually starting to show some small signs that they have an underlying system of meaning. These systems may in fact be on the verge of the ultimate breakthrough: the achievement of true understanding. But the way these systems work with meaning is stranger and more obscure than what Montague's theory would lead one to expect. To see this, we need to return to the field of machine translation.