Inside Jokes
Using Humor to Reverse-Engineer the Mind

Matthew M. Hurley, Daniel C. Dennett, and Reginald B. Adams Jr.

There is a spectrum of books on the brain or mind. At one end are the reductionists, who consider the neurons and what a collection of a few neurons can do. At the other end are the mentalists and psychologists, who build on the subjective and the clinical. There are very few books in the vast middle. This book is in the middle, and from it the next book that I am aware of is a far reach to either side. That’s progress.

Unordered so far:

I find myself enjoying, but not analyzing, the jokes in the book. This is, of course, counterproductive in this context.

The ‘running gag’: some characteristic is repeated over and over while remaining funny. Molière excelled at this, as have a few modern movie directors. This fits the pattern of Bergson’s mechanical humor.

The adaptive purpose of humor has not, that I recall, been addressed before. The book sets a high goal for a ‘theory of humor’, a goal which every theory I am aware of fails to meet.


As I read the book I realize that I have been annoyed by recent approaches to the brain which emphasize rationality devoid of emotion, or the opposite. That, of course, is the thrust of this book: to unify them, or to suggest how they intimately interoperate. Russell’s paradox was indeed a real pain! Just ask Frege. The rare forays of Hollywood into the world of mathematics or physics have never captured the excitement of mathematical understanding, or the will to know. I don’t suggest that I could do better. The recent ‘graphic novel’ Logicomix captures these emotions about rationality very well.

Page 86: Quote:

It is indeed hard to impose a hierarchy on either a company or a brain. We conventionally impose a legalistic hierarchy on companies, but companies often behave in ways that defy hierarchical explanation. Still we are inclined to impose hierarchy in order to explain complex systems, for that is a convenient style of understanding, and it is a software paradigm. But sometimes hierarchical explanations don’t work. There is currently close study of how swarming bees decide on a new site for a hive. The description is like that given in Minsky’s “Society of Mind” and advanced in his “The Emotion Machine”. An even less formal way to describe their approach is to say that they tell the emotions’ side of the story.

The authors show some cognizance of what it is to be a program as they describe why most current software approaches to emotion are wrong. Still, the logical functions need integrity in their state in order to serve the ends for which they evolved. Of course they don’t always retain this integrity, but an oversimplified model with emotion at the top would deny them any integrity. Coroutines are a software pattern that allows two cooperating programs to interact with neither being superior, as sketched below. Coroutines are too simple, however. Blackboard systems come closer but do not attempt to capture the peculiar authority of the emotions.
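To make the coroutine remark concrete, here is a minimal sketch in Python, with generators standing in for full coroutines. The names “planner” and “appraiser” are mine, purely illustrative; the point is only that control shuttles back and forth with neither routine calling the other as a subordinate.

def planner():
    # Proposes actions; yields each proposal and receives a verdict back.
    for action in ("eat", "nap", "work"):
        verdict = yield action            # hand control to the partner
        if verdict == "veto":
            print(f"planner: {action!r} vetoed, moving on")

def appraiser(plans):
    # Judges each proposal; neither routine is the other's subroutine.
    action = next(plans)                  # start the planner coroutine
    try:
        while True:
            verdict = "veto" if action == "work" else "approve"
            print(f"appraiser: {action} -> {verdict}")
            action = plans.send(verdict)  # resume the planner with the verdict
    except StopIteration:
        pass

appraiser(planner())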

I am especially happy that the authors realize that the conventional ways to program a conventional computer are not the only ways.

An idea pops to mind here which a brief glance at the index suggests may not be noted in the book: Dennett’s ‘multiple drafts’ mechanism suggests itself in a significant class of humor. When we hear about the ‘fish in their tank’, drafts are begun for each of the several meanings of “tank”. The military-tank draft is quickly submerged, leaving no memory trace. Forgotten but not gone, it is resurrected at the end and crows gleefully ‘I told you so’. It is the happy draft and deposits itself in memory.
As I read farther in the book I see that they present somewhat of a generalization of multiple drafts, yet I think there are significant and relevant elements of the original theory not covered here.
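A toy rendering (mine, not the authors’) of the multiple-drafts reading of the pun: one draft per sense of “tank”, each scored as evidence arrives. The losing draft is demoted, not erased, and the punchline revives it. The senses, weights, and scoring rule are invented for illustration.

drafts = {"aquarium": 0.0, "military vehicle": 0.0}

def hear(words, evidence):
    # Adjust each draft's plausibility as the words come in.
    for sense, weight in evidence.items():
        drafts[sense] += weight
    leader = max(drafts, key=drafts.get)
    print(f"after {words!r}: leading draft is {leader} {drafts}")

hear("fish", {"aquarium": 2.0, "military vehicle": -1.0})
hear("in their tank", {"aquarium": 1.0, "military vehicle": 0.5})
# The 'military vehicle' draft is submerged but intact...
hear("how do you drive this thing?",
     {"aquarium": -2.0, "military vehicle": 4.0})   # ...until the punchline resurrects it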

Some would even add ‘transcendental’.

This book has meat—it is dense. I disagree here and there, but that is evidence that they have actually said something meaningful and often falsifiable. The first 100 pages have the content of many 200-page books.

GOFAI = “Good Old-Fashioned Artificial Intelligence” (for page 106).


Mark Miller refers me to this earlier note by Minsky. This is often cited in the book as “Minsky (1984)”.
I must say that the following quote (page 97) is nearly gobbledygook. There is ample connotation there. This world that we navigate lacks semantic landmarks, and we must rely on vague connotations. It has buzzwords that play the rôle of landmarks, but they are more like elements of fable, lacking logical relationships. I nominate the above quotation to be itself a ‘mental space’, or evidence thereof. The ‘mental space’ notion seems to be near the center of their theories. I had hoped that they would summarize at the end of the book the attributes of mental space uncovered by their study.

Lakoff notes

JIT processing. The book has the JIT (just in time) terminology about right. The most common use today is JIT compilers that compile Java or JavaScript only at the point where the code is really needed, or as it begins to appear that it will be needed repeatedly. A related term is lazy evaluation. The Keykos kernel has many JIT techniques. As the book says, JIT is strategic because postponed work is very often avoided work. The JIT style is seldom the natural way of programming, but it is often not much more difficult. There is no end to the bag of tricks that make it easy.
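The essence, as I see it, in a few lines of Python. This is a sketch of the lazy idiom only, not the Keykos or JVM machinery:

class Lazy:
    def __init__(self, compute):
        self._compute = compute     # the postponed work
        self._done = False
        self._value = None

    def force(self):
        if not self._done:          # the work happens at most once...
            self._value = self._compute()
            self._done = True
        return self._value          # ...and never happens if force() is never called

expensive = Lazy(lambda: sum(i * i for i in range(10**6)))
# Nothing has been computed yet; if the value is never demanded,
# the postponed work is avoided work.
print(expensive.force())   # computed here, on first demand
print(expensive.force())   # cached thereafter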

The book seems not to define “spreading activation”, and I assume that this will do.
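Since the book leaves the term undefined, here is the usual construal as I understand it: activation injected at a node flows to its neighbors with decay, over a few pulses. The little graph and the weights are mine:

graph = {
    "tank": ["fish", "water", "army"],
    "fish": ["water", "fins"],
    "army": ["soldier"],
    "water": [], "fins": [], "soldier": [],
}

def spread(source, pulses=2, decay=0.5):
    activation = {node: 0.0 for node in graph}
    activation[source] = 1.0
    for _ in range(pulses):
        new = dict(activation)
        for node, level in activation.items():
            for neighbor in graph[node]:
                new[neighbor] += level * decay   # pass on a damped share
        activation = new
    return activation

print(spread("tank"))   # 'fish' and 'army' light up; 'water' doubly, via two paths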

So far the book seems to propose a structure for the mind without a great deal of connection to humor. I like the idea of ‘mental space’ for its similarity to the block structure of modern programming languages, which delimits the meanings of symbols. The obvious connection is that while a programmer considers such a block, he has in his head a corresponding mental state.
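The analogy in miniature (my example): the same symbol means different things in different blocks, and the outer meaning is restored when the inner block is left.

tank = "armored vehicle"             # the outer meaning

def aquarium_story():
    tank = "glass box of water"      # the inner block rebinds the symbol
    return f"inside the story, tank = {tank}"

print(aquarium_story())              # the inner meaning holds here
print(f"outside, tank = {tank}")     # the outer meaning is back in force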

On page 106, Quote:

I am not yet persuaded to oppose this idea, but in any case I think it important to reflect on its provenance. Perhaps the greatest mathematical legacy left to us by the ancient Greeks was Euclid’s notion of proof, wherein a collection of propositions, or ‘sentence-like things’, is accumulated in the process of finding a proof. The ultimate proof is a sequence of such propositions, each referring to earlier ones, with a reference to one of a small fixed set of deduction rules justifying the current proposition. Such prescriptions by Euclid have scarcely been improved upon today. Euclid did not invent these ideas; he merely codified them from earlier patterns that Socrates had commented upon and that were evident in our manner of convincing other humans of useful information: in short, “Logic” and the notion of deduction. An ultimate theory of thought must close the loop at some point to explain the unquestioned power, as well as the limitations, of this idea. Granted, such deduction, guided by many different styles of heuristics and a few simple models of emotion, has so far failed for general AI.
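The Euclidean shape described above, rendered as a toy checker: a proof is a sequence of propositions, each justified as an axiom or by a rule applied to earlier lines only. The single rule (modus ponens over strings) and the names are mine, for illustration:

def check(proof, axioms, rules):
    derived = []
    for prop, justification, premises in proof:
        if justification == "axiom":
            ok = prop in axioms
        else:   # a rule applied to earlier lines only
            ok = (all(i < len(derived) for i in premises) and
                  rules[justification]([derived[i] for i in premises]) == prop)
        if not ok:
            return False
        derived.append(prop)
    return True

# Modus ponens as the lone rule: from P and "P->Q", conclude Q.
rules = {"mp": lambda ps: ps[1].split("->", 1)[1]
                          if ps[1].startswith(ps[0] + "->") else None}
axioms = {"P", "P->Q"}
proof = [("P", "axiom", []), ("P->Q", "axiom", []), ("Q", "mp", [0, 1])]
print(check(proof, axioms, rules))   # True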

The book rails against the notion of ‘copying stuff to working storage’. I am sorry, but copying information is exactly what axons do, no more—no less! It is true that the information takes a different form after it has been moved. So too in computers: information is transformed for various strategic purposes.

This part of the book provoked this, which needs more work.

I think the book protests too much against computer analogies. The book, and many others, protests against a naïve or obsolete view of what computers do. In modern computer languages an assignment, or a definition of a term, does not generally copy the designated data; more generally it copies or produces a pointer to that data, which in turn includes pointers (in the sense of Kanerva, and less specifically in the sense of spreading activation) to other data, much like the ‘nodes’ alluded to earlier. Brain ‘pointers’ and computer pointers are quite different in their detailed logical properties, however.
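A small demonstration of the point: assignment copies a reference, and only an explicit request copies the data.

node_b = {"label": "B", "links": []}
node_a = {"label": "A", "links": [node_b]}   # A points at B

alias = node_a           # copies a pointer, not the dictionary
alias["label"] = "A*"
print(node_a["label"])   # 'A*': both names designate the same node

import copy
deep = copy.deepcopy(node_a)     # only an explicit request copies the data itself
deep["links"][0]["label"] = "B*"
print(node_b["label"])           # still 'B': the deep copy is disconnected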

I think that anticipation is important. I learned of that idea from Hawkins. It is not entirely obvious why, however. It may be for the same reason that it is important in two-person games of perfect information. With anticipation one has more time to plan what to do in likely situations. Indeed one can plan for contingencies—just in case—to scout ahead in one’s mind’s eye.
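The game connection can be made precise: minimax lookahead plans for the opponent’s best reply before it happens. A minimal sketch over a made-up game tree:

def minimax(state, maximizing, children, score):
    moves = children(state)
    if not moves:                       # a leaf: just evaluate it
        return score(state)
    values = [minimax(s, not maximizing, children, score) for s in moves]
    return max(values) if maximizing else min(values)

# A made-up game tree: each state lists its successors; leaves carry payoffs.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"],
        "L1": [], "L2": [], "R1": [], "R2": []}
payoff = {"L1": 3, "L2": -2, "R1": 1, "R2": 4}

print(minimax("root", True, lambda s: tree[s], lambda s: payoff[s]))
# 1: the maximizer foresees that choosing R invites the reply R1.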

Page 127: They get around to discussing ‘mental space’ instead of merely invoking it. It seems quite like the hypothetical constructs in ‘natural deduction’, a form of formal logic intended to be more like actual mathematical thought than the drier, simpler constructs of conventional formal logic. The two can be proven equivalent, and indeed a simple, efficient program can translate between the two sorts of logic.


I have seen a few people laugh at the same short sequence of machine instructions. See “Bob Hope” in this about smiling when there are no observers. Why Computers can’t joke
about doing what’s good for you (willpower — energy connection)
By page 146 I am wishing for a principled discussion of “mental space”. The authors seem to take it as a given primitive. Unless this feature is explicitly discussed, a great opportunity to ‘reverse engineer the mind’ will have been missed.

The early 20th-century invention of ‘natural deduction’ (ND) was an attempt to provide a formal logic that also has a recognizable connection to how we think that we think. ND is indeed very efficient and provides an intuitive entrée to formal logic. I wish that there were an adequate introduction to it on the web. A ‘proof’ in ND is composed of similar sub-units, recursively, very much like computer programming languages, which are defined recursively. Quote:

ND is certainly only a cartoon of mental spaces, but its recursive blocks capture several of the essential features, such as dramatis personæ, propositions, and especially the hypothetical.
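To show what I mean by the recursive blocks: here is ND’s nesting as a data structure, a rendering of the shape only, not a checker. The example proves P -> (Q -> P) by discharging two nested hypotheses.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Block:
    hypothesis: Optional[str]                   # the assumption in force inside
    steps: list = field(default_factory=list)   # propositions and sub-Blocks

def show(block, depth=0):
    pad = "  " * depth
    if block.hypothesis:
        print(f"{pad}assume {block.hypothesis}:")
    for step in block.steps:
        if isinstance(step, Block):
            show(step, depth + 1)               # recurse into the hypothetical
        else:
            print(f"{pad}  {step}")

innermost = Block("Q", ["P              (repeated from the outer block)"])
inner = Block("P", [innermost, "Q -> P         (discharge Q)"])
outer = Block(None, [inner, "P -> (Q -> P)  (discharge P)"])
show(outer)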

A ‘committed belief’ is an immediately available belief that you are unaware of. If you are sitting in a chair at the kitchen table and someone asks whether you are sitting in a chair, you will assume that he is broaching some obscure philosophical question. You will probably not examine the question of whether you are sitting in a chair. I am convinced of the notion of ‘committed belief’, or ‘epistemic commitment’. Here are two examples of this.

It just occurred to me (page 201) that committed beliefs reside in the hypothetical. Indeed in the realm of humor it is almost always so. Such beliefs are vaguely like ‘logical necessity’ in some formal logics.

Page 203. As I read the book’s theory of jokes I am pushed to explain humor in other words—words that express an idea that I think comes directly from the book:
I think we humans have a large collection of prototypical situations (PSs) accumulated in our heads. These are conveyed culturally by stories, anecdotes, and direct experience. These PSs have parameters which must be bound as we recognize some concrete instance of a PS. We see a group of four people, two large and two small. We exhume our ‘family PS’ and subconsciously try to identify which of the large people is the father, and perhaps the genders of the children. Such bindings can become brief hypothetical committed beliefs, and thus subject to the logic of the book’s humor theory. The mathematician’s concept of a group is almost exactly such a PS. A mathematician knows he has seen some particular group when he identifies the operation between elements and the identity member of the group.

Just as the mathematician has a store of group theorems that are immediately available for predicting new information in the observed specific instance, so does a PS provide a number of expectations in a new situation. Tentative bindings play a crucial rôle in elaborating these expectations, as in the sketch below. All of this follows closely the patterns found in natural deduction. The book makes most of these points at the end of page 52, attributing “frame” to Minsky and mentioning “scripts” and “models” as antecedent notions by earlier authors. The formal logic connection is absent.
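A sketch of a PS as a parameterized frame: slots to bind against an observed scene, plus the expectations the tentative bindings then generate. The ‘family PS’, its slots, and the matching rule are all my own invention:

family_ps = {
    "slots": {"parent1": "large", "parent2": "large",
              "child1": "small", "child2": "small"},
    "expectations": ["the parents pay", "the children are siblings"],
}

def bind(ps, scene):
    # Tentatively bind each slot to an unused individual of matching size.
    bindings, used = {}, set()
    for slot, size in ps["slots"].items():
        match = next((p for p in scene
                      if p["size"] == size and p["name"] not in used), None)
        if match is None:
            return None                # the PS does not fit this scene
        bindings[slot] = match["name"]
        used.add(match["name"])
    return bindings

scene = [{"name": "Ann", "size": "large"}, {"name": "Bob", "size": "large"},
         {"name": "Cal", "size": "small"}, {"name": "Dot", "size": "small"}]

print(bind(family_ps, scene))          # tentative, revisable bindings
print(family_ps["expectations"])       # what the bound PS now predicts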

A computer is a PS which authors of books like this often invoke! They do so when they say that the brain is not like a computer. They often fail to mention the bindings that they have in mind when they declare this. The current authors are less at fault here than most recent authors.

Page 287: Quote

Another connection between a mental state and a block in ND.

Page 289: Is humor an adaptation?
I suddenly see two different mechanisms that are adaptive:


a chimp’s frame
First person Frame
Cloud

Page 89: Quote: