
The science of funny

TechCrunch 13/05/2016 John Holden

Psychologists in Canada have developed a mathematical formula for quantifying the humor of nonsense.

Most of us would be forgiven for thinking math and humor are mutually exclusive. One is perceived as logical while the other is considered an intangible, uniquely human trait. Still, science knows no limits and, as The Cat in the Hat once declared, “It is fun to have fun but you have to know how.”

“I like nonsense, it wakes up the brain cells. Fantasy is a necessary ingredient in living.” No quote could better sum up the approach that American writer and illustrator, Theodor Seuss Geisel, AKA Dr. Seuss, brought to his craft.

Seuss, the author of classics like The Cat in the Hat and The Lorax, took nonsense to a whole new level. He invented hundreds of non-words — like schloppitty-schlopp, diffendoofer and yuzz-a-ma-tuzz — in order to make the imaginary worlds he created for his characters that much more ridiculous.

Maybe “ridiculous” is too negative. To quote fellow children’s author Roald Dahl, Seuss’ capacity to invent funny non-words was nothing short of “remarkulous.”

It’s natural to assume there’ll never be another quite like him. He was a genius, with a creative flair matched only by his prolificacy. But what if the characteristics that make The Cat in the Hat funny could be quantified scientifically?

We like to think of one’s sense of humor as a subjective, personal trait, shaped by countless variables and unique to each individual. So when someone suggests it could be approached methodically — its parts isolated, measured and quantified — the only appropriate reaction is LOL.

Hold that LOL. A group of psychologists at the University of Alberta in Canada conducted a study that successfully developed a way to measure humor in words. Actually, the metric is for non-words, the kind of stuff Dr. Seuss was so good at.

“We were originally conducting research on people with a speech and language disorder caused by brain damage, known as aphasia,” explains Dr. Chris Westbury, lead author of the study. “Test subjects were shown computer-generated letter strings and asked to determine whether they were actual words or not.” Westbury noticed a pattern: participants would consistently laugh at some made-up words but not at others.

“Snunkoople” was one such non-word that almost always made people laugh. “Clester,” on the other hand, did not. The reason, says Westbury, relates to a word’s entropy: a measure of how likely its particular combination of letters is to be generated. The lower the entropy, the funnier the non-word is likely to be.

It is fun to have fun but you have to know how.
— The Cat in the Hat

In a study extended to the general public, participants were asked to compare two non-words and decide which one they thought funnier, after which they were shown a single non-word and asked to score it from 1 to 100 in terms of funniness. “We found the bigger the difference in the entropy between the two words, the more likely the subjects were to choose the way we expected them to,” says Westbury.

Entropy is a measurement used in a variety of fields, including thermodynamics, encoding, ecology and anesthesiology, and varies by definition in each.

In the context of this research, Westbury and his team used what’s known as Shannon entropy. “This is originally defined over a given signal/message, and is computed as a function of the probabilities of all symbols across the entire signal, i.e. across a set of symbols whose probabilities sum to 1,” he explains.

“Under Claude Shannon’s definition, a signal like ‘AAAAAA’ has the lowest possible entropy, while a signal like ‘ABCDEF’ has the highest entropy. The idea, essentially, was to quantify information in terms of predictability. A perfectly predictable message like ‘AAAAAA’ has the lowest information for the same reason you would hit me if I walked into your office and said ‘Hi! Hi! Hi! Hi! Hi! Hi!’ After I’ve said it once, you have my point — I am saying hello — and repeating it is uninformative. Variation is informative.
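Shannon’s measure can be sketched in a few lines of Python. This is a generic illustration of entropy computed from a signal’s own symbol frequencies — not the study’s code — and the function name is mine:

```python
from collections import Counter
from math import log2

def shannon_entropy(signal: str) -> float:
    """Entropy in bits per symbol, using the symbol frequencies of the signal itself."""
    counts = Counter(signal)
    total = len(signal)
    # -sum(p * log2(p)) over every distinct symbol in the signal
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A perfectly predictable signal carries no information:
# shannon_entropy("AAAAAA") is 0.0, while "ABCDEF", with six
# equally likely symbols, reaches the maximum log2(6) bits.
```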

“What we computed in our research was the contribution of each non-word to the total entropy of the English language,” he adds. “In essence, we treated each non-word as but one part of a very long signal that is the English language. Contribution to total entropy is a measure of how unexpected/improbable/weird a particular string is, but that is not quite entropy, because, in the strictest sense, it is a metric for global, rather than local, probability. If I say to you, ‘I love the cat, I love you, and I love hablump,’ you will be struck by ‘hablump’ because it is ‘unexpected,’ ‘improbable,’ ‘weird.’ We quantified how unexpected/improbable/weird each non-word was (the local probability of that part of the signal), in the context of the predictability of the signal that is English as she is spoken (or written).”
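The idea of a word’s contribution to the entropy of English can be approximated in a toy sketch. The letter frequencies below are rounded published values for English, not the study’s corpus, and the per-letter averaging and function name are my assumptions, not Westbury’s exact metric:

```python
from math import log2

# Approximate English letter frequencies (rounded published values --
# an assumption for illustration, not the study's corpus).
FREQ = {
    'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
    's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'c': .028,
    'u': .028, 'm': .024, 'w': .024, 'f': .022, 'g': .020, 'y': .020,
    'p': .019, 'b': .015, 'v': .010, 'k': .008, 'j': .0015, 'x': .0015,
    'q': .0010, 'z': .0007,
}

def entropy_contribution(word: str) -> float:
    """Mean per-letter contribution (-p * log2 p) to the entropy of English.

    Rare letters like z contribute almost nothing to the total, so words
    built from improbable letters score low -- and, per the study, tend
    to read as funnier.
    """
    letters = [ch for ch in word.lower() if ch in FREQ]
    return sum(-FREQ[ch] * log2(FREQ[ch]) for ch in letters) / len(letters)
```

Under this toy measure, "snunkoople" scores lower than "clester", and a z-heavy string like "yuzzamatuzz" scores lower still — matching the direction the article describes.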

Confused? You’re not alone. Essentially, the research demonstrated that the more improbable a non-word sounds, the funnier it is likely to be.

We didn’t need a PhD to figure that out. The Dr. Seuss catalogue provides countless examples of non-words — barbaloots, truffula and snergelly, for example — which were not created with entropy in mind.

For Seuss, it was instinctive. Even so, the researchers discovered numerous parallels between their theory and the creative non-language found in so many of his stories. “We have found that Dr. Seuss, who was well-known for his ability to make funny non-words, did so using combinations of letters that were predictably lower in entropy,” says Westbury. A Seuss word like “yuzz-a-ma-tuzz,” from the book On Beyond Zebra, has the necessary low entropy because it repeats a letter as uncommon as Z. “He may have simply been coming up with words that he thought sounded funny, but essentially the probability of the individual letters is what matters most.”

The research went even further than simply confirming what we already knew. Not only did the researchers confirm that weird non-words tend to make us laugh, they discovered something more profound: “Non-words are funny to the extent that they are weird,” says Westbury.

Put another way, “there is a correlation between quantifiable non-word weirdness and funniness ratings,” he says. “I like to think that what we have demonstrated is how people use humor to do math. We use a subjective feeling of ‘funniness’ to successfully estimate string improbability.” Quite a bombshell to end an interview, don’t you think? But as Dr. Seuss might say: “Don’t cry because it’s over. Smile because it happened.”
