Can AI-Generated Text Be Funny?

Illustration: Jim Cooke/Gizmodo

What’s worse: a future in which the robots turn against us, or a future filled with robot stand-up comedians? Regular stand-up comedians are bad enough; I don’t need a robot asking me to come to its stand-up show all the time. Luckily, today, robotic murder technology is far more advanced than robotic laughter technology; the latter is a subtler art, and it will take some time before the robots master it. In fact, whether they can master it at all is up for debate. To weigh in on exactly that issue, for this week’s Giz Asks, we reached out to a number of people working at the intersection of AI and humor.


Kim Binsted

Professor, Information and Computer Sciences, University of Hawaii, whose work focuses on artificial intelligence, among other things

The big issue with AI and humor is world-knowledge, which is the big issue with a lot of AI topics. To be funny, you need to know a lot about the world—about conventions and expectations and the way the world works in general. Humor works by violating those expectations. If you don’t know what those expectations are, you can’t violate them.

I did my PhD on this topic—I wrote a program called JAPE: Joke Analysis and Production Engine. It makes puns. All the puns it made were well-structured, and a subset of them were funny. (For example: What do you call a martian who drinks beer? An ale-in.) Puns were low-hanging fruit, because the knowledge required is strictly textual: we have a lot of text-knowledge encoded in a form that AI can access (basically, dictionaries and thesauruses).
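JAPE’s actual schemas and lexical machinery are much richer than this, but the core move—swap a word for a sound-alike phrase and drop both into a riddle template—can be sketched in a few lines. The sound-alike table below is hand-made for illustration; JAPE derived this kind of knowledge from dictionaries and thesauruses rather than a hard-coded table:

```python
# Toy punning-riddle generator in the spirit of JAPE: a riddle is built
# by swapping a word for a phrase it sounds like, then filling a fixed
# question-answer template. The table is hand-made for illustration.
SOUND_ALIKES = {
    # word : (phrase it sounds like, what that phrase evokes, riddle subject)
    "alien": ("ale-in", "drinks beer", "martian"),
}

TEMPLATE = "What do you call a {subject} who {trait}? An {pun}!"

def pun_riddle(word):
    """Build a punning riddle from one sound-alike entry."""
    pun, trait, subject = SOUND_ALIKES[word]
    return TEMPLATE.format(subject=subject, trait=trait, pun=pun)

print(pun_riddle("alien"))
# -> What do you call a martian who drinks beer? An ale-in!
```

The interesting part of JAPE was not the template but the search: finding, in its lexicons, word pairs whose sounds overlap in a way the template can exploit.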

These days, we’ve got some really good deep learning, and deep learning is really good at seeing regularities in data—so it’s possible that once the AI has seen these regularities, it can violate them in a way that’s humorous.

One thing stand-ups do is, they’ll tell a joke, and people will laugh, and then, as the laughter is dying down, they’ll do another little kicker—sort of a follow-up punchline, which gets the laughter going again. I’ve always wondered if AI could learn that rhythm—if it could get that timing down—even if it’s not producing an actual joke.


Melissa Terras

Professor, Digital Cultural Heritage, The University of Edinburgh

Comedy, like all arts, is bound by a set of rules, as the old joke (TIMING!) goes. AI is great at rules, in that it is bound by them, and sentenced, for the most part, to imitate those that have come before it, whether human or machine. The results of AI can also be remarkably obtuse—not quite getting, or replicating, the nuances of rules we have implicitly accepted. It is these slips which can often be funny: not so much an uncanny valley as an unfortunate, almost slapstick, one. The gap between our learned modes of experience, and AI’s replication of them, can be funny, corny, and even hilarious.

We see that in a project currently running at the University of Edinburgh, in conjunction with the Edinburgh Fringe Festival, which for the first time since 1947 is not happening this year, due to the Coronavirus pandemic. Faced with the absence of a Fringe program, we scraped the last eight years of listings data and had a Long Short-Term Memory (LSTM) recurrent neural network come up with its own rolling program of suggested shows, generating 350 new show descriptions, tweeted hourly (well, when people are not sleeping) over the usual four-week time frame of the Fringe.
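ImprovBot’s character-level LSTM is far beyond a few lines of code, but the overall pipeline—learn from old listings which token tends to follow which, then sample new text—can be illustrated with a much simpler stand-in, a word-level Markov chain. The toy “listings” below are invented for the example, not real Fringe data:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain, standing in for ImprovBot's character-level
# LSTM (a far more capable model). Both learn which token tends to follow
# which in the training text, then sample new sequences from that model.
CORPUS = [
    "a one woman show about love and robots",
    "a musical about love and brexit",
    "sketch comedy about robots and the end of the world",
]

def build_chain(lines):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, max_words=12, seed=0):
    """Sample a new 'show blurb' by walking the chain from a start word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain(CORPUS)
print(generate(chain, "a"))
```

Where the chain only remembers the single previous word, the LSTM conditions on a long history of characters—which is why ImprovBot’s output is grammatical enough to read as plausible, slightly off-kilter show copy.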

Improvbot.ai (running until the end of August 2020) has had a great reception, also interacting with the Improverts, the longest-running improvisation show at the Edinburgh Fringe, who perform an AI-prompted sketch most evenings. The titles generated hit a comedic, and truthful, nerve: “LONDON SOUL: the female fear of breakfast” and “SHANG WAY: A weekend education program serves sandwich to an extraordinary style of killers.” However, ImprovBot also walks a fine line: it is an elegy for a Fringe that hasn’t happened, for the economic disaster of 300 million lost ticket sales, and for a creative sector that has suddenly had the financial rug pulled out from under it. Funny ha ha—or oddly moving? Does the gap between our Fringe experience and the slightly off-kilter program suggested by The Bot provide humor, or—in 2020—pathos?

The titles produced by The Bot are random, and in that randomness, there is humor. But does that mean that AI is itself funny? If we are constantly referring to our learned rules—of show descriptions, of the randomness of the traditional Fringe program—but also to its expected tropes (Shakespeare, patriarchy, comedy, Brexit the musical), is the AI being funny when it spits out “BREAKING THE AMAZING STORY OF BREXIT: See a British comedian, a selection of familiar musicians from the Putting Man, the team of first artists including early 1700s and 1990s to winning a subject to find anyone at the game of the best show”? Or is the humor dependent on the gap between its text, our understanding of the bonfire reality we find ourselves in now, and our memories of Fringes past?

AI will have to get better to truly come up with its own jokes, and to understand the intersectional rules of society—and how to navigate and traverse them—if it truly wants to be funny of its own accord. For now, we can laugh as it sees through a glass, darkly, making a rudimentary attempt to replicate our imperfect, and complex, and brilliant, and maddening world.


Christopher Molineux

Humor Researcher/Comedian

There are two different ways of making funny AI. One is to program it so that it spits out funny material, which is basically where we’re at now; the other is to get it to create humor. The first is fairly easy to do; the second is pretty difficult.

In the latter scenario, people tend to try to make AI that makes language-based jokes. There’s a kind of reflexive connection between “jokes” and “humor” among people working in AI. But when was the last time you wrote a joke? “An Englishman, an Irishman and a Scotsman walk into a bar,” that sort of thing—most people don’t write things like that. So why are we expecting AI to be able to do this stuff?

The truth of the matter is that the majority of humor we create ourselves is not jokes—it tends instead to be things that skew different aspects of perception and cognition, things that split two different aspects of something and shift them. This can be done very simply—by using the wrong tone of voice, say.

An important point for my own research is that humor plays an important role in developing AI in general. My basic thesis is that, in the course of human evolution, we were funny before we were smart—and we became smart because we were funny. A baby can both respond to humor and create humor much earlier than it can put together language or create music. Humor provides the basic cross-correlation of data and sharing in the social context that forms the basis for all these aesthetic impulses, as well as language and more complex social communications. We could enhance our development of AI technology by understanding the cognitive aspects of humor, especially in an evolutionary context.


Qiang Ji

Professor, Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute

Despite significant recent developments in AI technologies, generating funny pictures/art with current AI remains challenging. The challenges arise from multiple aspects.

First, creating funny art involves one of the most sophisticated forms of human cognitive skill, often requiring complex, ambiguous, and incongruous manipulation of the art’s semantic content. These human skills are beyond existing AI technologies.

Second, pictures are funny for various reasons, and what counts as funny varies from person to person and with context. There is no unanimous, universal definition of funny.

Third, current AI technologies mostly rely on supervised learning, i.e., learning that requires strong supervision. To be effective, supervised learning requires collecting a large amount of data and manually annotating it. As humor is subjective, it is difficult to produce enough consistent, high-quality annotations to fully leverage current state-of-the-art AI technologies.

Finally, what makes a picture funny lies in deep semantic information in the data, while current AI technologies are only good at representing pictures’ superficial appearance. There is therefore a semantic gap between what current AI technologies can represent and the high-level semantic content that makes a picture funny.

Having said that, I believe it is possible for future AI technologies to generate funny pictures/art. Despite variations in the reasons for being funny, psychologists agree that funny materials may share some common and distinctive characteristics, such as out-of-the-ordinariness, unexpectedness, and incongruity, and that it is the presence of these common characteristics that distinguishes funny pictures from unfunny ones. Their studies further show that funny pictures are usually associated with animals or people doing something unusual or inconsistent with the context.

If this is indeed the case, it is possible to leverage the latest developments in AI, in particular generative models such as Generative Adversarial Networks (GANs), which have achieved spectacular success in generating realistic images. Unlike supervised learning models, GANs can be trained without supervision. We could therefore collect a large number of unannotated funny pictures, use them to train a generative model to learn feature representations that capture the funny elements of the pictures, and then use those learned features to render (synthesize) new pictures that are funny yet different from those in the training data.

The feasibility of such an approach is supported by recent AI research in affective computing, whereby images are classified into emotional categories such as happy, sad, pleasant, and disgusting. In fact, one of my current NSF projects is affect-based video retrieval, in which we have been developing computer vision and machine learning methods to capture the affective content of videos and use it to retrieve them according to their emotional content. Working along this direction, it should be possible to use these features to produce pictures/art that can make people laugh.
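A GAN for real images is well beyond a short listing, but the adversarial training loop itself can be illustrated on 1-D toy data. In the sketch below, a deliberately tiny generator (a linear map) learns to mimic “real” data (a Gaussian, standing in for a corpus of funny pictures) by fooling an equally tiny logistic discriminator—stand-ins for the deep networks a real GAN would use:

```python
import numpy as np

# Minimal 1-D GAN sketch: the generator G(z) = w_g*z + b_g tries to make
# samples the discriminator D(x) = sigmoid(w_d*x + b_d) cannot tell apart
# from "real" data drawn from a Gaussian. Gradients are written out by
# hand; this illustrates only the adversarial loop, not image synthesis.
rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 1.25
w_g, b_g = 1.0, 0.0          # generator parameters
w_d, b_d = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(500):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    grad_logit = np.concatenate([d_real - 1.0, d_fake])  # dLoss/dlogit
    xs = np.concatenate([real, fake])
    w_d -= lr * np.mean(grad_logit * xs)
    b_d -= lr * np.mean(grad_logit)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    grad_fake = w_d * (d_fake - 1.0)                     # dLoss/dfake
    w_g -= lr * np.mean(grad_fake * z)
    b_g -= lr * np.mean(grad_fake)

samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

The generator never sees the real data directly—only the discriminator’s gradient—which is what lets a GAN learn from a pile of unannotated examples, exactly the property the paragraph above relies on.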


Do you have a burning question for Giz Asks? Email us at tipbox@gizmodo.com.
