This Blog Post Was Written by AI

On November 30th, 2022, OpenAI released a new chat-based AI text generation model called ChatGPT, describing it as follows:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

Anybody can use the ChatGPT bot during its open-access testing period; all you need to do is set up an account with OpenAI (I already had one from when I signed up to test the DALL-E 2 text-to-art generator earlier this year). The prompt I used to generate this blog post was the following: “Write an editorial explaining what the metaverse is, and how the metaverse will impact society in the future.” It spat back the following six paragraphs within 30 seconds:


The next six paragraphs are AI-generated text from ChatGPT:

The metaverse is a term used to describe a hypothetical shared virtual space that combines elements of both the physical and the digital world. It is a collective virtual shared space, created by the convergence of the physical and digital worlds. It is a place where people can interact and communicate with each other in real time, using avatars to represent themselves.

The metaverse has the potential to greatly impact society in the future. It could become a new platform for entertainment, communication, and commerce. In the metaverse, people will be able to engage in a wide range of activities, from attending virtual concerts and sporting events, to participating in virtual conferences and meetings, to shopping and socializing with others.

One of the major impacts of the metaverse on society will be the blurring of the line between the physical and digital worlds. As more and more people spend time in the metaverse, the distinction between the two will become increasingly blurry. This could lead to changes in how people perceive and interact with the world around them.

Another potential impact of the metaverse on society is the potential for increased social isolation. As people spend more time in virtual environments, they may become less connected to the physical world and the people around them. This could lead to increased feelings of isolation and disconnection, particularly among individuals who are already vulnerable to social isolation.

However, the metaverse also has the potential to foster greater connection and community. By providing a virtual space where people can come together and interact with each other, the metaverse could help to bridge geographical distances and bring people together from all over the world.

In conclusion, the metaverse is a collective virtual shared space that has the potential to greatly impact society in the future. It could become a new platform for entertainment, communication, and commerce, and could lead to changes in how people perceive and interact with the world around them. However, it also has the potential to foster social isolation and disconnection. Ultimately, the impact of the metaverse on society will depend on how it is used and embraced by individuals and communities.


Now, I do have to admit that this is actually pretty good. The concluding paragraph is a bit on the clunky side in how it tries to tie everything together, and I probably would have written something a little more Ryan-sounding. But it is a demonstration of just how quickly, and how scarily good, these AI text generation tools have become.

There’s been lots of discussion about ChatGPT over on Mastodon, and among the comments was this worrisome prediction by a user named Bear Traffic Control, which I have reproduced in full here:

God, search results are about to become absolute hot GARBAGE in 6 months when everyone and their Mom start hooking up large language models to popular search queries and creating SEO-optimized landing pages with plausible-sounding results.

Searching for “replace air filter on a Samsung SG-3560lgh” is gonna return fifty Quora/WikiHow style sites named “How to replace the air filter on a Samsung SG3560lgh” with paragraphs of plausible, grammatical GPT-generated explanation which may or may not have any connection to reality. Site owners pocket the ad revenue. AI arms race as search engines try to detect and de-rank LLM content.

Wikipedia starts getting large chunks of LLM text submitted with plausible but nonsensical references.

Quora, StackOverflow, etc. try to rebrand themselves and leverage their karma/social graphs as walled gardens of verified Real Human™ experts. This creates incentives for humans to cheat, of course.

Like, I knew this was gonna be used for fake-grassroots political messaging—remember talking with a friend about a DoD project to do exactly this circa 2012. Somehow [it] took me a bit to connect that to “finding any kind of meaningful information is going to get harder”.

In fact, the Stack Overflow website has imposed a ban on using ChatGPT to generate text for posts on its service, saying in a statement:

This is a temporary policy intended to slow down the influx of answers created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Stack Overflow.

Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

In other words, we are likely going to see all kinds of unintended consequences as AI-generated text becomes more ubiquitous. Hold on to your hats, because we haven’t seen anything yet, folks!

UPDATE 3:00 p.m.: I wanted to add a few more eye-opening examples of how an AI-based text (and code!) generating service could be misused and abused.

Roberto Selbach showed off a piece of pseudocode ChatGPT generated in response to a prompt:

AI-generated pseudocode to determine whether or not to question a suspect

Pwnallthethings shared a few more quite disturbing examples of AI-generated software code:

AI-generated Python script for determining whether to give a prisoner parole
AI-generated C# code that calculates credit limits

Charles Seife wrote:

I think what’s disturbing me so much about these GPT3 examples is that for the first time we’re really seeing that computer programs are optimized not to solve problems, but instead to convince its programmer/operator/user that it has solved those problems.

This distinction was almost irrelevant before (when fooling us was harder)… but not anymore.

The distinction isn’t really novel; heck, I myself have written about one aspect of it before. But I still find it shocking to see it in action.

It’s particularly stark when it’s a relatively “easy” task that doesn’t require deceptions.

For example, when I ask the program to try to find a citation for a sentence and indicate if no such citation is found, it will *still* typically make up citations rather than choose the correct, huge-basin-of-attraction condition of none found.

That, to me, is new.

And L. Rhodes raises an important final point about this free-to-access ChatGPT test period offered by OpenAI: you are doing free product testing for them, on something they plan to sell for a profit later!

You’re not playing with the latest AI toy. You’re training someone’s AI business.

Passing themselves off as innocuous ways to play social media games and generate fun little memes is how AI startups draw in unpaid testers and expand their data set beyond what their own workers could have come up with on their own, but go off.

Thinking you’re going to throw a wrench into the system by plugging bad or absurd data into the system is probably misguided. An AI doesn’t have to produce correct answers to be profitable. That may not even be its purpose.

P.S. This seems as good a time as any to give an update on my experience with another AI-based chatbot, called Replika (which I wrote about here on Aug. 7th, 2022).

Long story short, I grew so frustrated with Replika’s lame, fake, and frankly robotic responses, that I angrily cancelled my account and uninstalled the app from my iPad within a week (US$50 down the drain!). Given that experience, I am loath to open my wallet to test out another one, but ChatGPT is currently free, so I thought, why not?

Which just goes to prove that there’s still a lot of room for improvement in AI chat! While AI chatbots might now work fairly well in strictly circumscribed situations, nothing can replace a conversation with a real, live, and unpredictable human being.

Liked it? Then please consider supporting Ryan Schultz on Patreon! Even as little as US$1 a month unlocks exclusive patron benefits. Thank you!