ChatGPT: Lies, Damn Lies, and Hallucinating AI


Greg Verdino

Greg is a business futurist, a top global keynote speaker, an entrepreneur, and the author of two books including NEVER NORMAL. He is a leading authority on digital transformation and the power of adaptability. It’s his mission to empower individuals and organizations to thrive in the age of exponential change.

By now, it’s pretty well known that ChatGPT has a tendency to invent facts, given that it has no basis for “knowing” true from false. It’s just using probabilities to string together words and phrases in a way that mimics human-like writing. It’s also prone to inventing fake sources for those fake facts. (Literally, fake news…) In fact, as anyone who has spent a fair amount of time using the hot bot knows, imaginary sources are a prevalent problem.

So prevalent that Chris Moran, the Guardian’s head of editorial innovation, took to the paper’s own Opinion pages to write about inquiries from two researchers looking to verify articles that ChatGPT had attributed to named reporters working for the publication. Pretty typical stuff for researchers and reporters. Except that neither article exists.

The same day Moran published his Opinion piece, news broke that ChatGPT had invented a sexual harassment scandal and named a real person, George Washington University law professor Jonathan Turley, as the perpetrator. Its source? An imaginary Washington Post article from 2018. And when WaPo ran its own investigation into the incident, its reporters found that Microsoft’s Bing (which also incorporates GPT technology) repeated the accusation.

Where a couple of odd calls to an editorial desk might be a nuisance (for both the desk and the researcher, who likely expected nothing more than a check-the-box verification), false accusations of serious misconduct are something far more dangerous.

“Hallucinations” like these are likely to become less prevalent over time, as the large language models that underlie chatbots like ChatGPT, Bing, and Google’s Bard are trained on even larger data sets, refined through reinforcement learning, and fine-tuned based on the millions upon millions of prompts from and interactions with early users. But factual and logical errors may remain a fixture of generative AI for a long time, possibly forever.

I should pause here to point out that, inflammatory headline aside (damn you, tempting clickbait!), ChatGPT does not, in fact, “lie” any more than it tells the “truth.” It does neither. It has no understanding of the words it strings together. It has no moral compass, no motives, does no reasoning, and never pauses to reflect. Again, it merely strings together words in a manner that often turns out to be true, but sometimes doesn’t. ChatGPT doesn’t know the difference. But you, dear human, do…

And that’s where you come in. What’s a user to do?

The lesson for any consumer or business end user who employs ChatGPT (or any of the many applications built on top of OpenAI’s GPTs) as a research or writing assistant is a simple one: Verify. Verify. Verify.

If you thought prompt engineering would be an important job skill in the age of AI, wait until I tell you how important fact-checking will be. It’s important to keep in mind that an AI like ChatGPT hasn’t been trained to tell the truth or get things right; it has (essentially) been trained to respond to user queries with information that sounds plausibly accurate, making connections between words without understanding the meaning behind them, as a means of generating believably human-like text.
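To make that concrete, here’s a deliberately tiny Python sketch of the underlying idea (a toy illustration of my own, not how GPT actually works under the hood): it strings words together by sampling a “plausible” next word from learned probabilities, and nothing in it ever checks whether the output is true.

```python
import random

# Toy "language model": for each word, the probabilities of the word that follows.
# (Purely illustrative numbers; a real model learns billions of parameters from vast text corpora.)
next_word_probs = {
    "The":       {"Guardian": 0.5, "professor": 0.5},
    "Guardian":  {"reported": 0.7, "published": 0.3},
    "professor": {"was": 0.6, "denied": 0.4},
    "reported":  {"that": 1.0},
    "published": {"an": 1.0},
    "was":       {"accused": 1.0},
    "denied":    {"everything.": 1.0},
    "an":        {"article.": 1.0},
    "accused":   {"of": 1.0},
    "of":        {"misconduct.": 1.0},
}

def generate(start: str, max_words: int = 8) -> str:
    """String words together by repeatedly sampling the most 'plausible' next word."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("The"))
# Might print: "The professor was accused of misconduct."
# Fluent and plausible -- but nothing here ever checked whether it's true.
```

Real models are enormously more capable than this sketch, but the core objective is the same: sound plausible, not be correct.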

If I sound a bit like a broken record, so be it. It’s far too easy to anthropomorphize this technology and assume it has more agency over its actions than it does. And this is precisely where any human user is bound to get in trouble. Ultimately, you are both the arbiter of the truth you tell and the person who is accountable for it — whether or not you incorporate AI into your writing workflow.

There are 100 million+ people using ChatGPT today. And that’s a lot. But bear in mind that OpenAI’s GPT models and Google’s equivalents will be coming to your Microsoft Office and Google Workspace apps; that’s BILLIONS of users with access to what are arguably the most powerful (and still highly flawed!) generative AI models in the world. And then there’s GPT-powered search, which Microsoft already offers as part of Bing inside its Edge browser and which Google is still testing before rolling it out to the general population.

So even if you’re not a ChatGPT power user today, you’ll be using technology like this (perhaps without even thinking twice about it, as is the case with so many AI applications today) before you know it.

If you’re a generative AI decision-maker at a media company, marketing agency, brand, or really any company of any kind, the lessons are these: Cautious will beat crazy. Editorial (or content) standards are more important than ever. Clear, written guidelines (even policies), along with effective communication, rollout, and enforcement of those guidelines, will keep you from looking like idiots (at best) and landing in hot water (at worst).
