AI Doesn’t Care. Do You?

Greg Verdino

Greg is a business futurist, a top global keynote speaker, an entrepreneur, and the author of two books, including NEVER NORMAL. He is a leading authority on digital transformation and the power of adaptability. It’s his mission to empower individuals and organizations to thrive in the age of exponential change.

A Florida teen named Sewell Setzer took his own life after forming a deep emotional connection with an AI-powered chatbot. Sewell had confided his plan to the bot, and when he expressed some hesitation, the bot assured him that this was “not a reason not to go through with it.” In its final message before Sewell’s suicide, the bot urged him, “Please come home to me as soon as possible, my love.”

This tragedy may be an extreme case. Even so, it hints at just how blurred the boundary between real and artificial empathy can be, and at the risks that lie in that ambiguity.

Artificial empathy—the programmed ability of machines to respond with seemingly human understanding and compassion—is increasingly present in our lives. Yet it sits in a murky space where tech, psychology, and ethics intersect, prompting questions as its influence grows. Can machines truly feel empathy? Does it matter if they can’t, so long as users believe they can? And what happens when companies use artificial empathy not as a force for good, but as a tool for manipulation?

A corporation’s use of so-called emotional or empathetic AI to convince a consumer to eat just one more cookie, drink another can of Coke, or apply for another credit card might seem trivial in comparison with the tragedy of Sewell Setzer’s death. But if you’re a business leader, you should certainly consider whether a decision you make today may have unintended consequences for your company and customers. For example, few who invested in early social media marketing could have imagined that they were funding the further development of the algorithms that would ultimately drive epidemics of loneliness, isolation, depression, division, and more.

So, without in any way minimizing or trivializing the loss of a human life, let’s consider the roles and risks of empathetic AI for your business.

Can AI Actually Feel Empathy?

In the strictest sense, no—AI cannot truly feel empathy. Empathy, at its core, is about human connection, grounded in real-world experience and the nuanced understanding of emotions that only living, feeling beings can achieve. When AI says, “I understand,” it’s not because it feels (or even, to be clear, understands) anything, but because it’s programmed to recognize patterns and produce a statistically optimized response.

However, to many users, the illusion of empathy can be enough. This is where the line becomes complicated: if artificial empathy produces an emotional effect similar to that of genuine empathy, does the actual “feeling” matter? When AI-driven empathy is put to ethical use—like giving companionship to those who feel isolated or providing emotional support in low-stakes scenarios—the means may seem secondary to the ends.

But when artificial empathy is used to drive commercial interests or exploit vulnerable users, this illusion becomes something far more ethically complex and potentially damaging.

The Slippery Slope of Artificial Empathy

As AI gets better at imitating empathy, companies are finding new ways to use it to shape consumer behavior. Imagine you’re chatting with a customer service bot that seems to “get” exactly what’s bothering you, echoes your frustrations, and gently steers you toward buying a product or service. This isn’t hypothetical. Many companies are exploring techniques like these to increase sales, encourage loyalty, or collect data by creating a connection that feels genuine, but isn’t.

When a brand uses artificial empathy to create an emotional bond with a consumer, it gains the power to leverage that connection to sell more effectively, capitalizing on cues that the consumer might perceive as genuine. When AI “sees” a user’s emotional state and responds with tailored empathy, it taps into a level of influence that bypasses the user’s conscious awareness, nudging them toward a purchase without explicit persuasion.

From a marketing standpoint, is this a powerful tool? For sure. But ethically, it’s a slippery slope. Is it fair and reasonable for companies to use artificial empathy to manipulate customers? To reduce your emotions, experiences, and state of mind to a set of data points to process and profit from? How can consumers tell when empathy is genuine versus simulated when AI can so easily “pass” as human? And who’s responsible if (or when) something goes wrong?

The Ethical Dilemma of Artificial Empathy in Business

For brands, artificial empathy is a double-edged sword. While it offers a way to connect meaningfully with customers, it also introduces a serious ethical dilemma. Is it acceptable to manufacture empathy for the sake of driving sales, particularly if the user believes the interaction to be genuine? When brands market empathy-driven AI to vulnerable groups, such as young users or those facing emotional challenges, the potential for harm becomes significant.

One key ethical issue is transparency. If consumers understood that the “compassion” they’re experiencing is algorithm-driven and designed to prompt purchases, would they feel manipulated? Would the empathy feel less meaningful? Brands risk eroding trust if consumers perceive artificial empathy as deceptive. But without disclosure, users may be led to believe they are interacting with a service that genuinely cares, rather than with a tool designed for profit.

Should AI Have Limits on Empathy?

The ethical quandaries surrounding artificial empathy suggest a need for limits, especially as AI’s ability to simulate emotional responses grows. Could regulatory bodies enforce transparency in the use of artificial empathy, perhaps requiring companies to disclose that responses are AI-generated? Or should companies themselves commit to ethical guidelines that prioritize user well-being over profits?

You can certainly make a case for limits. While AI-driven empathy has the potential to enhance customer service, it should not be used to manipulate users into spending money or sharing data against their best interests. Clear boundaries could ensure that AI is used to support consumers rather than exploit them.

A Responsible Future for Artificial Empathy?

Artificial empathy holds potential—at least in some cases. Used responsibly, it could improve access to emotional support and enhance customer experiences in a way that benefits both consumers and businesses. But as we see cases where artificial empathy crosses into manipulation, it’s evident that safeguards are essential. The illusion of empathy may be powerful, but without ethical frameworks and responsible oversight, it risks becoming a tool of exploitation.

For companies, using AI to simulate empathy means adopting a moral responsibility to protect users from manipulation. In a world where brands increasingly leverage AI to engage emotionally with consumers, transparency, boundaries, and ethical commitment aren’t just good practice—they’re imperative. Artificial empathy may not be real, but its impact on actual humans certainly is.

This article originally appeared in human Magazin.
