AI has a trust problem. And while that might seem like something for Sam, Satya, or Sundar to sort out, the truth is it’s a challenge that affects every organization racing to integrate AI into their operations.
OpenAI’s release of GPT-4o and its slick demos of an uncanny voice assistant grabbed headlines as a largely uncritical tech press swallowed and then regurgitated the hype. But the bigger OpenAI story was the resignation of technical co-founder Ilya Sutskever and head of alignment Jan Leike.
Sutskever’s departure is hardly a surprise, given his alleged role in Sam Altman’s ouster and swift reinstatement last November – a public drama during which the board of directors charged Altman with being “not consistently candid.”
But Leike’s departure is noteworthy on its own. As OpenAI’s trust and alignment czar, he directly cited the company’s emphasis on shiny objects over safety as his key reason for leaving. As if to prove his point, OpenAI swiftly disbanded his entire AI safety team upon his departure.
Now, OpenAI is hardly the first AI company to deprioritize safety. Early last year, Microsoft laid off its entire team focused on AI ethics and the societal impacts of AI. Google fired two of its top ethicists in 2020 and 2021, then disbanded its internal ethics watchdog organization at the start of 2023.
And last week alone, OpenAI wasn’t the only technology company that had trouble around trust. A Slack user discovered that the popular corporate messaging app is training its AI model on proprietary user data without consent, even as its parent company, Salesforce, positions itself as a leader in trustworthy AI. (Salesforce did swiftly respond to the outcry: Trust us, it’s fine!)
As I’m writing this, OpenAI is back in the news for another little white lie that underscores a larger betrayal of trust. When observers noticed that the voice of GPT-4o bore a striking resemblance to Scarlett Johansson’s chatbot character Samantha from the movie Her, Altman and CTO Mira Murati claimed any similarity was a “coincidence.” Johansson has since gone public about OpenAI’s attempt to hire her to voice its bot, and about her belief that the company clearly trained it to mimic her even though she turned them down.
Amid missteps and misdeeds throughout the AI ecosystem, OpenAI may face a credibility challenge that is uniquely its own, one that stems from its leadership team.
Beyond that one company’s problems, though, the AI industry’s trust problem runs deeper than headlines and the hype cycle. And not all of the responsibility rests on the shoulders of tech giants.
Five Key Barriers to Trusted AI

Let’s look at a handful of major barriers standing in the way of trusted (and trustworthy) AI.
Transparency and Explainability
AI systems often function as “black boxes.” Even developers struggle to understand and explain how these systems reach their conclusions. This lack of clarity erodes user trust, especially when AI is deployed in sensitive domains like healthcare, criminal justice, finance, government, and even human resources.
Much of the responsibility here does lie with the developers that train and maintain the foundation models most generative AI applications are built upon, and with the application companies that must ensure their systems perform as intended. But this doesn’t let end-user organizations off the hook: they still need to know exactly what might be hidden in the Terms of Service and conduct proper due diligence before committing to any AI technology partnership.
Ethical and Societal Concerns
AI systems, reflecting the biases in their training data, can exacerbate discrimination. For instance, facial recognition technology’s higher error rates for people of color highlight the potential for harm in law enforcement applications. Ensuring ethical AI deployment requires rigorous standards to mitigate these biases and safeguard fairness.
At the same time, there’s growing awareness around AI’s high environmental impact, fears around the potential for workforce displacement, and concerns about privacy, data protection, and intellectual property rights.
Lagging Regulation
Regulatory frameworks struggle to keep pace with AI’s rapid advancement, leading to significant gaps that can be exploited. This regulatory lag means that many AI systems are deployed without thorough vetting for safety, fairness, and ethical considerations. It also means that end user organizations may lack the guidance and clarity they need to adopt and use AI systems confidently.
While this falls squarely on legislators, the fact is corporate leaders should approach AI (or any new technology) from a stance of common sense and responsibility, regardless of whether there’s regulation to prescribe or prohibit certain uses.
Responsibility and Accountability
When AI systems falter or cause harm, pinpointing responsibility becomes murky. This ambiguity enables organizations to evade accountability, undermining trust further. The industry must establish robust accountability frameworks that clearly delineate the responsibilities of every stakeholder, from developers to end users. Continuous human oversight is essential to identify and rectify biases and errors early on, preventing their reinforcement over time.
It’s clear that not all harms stem from the model developers themselves. Take, for instance, the FTC’s recent move to ban national retailer Rite Aid from using AI facial recognition in its stores after finding that the system inaccurately flagged women and people of color as shoplifters – and that Rite Aid failed to implement reasonable safeguards in its deployment of the technology.
All of this underscores the idea that, when it comes to AI in business, ethics aren’t optional.
The “Credible Liar” Challenge
This trust-buster is a bit different from the others. In the two years since generative AI arrived on the scene, many have noted its ability to present inaccuracies, incomplete information, outright falsehoods, and invented information in a voice and tone that implies and instills absolute confidence. At best, this is annoying. At worst, it’s downright deceptive.
Despite this, recent research indicates that people may trust AI systems more than they trust other people. This creates all sorts of quandaries — from how easy it is to spread believable AI-generated disinformation to the way businesses may use ultra-persuasive AI to sway customers’ beliefs and behaviors. All of this is only going to get stickier in a world where most content is AI-generated, virtual influencers are everywhere, and search engines provide AI summaries instead of links to reputable sources.
While some of this is outside your control, none of it should be off your radar. The fact is, trust is crucial to scaling your organization’s AI programs successfully.
Three Communities of Trust

When setting up and scaling AI for your organization, it’s crucial that you commit to building trust into your approach from the outset, and in three broad areas.
In the Market
Lack of trust in the market erodes customer relationships and damages your business reputation. Build trust with your consumers by being transparent about your use of generative AI in any public-facing programs, applications, tools, systems, or marketing campaigns. Demonstrate your commitment to data privacy and security. Proactively address potential biases. And consistently deliver accurate and valuable personalized experiences.
Among Your Employees
Lack of trust among your team members stands in the way of end-user adoption. Here, it’s essential that you maintain human agency, autonomy, authority, and accountability in all key business decisions and in every internal workflow.
By Your Company’s Leaders
Lack of trust by leaders across the enterprise slows progress, kills your credibility as an AI changemaker, and makes it harder to sell in and scale AI implementations. Here, it’s important to build confidence in using AI systems for high-stakes decision-making, efficient and effective operations, and both internal and market-facing communications.
10 Ways to Build Trust in AI Systems

Now, with a basic understanding of why trust is important to each of these three stakeholder communities, let’s explore 10 steps you can take to establish trust as you embed AI throughout your organization.
- Assemble a diverse and multidisciplinary team to build, evaluate, and buy AI systems for your organization. This helps make sure that a wide range of perspectives and experiences are considered, reducing the risk of unconscious bias in AI training and output, promoting fairness, and increasing the system’s relevance and usability for a broader audience.
- Be intentional about creating inclusive and appropriate data sets. It may be necessary to collect additional data about underrepresented and marginalized groups to promote responsible and inclusive use of AI. Bear in mind that you won’t always control, or have good visibility into, the core data set (for example, when you buy or build systems that use popular generative AI foundation models like OpenAI’s GPT-4). In these cases, it’s important to conduct adequate diligence before you determine your level of comfort with a potential vendor’s data practices. Publicly available vendor risk profiles like the ones published by Credo.ai can be helpful, and our own discussion guide for technology partner evaluation provides a practical framework for asking the right data questions. When you’re fine-tuning these third-party models or applications with your own proprietary data, pay attention to any unintentional bias that may have seeped in over time; a simple representation check (see the first sketch after this list) is a good starting point.
- Clearly explain how data is combined and used within AI algorithms so that you can validate the output of AI systems and correct for possible biases that surface later. Here again, the burden of explainability may fall mainly on the developers of the underlying foundation models or the third-party application companies that embed those models into the AI systems you buy. If you train or fine-tune any of these systems with your organization’s proprietary customer or campaign data, make sure you have a solid understanding of how the addition of your own data affects the performance of the model and influences its outputs.
- Ask which groups will benefit from using the AI system, which might be harmed, and whether the data being used is appropriate and fit for the intended purpose. Keep in mind that potential harms might be internal (for example, a productivity-enhancing AI system that leads to the elimination of headcount or the de-skilling of substantial workstreams) or external: inappropriate or even unethical uses of personal data, predatory or discriminatory business practices, or low-quality and inaccurate content produced by generative AI systems without adequate human oversight.
- Consider establishing an ethics board to provide holistic oversight on the ethical and responsible development of AI systems. Many organizations recruit outside advisors for their ability to lend diverse viewpoints to ethics discussions.
- Build in appropriate monitoring and validation mechanisms as the AI system is used over time. Maintain a registry of all active AI projects (a minimal sketch of such a registry, with a simple drift check, appears after this list). Establish a robust AI governance program to continually evaluate and address potential organizational, regulatory, and reputational risks before they become problematic. As real issues inevitably arise, address them promptly and own up to any errors.
- Enlist independent third parties to conduct periodic audits. Doing so will help make sure that your AI systems are performing as intended and are producing accurate, fair, and unbiased outcomes. Independent outsiders can also evaluate the sufficiency and effectiveness of your organization’s overall AI governance model.
- Track both the performance of AI systems and the impact of the decisions they suggest. Doing so helps align intentions with outcomes, provides an early warning of any variation or degradation in performance over time, and gives you a basis for ongoing discussions around risk and mitigation. When you clearly communicate performance issues to your internal leadership and key stakeholders, you create the kind of transparency that fosters trust and credibility. And when you encourage others in your organization to report performance issues they encounter with your AI systems, you create deeper engagement around responsible AI.
- Create standards to govern the development, purchase, and usage of AI systems in your organization. At my AI consultancy CognitivePath, we view these essential guardrails as “freedom within a frame”: a set of policies and practices that mitigates the most common and most egregious risks to protect your employees, company, customers, and brand, while empowering your internal end users to enjoy the productivity gains, creative boost, and innovation that AI-powered workflows can offer.
- Safeguard users by following social norms, along with applicable laws and regulations. It’s equally important to safeguard internal users and external constituents. At the same time, business leaders should acknowledge the complexity inherent in adhering to norms and even regulations. Norms and even the notion of what constitutes bias or fairness vary by country and culture. Regulations and laws are nascent, open to interpretation, and vary by region. In other words, there’s no “perfect,” but your brand’s purpose and values can (and must) be your guide for responsible AI.
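
To make the data-diligence advice above a bit more concrete, here is a minimal sketch, assuming a Python environment with pandas, of the kind of representation check you might run on a proprietary training or fine-tuning data set. The “group” column, the toy data, and the 10 percent threshold are illustrative assumptions only; real categories and thresholds should come out of your own governance discussions.

```python
# A minimal, illustrative representation check on a training or fine-tuning
# data set. The column name "group" and the 10% threshold are assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, min_share: float = 0.10) -> pd.DataFrame:
    """Report each group's share of the records and flag underrepresented groups."""
    shares = df[group_col].value_counts(normalize=True).rename("share")
    report = shares.to_frame()
    report["underrepresented"] = report["share"] < min_share
    return report.sort_values("share")

# Toy example: group "C" makes up only 5% of the records and gets flagged.
data = pd.DataFrame({"group": ["A"] * 70 + ["B"] * 25 + ["C"] * 5})
print(representation_report(data, "group"))
```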
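
Similarly, to illustrate the registry and monitoring ideas in the list, here is a minimal sketch, again in Python, of what a lightweight project registry with a basic drift check might look like. The class, field names, baseline figure, and tolerance are hypothetical; many organizations will use a dedicated governance platform or a shared spreadsheet instead, and the point is the discipline of recording and reviewing, not the specific tooling.

```python
# A minimal, illustrative AI project registry with a basic performance-drift
# check. Field names, the in-memory list, the baseline value, and the
# tolerance are assumptions, not a prescribed governance tool.
from dataclasses import dataclass, field
from datetime import date
from statistics import mean

@dataclass
class AIProject:
    name: str
    owner: str
    vendor_or_model: str
    launched: date
    baseline_accuracy: float  # agreed at launch, from pre-deployment testing
    recent_scores: list = field(default_factory=list)

    def log_score(self, score: float) -> None:
        """Record a periodic quality measurement (e.g., from human spot-check reviews)."""
        self.recent_scores.append(score)

    def drift_alert(self, tolerance: float = 0.05) -> bool:
        """Flag the project if recent performance slips below baseline by more than tolerance."""
        if not self.recent_scores:
            return False
        return mean(self.recent_scores[-10:]) < self.baseline_accuracy - tolerance

# The registry itself can start as a simple list that is reviewed on a set cadence.
registry = [
    AIProject("Support chat assistant", "CX Ops", "Third-party LLM", date(2024, 1, 15), 0.90),
]
registry[0].log_score(0.82)
print("Projects needing review:", [p.name for p in registry if p.drift_alert()])
```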
Bottom Line: The Effort is Essential

Trust is the key to unlocking AI’s true potential for organizations. By proactively building trust with customers, employees, and leadership, we lay the foundation for responsible AI adoption and innovation. This demands inclusive data practices, explainable systems, governance, and auditing that center on human values. While complex, the effort is essential.
When your AI reflects your organization’s values and commitment to fairness and agency, it becomes a powerful engine of trust and an important driver of growth. The future of AI-enabled business depends on the trust we consciously build in AI today. With focus and care, we can drive responsible AI innovation that propels organizations, work, and the world forward.