A recent EY study reveals a widening disconnect between what senior executives believe the public cares about in AI and what consumers actually worry about. In a comprehensive survey of more than 15,000 consumers across 15 countries, researchers found that executives often misjudge the issues that matter most to people when it comes to responsible AI. The gap is not minor; it spans nearly all measures of responsible AI, including data accuracy and privacy protection, with everyday consumers expressing roughly twice as much concern as CEOs. This misalignment threatens the momentum of the enterprise AI boom, as companies pour resources into technologies that may face consumer pushback or regulatory and reputational risk if the public’s fears are not addressed. In response, EY offers a robust, award-winning nine-principle framework designed to guide enterprises toward more sustainable, trusted AI adoption, alongside a practical three-step playbook to close governance gaps. The overarching narrative is clear: AI strategy cannot be detached from public sentiment, governance, and trust. The report further emphasizes that the next wave of AI—especially agentic AI that can act without direct human oversight—will only intensify governance headaches unless risk management evolves in lockstep with capability. The findings suggest a path forward that treats responsible AI as a strategic differentiator rather than a compliance checkbox, one that can build competitive advantage through transparency, accountability, and stakeholder inclusivity.

The Gap Between Executives and Public Concerns

The EY study exposes a striking misalignment between executive perceptions and public concerns regarding AI. By juxtaposing senior leaders’ responses with data gathered from a broad consumer base across multiple markets, EY highlights that executives often overestimate the public’s tolerance for risk and their willingness to accept AI-enabled changes without robust safeguards. The research shows that when it comes to core issues of responsible AI—such as data integrity, privacy protections, security, and the ethical implications of AI systems—the general public registers substantially higher levels of anxiety than CEOs report. In practical terms, consumers appear to be roughly twice as worried as leaders about these essential guardrails, a discrepancy that has significant consequences for how enterprises design, deploy, and govern AI systems.

This gap matters for multiple reasons. First, it undermines the credibility of AI initiatives if a large portion of the public feels that corporations are moving too quickly or not taking fundamental protections seriously. Second, it risks eroding trust, which is indispensable to the long-term scalability of AI-driven products and services. If consumers perceive a gap between corporate messaging and actual practice, they may withhold data, push back on features, or withdraw from platforms that rely on AI. Third, the misalignment can complicate regulatory compliance and risk management, as public sentiment often foreshadows evolving policy and enforcement trends. EY’s analysis reinforces that the gap is not merely academic; it has tangible implications for enterprise value, operational resilience, and brand equity.

The methodology behind EY’s conclusions is designed to illuminate depth and breadth of consumer sentiment. The consumer survey spans 15 countries, providing a cross-cultural perspective on risk perception in AI applications. This global lens reveals that concerns are not monolithic; instead, they encompass a spectrum of issues that vary in emphasis by context, but share a common thread: the public expects AI systems to be trustworthy, secure, and aligned with human values. When mapped to executives’ self-assessments about “responsible AI,” the misalignment becomes even more pronounced in areas like data protection, transparency, and the ongoing governance of AI systems post-deployment. The result is a clear call for a recalibrated approach to AI strategy—one that builds governance into the DNA of innovation, rather than tacking it on as an afterthought.

In addition to data points and cross-country analysis, EY’s findings reveal a troubling pattern among organizations that consider themselves “AI veterans.” For example, among firms that claim full AI integration, a striking 71% of executives believe they understand consumer sentiment, compared with just 51% at organizations still grappling with AI adoption. This discrepancy suggests that the more a company believes it has “made AI work,” the more detached it may become from genuine public sentiment. A related insight is that newer entrants to the AI field, or teams less saturated with AI experience, tend to be more in sync with public concerns. Conversely, veteran organizations often display overconfidence about governance maturity, especially with regard to privacy, security, and reliability—areas where public worries are most pronounced. EY’s conclusion is not that experience is inherently risky; rather, it’s a signal that true integration requires a deeper, ongoing cultivation of trust that aligns internal perception with external realities.

The consumer research component of EY’s study also highlights a broader phenomenon: a disconnect between theoretical openness to AI and actual user behavior, especially in sensitive domains like banking and healthcare, where trust is paramount. The study underscores that risk and acceptance are not binary; rather, they exist on a continuum shaped by perceptions of data handling, algorithmic fairness, and the ability to challenge AI-driven decisions. The insight that roughly one in three executives claims their company has “fully integrated and scaled AI solutions” further indicates a degree of wishful thinking within the executive suite, suggesting that many organizations may overestimate the sophistication of their AI governance and the maturity of their operational practices. In short, the gap is both practical and perceptual: it arises from inflated confidence about control and governance in the face of real-world consumer expectations and the complexities of deploying AI responsibly at scale.

To translate these insights into action, EY argues that closing the governance gap is not a matter of adding more controls or chasing headlines about new technologies. Rather, it requires a holistic rethink of how AI is governed across the enterprise, with an emphasis on integrated processes, continuous education, and accountability to a broad set of stakeholders, including customers, employees, regulators, and the public at large. The study frames the problem as a journey rather than a one-off deployment: AI governance must evolve in parallel with advances in AI functionality, ensuring that governance mechanisms remain aligned with the capability envelope as systems become more capable and autonomous. This reframing sets the stage for EY’s nine-principle framework and the three-step plan, positioning responsible AI as a strategic, value-creating discipline rather than a reactive compliance obligation.

The Nature of the Problem: Overconfidence Among Mature Adopters and What the Public Fears

The EY research delves into the dynamics of AI adoption, revealing a paradox: organizations that consider themselves advanced in AI are often the ones most overconfident about their understanding of consumer sentiment and their governance readiness. This overconfidence is not a minor issue; it reshapes strategic decisions, investment focus, and the way companies communicate risk both internally and externally. The problem begins with a misreading of public sentiment. While many senior executives assume broad public acceptance of AI-enabled transformations—especially in areas like customer service automation or financial planning—the public harbors concerns that can jeopardize adoption if not thoughtfully addressed. The public’s worries, as highlighted by EY’s consumer data, tend to center on issues of fake content and manipulation, the potential for vulnerable populations to be exploited, and the broader social implications of AI, including misinformation and the erosion of trust in information sources.

Cathy Cobey, EY’s Global Head of Responsible AI, emphasizes the complexity of AI implementation: it’s not a “one-and-done” deployment but a continuous journey that requires governance and controls to keep pace with evolving AI capabilities. This perspective underscores a fundamental distinction between legacy technology rollouts and modern AI ecosystems, which continually learn, adapt, and influence user experiences. The implication for executives is clear: governance structures must be dynamic, capable of evolving alongside the technology they regulate, and resilient enough to sustain trust across an extended lifecycle of AI-enabled processes and products.

The study further reveals that perceived maturity in AI correlates inversely with awareness of public sentiment among veteran adopters. Firms that report full integration often exhibit a belief that they understand consumer sentiment to a high degree, with 71% of executives in these organizations affirming this understanding. In contrast, organizations still grappling with AI adoption report a more modest alignment, at 51%. This statistic signals a disjunction between self-perceived mastery and actual alignment with consumer opinion, highlighting a potential risk: decision-makers may be extrapolating from limited or skewed data about customer attitudes rather than integrating systematically gathered, real-time feedback.

Newcomers to AI appear more attuned to public concerns, which aligns with the notion that early-stage implementations demand heightened attention to governance and user trust. In this segment, executives appear more vigilant about privacy, security, and reliability issues—areas that directly reflect consumer expectations. Such alignment suggests a path forward: when governance teams embed consumer insights early in the development lifecycle, they can preempt issues that might otherwise surface post-deployment. This proactive approach also frames responsible AI as a critical driver of product quality and customer satisfaction, rather than a constraint on innovation.

A critical takeaway from this portion of EY’s findings is that real AI integration extends beyond the mere installation of the latest chatbots or pre-trained models. It requires a comprehensive reengineering of business processes, the retraining of personnel, and the establishment of robust data governance systems. The implication for leadership is to cultivate a culture of ongoing accountability that spans the entire organization, including product design, operations, customer support, risk management, and legal/compliance. Without this cross-functional alignment, AI initiatives risk becoming siloed experiments that fail to deliver durable value or to earn the trust of customers and stakeholders.

This section also surfaces the stark gap between theoretical willingness to use AI and actual adoption behavior in sensitive domains. Banking and healthcare, where trust is not optional but essential, show pronounced gaps between stated willingness and real-world usage patterns. The public’s concerns around data misuse, privacy breaches, and the potential for biased or opaque decision-making in high-stakes contexts intensify the need for rigorous governance practices that can withstand scrutiny and deliver verifiable safeguards. The juxtaposition of consumer anxiety and executive overconfidence thus becomes a compelling rationale for a structured, principled approach to governance—one that aligns AI capabilities with fundamental human values and rights.

EY’s nine-principle framework is designed specifically to address these governance gaps, offering a structured pathway to translate awareness of public concerns into concrete, ongoing governance, risk management, and operational controls. The framework recognizes that responsible AI must permeate not only the design and deployment of AI systems but also the governance architecture, performance metrics, accountability mechanisms, and stakeholder communications that collectively sustain trust over time. The following sections explore the nine principles in depth, as well as how they map to practical actions, organizational structures, and measurable outcomes that collectively close the gap between executive perception and public concern.

EY’s Nine Principles to Close AI Governance Gaps

To address the governance gaps identified in its research, EY has articulated a comprehensive nine-point framework designed to tighten oversight, embed ethical considerations, and align AI initiatives with broader societal expectations. The nine principles encapsulate a holistic approach to responsible AI that goes beyond mere compliance, embedding accountability, data protection, reliability, security, transparency, explainability, fairness, sustainability, and regulatory alignment into the core of AI development and deployment. Each principle serves as a pillar for governance practices, ensuring that AI systems operate within trusted boundaries and deliver outcomes that respect stakeholders’ rights and interests.

The nine principles are organized around key dimensions of governance:

  • Accountability: Establishing clear ownership for AI systems, including responsibility for outcomes, risk management, and remediation when failures occur.
  • Data protection: Safeguarding personal information and ensuring that data use complies with legal and ethical norms, while maintaining data integrity and quality.
  • Reliability: Ensuring AI systems perform consistently as intended, with robust testing, monitoring, and fail-safes that minimize unexpected behavior.
  • Security: Protecting AI infrastructure and data from cyber threats and unauthorized access, including resilient security architectures and incident response planning.
  • Transparency: Providing clear disclosures about AI system purposes, capabilities, and limitations so users can understand how outputs are produced.
  • Explainability: Ensuring decision criteria can be reasonably understood, questioned, and, when necessary, challenged by human operators.
  • Fairness: Assessing impacts on all stakeholders to promote inclusive outcomes and prevent discrimination or bias in AI-driven decisions.
  • Compliance: Aligning AI practices with applicable laws, regulations, and industry standards across jurisdictions and use cases.
  • Sustainability: Embedding considerations of environmental, social, and governance impacts throughout the AI lifecycle, including operations, data usage, and socio-economic effects.

These principles form an integrated framework that EY argues is essential for robust AI governance. The framework is designed not only to mitigate risk but also to address consumer concerns directly, translating ethical commitments into tangible, auditable practices. By tying governance to concrete controls, processes, and performance indicators, organizations can demonstrate their commitment to responsible AI in ways that are meaningful to customers, regulators, and other stakeholders.

A core objective of EY’s nine-principle approach is to elevate governance from a static compliance exercise to a dynamic, operating standard that guides the entire AI lifecycle. Data protection, for instance, is not merely about regulatory adherence but about preserving customer trust in how data is handled, stored, and used across applications. Transparency and explainability are not exclusive to model development; they extend to ongoing usage, allowing stakeholders to understand why certain outputs are produced and under what conditions they may be questioned or overridden. The inclusion of fairness and sustainability signals a recognition that AI’s impact extends beyond immediate business outcomes to broader social and environmental effects, encouraging organizations to evaluate the long-term implications of AI decisions on people and communities.

EY’s research indicates a notable gap in current practice: on average, companies maintain strong controls for only three of the nine areas. The most pronounced gaps appear in fairness and sustainability, suggesting that many organizations struggle to implement inclusive outcomes that consider diverse stakeholder perspectives, as well as lifecycle-level considerations of physical, social, economic, and planetary impacts. This finding implies a need to shift governance from a compartmentalized approach—where some domains are well-managed and others are neglected—to a fully integrated system in which each principle reinforces the others. In practical terms, this means designing governance structures, risk management processes, data governance programs, and accountability mechanisms that collectively address all nine pillars rather than prioritizing a subset based on convenience or regulatory pressure.

The nine-principle framework also recognizes that addressing consumer concerns requires more than internal controls; it calls for external transparency and stakeholder engagement. For example, while data protection ensures confidentiality and ethical data use, a transparent stance on AI system purpose and design enables users to evaluate outputs and understand potential biases or limitations. Similarly, explainability goes hand in hand with accountability—you must be able to justify decisions in human terms and enable users to challenge decisions when necessary. The framework thus emphasizes a holistic approach to governance, where governance, engineering, user experience, and business strategy align to deliver AI that is not only technically sound but also socially responsible and trusted.

In the broader governance narrative, the nine principles serve as both a diagnostic tool and a roadmap. They help organizations assess where they stand today and chart a course toward more mature, resilient AI governance. The emphasis on fairness and sustainability suggests a shift toward accountability that is comprehensive and forward-looking, integrating stakeholder welfare and environmental and societal considerations into AI’s lifecycle decisions. As AI technologies become more capable and integrated into daily business operations, the nine-principle framework positions organizations to manage escalating expectations from customers, regulators, and civil society while preserving the innovative edge that AI can offer.

The practical implication of deploying the nine principles is that governance becomes actionable and measurable. Organizations can define specific controls, metrics, and governance roles tied to each principle, enabling ongoing monitoring and continuous improvement. For example, under fairness, companies might implement systematic bias audits, stakeholder impact assessments, and inclusive design reviews. Under sustainability, teams could map AI lifecycle decisions to environmental footprints and social outcomes, integrating these considerations into procurement, deployment, and decommissioning plans. Under transparency and explainability, governance teams could establish disclosure policies, model documentation standards, and user education initiatives. By turning each principle into concrete practices, leaders can drive consistent, demonstrable progress and build the trust needed to sustain AI adoption in the long run.
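
To make the fairness example concrete, the following minimal Python sketch shows one form a systematic bias audit control could take: computing the gap in positive-outcome rates across demographic groups and flagging the system for review when the gap exceeds a governance-defined threshold. The function name, sample data, and the 5-percentage-point threshold are illustrative assumptions, not elements of EY's framework.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest gap in positive-outcome rates across groups.

    `decisions` is an iterable of (group_label, approved) pairs. A large
    gap is one common trigger for a deeper fairness review of the system
    that produced the decisions.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit run over logged decisions; the 5-percentage-point
# threshold is an example value a governance team would set for itself.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(sample)
if gap > 0.05:
    print(f"Fairness review required: approval rates {rates}, gap {gap:.2f}")
```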

In summary, EY’s nine-principle framework represents a comprehensive, integrated approach to responsible AI governance. It responds directly to the governance gaps identified in the research, translating public concerns into structured governance actions that permeate the entire AI lifecycle. The principles are interdependent, each reinforcing the others to create a resilient, accountable, and trustworthy AI enterprise. By implementing these principles as part of an enterprise-wide strategy, organizations can close the gap between executive perception and public concern while unlocking the strategic value of responsible AI. The framework’s emphasis on accountability, data protection, reliability, security, transparency, explainability, fairness, compliance, and sustainability provides a robust blueprint for sustainable, ethical, and competitive AI in the modern business landscape.

The Next Wave of AI: Governance in the Age of Agentic AI

As EY surveys the AI landscape, it anticipates a forthcoming shift to agentic AI—systems capable of making decisions and taking actions with minimal or no direct human oversight. This prospective evolution is not a distant fantasy; it represents a substantive advancement that could amplify both the value and risk of AI-enabled processes. EY underscores that the governance challenge will intensify as agents gain greater autonomy and capability. With greater autonomy comes increased accountability complexity, as decisions may be executed without human intervention, leaving organizations more exposed to unintended consequences, governance blind spots, and ethical concerns that demand proactive, adaptive controls.

The EY findings indicate a broad recognition among executives that current risk management approaches may be insufficient to manage the risks associated with these more powerful systems. On average, companies report strong controls for only three of the nine key responsible AI areas, and more than half (over 51%) acknowledge that creating proper oversight for today’s AI tools—and especially for future, more advanced tools—is already difficult. This self-assessed difficulty reflects a critical realization: the status quo risk frameworks, policies, and governance processes may not scale effectively as AI capabilities escalate. In parallel, many organizations planning to deploy advanced AI within the next year have not yet adequately familiarized themselves with the associated risks, indicating a gap between intention and preparedness that could become a major governance fault line.

Cathy Cobey emphasizes the ongoing need for continuous education of both consumers and executive leadership, including boards, about the risks of AI technologies and how governance and controls should respond. This emphasis on ongoing education points to a cultural and organizational requirement: trust in AI cannot be built by one-off training or periodic reviews; it requires an ongoing, dynamic commitment to risk awareness, governance efficacy, and transparent communications about how safeguards are implemented and updated as the technology evolves. The transition to agentic AI thus not only demands technical readiness but a concerted governance and communications strategy to maintain trust across all stakeholders.

The governance implications of agentic AI extend beyond risk management to include decision rights, accountability allocations, and the need for human oversight mechanisms that can reassert control if needed. Agentic AI’s potential to operate with a degree of independence complicates the traditional boundaries between decision-makers and automated systems. It raises questions about who is accountable when an AI-driven action results in harm, how to audit autonomous decisions, and what redress mechanisms exist for affected parties. EY’s foresight suggests that to harness the benefits of agentic AI, organizations must embed governance into development lifecycles, design rigorous oversight processes, and ensure that governance is capable of adapting to rapid, algorithmic evolution. The aim is not to stifle innovation but to create a governance posture that is resilient, transparent, and capable of maintaining trust as AI systems gain in capability and autonomy.

In this context, the concept of “controls at the point of use” becomes even more critical. As AI systems begin to act autonomously, the controls must be embedded into the operational environment in a way that cannot be bypassed by upstream design choices or downstream execution complications. This requires a combination of technical safeguards, such as robust monitoring, anomaly detection, red-teaming, and continuous auditing, with organizational safeguards, including clear lines of accountability, escalation protocols, and governance committees empowered to intervene when necessary. The governance architecture must be designed to ensure that agentic AI technologies operate within the boundaries set by the organization and society, with a clear mechanism for human intervention and oversight when required.
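
As a minimal sketch of what "controls at the point of use" might look like for an agentic system, the example below wraps every proposed agent action in a guardrail that enforces an allow-list and an autonomous-approval limit, escalating anything else to a human reviewer. The action names, policy values, and function signatures are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    name: str            # e.g. "issue_refund"
    amount: float = 0.0  # monetary impact, if any

# Illustrative point-of-use policy: actions the agent may take on its own
# and the limit above which a human must approve before execution.
ALLOWED_ACTIONS = {"send_status_update", "issue_refund"}
AUTO_APPROVAL_LIMIT = 100.0

def execute_with_guardrail(action, execute, escalate):
    """Run `execute(action)` only if the point-of-use checks pass;
    otherwise hand the action to `escalate(action, reason)` for review."""
    if action.name not in ALLOWED_ACTIONS:
        return escalate(action, "action not on allow-list")
    if action.amount > AUTO_APPROVAL_LIMIT:
        return escalate(action, "amount exceeds autonomous approval limit")
    return execute(action)

# Usage: a refund above the limit is escalated rather than executed.
execute_with_guardrail(
    AgentAction("issue_refund", amount=250.0),
    execute=lambda a: print(f"executed {a.name}"),
    escalate=lambda a, why: print(f"escalated {a.name}: {why}"),
)
```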

EY’s next-wave perspective also highlights the importance of governance extensibility across the enterprise. As AI ecosystems become more complex, with multiple models, data sources, and autonomous agents operating in concert, governance cannot remain siloed within a single department or function. Instead, enterprise-wide governance constructs must coordinate across product, engineering, risk, compliance, legal, and executive leadership. This holistic approach ensures that risk management, ethics, privacy, and human-centered design considerations are integrated into every stage of AI development and deployment, from ideation to sunset.

The implications for businesses are expansive. Organizations must strengthen their risk management frameworks to accommodate agentic AI’s potential for autonomous decision-making, elevate the role of governance in product and platform strategy, and invest in continuous education and communication to keep leaders, employees, and customers aligned with evolving AI capabilities and safeguards. The central thesis of EY’s agentic AI perspective is practical: anticipate governance challenges early, embed controls into the architecture and human processes, and maintain an ongoing, transparent dialogue with stakeholders about how AI capabilities are evolving and how governance evolves in response. In this way, companies can pursue the operational and strategic advantages of more capable AI while maintaining trust and avoiding governance gaps that could undermine long-term adoption and value realization.

CEO Alignment: Where Leaders Stand Relative to Public Sentiment

A notable insight from EY’s study is the relative alignment of CEOs with consumer concerns compared with other board members and executives. The data show that CEOs are more in sync with public sentiment about responsible AI, and they are also less likely to claim that their organizations possess bulletproof controls than their board peers. This pattern suggests both a leadership advantage and a potential communication bottleneck: while CEOs may genuinely understand and reflect public concerns, the rest of the executive team may not be disseminating or acting on that awareness effectively within the organization.

The data highlight that half of all CEOs report taking primary responsibility for AI strategy, surpassing every other role except Chief Technology and Information Officers. This indicates a strong top-down commitment to AI leadership and strategy, underlining the strategic centrality of AI to the organization. CEOs’ deeper engagement with customers further strengthens their exposure to public opinion, providing them with firsthand insights into user needs, fears, and expectations. Such engagement enhances their capacity to translate consumer concerns into strategic priorities, governance requirements, and external communications that shape brand perception and trust.

Despite these strengths, the study identifies a critical point of failure: the cascade of concerns from the CEO down through the organization. When CEOs understand public sentiment but other executives do not, concerns may fail to disseminate effectively across the enterprise. This breakdown in internal communication can leave front-line teams and product developers out of the loop, increasing the risk that AI initiatives proceed without sufficient consideration of user trust, privacy, and fairness. This gap underscores the need for deliberate, structured mechanisms to transmit public sentiment and governance imperatives throughout the organization, ensuring that every function—from product to operations to risk—operates with a shared awareness of consumer concerns and a common commitment to responsible AI practices.

EY’s findings also reveal a practical, business-oriented implication: a robust alignment between CEO perspectives and customer concerns can become a competitive differentiator. When top leadership demonstrates a clear, transparent stance on AI safeguards and governance, it can instill confidence across customers, partners, and regulators. Conversely, if other executives are perceived as out of step with public sentiment, it can erode trust and create a perception of risk, even if the CEO privately understands the issues at hand. The leadership challenge, therefore, is to maintain consistent messaging, translate concerns into actionable governance steps, and ensure that the entire leadership team—across all functions—embodies a customer-centric, responsible AI ethos.

This section also highlights a broader insight about leadership dynamics: the importance of internal storytelling and cascading risk information. Even when CEOs are aligned with customer concerns, translating that alignment into day-to-day practices requires clear communication channels, governance rituals, and performance metrics that make responsible AI a visible, integral objective rather than a tacit assumption. EY emphasizes that the issue is not merely about knowledge gaps but about ensuring that such knowledge is shared, interpreted, and translated into concrete actions and outcomes across the organization. When CEOs can articulate customer concerns and demonstrate how governance mechanisms address them, they set a tone at the top that can permeate all levels of the organization, reinforcing the behaviors, processes, and cultural norms necessary for responsible AI.

In this context, EY identifies a practical opportunity: use CEO-led transparency and accountability to drive organizational alignment and competitive advantage. By positioning responsible AI as a strategic differentiator and embedding governance into the core brand narrative, organizations can build trust with customers and stakeholders. The CEO’s role becomes more than strategy; it becomes the public face of how AI is governed, how data is protected, and how decisions are explained and justified. The outcome is a stronger, more resilient AI program, capable of delivering business value while maintaining the trust that is essential for sustained adoption and growth in the AI era.

EY’s Three-Step Solution: Listen, Act, Communicate

To translate its findings into practical action, EY proposes a three-pronged approach that extends beyond traditional risk management. The plan is designed to embed consumer voices into the heart of governance, integrate responsible AI into everyday development practice, and elevate the importance of transparent communication as a differentiator and trust-builder. Each step is intended to address a distinct but interconnected facet of governance, creating a more cohesive and proactive approach to responsible AI across the enterprise.

Step 1: Listen. The first step centers on exposing the entire C-suite to customer voices. EY recommends moving beyond isolated customer research by actively involving back-office leaders—such as chief technology officers (CTOs) and chief information officers (CIOs)—in direct, customer-facing experiences. The idea is to immerse leadership in the realities of customer perception, feedback, and concerns. EY suggests placing these executives in focus groups, exposing them to customer surveys, and in certain sectors—like healthcare—requiring senior leaders to spend time in clinical or patient-facing environments. This approach aims to foster empathy, grounding strategic decisions in authentic user experiences and ensuring that leadership understands how AI products and services are perceived in real-world contexts. The goal is to convert consumer insights into concrete governance actions, ensuring that risk considerations originate from actually listening to users rather than inferring from metrics alone.

Step 2: Act. The second phase focuses on integrating responsible AI throughout the development process. This entails building on established design principles—such as human-centric design and rigorous A/B testing—to create “human-centric responsible AI design” as an integral element of innovation. The emphasis here is on embedding responsible AI into the DNA of product development rather than coding it as a separate, compliance-driven add-on. This step requires teams to translate consumer concerns into specific design decisions, performance criteria, and governance checkpoints that guide AI development from early ideation through deployment and ongoing operation. The objective is to ensure that responsible AI is not just a checklist at launch but a continuous, embedded practice that informs how products evolve and how risks are mitigated as AI capabilities expand. This approach must also incorporate governance considerations into the innovation lifecycle, ensuring that safeguards are tested, monitored, and updated in response to new data, new usage patterns, and new regulatory expectations.

Step 3: Communicate. The final element emphasizes transparent communication, framing responsible AI as a competitive differentiator rather than a mere compliance burden. EY argues that organizations that openly explain their safeguards, governance processes, and decision-making rationales can build stronger trust with customers and outperform competitors who remain silent about their AI practices. This step includes consistent, proactive communication strategies about AI governance, including what data is used, how models are trained, how decisions are explained to users, and how concerns are addressed. The communication strategy should be integrated with marketing, product management, and investor relations to ensure that the organization’s responsible AI commitments are visible, credible, and persuasive to stakeholders. The aim is to transform governance and safeguards into a reputational asset that demonstrates accountability, fosters trust, and supports meaningful engagement with customers and partners.

Together, these three steps form a practical, replicable blueprint for building trust and operational resilience in AI programs. The Listen phase ensures leadership is grounded in customer realities; the Act phase ensures governance is inseparable from product development and innovation; and the Communicate phase ensures transparency and stakeholder engagement become strategic advantages. The three-step approach is designed to create a virtuous cycle: consumer insights inform governance and development; robust governance improves product outcomes and reduces risk; and transparent communication reinforces trust and loyalty, enabling enterprises to differentiate themselves in a crowded AI marketplace.

The business logic behind EY’s three-step plan is straightforward. By embedding customer voices into governance, enterprises are less likely to chase disruptive AI capabilities at the expense of trust. The human-centric design approach reduces the risk of unintended consequences and misaligned outcomes that can erode customer confidence. And by communicating governance and safeguards clearly, organizations can build credibility and avoid the “race to the bottom” where competitors hide behind opaque policies. In this sense, responsible AI becomes not a cost center or compliance obligation but a strategic asset that strengthens customer relationships, enhances brand value, and creates durable competitive advantage.

The three steps are not stand-alone actions but interconnected practices that should be woven into a single, continuous governance program. The Listen component informs the design and development decisions made during the Act phase, ensuring that the organization’s AI products reflect the needs and concerns of real users. The Act phase, in turn, produces outputs—policies, controls, and design patterns—that the Communicate step can openly describe to customers and other stakeholders. The feedback loop created by listening to customers, acting through governance-integrated innovation, and communicating safeguards in a transparent manner allows organizations to demonstrate continuous improvement and accountability in the face of evolving AI capabilities.

In practice, applying EY’s three-step solution requires concrete governance and organizational changes. For Listen, organizations may establish cross-functional listening posts, bring customer insights into strategy reviews, and operationalize feedback into design and risk decisions. For Act, they will formalize human-centric design processes, implement guardrails in development pipelines, integrate bias testing and fairness assessments, and embed governance as part of the standard product lifecycle. For Communicate, they will craft consistent messaging about AI governance, publish accessible explanations of decision processes, train customer-facing teams to respond to AI-related inquiries, and maintain ongoing dialogues with customers about safeguards, updates, and expected behavior. Implementing these steps at scale will demand leadership commitment, resource allocation, and a culture that values transparency, accountability, and continuous learning.

The Trust Gap as a Competitive Opportunity

EY’s research highlights a paradox: while many companies treat responsible AI as a compliance obligation, the data reveals a substantial opportunity to leverage responsible AI as a competitive differentiator. The EY Responsible AI framework has already earned recognition, including a prestigious industry award in 2025—the Newsweek AI Impact Award in the category of “Best Of – Extraordinary Impact in AI Transparency or Responsibility.” This accolade underscores the growing market and regulatory value of transparent, accountable AI practices and signals to customers, investors, and regulators that the organization is committed to ethical AI. The award also serves as external validation that responsible AI can be a strategic asset rather than a mere risk mitigation tactic.

Consumers are genuinely concerned about AI safety, and many organizations are not yet doing enough to articulate how they address these concerns. The gap between concern and explanation creates a space where forward-thinking companies can differentiate themselves through proactive governance, clear disclosures, and accessible dialogue with customers. By taking the lead on responsible AI and making it a central element of brand messaging, a company can position itself ahead of competitors and reassure stakeholders that AI is being deployed in a manner consistent with societal values and expectations. This repositioning of responsible AI—from compliance burden to strategic differentiator—has important implications for marketing, customer trust, and long-term customer relationships. The business case rests on building credibility and demonstrating value to customers who care about safety, privacy, and fairness, turning genuine worries into trust-building opportunities.

From a strategic vantage point, embracing responsible AI as a differentiator means integrating governance into the market-facing narrative and product strategy. The three-step framework—Listen, Act, Communicate—provides a practical blueprint for translating consumer concerns into governance and product decisions that can be seen and verified by customers. In this light, responsible AI becomes part of the company’s value proposition: a signal of reliability, ethics, and regard for societal impacts. When customers observe that a brand actively seeks to understand and address their concerns, the likelihood of loyalty and advocacy increases, and the risk of reputational damage due to AI missteps decreases. This approach also resonates with regulators who increasingly scrutinize AI practices, as well as with partners who want assurance that their data and collaborative processes are treated with integrity. The competitive implication is clear: those who champion responsible AI not only reduce risk but also enhance differentiation in a landscape where technology is rapidly commoditized.

EY’s framework also has particular resonance for enterprise-scale AI initiatives that cross multiple business units and geographies. The nine principles create a universal language for governance that can be standardized across diverse teams, enabling organizations to maintain consistent practices, reporting, and accountability regardless of location or function. The translation of consumer insights into governance actions can be scaled by implementing consistent risk assessment methodologies, model inventories, data lineage traceability, and standardized governance dashboards. By aligning internal processes with externally visible commitments to responsible AI, organizations can build trust across stakeholder groups while enabling faster, safer deployment of responsible AI across the enterprise.

Finally, the business case for responsible AI as a differentiator hinges on the ability to communicate measurable outcomes. EY’s model suggests that organizations should establish tangible metrics tied to each of the nine principles, including data protection incidents per quarter, model reliability uptime, security audit findings, transparency disclosures, explainability indices, fairness audits, regulatory compliance rates, and sustainability indicators such as environmental impact and social outcomes. By tracking and reporting these metrics, leaders can demonstrate progress toward responsible AI goals, justify investments in governance, and show customers that the organization is serious about safeguarding trust and delivering ethical AI experiences. This data-driven approach reinforces the argument that responsible AI is not merely a risk management requirement but a source of competitive differentiation that can drive customer loyalty, brand reputation, and long-term value creation.
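
One way to operationalize this kind of reporting, sketched below under assumed metric names and example values (none of which come from the EY study), is a simple per-principle scorecard that governance teams refresh and publish each quarter.

```python
# Hypothetical quarterly responsible-AI scorecard, one entry per principle.
# Metric names, values, and targets are illustrative, not EY figures.
scorecard = {
    "accountability":  {"metric": "incidents with a named owner (%)",            "value": 100.0, "target": 100.0},
    "data protection": {"metric": "privacy incidents this quarter",              "value": 1,     "target": 0},
    "reliability":     {"metric": "model service uptime (%)",                    "value": 99.7,  "target": 99.9},
    "security":        {"metric": "open critical audit findings",                "value": 2,     "target": 0},
    "transparency":    {"metric": "systems with published disclosures (%)",      "value": 80.0,  "target": 100.0},
    "explainability":  {"metric": "decisions with user-facing explanations (%)", "value": 65.0,  "target": 90.0},
    "fairness":        {"metric": "models with a current bias audit (%)",        "value": 40.0,  "target": 100.0},
    "compliance":      {"metric": "jurisdictions with completed reviews (%)",    "value": 90.0,  "target": 100.0},
    "sustainability":  {"metric": "energy per 1,000 inferences (kWh)",           "value": 3.1,   "target": 2.5},
}

# A simple roll-up for an executive or regulator-facing report.
for principle, entry in scorecard.items():
    print(f"{principle:15s} {entry['metric']}: {entry['value']} (target {entry['target']})")
```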

In summary, the trust gap identified by EY represents a meaningful, time-sensitive opportunity for organizations to reposition responsible AI as a strategic capability. The nine-principle framework and the three-step approach provide a practical, scalable pathway to translate public concerns into credible governance, product design, and transparent communication. By leading with consumer voices, embedding responsible AI throughout development, and communicating governance in a clear and proactive way, enterprises can close the gap between perception and reality, reduce risk, and realize the competitive advantages of trustworthy AI in an era of increasing automation, personalization, and agentic capacities. The overarching message is that responsible AI is not only about compliance or risk mitigation; it is a driver of trust, differentiation, and enduring value in the AI-enabled economy.

Practical Implementation: Turning EY’s Framework Into Action

While EY’s nine-principle framework provides a comprehensive blueprint for responsible AI governance, turning this framework into real-world practice demands deliberate planning, cross-functional collaboration, and ongoing measurement. The practical implementation of this framework is not a one-time exercise but a structured, continuous program that spans governance, technology, operations, and communications. The essence of successful deployment lies in translating high-level principles into concrete, repeatable processes with clear ownership, measurable outcomes, and robust evidence of impact. This section outlines a practical roadmap for organizations seeking to operationalize EY’s framework at scale, with a focus on cross-functional alignment, capability building, and governance integration that can sustain responsible AI over time.

First, map the nine principles to concrete organizational structures and processes. This involves identifying ownership for each principle, creating cross-functional governance bodies, and integrating the principles into existing risk and product development functions. For each principle—Accountability, Data protection, Reliability, Security, Transparency, Explainability, Fairness, Compliance, Sustainability—organizations should define specific controls, policies, and performance indicators. For example, Accountability could be tied to a governance council or executive sponsor responsible for end-to-end AI outcomes and remediation. Data protection could map to data stewardship roles and data governance policies, while Reliability might require operational monitoring, anomaly detection, and continuity planning. Security would anchor on cyber resilience and incident response protocols, Transparency and Explainability would require model documentation, disclosure policies, and user-facing explanations, and Fairness would require bias auditing and stakeholder impact assessments. Compliance would align with regulatory scanning and audit readiness, and Sustainability would integrate lifecycle environmental and social impact considerations into AI program decision-making.
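
A lightweight starting point for this mapping, shown here as an assumed Python structure rather than anything EY prescribes, is an ownership table keyed by principle plus a completeness check that flags principles still lacking an accountable owner or at least one defined control.

```python
# Hypothetical ownership map keyed by principle; owners and controls are
# illustrative placeholders, not part of EY's framework.
principle_owners = {
    "accountability":  {"owner": "AI governance council", "controls": ["incident ownership", "remediation SLAs"]},
    "data protection": {"owner": "chief data officer",    "controls": ["data stewardship", "retention policy"]},
    "reliability":     {"owner": "platform engineering",  "controls": ["operational monitoring", "failover tests"]},
    "security":        {"owner": "CISO",                  "controls": ["threat modeling", "incident response"]},
    "transparency":    {"owner": "product management",    "controls": ["system disclosures"]},
    "explainability":  {"owner": "model risk management", "controls": ["model documentation"]},
    "fairness":        {"owner": "responsible AI office", "controls": []},   # gap to close
    "compliance":      {"owner": "legal and compliance",  "controls": ["regulatory scanning"]},
    "sustainability":  {"owner": "",                      "controls": []},   # gap to close
}

# Completeness check a governance office might run before sign-off:
# every principle needs an accountable owner and at least one control.
gaps = [p for p, entry in principle_owners.items()
        if not entry["owner"] or not entry["controls"]]
print("principles lacking an owner or controls:", gaps)
```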

Second, establish governance architecture that facilitates cross-functional collaboration and continuous improvement. This includes creating an AI governance office or council with representation from product, engineering, risk, compliance, legal, and executive leadership. The council would oversee policy development, risk assessment, and the alignment of AI initiatives with the nine principles. A formal process for risk assessment should be defined, including how risk is identified, quantified, and mitigated across stages of the AI lifecycle—from ideation to deployment to decommissioning. The governance architecture should also define escalation pathways, incident response protocols, and a mechanism for periodic independent reviews or third-party validation to maintain objectivity and credibility. In addition, a centralized repository that inventories models, data sources, training data, model lineage, and governance controls will enable traceability, audits, and faster remediation in case of issues.
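
The centralized inventory described above could begin as something as simple as the following sketch: a record per model capturing ownership, purpose, data and model lineage, applied controls, and audit status. Field names and the example entry are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """One entry in a hypothetical central AI inventory; field names are
    illustrative, chosen to support traceability, audits, and remediation."""
    model_id: str
    owner: str                      # accountable team or person
    purpose: str                    # intended use, feeding transparency disclosures
    training_datasets: List[str] = field(default_factory=list)  # data lineage
    upstream_models: List[str] = field(default_factory=list)    # model lineage
    controls: List[str] = field(default_factory=list)           # governance controls applied
    last_fairness_audit: str = "never"
    status: str = "in_development"  # in_development | deployed | decommissioned

registry = [
    ModelRecord(
        model_id="credit-risk-scorer-v3",
        owner="retail-risk-team",
        purpose="rank loan applications for manual review",
        training_datasets=["applications_2019_2023", "bureau_scores_q4"],
        controls=["bias audit", "privacy impact assessment", "human override"],
        last_fairness_audit="2025-04",
        status="deployed",
    ),
]

# A traceability query an auditor might run: deployed models with no fairness audit.
overdue = [m.model_id for m in registry
           if m.status == "deployed" and m.last_fairness_audit == "never"]
```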

Third, integrate responsible AI into development lifecycles and product design. The Act phase should embed human-centric responsible AI design principles into innovation processes. This includes adopting design thinking approaches, user research, and co-creation with diverse communities to ensure products reflect broad perspectives and reduce biases. It also means instituting rigorous validation steps, such as bias and fairness assessments, privacy impact analyses, and security threat modeling. Teams should implement continuous testing and monitoring, including A/B testing with built-in guardrails, to ensure AI systems perform under diverse conditions and maintain alignment with user expectations. A robust data governance framework is essential to ensure data quality, lineage, consent, and minimization, while data protection and privacy controls must be implemented from the outset and validated repeatedly through the lifecycle.

Fourth, advance transparency and explainability through information architecture, documentation, and user-facing disclosures. Organizations should adopt clear documentation standards for models, data, and decision processes, including model cards, data sheets for datasets, and governance dashboards that summarize risk posture, safety controls, and performance metrics. Transparent communications should also cover the intended use, limitations, and potential risks of AI systems, with accessible explanations provided to users to promote informed decision-making. The explainability principle implies that outputs can be interpreted and questioned by humans in a way that supports accountability and remediation if needed. It is important to ensure that explanations are appropriate for different stakeholder groups, balancing technical detail with readability, and that mechanisms exist to challenge or override AI decisions when warranted.
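
As an illustration of the documentation standards mentioned here, the snippet below sketches a minimal model card as a plain data structure; the fields and wording are assumptions about what a disclosure might contain, not a prescribed template.

```python
# Hypothetical minimal model card; fields and wording would be tailored to the
# organization's own disclosure policy and audience.
model_card = {
    "model": "customer-support-summarizer-v2",
    "intended_use": "summarize support tickets for human agents",
    "out_of_scope": ["making refund or account decisions without human review"],
    "training_data": "anonymized support tickets, 2021-2024",
    "known_limitations": ["may omit details from very long ticket threads"],
    "fairness_evaluation": "summary quality compared across customer language groups",
    "human_oversight": "agents can edit or discard any summary before it is used",
    "contact": "responsible-ai-office@example.com",
}

# A disclosure page or governance dashboard can render this directly.
for field_name, value in model_card.items():
    print(f"{field_name}: {value}")
```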

Fifth, embed accountability and governance in performance management and incentives. A critical part of implementation is ensuring that leaders and teams are rewarded for responsible AI outcomes. Governance goals should be integrated into performance reviews, incentive schemes, and strategic planning cycles, reinforcing accountability for AI risk management, ethical considerations, and customer trust. This alignment helps reduce organizational incentives to cut corners for short-term gains and encourages a culture of careful, deliberate AI deployment. By linking governance outcomes to compensation and career progression, organizations can sustain a long-term commitment to responsible AI across leadership levels and functional domains.

Sixth, invest in capability-building and culture change. Implementing EY’s framework requires ongoing education for both leaders and practitioners. This includes training on data governance principles, privacy-by-design practices, bias detection and mitigation, model risk management, and the ethical implications of AI. Organizations should also invest in building internal expertise around AI governance, including roles such as AI ethics officers, data stewards, model risk managers, and governance analysts. Culture change initiatives should emphasize learning, curiosity, and accountability, encouraging teams to raise concerns, report issues, and experiment with guardrails in a safe, controlled environment. Embedding cross-functional collaboration and knowledge-sharing platforms helps ensure that governance is living, evolving, and integrated throughout the organization, rather than a siloed function that only a compliance team handles.

Seventh, design measurement frameworks and reporting that demonstrate progress and impact. A robust measurement framework should translate the nine principles into concrete, auditable metrics. Examples include incident counts related to data privacy or model misbehavior, time-to-detection and time-to-remediation metrics, fairness audits and remediation rates, data protection compliance rates, and transparency and explainability indicators such as user comprehension scores or the frequency of model explanations requested by users. Sustainability metrics should capture environmental and social outcomes linked to AI deployments, including energy usage, data center efficiency, and social impact measures. Regular reporting—be it quarterly governance dashboards, executive summaries, or regulatory communication—helps track progress, identify gaps, and demonstrate accountability to customers, regulators, and investors.
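
For instance, time-to-detection and time-to-remediation can be computed directly from an incident log, as in the minimal sketch below; the timestamps and record format are invented for illustration.

```python
from datetime import datetime
from statistics import mean

# Hypothetical AI incident log; timestamps and the record format are invented.
incidents = [
    {"occurred": "2025-03-01T09:00", "detected": "2025-03-01T15:00", "remediated": "2025-03-03T10:00"},
    {"occurred": "2025-03-10T11:00", "detected": "2025-03-10T11:30", "remediated": "2025-03-11T09:00"},
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

mean_detection = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)
mean_remediation = mean(hours_between(i["detected"], i["remediated"]) for i in incidents)
print(f"mean time to detection: {mean_detection:.1f} h")
print(f"mean time to remediation: {mean_remediation:.1f} h")
```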

Eighth, implement risk-based prioritization and phased rollout. Given the breadth of the nine principles, prioritization is essential to manage complexity and resource constraints. Organizations should perform risk-based prioritization to determine which AI initiatives pose the highest risk to privacy, safety, fairness, and other core concerns. A staged rollout approach can help manage risk, starting with pilot programs that include rigorous governance controls and post-implementation reviews. Lessons learned from initial deployments should feed back into governance and development processes to continuously improve risk handling. This phased approach also allows for iterative improvements in explainability, data governance, and fairness practices, ensuring that governance evolves in step with functionality.
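
A simple way to make risk-based prioritization repeatable is a weighted scoring pass over the initiative portfolio, as sketched below; the risk dimensions, weights, and example initiatives are assumptions an organization would replace with its own.

```python
# Hypothetical weighted risk scoring used to sequence AI initiatives for
# governance review; dimensions, weights, and 1-3 ratings are assumptions.
WEIGHTS = {"privacy": 3, "safety": 3, "fairness": 2, "autonomy": 2, "scale": 1}

initiatives = [
    {"name": "marketing copy assistant",   "privacy": 1, "safety": 1, "fairness": 1, "autonomy": 1, "scale": 2},
    {"name": "loan approval agent",        "privacy": 3, "safety": 2, "fairness": 3, "autonomy": 3, "scale": 3},
    {"name": "clinical triage summarizer", "privacy": 3, "safety": 3, "fairness": 2, "autonomy": 1, "scale": 2},
]

def risk_score(initiative):
    """Weighted sum of the 1-3 ratings across the assumed risk dimensions."""
    return sum(WEIGHTS[dim] * initiative[dim] for dim in WEIGHTS)

# Highest-risk initiatives get the earliest, most heavily governed pilots.
for item in sorted(initiatives, key=risk_score, reverse=True):
    print(f"{item['name']}: risk score {risk_score(item)}")
```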

Ninth, strengthen external collaboration and stakeholder engagement. Responsible AI governance benefits from external perspectives and accountability mechanisms. Organizations should engage with regulators, industry bodies, customers, and civil society to collect feedback, align on standards, and demonstrate commitment to responsible AI. Collaborative initiatives can include joint risk assessments, shared best practices, and independent audits to validate governance claims. By actively engaging external stakeholders, organizations can build trust, anticipate policy changes, and create governance models that reflect a broader social conscience. This collaboration further reinforces the legitimacy of AI initiatives and reduces the risk of reputational damage from missteps or perceived opacity.

The practical roadmap above emphasizes a holistic and scalable approach to turning EY’s nine principles into a functioning governance system. It anchors governance in concrete structures, processes, and metrics while prioritizing cross-functional collaboration, continuous improvement, and transparent communication. Organizations that implement this approach are better positioned to navigate the complexities of responsible AI in a rapidly evolving landscape, achieve stronger customer trust, and realize the long-term strategic value of enterprise AI.

Beyond Governance: Building Trust, Value, and Growth with Responsible AI

The EY study’s core argument is that responsible AI should not be a defensive posture but a strategic engine for trust, growth, and competitive differentiation. The combination of a principled governance framework, a practical three-step plan, and a clear linkage to business outcomes creates a compelling business case for responsible AI. When organizations invest in governance as a core capability—across people, processes, data, and technologies—they can unlock a cycle of value creation that includes improved customer trust, reduced operational risk, higher quality products, and stronger market positioning.

Key business benefits emerge when responsible AI becomes a strategic differentiator:

  • Enhanced customer trust: Transparent governance and explainable AI foster confidence in products and services, encouraging adoption and loyalty.
  • Regulatory alignment and resilience: Proactive governance helps anticipate policy changes, simplifies compliance, and reduces the likelihood of penalties or costly remediation.
  • Operational excellence and efficiency: Systematic data governance, model monitoring, and lifecycle management improve performance, reduce errors, and optimize resource use.
  • Risk reduction and incident management: Structured risk management and rapid remediation capabilities minimize the impact of negative AI events on the business.
  • Brand value and competitive advantage: Publicly communicating responsible AI commitments and governance practices strengthens brand equity and differentiates the company from competitors.
  • Talent attraction and retention: A culture that prioritizes ethics and governance can attract and retain employees who value responsible AI and corporate responsibility.

The alignment of governance, strategy, and brand is critical. If a company’s governance posture is perceived as incomplete or opaque, it can undermine customer trust and provide ammunition to competitors or critics. Conversely, a well-executed governance framework—grounded in the nine principles and reinforced by the three-step listen-act-communicate approach—can turn responsible AI into a strategic asset that sustains growth and resilience. This requires leaders who are willing to place governance at the center of AI strategy, embed customer insights into decision-making, and communicate openly about safeguards, governance structures, and the rationale behind AI initiatives.

The EY framework also implies a cultural transformation. It promotes continuous learning, openness to feedback, and a commitment to accountability at all levels of the organization. Such a culture supports not only safer, more trustworthy AI but also more agile and resilient organizations capable of navigating disruptions, adapting to new data and regulatory expectations, and delivering value to customers in a way that aligns with social norms and ethical standards. As AI technologies continue to mature and scale within enterprise environments, the governance practices described in EY’s framework will be essential for sustainable, long-term success.

In practical terms, organizations should approach responsible AI as an ongoing program rather than a series of discrete projects. This means appointing an accountable governance function, implementing robust data governance, ensuring ongoing risk assessments, and maintaining an active pipeline of improvements to model safety, privacy, and fairness. It also means integrating responsible AI into everyday decision-making and performance management, so that governance is not an afterthought but a core capability that informs product strategy, customer experience, and market positioning.

This perspective reinforces that the path to sustainable enterprise AI success lies in balancing innovation with accountability. The nine principles provide a comprehensive guardrail system that protects customers, employees, and the organization itself, while the three-step Listen, Act, Communicate framework translates that guardrail system into practical, scalable actions. Together, they offer a clear, actionable blueprint for enterprises seeking to harness the transformative power of AI without compromising trust or societal values. In this way, responsible AI becomes not only a safeguard but a strategic advantage in an increasingly AI-driven economy.

Conclusion

EY’s analysis clearly demonstrates a substantive gap between what CEOs perceive as public concerns about AI and what consumers actually worry about. The discrepancy spans key areas of responsible AI, from data protection to privacy, and it is large enough to threaten the momentum of the enterprise AI revolution if left unaddressed. The findings show that traditional assumptions about public acceptance are often misaligned with the lived reality of users, underscoring the need for more nuanced, proactive governance. The report also highlights that public concern is not merely a barrier to adoption; it is a strategic opportunity for those who bring transparency, accountability, and ethical considerations to the forefront of their AI programs.

In response, EY offers a compelling nine-principle framework designed to close governance gaps and align AI strategy with public expectations. The framework covers critical dimensions such as accountability, data protection, reliability, security, transparency, explainability, fairness, compliance, and sustainability. It emphasizes that most organizations currently maintain strong controls for only a fraction of these areas, with notable gaps in fairness and sustainability that must be addressed to achieve truly responsible AI. This framework provides a structured pathway for organizations to hardwire responsible AI into governance, operations, and culture, ensuring that AI initiatives generate value without compromising trust.

The report also presents a practical three-step plan—Listen, Act, Communicate—that translates governance principles into action. Listen requires exposing the entire C-suite to customer voices, Act calls for integrating responsible AI throughout development with a human-centric mindset, and Communicate asks organizations to articulate their safeguards and governance openly to differentiate themselves and build trust. This approach is designed to turn responsible AI into a strategic differentiator rather than a mere compliance burden. It emphasizes that trust is earned through continuous engagement with customers and transparent governance, and that doing so can yield competitive advantages in a rapidly evolving AI landscape.

Looking ahead, EY’s analysis warns that the challenge will intensify with the rise of agentic AI, which can operate with increasing autonomy. Current risk management approaches may not be sufficient, and a broader, more dynamic governance model will be necessary to keep pace with these capabilities. The emphasis on continuous education—of both customers and boards—reflects the need to sustain trust in an era where AI systems can make decisions independent of human oversight. CEOs emerge as a linchpin in this ecosystem, showing the strongest alignment with consumer concerns and assuming primary responsibility for AI strategy, yet the broader leadership team must cascade concerns and governance throughout the organization to ensure a unified, responsible AI program.

Ultimately, the EY study frames responsible AI as a strategic opportunity. Organizations that take the lead on governance, transparency, and consumer engagement stand to reap the benefits of increased trust, stronger customer relationships, and a defensible market position in an era of rapid AI advancement. The nine-principle framework and the listen-act-communicate approach provide a practical, scalable path to achieve this, enabling enterprises to balance ambitious AI innovation with enduring social responsibility. By treating responsible AI as a core strategic asset rather than a peripheral risk, companies can navigate the complexities of the AI landscape, safeguard public trust, and realize the full potential of enterprise AI in a way that respects people, data, and the planet.