Nano Tools for Leaders® — a collaboration between Wharton Executive Education and Wharton’s Center for Leadership and Change Management — are fast, effective tools that you can learn and start using in less than 15 minutes, with the potential to significantly impact your success and the engagement and productivity of the people you lead.

Goal

Deploy AI systems with confidence by ensuring they are fair, transparent, and accountable — minimizing risk and maximizing long-term value.

Nano Tool

As organizations accelerate their use of AI, the pressure is on leaders to ensure these systems are not only effective but also responsible. A misstep can result in regulatory penalties, reputational damage, and loss of trust. Accountability must be designed in from the start — not bolted on after deployment.

Action Steps

1. Define Clear Use Cases and Boundaries

Specify a well-understood purpose for each AI system. Document what the AI should and should not do, including red lines (e.g., no use of facial recognition in sensitive contexts). Link the use case directly to business goals and ethical commitments.

2. Establish a Governance Framework

Form a cross-functional governance board or policy that includes leaders from legal, risk, ethics, and operations — not just data science. Set standards, review high-impact use cases, and update guardrails regularly as technologies and risks evolve.

3. Assign Human Accountability

Designate a person or team responsible for the AI system’s behavior and impact — beyond technical oversight. Ensure this group has legal, ethical, and operational authority, as well as clear pathways for raising and addressing concerns in real time.

4. Ensure Explainability

Use AI models that can be explained to non-experts. Communicate what the model does, what it’s trained on, and why it made specific decisions. If a decision can’t be explained, it can’t be trusted.
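One way to make explainability concrete is to have the system report, alongside each decision, the factors that most influenced it. The sketch below is illustrative only: the feature names, weights, and simple linear scoring model are assumptions made for the example, not a recommended production design; real systems would rely on the model's own interpretability tooling.

```python
# Illustrative sketch: explaining a simple scoring model's decision in
# plain terms. WEIGHTS and THRESHOLD are assumed values for the example.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 0.0

def explain_decision(applicant):
    # Per-feature contribution to the final score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # Rank factors by absolute influence so a non-expert can see why.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return {
        "approved": approved,
        "score": round(score, 3),
        "top_factors": ranked[:2],
    }

result = explain_decision(
    {"income": 1.0, "debt_ratio": 0.4, "years_employed": 2.0}
)
```

The point of the design is that the explanation is produced by the same logic that made the decision, so the stated reasons cannot drift from the model's actual behavior.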

5. Test for Bias and Harm

Regularly audit AI outputs for unintended bias or discriminatory impact, aligned with organizational values and risk tolerance. Simulate edge cases using synthetic or real-world data, and embed fairness checks throughout the development lifecycle.
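A recurring bias audit can start with something as simple as comparing selection rates across groups. The sketch below assumes a binary approve/deny decision and uses the "four-fifths" ratio as a screening threshold; both the group labels and the 0.8 cutoff are illustrative assumptions, and real audits should reflect the organization's own legal and ethical standards.

```python
# Minimal sketch of a disparate-impact screen over decision logs.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Toy decision log: group "A" approved 2 of 3, group "B" approved 1 of 3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(decisions)
# Flag for human review if the ratio falls below the four-fifths rule of thumb.
needs_review = ratio < 0.8
```

A check like this is a tripwire, not a verdict: a flagged ratio should trigger the deeper review, edge-case simulation, and escalation pathways described above.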

6. Document and Communicate Decisions

Maintain clear records of how the AI was trained, tested, deployed, and updated. Share high-level information with stakeholders and employees to build trust, and continue to evaluate systems post-deployment.
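The record-keeping above can be kept lightweight: one structured, versioned entry per deployment or update. The sketch below is a hypothetical "model record"; every field name and value is an illustrative assumption rather than an official schema (formats such as model cards serve a similar purpose).

```python
# Hypothetical model record: a versioned log of how an AI system was
# trained, tested, deployed, and reviewed. All fields are illustrative.
import json
from datetime import date

record = {
    "system": "loan-approval-assistant",  # assumed system name
    "version": "1.2.0",
    "purpose": "Rank applications for human review; never auto-deny.",
    "training_data": "Internal applications, 2019-2024, de-identified.",
    "tests": [
        {"name": "disparate-impact audit", "result": "ratio 0.91, pass"},
        {"name": "edge-case simulation", "result": "pass"},
    ],
    "accountable_owner": "Credit Risk AI Board",
    "last_reviewed": date.today().isoformat(),
}

# Serialize for the audit trail; append one entry per change.
audit_entry = json.dumps(record, indent=2)
```

Because the record names an accountable owner and the tests actually run, it doubles as the artifact leaders can share with stakeholders post-deployment.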

How Organizations Use It

The following examples illustrate current responsible AI (RAI) governance activity at three firms.

JP Morgan

Extensive, visible RAI activity throughout the firm; the head of AI policy reports to the CEO; the chief information security officer released a public letter to third-party suppliers (April 2025); dedicated RAI governance within model risk management; 20+ staff (not counting other RAI functions); in-house RAI development and research.

Salesforce

Office of Ethical and Humane Use (now part of its broader RAI efforts) established in 2018 to guide product development in line with ethical principles and to proactively tackle emerging ethical and safety challenges associated with technology — especially AI; office includes ethicists, policy experts, researchers, and technologists who work across the company to assess risk and build trust; RAI is incorporated in enterprise goal-setting.

Mastercard

Established an AI Governance Council to oversee AI initiatives through cross-functional review, human oversight, and ethical guardrails; formalized Data and Tech Responsibility Principles, including privacy, transparency, accountability, fairness, and inclusion as core pillars; recently partnered with Quebec Artificial Intelligence Institute (Mila) to advance RAI research — particularly in bias testing and mitigation — and is bringing those findings into real-world AI deployments.

Contributors to This Nano Tool

Kevin Werbach, PhD, Faculty Lead, Wharton Accountable AI Lab; Liem Sioe Liong/First Pacific Company Professor; Chair of the Department of Legal Studies & Business Ethics, The Wharton School.

