The following article was written by Dr. Cornelia C. Walther, a visiting scholar at Wharton and director of the global alliance POZE. A humanitarian practitioner who spent over 20 years at the United Nations, Walther now focuses her research on leveraging AI for social good.

Artificial Intelligence (AI) has the potential to radically reshape how societies operate, from optimizing resource distribution to enhancing education, health care, and economic opportunities. Done right, AI can serve as a powerful lever for inclusion—amplifying the voices of the marginalized, widening access to essential services, and closing development gaps. Done poorly, it can embed and even exacerbate the inequalities it was meant to address. The key difference lies in intentional design, implementation, and governance centered on inclusivity and equity rather than profit or efficiency alone.

Inclusion is not just a moral or social issue; it’s a strategic necessity in the age of AI. Companies that ignore exclusion risk reputational harm, potential regulatory backlash, and lost opportunities in emerging markets. Conversely, those embracing inclusive principles gain a competitive edge, foster innovation, and earn community trust.

AI Has the Potential to Help — and Harm — Humankind

Global development stands at a crossroads, with the World Bank warning that 2020-2030 could become a “lost decade.”

About 700 million people — 8.5% of the global population, more than twice the population of the United States — survive on less than $2.15 a day. Even more starkly, 3.5 billion people (half of humanity) live on less than $7 daily, showing that population growth has eroded many of the income gains made since 1990. Inequality remains entrenched, with one in five people living in highly unequal societies — predominantly in Latin America and sub-Saharan Africa. Meanwhile, to reach just $25 a day (a modest benchmark for high-income nations), global incomes would need to increase fivefold amid sluggish growth and COVID-19 setbacks. Compounding this crisis, nearly 20% of the world’s population will face severe climate shocks, disproportionately harming poorer regions and perpetuating entrenched vulnerability.

Amid these overlapping crises — stagnating poverty reduction, persistent inequality, and climate vulnerability — AI emerges as both a promise and a peril. On one hand, AI is powering disease diagnostics, enhancing agricultural productivity, and expanding educational resources. On the other, it threatens to become a new fault line, dividing those who can harness it from those left behind, while consuming enormous amounts of energy and thereby worsening the climate conundrum.

Part of the crux of that challenge is that AI thrives on data, and data reflects our world. When training sets contain historical biases, discriminatory practices, or imbalances in representation, AI models risk perpetuating and amplifying those inequities. Studies have repeatedly shown that biased facial recognition algorithms misidentify people with darker skin at far higher rates than white individuals — errors that have led to wrongful arrests and deepened distrust in technology. Similarly, hiring algorithms trained on historically imbalanced datasets may favor certain demographics, effectively shutting out qualified candidates from underrepresented communities.
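The kind of disparity described above can be made concrete with a small sketch. The function and toy evaluation data below are purely illustrative (hypothetical group labels and identity predictions, not real benchmark results); the point is that per-group error rates must be measured separately before a disparity can even be seen:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misidentification rate for each demographic group.

    `records` is a list of (group, predicted_id, true_id) tuples --
    a toy stand-in for a face-recognition evaluation set.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation records: (group, predicted identity, true identity)
records = [
    ("group_a", "p1", "p1"), ("group_a", "p2", "p2"),
    ("group_a", "p3", "p3"), ("group_a", "p4", "p9"),
    ("group_b", "p5", "p8"), ("group_b", "p6", "p6"),
    ("group_b", "p7", "p9"), ("group_b", "p8", "p8"),
]
rates = error_rates_by_group(records)
# group_a is misidentified in 1 of 4 cases (0.25), group_b in 2 of 4 (0.5)
```

An aggregate accuracy number would hide this gap entirely; disaggregating by group is the minimal first step any audit needs.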

“Without careful stewardship, AI may simply magnify existing divisions and biases.”

How AI Can Impact Digital Poverty

These issues are not simply “technical bugs” but structural problems that further enshrine social divides. Data poverty mirrors digital poverty. Marginalized groups lack digital infrastructure and are often absent or misrepresented in the datasets fueling AI applications. AI will replicate and reinforce the world’s systemic inequities if we fail to address these underlying disparities.

Bridging the digital divide is imperative, but focusing on digital access alone is insufficient. We must look toward a vision of “analog abundance,” where the tangible benefits of AI — improved water distribution, better health care outreach, more resilient agriculture — directly serve communities, especially those that lack personal digital devices or solid online connectivity. Recent initiatives by the World Economic Forum’s Centre for the Fourth Industrial Revolution and the Stanford Institute for Human-Centered AI emphasize that inclusive AI solutions must consider local contexts, cultural nuances, and analog infrastructures. AI can optimize irrigation in drought-prone regions, enhance the distribution of essential medicines, or improve early warning systems for climate disasters without requiring every individual to be digitally connected. However, these types of prosocial AI applications require human aspirations and political priorities behind the scenes.

Key Elements to Building an Inclusive AI Ecosystem

To move from exclusion to inclusion, businesses and policymakers need a deliberate framework. This begins with the design phase. Diverse teams and inclusive datasets reduce blind spots. At the same time, transparent governance structures — such as model cards that document a system’s intended uses and limitations, and adherence to ethical guidelines — ensure that AI systems serve equitable ends.

Investing in accessibility also matters. Partnerships between academia, businesses, governments, and NGOs can expand internet connectivity, strengthen digital literacy programs, and provide training in AI readiness. Making the next generation future-proof requires “double literacy”: a holistic understanding of both human brains and algorithms. Both literacies must be combined with critical thinking, problem-solving skills, and oversight competencies within the communities that stand to benefit most.

Meanwhile, AI should complement, not displace, analog systems. In agricultural regions without stable internet, AI-driven insights can still guide farmers on optimal planting times or pest management strategies through radio broadcasts, printed materials, or local extension officers. This hybrid approach ensures that the benefits of AI reach communities regardless of their baseline technological infrastructure.

The ABCD framework below offers business leaders and policymakers a roadmap for identifying AI exclusion and improving AI inclusion. It frames the four principal challenges at stake — and offers four practical approaches for turning them into opportunities.

“Marginalized groups lack digital infrastructure and are often absent or misrepresented in the datasets fueling AI applications.”

The ABCD Framework of Identifying AI Exclusion

Agency

Who controls AI? Empower individuals and communities to shape AI tools that impact their lives. Transparency, explainability, and community input can prevent AI from becoming an instrument of subjugation.

Bonding

How does AI foster connection? AI should bring people together, facilitating collaboration rather than deepening isolation. For instance, tools that bridge language barriers can amplify voices typically left unheard.

Climate

What is the environmental cost of AI? As AI’s computational demands grow, it is vital to develop sustainable practices. Responsible deployment means balancing the energy needs of model training with renewable energy investments and more efficient algorithms.

Dilemmas

How do we mitigate unintended consequences? From biased recommendations to exclusionary hiring tools, careful attention must be paid to governance, accountability, and correcting harmful outcomes. Global standards, like the ones championed by UNESCO and the OECD, can guide practitioners toward more ethical and inclusive AI.

Leaders may want to focus on the D, or “Dilemmas,” because these dilemmas highlight the tension between AI’s promise as a democratizing force and its peril as a mechanism of injustice. They also illustrate the tension between AI as a commercial determinant of society and its potential to become a social driver of shared quality of life.

The ABCD Framework of Improving AI Inclusion

Audit and Improve Data Practices

Scrutinize data sources to identify biases and representation gaps. Adopt robust data governance frameworks to ensure that the insights fueling AI systems are as inclusive and fair as possible.
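One concrete form such scrutiny can take is a representation audit: comparing each group’s share of a dataset against its share of the reference population. The sketch below is a minimal illustration with hypothetical group names and made-up counts, not a complete governance tool:

```python
def representation_gaps(dataset_counts, population_shares):
    """Compare each group's share of a dataset with its share of the
    reference population. A negative gap means under-representation."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        gaps[group] = round(data_share - pop_share, 3)
    return gaps

# Hypothetical survey dataset vs. census population shares
counts = {"urban": 800, "rural": 150, "remote": 50}
shares = {"urban": 0.55, "rural": 0.30, "remote": 0.15}
gaps = representation_gaps(counts, shares)
# urban is over-represented (+0.25); rural (-0.15) and remote (-0.10)
# communities are under-represented and would be underserved by the model
```

Even this simple check makes a representation gap visible and measurable, which is the precondition for correcting it through targeted data collection or reweighting.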

Build with Inclusive Design

Involve stakeholders from underrepresented communities in the ideation, design, and testing phases of AI deployment. These perspectives can help developers uncover blind spots and craft solutions that resonate globally.

Champion Ethical and Sustainable Policies

Advocate for regulations and industry guidelines prioritizing transparency, accountability, and environmental responsibility. Align corporate AI strategies with global ethical frameworks to build trust and credibility.

Drive Measurable Digital Equity

Invest in initiatives that broaden access to digital tools, skills training, and supporting infrastructure, especially in underserved regions. Consider metrics for social impact alongside traditional KPIs — companies that demonstrate tangible societal benefits will gain long-term trust and loyalty.

Moving Beyond Boxes to Common Ground

The societal rifts we face today will not be “fixed” by AI alone. In fact, without careful stewardship, AI may simply magnify existing divisions and biases. The power to shape AI’s trajectory remains — for the time being — in human hands. As we enter this new era, the question is not whether we can or should leverage AI for social good, but how.

AI can help us transcend digital divides and foster analog abundance, where its benefits uplift the many rather than the few. To make that happen, inclusion must be systematically embedded from the start through diverse teams, transparent governance, ethical data practices, and a willingness to learn from traditionally overlooked communities. We have choices, and the future depends on them — and us.