The following article was written by Dr. Cornelia C. Walther, a visiting scholar at Wharton and director of global alliance POZE. A humanitarian practitioner who spent over 20 years at the United Nations, Walther’s current research focuses on leveraging AI for social good.
As artificial intelligence transforms the world, it brings both opportunities and challenges. The conversation is no longer just about the future of work but about the future of life itself. While AI promises faster, more efficient outcomes, we must consider whether we truly want to keep repeating old patterns. As the saying often attributed to Einstein goes, doing the same thing over and over while expecting different results is insanity. With AI’s power, we have a unique opportunity to transcend past limitations and envision a better future: one where billions of people currently excluded from access to education, health care, clean water, and information can finally benefit.
AI did not create the problems we face, but it can help solve them. Human actions, not technology, have largely driven environmental degradation and preventable poverty. However, the future is not bound by the past unless we allow it. AI holds immense potential to drive social progress — transforming health care, education, and sustainability — yet it also risks worsening inequalities and entrenching biases. In light of the United Nations’ high-level report on “Governing AI for Humanity” and the establishment of the Global Digital Compact this September, global cooperation is critical to ensure AI serves the public good. In this context, a new paradigm — “prosocial AI” — provides a path forward, emphasizing quality over quantity and moving beyond mere business goals to elevate humanity.
Understanding the Hidden Risks of AI
AI’s impact is profound. It can revolutionize education through personalized learning, democratize health care with tailored diagnostics, and optimize supply chains to reduce waste. According to McKinsey & Company, AI could add up to $13 trillion to the global economy by 2030. Yet this promise comes at a cost.
The computational resources required to train large AI models are enormous. Much has been said about the carbon footprint of air travel, yet compared to the training of even one large language model (LLM) like Claude, the footprint of a transatlantic flight is like a mouse tiptoeing for a split second. MIT research found that the computational and environmental costs of training grow proportionally with model size and then explode when additional tuning steps are used to increase the model’s final accuracy, often with little performance improvement. And this is just the beginning. According to 2024 research from Goldman Sachs, the power demand of data centers, which are essential to maintaining and scaling AI infrastructure, could grow by 160% by 2030, threatening to make AI one of the largest contributors to rising global energy demand over the coming years.
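To make that footprint concrete, here is a minimal back-of-envelope sketch of how training energy and emissions are commonly estimated: accelerator count times average power draw times training time, adjusted for data-center overhead and grid carbon intensity. Every number in it is an assumed placeholder for illustration, not a measurement of Claude or any other specific model.

```python
# Rough, illustrative estimate of the energy and carbon cost of one large
# training run. Every value below is an assumed placeholder, not a
# measurement of any specific model.

gpu_count = 4096            # accelerators running in parallel (assumed)
gpu_power_kw = 0.7          # average draw per accelerator, in kW (assumed)
training_days = 30          # wall-clock training time (assumed)
pue = 1.2                   # data-center overhead: power usage effectiveness (assumed)
grid_kgco2_per_kwh = 0.4    # carbon intensity of the local grid (assumed)

hours = training_days * 24
energy_kwh = gpu_count * gpu_power_kw * hours * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000

print(f"Energy used: {energy_kwh:,.0f} kWh")
print(f"Emissions: {emissions_tonnes:,.0f} tonnes CO2e")
```

Even with these placeholder numbers, the estimate lands near a thousand tonnes of CO2e, while a single passenger’s share of a transatlantic round trip is on the order of one tonne, which is why the comparison above is so lopsided.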
AI’s tendency to reinforce bias is another risk. Algorithms reflect the data they’re trained on, which often contains historical biases. For example, AI systems used in the U.S. criminal justice system have been more likely to flag Black defendants as high-risk compared to white counterparts. This bias spills into hiring, education, and health care, where AI decisions perpetuate existing inequalities. Research by Stanford’s Institute for Human-Centered AI in 2024 highlighted the danger of biased algorithms in health care, where underrepresented groups suffer from inaccurate predictions. Many cardiovascular algorithms, for example, were trained predominantly on male data, leading to unreliable assessments for female patients. Such biases can lead to unequal access to treatments, worsening health care outcomes for marginalized groups. Biased inputs lead to flawed outputs. Sadly, the old saying “garbage in, garbage out,” or GIGO, still stands. But we have a choice.
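In practice, the kind of audit that catches this failure can start very simply: compare how a model performs for different demographic groups before it is deployed. The sketch below runs that comparison on a tiny, entirely synthetic set of evaluation records (the groups, outcomes, and predictions are all hypothetical), showing the per-group accuracy gap that signals a garbage-in, garbage-out problem.

```python
# Minimal fairness check: compare model accuracy across demographic groups.
# The records below are synthetic and purely illustrative; in practice they
# would come from a held-out evaluation set.
from collections import defaultdict

# (group, true_outcome, model_prediction) -- hypothetical evaluation records
records = [
    ("male",   1, 1), ("male",   0, 0), ("male",   1, 1), ("male",   0, 0),
    ("female", 1, 0), ("female", 0, 0), ("female", 1, 0), ("female", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in total:
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy {accuracy:.0%} over {total[group]} records")

# A wide gap between groups (here the female subgroup fares worse because the
# synthetic data mimics under-representation) is the signal to revisit the
# training data or the model before deployment.
```

A gap like this does not prove discrimination on its own, but it is the kind of simple, repeatable test that the “Trained” and “Tested” pillars described below turn into routine practice.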
What Is Prosocial AI and Why Does It Matter?
The Four T’s of Prosocial AI
Prosocial AI, in this context, means AI systems that are deliberately designed, trained, and deployed to advance societal and planetary well-being rather than narrow business metrics alone. To mitigate the risks outlined above while still harnessing the benefits, prosocial AI rests on four pillars:
- Tailored: AI solutions must address specific societal challenges. For example, health care AI tailored to rural communities can vastly improve access by adapting to local infrastructure.
- Trained: AI should be trained on diverse datasets that represent all demographics to prevent the perpetuation of bias.
- Tested: Rigorous testing, including ethical audits and stress tests, ensures AI systems function equitably and align with societal values.
- Targeted: Prosocial AI must tackle measurable societal challenges, such as reducing carbon emissions or improving education access, ensuring meaningful contributions to society.
Prosocial AI is not just ethical — it’s also smart. Companies that integrate ethical AI into their core strategies can become leaders in both innovation and corporate responsibility. According to PwC, companies with strong environmental, social, and governance (ESG) frameworks, enhanced by AI, outperform competitors financially and foster greater brand loyalty. Research also shows that businesses prioritizing societal impact tend to attract diverse customers and investors who are increasingly focused on sustainability and equity.
The Four Wins of Prosocial AI
Prosocial AI can benefit all stakeholders. Some illustrative examples:
- Individuals: Personalized platforms in mental health and education may provide 24/7 access to vital social services, boosting individual well-being independently of income or geographic location.
- Institutions: Companies can use AI to reduce human bias in hiring, fostering more inclusive and resilient workplaces while expanding their talent pools.
- Countries: In climate action and resource optimization, algorithms can help governments plan and scale public welfare programs while making citizen participation more than mere window dressing.
- Planet: AI can address environmental challenges by improving resource management and biodiversity conservation, though its own growing energy consumption must be addressed first.
Implementing the SOCIAL Roadmap of Prosocial AI
Prosocial AI offers a framework where technological innovation aligns with societal benefits while minimizing risks. It seeks not to replace human qualities but to complement and enhance them. By embedding AI strategies into broader societal and planetary goals, businesses can reduce ethical risks, foster loyalty, and unlock new market opportunities.
- Societal Impact: Redefine success to include social and environmental goals. NVIDIA, for example, has sought to help partners manage renewable energy solutions.
- Optimized Operations: Google’s DeepMind cut the energy used for cooling the company’s data centers by up to 40%, demonstrating how AI can improve efficiency while promoting sustainability.
- Collaborative Innovation: Microsoft’s AI for Good initiative shows how AI can be leveraged across sectors, from disaster response to environmental protection.
- Inclusive Design: Various health providers have begun to use more diverse datasets to promote equitable health care outcomes, avoiding bias and fostering inclusivity; the success of these initiatives depends on how representative those datasets truly are.
- Accountability: Salesforce integrates ethics directly into its AI systems, conducting regular audits to ensure transparency and build public trust.
- Long-term Value: Unilever and other companies use AI-powered supply chain optimization to reduce waste while boosting profits, ensuring both environmental and business benefits.
It seems paradoxical to cite large businesses to illustrate the SOCIAL benefits of AI, but the examples above show the Janus-faced potential that this expanding technological treasure chest holds for people and planet.
Prosocial AI is not just a trend but a path. Positive outcomes will not happen automatically; we cannot expect the technology of tomorrow to reflect values that the humans of today do not manifest. Change starts with a strategic shift in how we think about success. It is a smart choice because it comes with a win-win-win-win: for individuals, institutions, countries, and the planet. More than an extension of corporate social responsibility, making sure our AI systems serve society is good for the people we are, the communities we belong to, the countries we are part of, and the planet we depend on.
Read Walther’s new book, Human Leadership for Humane Technology, which explores the relationship of natural and artificial intelligence in our rapidly evolving world.