The following article was written by Dr. Cornelia C. Walther, a visiting scholar at Wharton and director of the global alliance POZE. A humanitarian practitioner who spent over 20 years at the United Nations, Walther now focuses her research on leveraging AI for social good.
We are living through a threshold moment in human history, and most of us haven’t fully grasped its magnitude. Those of us born before the mid-1990s represent something that will never exist again: the last generation to spend our formative years in an analogue world. We learned to think, to relate, to solve problems in an environment of productive friction — wrestling with paper-based dictionaries, getting physically lost before finding our way home, experiencing the uncomfortable cognitive pull that comes from sustained attention without the dopamine micro-hits of infinite scrolling.
The cognitive architectures developed through analogue learning, from arithmetic and deep reading to spatial navigation and face-to-face conflict resolution, produced neural pathways fundamentally different from those shaped primarily by digital interfaces. Growing up in an environment minimally mediated by digital technology, we developed our executive functions against resistance. Our children and grandchildren are developing theirs in an environment of infinite algorithmic accommodation.
This position confronts us with an uncommon obligation, and an enormous opportunity.
The Weight of Being Last
Consider what we’ve witnessed within a single lifetime: the emergence of the internet, smartphones, social media platforms that have rewired social psychology, and now generative AI systems that can write, code, create art, and increasingly perform cognitive labor once considered uniquely human. We remember what it felt like to learn without Google, to build relationships without LinkedIn, to make decisions without algorithmic recommendation systems mediating our reality.
This positional knowledge carries both privilege and responsibility. We have experienced uncertainty without algorithmic assistance, and we know the “texture” of attention before it was fractionated into eight-second increments. We experienced communities before they became networks, and networks before they became extraction machines for behavioral data.
Research into digital well-being increasingly suggests what many of us intuit: something important is being lost in the transition. Not progress itself; the benefits of global connectivity, large-scale access to information, and low-cost technological advancement are tangible. But the way we’re implementing these systems, driven by commercial imperatives rather than the quest for human flourishing and planetary health, is creating second-order effects we are only beginning to understand. If the current trend continues, humanity is bound to repeat the mistakes of the first, second, and third industrial revolutions, when innovation was propelled primarily by commercial interests.
This brings us to the central tension. Innovation has been overwhelmingly shaped by a singular metric: return on investment. The algorithms that now mediate reality for billions were designed to maximize engagement, advertising revenue, and market capitalization. Not human flourishing. Not ecological regeneration. Not the full exploration of human potential. It is time to reframe this, and to consider generative AI as a social determinant of life.
Beyond ROI: A Return on Values
The traditional triple bottom line — people, planet, profit — was a meaningful evolution in business thinking. But it’s not sufficient for the algorithmic age. We need a framework that explicitly includes purpose and recognizes that the systems we’re building now will shape consciousness itself for generations to come. This is a time for prepared leadership.
This is where prosocial AI offers a strategic pathway forward. The emerging paradigm recognizes four interdependent dimensions: economically viable (pro-profit), socially beneficial (pro-people), ecologically regenerative (pro-planet), and developmentally enhancing (pro-potential). Purpose-driven companies with genuine stakeholder orientation outperform their peers over meaningful time horizons, because return on values and return on investment are converging, not diverging.
The algorithmic architectures we design today will either amplify human capability or atrophy it. They will either support the development of agency, critical thinking, emotional intelligence, and creativity — or they will result in the outsourcing of these capacities to systems optimized for other ends. Ultimately, it is useful to remember that AI is neutral; it is a means to an end, not an end in itself. And whether it brings gloom or glory for society depends on us. We cannot expect the technology of tomorrow to be better than the humans of today. The old saying “Garbage in, garbage out” still holds true. It is possible to turn this toward “Values in, values out.” Will we?
Those of us who grew up drinking from garden hoses, breathing air not yet degraded by the full weight of industrial carbon, and eating food still connected to regional ecosystems have a lived understanding of what planetary health feels like. Generation AI may not. Their normal will be what we leave them. Their cognitive architecture will be shaped by the algorithmic architecture we design or allow to be designed by default.
This is the obligation (and the opportunity) that comes with being last.
The ABCD of Agency Amid AI
For business leaders navigating this transition who are simultaneously parents, grandparents, or community members, the path forward requires reclaiming agency in how AI systems are developed and deployed. This practical framework may be useful:
Aspire
Define your North Star beyond quarterly returns. What does success look like when measured across the four dimensions of prosocial AI? What kind of cognitive, social, and ecological environment are you helping to create? Articulate your aspiration for how AI will amplify your own potential, as well as how it can serve to amplify the collective human potential within your organization and beyond.
Believe
Cultivate conviction that different paradigms are possible. The prevailing narrative holds that AI development is an inevitable race to the bottom on ethics, and that commercial imperatives must always trump social ones. That narrative is not destiny. It’s a choice. Companies integrating ethical AI frameworks are demonstrating that prosocial approaches can be competitive advantages. Believe that the systems we build can serve human flourishing, not just shareholder value.
Choose
Make specific, concrete decisions aligned with prosocial AI principles. This means choosing business partners, investment strategies, and product roadmaps that explicitly prioritize the quadruple bottom line. It means choosing transparency over opacity, choosing to build friction into systems where friction supports development, choosing metrics that capture human and ecological outcomes alongside financial ones.
Do
Execute with urgency. Establish AI ethics committees with teeth. Develop procurement policies that favor prosocial AI providers. Create internal capability for evaluating algorithmic impact on human development and environmental systems. Partner with researchers studying AI’s long-term effects. Share learnings openly to accelerate collective wisdom. Advocate for regulatory frameworks that protect Generation AI’s right to cognitive development and environmental health.
Our Uncommon Moment
The analogue world that shaped us is disappearing with remarkable speed. Within two decades, there will be no one in positions of leadership who remembers what it felt like to develop cognition in that environment. The opportunity, and obligation, is ours.
We can design algorithmic architectures that serve the full spectrum of human potential and planetary flourishing. Or we can continue defaulting to systems optimized for narrow commercial metrics that externalize their true costs onto future generations.
The choice, like so much in this threshold moment, is ours to make. But only for a little while longer.