The following article was written by Hamilton Mann, group vice president, digital marketing and digital transformation at Thales and lecturer at INSEAD and HEC Paris; Cornelia C. Walther, a visiting scholar at Wharton and director of global alliance POZE; and Michael Platt, Wharton marketing professor, Penn Integrates Knowledge (PIK) Professor, and director of the Wharton Neuroscience Initiative.
The world is on the verge of a profound transformation driven by rapid advances in Artificial Intelligence (AI), pointing toward a future in which AI will excel at decoding not only language but also emotions.
This future is already unfolding. A vivid example has recently made headlines, with OpenAI expressing concern that people may become emotionally reliant on its new ChatGPT voice mode. Another example is deepfake scams that have defrauded ordinary consumers out of millions of dollars — even using AI-manipulated videos of the tech baron Elon Musk himself. As AI systems become more sophisticated, they increasingly synchronize with human behaviors and emotions, leading to a significant shift in the relationship between humans and machines. While this evolution has the potential to reshape sectors from health care to customer service, it also introduces new risks, particularly for businesses that must navigate the complexities of AI anthropomorphism.
Companies must consider how these AI-human dynamics could alter consumer behavior, potentially fostering dependency and trust that undermine genuine human relationships and erode human agency. They must also act responsibly regarding the long-term consequences of customers forming emotional bonds with AI systems instead of human representatives; this is a matter of safety that falls squarely within their responsibility, and failing to address it could amount to manipulation.
Leaders should acknowledge one critical fact about AI systems: they are emotionally invasive. They bear many apparent similarities to our own ways of behaving, and they communicate through natural language, the native operating system of our species.
AI-Human Synchrony
Below, we highlight some key mechanisms through which AI and humans are becoming increasingly synchronized:
Mimicking Human Learning
Reinforcement Learning (RL) mirrors human cognitive processes by enabling AI systems to learn through environmental interaction, receiving feedback as rewards or penalties. This learning mechanism is akin to how humans adapt based on the outcomes of their actions.
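The reward-and-penalty loop described above can be sketched in a few lines. The following is a minimal, illustrative example (a two-armed bandit with invented payoff probabilities, not any production RL system): the agent nudges its value estimates toward the feedback it receives, much as a person adapts based on the outcomes of their actions.

```python
import random

def train_bandit(steps=2000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    true_rewards = [0.2, 0.8]   # action 1 pays off more often (illustrative values)
    q = [0.0, 0.0]              # the agent's learned value estimates
    for _ in range(steps):
        # explore occasionally; otherwise exploit the current best estimate
        a = rng.randrange(2) if rng.random() < epsilon else q.index(max(q))
        reward = 1.0 if rng.random() < true_rewards[a] else 0.0
        q[a] += alpha * (reward - q[a])   # move the estimate toward the feedback
    return q

q = train_bandit()
print(q)  # the estimate for the better action ends up higher
```

The update rule is the same feedback principle at work in far larger systems: behavior that is rewarded becomes more likely to be repeated.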
Large Language Models (LLMs), such as ChatGPT and BERT, excel in pattern recognition, capturing the intricacies of human language and behavior. They understand contextual information and predict user intent with remarkable precision, thanks to extensive datasets that offer a deep understanding of linguistic patterns. The synergy between RL and LLMs enhances these capabilities even further. RL facilitates adaptive learning from interactions, enabling AI systems to learn optimal sequences of actions to achieve desired outcomes while LLMs contribute powerful pattern recognition abilities. This combination enables AI systems to exhibit behavioral synchrony and predict human behavior with high accuracy.
The synergy between RL and deep neural networks demonstrates human-like learning through iterative practice. An exemplar is Google’s AlphaZero, which refines its strategies by playing millions of self-iterated games, mirroring human learning through repeated experiences.
Replicating Human Interactions
AI systems enhance their responses through extensive learning from human interactions, akin to brain synchrony during cooperative tasks. This process creates a form of “computational synchrony,” where AI evolves by accumulating and analyzing human interaction data. Affective Computing, introduced by Rosalind Picard in 1995, exemplifies AI’s adaptive capabilities by detecting and responding to human emotions. These systems interpret facial expressions, voice modulations, and text to gauge emotions, adjusting interactions in real time to be more empathetic, persuasive, and effective. Such technologies are increasingly employed in customer service chatbots and virtual assistants, enhancing user experience by making interactions feel more natural and responsive. Patients have even rated chatbot responses as more empathetic than those of real physicians, suggesting AI may someday surpass humans in soft skills and emotional intelligence.
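The detect-and-adapt pattern behind affective computing can be illustrated with a deliberately simple sketch. Real systems use trained models over faces, voice, and text; here, a rule-based detector with invented keyword lists and canned replies stands in for that machinery, purely to show how a detected emotional state steers the response.

```python
# Illustrative affect cues and replies; these lists are invented for this sketch.
NEGATIVE_CUES = {"angry", "frustrated", "upset", "terrible"}
POSITIVE_CUES = {"great", "happy", "thanks", "love"}

def detect_affect(text: str) -> str:
    """Classify a message as negative, positive, or neutral via keyword cues."""
    words = set(text.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def reply(text: str) -> str:
    """Adjust the tone of the response based on the detected affect."""
    affect = detect_affect(text)
    if affect == "negative":
        return "I'm sorry this has been frustrating. Let me help right away."
    if affect == "positive":
        return "Glad to hear it! Anything else I can do?"
    return "Thanks for reaching out. How can I help?"

print(reply("I am frustrated with my order"))
```

Production systems replace the keyword lookup with learned classifiers, but the loop is the same: sense the user's emotional state, then adapt the interaction in real time.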
Moving Towards Resonance
Brain-Computer Interfaces (BCIs) represent the cutting edge of human-AI integration, translating thoughts into digital commands. Companies like Neuralink are pioneering interfaces that enable direct device control through thought, unlocking new possibilities for individuals with physical disabilities. For instance, researchers have enabled speech at conversational speeds for stroke victims using AI systems connected to brain activity recordings. Future applications may include businesses using non-invasive BCIs, like Cogwear, Emotiv, or Muse, to communicate with AI design software or swarms of autonomous agents, achieving a level of synchrony once deemed science fiction.
As BCIs evolve, incorporating non-verbal signals into AI responses will enhance communication, creating more immersive interactions. However, this also necessitates navigating the “uncanny valley,” where humanoid entities provoke discomfort. Ensuring AI’s authentic alignment with human expressions, without crossing into this discomfort zone, is crucial for fostering positive human-AI relationships.
Turning to Neuroscience
Neuroscience offers valuable insights into biological intelligence that can inform AI development. For example, the brain’s oscillatory neural activity facilitates efficient communication between distant areas, utilizing rhythms like theta-gamma to transmit information. This can be likened to advanced data transmission systems, where certain brain waves highlight unexpected stimuli for optimal processing.
Sharp wave ripples (SPW-Rs) in the brain facilitate memory consolidation by reactivating segments of waking neuronal sequences. AI models like OpenAI’s GPT-4 reveal parallels with evolutionary learning, refining responses through extensive dataset interactions, much like how organisms adapt to resonate better with their environment.
From Brain Waves to AI Frequencies
Drawing inspiration from brain architecture, neural networks in AI feature layered nodes that respond to inputs and generate outputs. High-frequency neural activity is vital for facilitating distant communication within the brain. The theta-gamma neural code ensures streamlined information transmission, akin to a postal service efficiently packaging and delivering parcels. This aligns with “neuromorphic computing,” where AI architectures mimic neural processes to achieve higher computational efficiency and lower energy consumption.
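The layered-node architecture described above can be shown in miniature. This is a hand-written forward pass with fixed, illustrative weights (nothing here is trained): inputs flow through a hidden layer of nodes, each applying a nonlinearity, and a final layer produces a single activation, loosely mirroring how signals propagate through layered neural circuitry.

```python
import math

def sigmoid(x):
    # squashes any input into the (0, 1) range, like a firing-rate nonlinearity
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each row of weights defines one node: weighted sum of inputs, plus bias
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(inputs):
    # illustrative fixed weights; a trained network would learn these values
    hidden = layer(inputs, weights=[[0.5, -0.4], [0.3, 0.8]], biases=[0.0, 0.1])
    output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
    return output[0]

y = forward([1.0, 0.5])
print(y)  # a single activation between 0 and 1
```

Neuromorphic hardware takes this abstraction further by implementing such node-and-connection dynamics directly in silicon, trading general-purpose computation for the efficiency of brain-like signaling.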
Business Implications
The advanced synchronization of AI with human behavior, enhanced through anthropomorphism, presents significant risks across various sectors.
In health care, while AI could revolutionize patient care by better understanding and responding to patient needs, there is a risk that overreliance on AI might undermine trust in human practitioners and lead to ethical concerns around data privacy and decision-making.
In education, AI tutoring systems that adapt to students’ cognitive and emotional states could also raise concerns about data security, potential biases in personalized learning, the erosion of human oversight in education, and even cognitive and emotional “atrophy.”
In customer service, AI-driven chatbots and virtual assistants that interpret and respond to customer emotions with a very human-like voice, while improving the customer experience, might lead to reduced human interaction and undermine human agency.
Such risks have the potential to damage brand loyalty and customer trust, ultimately sabotaging both the top line and the bottom line, while creating significant externalities on a human level.
In this light, here are five guideposts any business should consider when it comes to AI-Human dynamics:
- Organizations should start by asking how interactions between their AI systems and users create legal, ethical, and reputational risks, particularly given the excessive dependence these systems are likely to generate in humans and the resulting loss of human agency.
- They need to determine the extent to which their AI should emulate human emotions, balancing innovation with the responsibility to protect consumers from potential psychological harm.
- Implementing stringent ethical guidelines is crucial. Businesses must ensure transparency in AI interactions, clearly distinguishing between human and AI agents to prevent emotional manipulation.
- Companies should advocate for and adhere to industry-wide standards that safeguard against the misuse of AI-driven emotional engagement, ensuring that such technologies enhance rather than replace human connections.
- There must be a focus on protecting consumer autonomy, ensuring that AI systems do not exert undue influence over user decisions or behaviors.
With AI invasively entering the spectrum of human emotion, it is more necessary than ever for business leaders to approach AI integration with a heightened sense of new risks and responsibilities that attend the potential benefits of this new technology.
The Path Forward
As AI continues to advance, we must navigate the delicate balance between innovation and responsibility. The integration of AI with human cognition and emotion marks the beginning of a new era — one where machines not only enhance certain human abilities but also may alter others.
Neglecting or failing to manage such risks can lead companies into “responsible AI-washing”: offering AI-based services whose outcomes contradict the oft-invoked value proposition of empowering individuals, safely and for the benefit of all humanity, by harnessing AI as an ally or co-pilot that brings out their best selves.
As we move forward, it is a core business responsibility to shape a future that prioritizes people over profit, values over efficiency, and humanity over technology.