Advances in cognitive neuroscience are enabling insights into the brain like never before. In their book The Leading Brain: Powerful Science-Based Strategies for Achieving Peak Performance, neuroscientist Friederike Fabritius and Hans Hagemann, co-founder of the Munich Leadership Group, combine neuroscience with management consulting to identify which peak-performance techniques actually work. Hagemann recently joined the Knowledge@Wharton show, which airs on SiriusXM channel 111, to discuss science-based strategies for peak performance.
Here are five key takeaways from the interview:
Regulating your negative emotions is critical to peak performance. When you try to suppress the negative emotions you feel in the workplace — anger, frustration, disappointment — your brain’s rational and emotional systems compete with each other. While your brain is busy trying to tamp down negative feelings, you become too distracted to perform well. “Two systems in your brain are competing,” Hagemann said. “That leads to not being focused on anything anymore.” To regain cognitive control, recognize and ‘label’ how you feel, he said.
Peak performance is not about entering a stress state. “Peak performance means that you find the environment that gets you in a position, and in a situation, where you can really perform at your best,” Hagemann said. “We don’t have the idea of a stressed out top performer.” Instead, the peak performer is someone whose emotions are under control, which frees the mind to work at its best. “We are talking about an easygoing situation where you feel that everything is easy for you to do,” he said. “The best possible situation in this context is experiencing flow, where everything seems to go very smoothly and you are very creative and everything is coming to your mind easily.”
Gender and age matter. Hagemann refers to a “performance profile” as the amount of intellectual arousal an individual needs to achieve peak performance. That amount differs between men and women, and between young and old. On an axis ranging from deep sleep to a panic attack, some people are “sensation seekers,” Hagemann said, and need a lot of arousal to hit their peak. That means they are often running on testosterone — he called it “a very male thing” — while others hit their peak with far less stimulation.
Lean towards rewards, not threats. A workplace can put its people into either a “reward” state or a “threat” state. In a “threat” state, “you get a rush of cortisol in your bloodstream. That makes your muscles stronger, but it can cut off your cognitive thinking if it is strong enough,” Hagemann said. In a “reward” state, people feel good and perform better. “Creating a climate of appreciation in companies is the best thing you can do,” he said. “This is very strongly supported by the research that Google did recently.”
Create a psychologically safe workplace. “In the end, there is one thing that determines the highest performance, and that is psychological safety,” Hagemann said. “If the team knows it is psychologically safe — which [includes] the reward cycle, the climate of appreciation, being respected and accepted — there is a high predictability for high performance.”
One Comment So Far
Anumakonda Jagadeesh
A thought-provoking article.
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
University of Oxford philosopher Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” The chess program Fritz falls short of superintelligence: although it is much better than humans at chess, it cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness) (Wikipedia).
Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first sentient machines are likely to hold an immediate and enormous advantage in at least some forms of mental capability, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them.
A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.
Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.
In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), respondents estimated the year by which machines “that can carry out most human professions at least as well as a typical human” would exist, assuming no global catastrophe occurs. At 10% confidence, the median year was 2024 (mean 2034, st. dev. 33 years); at 50% confidence, 2050 (mean 2072, st. dev. 110 years); and at 90% confidence, 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.
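As a rough illustration of how such fractile survey figures are aggregated, here is a minimal Python sketch. The response list below is entirely hypothetical, invented only to show the procedure of excluding ‘never’ answers before computing the median, mean, and standard deviation; it does not reproduce the actual survey data.

```python
import statistics

# Hypothetical fractile responses: each entry is the year by which a
# respondent assigns 10% confidence to human-level machine intelligence.
# 'Never' answers are recorded as None and excluded before aggregation,
# mirroring how the survey's summary statistics were reported.
responses_10pct = [2020, 2022, 2024, 2025, 2030, 2040, 2100, None]

years = [y for y in responses_10pct if y is not None]  # drop 'never' answers
never_share = (len(responses_10pct) - len(years)) / len(responses_10pct)

print(f"median:   {statistics.median(years)}")
print(f"mean:     {statistics.mean(years):.0f}")
print(f"st. dev.: {statistics.stdev(years):.0f} years")
print(f"share answering 'never': {never_share:.1%}")
```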
Dr. A. Jagadeesh, Nellore (AP), India