Advanced analytics, artificial intelligence and machine learning are arguably the most powerful general-purpose technologies invented since the dawn of modern computing. Extracting value from them is an imperative for business and society. Doing so requires a deeper understanding and self-reflection among leaders of human strengths and frailties in contrast to those of modern, software-based machines and algorithms, writes Ravi Bapna in this opinion piece. Bapna is a professor of business analytics and information systems at the University of Minnesota’s Carlson School of Management.
Companies and societies are on the cusp of rebuilding their foundations to compete in an age of advanced analytics, artificial intelligence (AI) and machine learning (ML). Yet, in the real economy — the world outside the tech companies — I see more struggle than success in making advanced analytics and AI a management discipline.
Most leaders in these companies recognize that the perfect storm of big data, computing capacity and algorithmic advances has arrived. They hear about spectacular use cases, such as AI outperforming trained ophthalmologists in detecting retinopathy in preemies. Research also shows that text analytics of earnings calls reveals that executives’ use of euphemisms (think ‘headwinds’) obscures the details of bad news and delays negative investor reaction. Yet, many leaders feel unsure about this new environment and are struggling to extract value from these cutting-edge technologies.
I propose four self-reflections that make the case for why leaders need to adopt AI, ML and advanced analytics. These self-reflections map to the four fundamental pillars of advanced analytics that I have seen create value over the last five years.
Let’s start with the classic Polanyi’s paradox, well captured in the phrase, “We know more than we can tell.”
(1) The power and the limitations of tacit knowledge: Decades of investments in IT systems have automated business processes based on explicitly defined business rules and yielded productivity gains. Yet, the hard, logical, rules-based approach can only go so far, not because anything is wrong with the approach, but because as humans we are not great at articulating the very rules we use to make decisions on a day-to-day basis. This is the role of tacit knowledge: the magic of the human mind in intuitively processing the world around it goes hand in hand with its frailty at verbalizing the rules or procedures behind such processing.
Enter a pillar of advanced analytics that uses machine learning for predictive modeling. In the traditional programming approach that we might use to develop, say, an ERP system, we would give data (e.g., transactions) and business rules as inputs, and press a button to get output (e.g., a cash conversion cycle analysis report). In contrast, in predictive modeling, we start with past data and labeled outcomes as inputs, and then use math and computation to ‘learn’ the rules that map the input data to a particular type or value of an outcome. We could use this approach, for instance, to predict, in advance, the cash conversion cycle under different market conditions, which would be of tremendous value to the CFO for managing capital. What many executives do not realize is that they are almost certainly sitting on tons of administrative data from the past that can be harnessed in a predictive sense to help make better decisions.
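To make the contrast concrete, here is a minimal sketch of that predictive-modeling workflow in Python with scikit-learn. The “market condition” features and the data are synthetic, purely illustrative stand-ins for the administrative records a firm would actually use; nothing here is specific to any particular firm’s systems.

```python
# A minimal sketch of "learn the rules from labeled history" with scikit-learn.
# The features and the synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000

# Hypothetical "market condition" features observed in past quarters.
X = np.column_stack([
    rng.normal(0.02, 0.01, n),   # demand growth
    rng.normal(0.05, 0.02, n),   # borrowing rate
    rng.uniform(30, 90, n),      # supplier payment terms (days)
])
# Labeled outcome: the cash conversion cycle (in days) actually observed.
y = 40 + 300 * X[:, 1] + 0.5 * X[:, 2] - 200 * X[:, 0] + rng.normal(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Press the button": the algorithm learns the mapping we never wrote down as rules.
model = GradientBoostingRegressor().fit(X_train, y_train)
print("MAE on held-out quarters:", mean_absolute_error(y_test, model.predict(X_test)))

# Score a what-if scenario: flat demand, tighter credit, 60-day supplier terms.
print("Predicted cash conversion cycle:", model.predict([[0.0, 0.08, 60]])[0])
```

The point is not the particular algorithm; it is that the rules connecting inputs to outcomes are learned from labeled history rather than articulated by a human.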
(2) Cognitive challenges in processing multi-dimensional spaces: Most MBA students, when asked to compute the distance between two points on a piece of paper, say, on an X-Y graph, can go back to the Pythagorean theorem from their grade-school math class. But it blows their minds when I suggest to them that the same ‘Euclidean distance’ formula extends to, say, 3,000 dimensions and is probably being used by Netflix to recommend movies to them, or by their favorite retailer to cluster them into segments to target with differential advertising.
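The formula itself does not change as dimensions are added; only our intuition gives out. A toy illustration in Python, where the 3,000-dimensional vectors are random stand-ins for, say, two customers’ activity across product categories:

```python
# The grade-school distance formula, written once for any number of dimensions.
# With two dimensions this is exactly the Pythagorean theorem; nothing changes at 3,000.
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.sum((a - b) ** 2)))

rng = np.random.default_rng(1)
u = rng.random(3000)  # one customer's activity across 3,000 product categories (illustrative)
v = rng.random(3000)  # another customer's activity
print(euclidean_distance(u, v))
```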
In another exercise, I often give students a transaction dataset from a credit card operation and ask them to find an unusual, or anomalous, transaction. More often than not, groups apply the concept of outlier analysis they learned in their statistics class and sort the data by transaction amount to flag a few very high-value transactions.
“[As] humans we are not great at articulating the very rules we use to make decisions on a day-to-day basis.”
Then I tell them the story of how, when I was with a group of EMBA students in Rio, I was, embarrassingly, blocked by my bank while trying to buy a somewhat expensive piece of jewelry for my wife. The interesting aspect was that it wasn’t the transaction amount in itself that was anomalous (several air tickets I buy for business travel cost more). It wasn’t the fact that I was in Rio that was anomalous (I have been there a few times). It was the combination of the geography, the amount and the product category (among the several thousand products I could potentially buy) that was anomalous. My transaction was unusually distant from all other points in a three-thousand-dimensional space of product categories. I am duly reprimanded for not buying enough jewelry for my wife. My students and executives come to understand that their grade-school math, when applied to the several-thousand-dimensional datasets that many banks have (think of every product category that you may or may not buy), can help detect anomalies in a hyper-dimensional space, preventing millions of dollars in losses for banks.
These algorithms fall under the descriptive analytics pillar, a branch of machine learning that generates business value by exploring and identifying interesting patterns in your hyper-dimensional data, something at which we humans are not great.
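Here is a sketch of that distance-based idea on simulated data. The fifty encoded transaction features are hypothetical stand-ins for a bank’s transaction history, and the single injected transaction is unusual only in combination, not in any one column, mirroring the Rio example:

```python
# A sketch of distance-based anomaly detection in many dimensions.
# All data below are simulated; the feature encoding is an illustrative assumption.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, size=(500, 50))   # 500 routine transactions, 50 encoded features
odd = rng.normal(0, 1, size=(1, 50))        # one more transaction...
odd[0, :3] += 6                             # ...unusual only in a combination of a few features
X = StandardScaler().fit_transform(np.vstack([normal, odd]))

# Score each transaction by its Euclidean distance to its 5th-nearest neighbor:
# points that sit far from everything else are the anomalies.
nn = NearestNeighbors(n_neighbors=6).fit(X)  # 6 because each point is its own nearest neighbor
dist, _ = nn.kneighbors(X)
scores = dist[:, -1]
print("Most anomalous row:", int(np.argmax(scores)))  # typically the injected transaction (row 500)
```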
(3) Our weakness in counter-factual thinking: A common cognitive challenge I see in many workplaces and leadership settings is the difficulty executives have in stepping away from the data presented to them and asking for a counter-factual. Often the data appear in the form of a report or a pretty visualization that, say, ‘quantifies’ the effectiveness of a new feature in the product, or, as was the case in a recent project, the efficacy of a new mobile app channel that the brand introduced. A well-meaning executive who funded the app development was pleased to see that engagement levels of customers were higher than when the interactions were web-only. Further, a new proposal for additional app features, which supposedly quantified the ROI from the app based on the value of the increase in engagement and the cost of developing the app, was on his desk.
The key question that not enough executives ask, but should ask before believing ROI calculations such as these, is whether all of the increase in engagement is due to the app. This is the art of counter-factual thinking. Suppose you had a time machine and could travel back in time and not launch the app: would you have seen the same increase in engagement? Could other factors be driving the observed change? Often, people point to seasonality or promotions that others in the company might be running. An even bigger challenge arises when unimagined, and therefore unobserved, factors (say, sunny weather boosts people’s optimism, making them more likely to download the app and to engage more) are driving the outcome that gets attributed to the intervention.
Counter-factual thinking is a leadership muscle that is not exercised often enough, and that leads to sub-optimal decision-making and poor resource allocation. Overcoming this requires embracing causal analytics, the pillar most often missing from the industry ‘ladder of analytics’ frameworks out there. Causal analytics means adopting the gold-standard methodology of randomized experiments, or quasi-experimental methods such as propensity score matching or difference-in-differences, to determine whether X causes Y. Does investing in display advertising increase sales? Do consumers influence each other’s purchases of your service? Does allowing your consumers to co-create your products with your firm increase their lifetime value? The list is endless. Not answering these questions in a causal manner, or relying on the highest-paid person’s opinion to make such inferences, is a surefire way to destroy value for your company.
“A common cognitive challenge … is the difficulty executives have in stepping away from the data presented to them and asking for a counter-factual.”
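To show what exercising that muscle can look like, here is a toy difference-in-differences sketch in Python with statsmodels, on simulated data. “Treated” users adopt the hypothetical app, “post” marks the period after launch, and a seasonal lift affects everyone; the interaction coefficient recovers the app’s effect net of that shared trend.

```python
# A toy difference-in-differences illustration of the counter-factual logic above.
# The data are simulated; the effect sizes are arbitrary assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
treated = rng.integers(0, 2, n)            # 1 = adopted the app
post = rng.integers(0, 2, n)               # 1 = period after the app launch
engagement = (
    10
    + 2.0 * post                           # seasonality / optimism: lifts everyone
    + 1.0 * treated                        # app adopters were already more engaged
    + 1.5 * treated * post                 # the true causal effect of the app
    + rng.normal(0, 1, n)
)
df = pd.DataFrame({"treated": treated, "post": post, "engagement": engagement})

did = smf.ols("engagement ~ treated * post", data=df).fit()
print(did.params["treated:post"])          # recovers ~1.5, not the naive before/after gap of ~3.5
```

In a true randomized experiment, a simple treated-versus-control comparison in the post period would suffice; the difference-in-differences form is one of the quasi-experimental fallbacks mentioned above for when randomization is not available.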
(4) Our challenges in combinatorial thinking: Many leaders and executives have risen up the corporate ladder using a variety of heuristics to make sense of complex situations with multiple moving parts. Most decision-making operates in the context of optimizing some goal (say, maximizing revenue or minimizing costs) in the presence of a variety of constraints (say, budgets, or service-quality levels that have to be maintained). A marketing executive tasked with growing revenue did the hard work of using state-of-the-art causal and predictive analytics to estimate the individual, customer-level costs and benefits of three different campaigns under consideration. But when faced with a budget constraint at deployment, meaning only a subset of customers could be targeted (there wasn’t enough budget to target everybody), the executive came up with a simple heuristic: rank-order the customers by the ratio of their benefit to cost. This made a lot of sense to the team. The campaign ran, but revenue growth was, unfortunately, negative, even though they only targeted people who had a positive benefit-to-cost ratio.
Combinatorial thinking, which is encompassed in the prescriptive analytics pillar of advanced analytics, can provide answers. Say the budget was $100, and you had two people who responded to a particular campaign with benefits (to the company) of $100 and $1, respectively, and costs (to the company) of $100 and $1, respectively. Both have the same benefit-to-cost ratio of 1. If you pick person 2, say by flipping a coin, you don’t have the budget left to add person 1 (this is the well-known knapsack problem), so you end up with a revenue of $1, whereas the prudent thing to do here (given the revenue-maximization goal) would have been to pick person 1 and get a revenue of $100. Now suppose each of these individuals also had next-best responses to other marketing campaigns. It may be even better to serve some individuals their second- or third-best individually optimal offer in order to maximize overall revenue for the company. This is where combinatorial optimization algorithms excel and humans fail.
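A minimal sketch of that gap in Python, using the two hypothetical customers from the example above: the ratio heuristic ties, breaks the tie the “wrong” way and leaves revenue on the table, while an exhaustive search under the budget (the knapsack formulation) finds the better subset.

```python
# Ratio heuristic vs. optimal selection under a budget (the 0/1 knapsack problem).
# The two customers mirror the $100 / $1 example in the text.
from itertools import combinations

customers = [("person 2", 1, 1), ("person 1", 100, 100)]  # (name, benefit, cost)
budget = 100

def ratio_heuristic(customers, budget):
    chosen, spent, revenue = [], 0, 0
    # Rank by benefit/cost, highest first; both ratios are 1 here, so the tie is
    # broken by list order (the "coin flip" in the text lands on person 2).
    for name, benefit, cost in sorted(customers, key=lambda c: c[1] / c[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            revenue += benefit
    return chosen, revenue

def optimal(customers, budget):
    # Brute force over all feasible subsets; fine for a toy example. A real
    # deployment would use dynamic programming or an integer-programming solver.
    best, best_benefit = [], 0
    for r in range(len(customers) + 1):
        for subset in combinations(customers, r):
            if sum(c[2] for c in subset) <= budget:
                benefit = sum(c[1] for c in subset)
                if benefit > best_benefit:
                    best, best_benefit = [c[0] for c in subset], benefit
    return best, best_benefit

print("heuristic picks:", ratio_heuristic(customers, budget))  # person 2 only -> $1
print("optimal picks:  ", optimal(customers, budget))          # person 1 -> $100
```

For realistic customer counts the exhaustive search would be replaced by dynamic programming or a solver, but the logic is the same: optimize the portfolio as a whole rather than ranking individuals one at a time.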
In short, advanced analytics, machine learning and AI are arguably the most powerful general-purpose technologies invented since the advent of modern computing. Extracting value from them is an imperative for business and society. It requires a deeper understanding and self-reflection among leaders of human strengths and frailties in contrast to those of modern, software-based machines and algorithms. Clarity in this area will allow us to design tasks, occupations, jobs, processes and business models that combine human intelligence and machine intelligence in the best possible way.