During the past several months, Knowledge at Wharton has published articles about the rush to send back-office work overseas, known as cross-border business process outsourcing (BPO). With increasing momentum, companies in the West are going overseas to buy services ranging from customer contact at call centers to complex financial analyses. One of the toughest challenges confronting companies that outsource work to an overseas BPO provider is measuring the results. This issue came up at an executive education program about BPO held at Wharton last month. In the essay that follows, Wharton professor Ravi Aron, who co-directed the program with colleague Jitendra Singh, explores the complicated but important questions involved in measuring the effectiveness of BPO relationships and avoiding the riskiest pitfalls. The essay is based on field research that Aron conducted with fellow researchers Mukesh Kadab and Ganesh Ramakrishnan.

 

If the Second Law of Thermodynamics, which states that entropy in a closed system can never decrease, had a corporate cousin, CFOs and COOs might state it thus: Disorder and entropy in an unmeasured or unmonitored system will always increase. Indeed, the problem of measurement informs almost all decisions that are made in the context of BPO.

 

The twin questions of what to outsource and how to outsource are sometimes reflected in the equally problematic questions of what to measure in a BPO relationship and how to measure it. Some hope does exist, however, for the COO who is confounded by the complexity of measuring the effectiveness of process execution. Before we let in that ray of hope, though, let us first explore how the use of metrics and the attendant issues of risk and complexity influence the BPO decision.

 

The outputs of some processes are clearly understood and readily quantifiable. Processes such as medical transcription, in-bound call support and data transformation (from paper to digitized data stored in a database) are well understood, and their outputs can be measured. In the case of processes such as customer analytics, MIS reporting, yield analysis or education and training, however, the task of measuring outputs is much more complicated. Useful metrics for the effectiveness of such processes simply don't exist. In most cases there are two major problems. First, there is no good measure of the quantity of output from these processes. Second, there are few if any accurate proxies for the quality of that output.

 

Consider, for example, a division of information analysts whose job is to produce a series of MIS reports to track the yield on financial products of a large, diversified financial services firm. Should the CTO measure the number of reports this division produces? That may be a problem, since the creation of a large number of reports may actually indicate the team’s inability to produce a single, product-centric report that pulls together all the profitability-related data into a single view. Should the MIS director measure the timeliness of the reports? That approach, too, could prove problematic: While the reports may be delivered on time to decision makers, that may not guarantee that the reports are accurate or relevant. A report requisitioned by a product manager may be both accurate and timely, but it may fail to provide the information necessary for her to decide whether to withdraw or continue a product. The problem is clear: the CTO who outsources such work has no way of measuring either the quantity or quality of the division’s output.

 

When confronted with operational costs that spiral out of control and processes that do not seem to deliver to the satisfaction of senior management, CFOs and CTOs often ponder the outsourcing option. Frequently, these C-level executives recognize the benefits of outsourcing: the gains from specialization and scale that a vendor can bring; the benefits of labor-market arbitrage; and the conversion of an operational cost schedule composed of fixed costs, overheads and variable costs into a pure variable cost schedule (or close to one). However, they correctly worry about not being able to specify the service-level agreements (SLAs) and the quality of process execution accurately enough to make the BPO relationship work. When the risk of process failure is not fully known, the consequences of a failed effort to outsource processes seem amplified. This is, perhaps, the managerial equivalent of the fear of the unknowable.

 

Execution Failures

The idea of risk is, in fact, a useful handle with which to grasp the problem of metrics. In many cases that my colleagues and I encountered in our field research, we noted that senior managers often speak of risk and of failures in process execution (sometimes loosely termed "errors") interchangeably. We therefore began with the idea of risk and asked the following question: Is there a way to map the extent and nature of risk to the kind of process execution failure (or error) and thereby provide useful proxies for risk that can be monitored when processes are outsourced? We found that the risk to the firm from failures in process execution arises from two kinds of failures. We call them carry-forward errors and direct errors.

 

A carry-forward error occurs when an element of information that emerges from a BPO provider (or a captive center) is incorrect or inaccurate. Poor decisions may be made on the basis of the inaccurate information flows from the BPO provider, resulting in losses to the company outsourcing the work. For example, consider a report or analysis that a provider firm produces for a retail bank that incorrectly identifies a set of customers as unprofitable to serve. Other examples of this species of error include errors in customer data analytics that result in lost opportunities to sell new or related products, and errors in calibrating the effectiveness of training programs that result in lower levels of information-worker productivity.

 

It is important to note that a carry-forward error does not lead to a loss of revenue (or to incremental costs) if it is not factored into one or more decisions. In a sense, it can be thought of as a dormant error embedded in the information flows generated by a provider firm (or a captive center). It becomes active only if a certain set of decisions is made on the basis of the information that contains these errors.

 

In contrast, direct errors have an immediate impact on the bottom line: they stem from flaws in execution that result in a loss of revenue, incremental costs, or both. With direct errors, costs are incurred regardless of whether the information is used in the decision-making process. Examples include classifying an accounts receivable item as paid when it was not, double payment of an accounts payable item (by an F&A provider), unverified (or under-verified) submission of documentation to regulatory bodies (by a provider firm in the bio-tech industry), and lack of due diligence in recruiting (by an HR services provider firm). Direct errors may also have a carry-forward impact, depending on the nature of the processes in which they occur.

 

Direct errors are usually identifiable, and their impact (the magnitude of the loss) is usually quantifiable. Carry-forward errors are often not identifiable, and even when they are spotted after the damage has been done, it is often not possible to trace the mistakes back to their point of origination (an information worker). For instance, if information analysts based in Manila or Bombay make a mistake in customer analytics that leads to lost opportunities for a British multinational bank, it is difficult to quantify the extent of the loss. It is equally hard to identify the analysts responsible. The information analysts could claim that the corporation's information system had incomplete or inaccurate data, or that they were given ambiguous instructions by the decision makers who requisitioned their reports. In devising a scheme of metrics, it is necessary to factor in the different causes that lead to the two kinds of errors.

 

To understand how direct errors occur and to measure their impact, it is necessary to understand the series of tasks that go into the execution of processes. Let us first investigate the idea of a knowledge continuum, which has a data end and a knowledge end. At the data end of the spectrum, information workers perform routine tasks of data transformation that do not call for expertise, analysis, judgment or interpretation. At the knowledge end of the continuum, information workers work on aggregate and summary information flows to extract knowledge that supports decision making; these tasks require a high degree of expertise, training and the exercise of judgment based on a complex welter of skills. Mistakes made by workers at the data end of the knowledge continuum often result in direct errors, while carry-forward errors originate at the knowledge end of the continuum.
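As a rough illustration of this mapping, the sketch below tags each task with the error type its failures are most likely to produce. The task names, the continuum scores and the cutoff are hypothetical assumptions for illustration, not instruments from the field research.

```python
from dataclasses import dataclass
from enum import Enum

class ErrorType(Enum):
    DIRECT = "direct"                 # immediate revenue loss or incremental cost
    CARRY_FORWARD = "carry_forward"   # dormant until used in a decision

@dataclass
class Task:
    name: str
    # Position on the knowledge continuum: 0.0 = pure data transformation,
    # 1.0 = judgment-intensive knowledge work (assumed scoring scheme).
    continuum_position: float

def likely_error_type(task, knowledge_cutoff=0.5):
    """Tag a task with the error type its failures are most likely to produce."""
    if task.continuum_position < knowledge_cutoff:
        return ErrorType.DIRECT
    return ErrorType.CARRY_FORWARD

# Hypothetical examples drawn from the essay's illustrations.
tasks = [
    Task("medical transcription", 0.1),
    Task("accounts payable settlement", 0.3),
    Task("customer data analytics", 0.9),
]
for t in tasks:
    print(t.name, "->", likely_error_type(t).value)
```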

 

Preventive Measures

This observation leads us to some techniques that can be used to measure these errors and adopt preventive measures.

 

The Function Value Analysis Technique (FVAT): This technique is often used to measure the relative value of functions that may be affected by direct errors. The FVAT is based on the empirical observation that while a set of processes that is migrated (or outsourced) may result in several tasks being performed, some functions are particularly important to BPO users. We term the one or two functional elements from a collection of functions (and underlying processes) that are disproportionately important to the user High Functional Value Elements.

 

For instance, while an F&A service provider may perform as many as 12 to 14 functions, the two that are often of very high value to the user are timeliness in the settlement of Accounts Receivable and Accounts Payable and accuracy in the settlement of these accounts. Similarly, in the case of providers of offshore marketing and customer-contact support, of the several tasks performed by the offshore provider, two that are of critical importance to the user are accuracy of data integration (across multiple channels) and the number of problems resolved in the first call by the teleworker.

 

If, rather than measuring the quality of output of each outsourced process, BPO users consistently monitor the provider's quality along these two dimensions and specify quality levels for them in the service-level agreements, then the ambiguity about what to measure and how to measure it disappears. Further, rather than measuring the intermediate processes and spelling out how providers should execute them, users can instead specify tolerance levels (proxies for minimum quality) for each of the high functional value elements.
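As an illustration of what monitoring against such tolerance levels might look like, here is a minimal sketch for the F&A example above. The dimension names and the tolerance and observed figures are assumptions for illustration, not values from any actual SLA.

```python
# Hypothetical SLA tolerances for the two high functional value elements
# in the F&A example: settlement timeliness and settlement accuracy.
sla_tolerances = {
    "settlement_timeliness_days": 2.0,   # max average days late (assumed)
    "settlement_accuracy_rate": 0.995,   # min share of items settled correctly (assumed)
}

observed = {
    "settlement_timeliness_days": 1.4,
    "settlement_accuracy_rate": 0.991,
}

def sla_breaches(observed, tolerances):
    """Return the high functional value dimensions that fall outside tolerance."""
    breaches = []
    if observed["settlement_timeliness_days"] > tolerances["settlement_timeliness_days"]:
        breaches.append("settlement_timeliness_days")
    if observed["settlement_accuracy_rate"] < tolerances["settlement_accuracy_rate"]:
        breaches.append("settlement_accuracy_rate")
    return breaches

print(sla_breaches(observed, sla_tolerances))  # ['settlement_accuracy_rate']
```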

 

As an aside, it becomes clear that the managerial style here involves specifying what the BPO user firm wants rather than telling the provider how to deliver a certain result. Further, this technique can be employed pre-emptively by randomly sampling the provider's processes.

 

A road map for the use of this technique would read like this (a minimal sketch in code follows the list):

- Map processes to functions.
- Perform a functional value decomposition.
- Identify the high functional value elements.
- Try to measure the in-house quality levels achieved for each of the high functional value elements.
- Specify the tolerance levels for these functions in the SLAs before the process is migrated or outsourced.
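The sketch below walks through the road map end to end. The processes, functions, value weights, in-house baselines and degradation margin are all hypothetical assumptions chosen for illustration.

```python
# Step 1: map processes to the functions they support (assumed examples).
process_to_functions = {
    "invoice handling": ["AP settlement timeliness", "AP settlement accuracy"],
    "collections": ["AR settlement timeliness", "AR settlement accuracy"],
    "vendor master maintenance": ["data hygiene"],
}

# Step 2: functional value decomposition -- relative importance to the user.
function_value = {
    "AP settlement timeliness": 0.30,
    "AP settlement accuracy": 0.30,
    "AR settlement timeliness": 0.15,
    "AR settlement accuracy": 0.15,
    "data hygiene": 0.10,
}

# Step 3: identify the high functional value elements (the top one or two).
hfves = sorted(function_value, key=function_value.get, reverse=True)[:2]

# Step 4: measure in-house quality for each HFVE (assumed baseline figures).
in_house_quality = {
    "AP settlement timeliness": 0.970,
    "AP settlement accuracy": 0.992,
}

# Step 5: specify SLA tolerances relative to the in-house baseline,
# allowing a small, assumed degradation margin.
sla_tolerance = {f: round(in_house_quality[f] - 0.005, 3) for f in hfves}

print(hfves)
print(sla_tolerance)
```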

 

Carry-forward errors are difficult to identify and are often located in strategic processes that are information-intensive. In these processes, any of the tasks (sub-processes) that constitute the process can give rise to errors. However, initial evidence from our field research suggests that most errors typically originate in a few (between one and three) tasks within the process.

 

For instance, in the case of a remote service center that performs yield analysis for a corporate bank, errors usually arise in the task of mapping revenue flows to specific products. While a product may not be profitable by itself, it may have an impact on other products and may be the driving factor in retaining several profitable customers with the bank. The final yield estimates on products require the aggregation of information from several sources and the careful calibration of that information.

 

Similarly, reconciling customer account balances with cash flows that originate outside the firm (checks) and with transactions (payment advices, invoices) generates a disproportionate number of errors in firms that provide customer account management services. We term these tasks pivot elements within processes. The information worker performing these tasks usually requires one or more supervisors or managers to sign off on the job before it is completed.

 

Firms that run successful captive centers often find ways of controlling these errors. They use a technique that we call secondary monitoring, which essentially involves monitoring those who monitor the pivot-element tasks. Since it is difficult to identify these errors and trace them to their origin, it is often not enough to tell the provider firm what to do and allow the price mechanism to provide the incentive to invest in quality. It is necessary to exert some control over the BPO provider firm. In this case, the provider and the user agree on the steps that the provider will take to monitor the pivot elements. In addition, the user firm monitors the managers who monitor the pivot elements.
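A minimal sketch of the sampling side of secondary monitoring follows: the user firm re-checks a random sample of pivot-element jobs that the provider's managers have already signed off on. The job records, field names and sampling rate are hypothetical.

```python
import random

def secondary_monitoring_sample(signed_off_jobs, sample_rate, seed=None):
    """Pick a random subset of manager-approved pivot-element jobs for re-review."""
    rng = random.Random(seed)
    k = max(1, round(len(signed_off_jobs) * sample_rate))
    return rng.sample(signed_off_jobs, k)

# Hypothetical jobs already signed off by the provider's managers.
jobs = [
    {"job_id": i, "pivot_task": "balance reconciliation", "approved_by": "provider_manager"}
    for i in range(200)
]

for job in secondary_monitoring_sample(jobs, sample_rate=0.05, seed=7):
    print("re-review", job["job_id"])
```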

 

One large financial services firm has formally designated this role as the "process champion." The user firm specifies in the contract the extent of control that it will exercise over the provider firm and how this control will gradually diminish over time; in other words, it specifies the path to controlled redundancy. The statistical measures and techniques employed in performing both of these tasks are discussed at greater length, with illustrations from field research, in a research paper by the authors.
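As a toy illustration of such a path to controlled redundancy, the sketch below steps an assumed re-review sampling rate down each quarter toward a floor. The starting rate, decay factor and floor are invented for illustration and are not taken from the research.

```python
def controlled_redundancy_schedule(start_rate=0.20, floor=0.02, decay=0.5, quarters=6):
    """Planned secondary-monitoring sampling rate for each quarter (assumed parameters)."""
    return [max(floor, start_rate * decay**q) for q in range(quarters)]

print(controlled_redundancy_schedule())
# [0.2, 0.1, 0.05, 0.025, 0.02, 0.02]
```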

 

Finally, there is the question of 'core vs. critical,' which has an impact on the choice of metrics that a firm should use in measuring the effectiveness of process execution. While firms would not want to outsource a process that defines their core competence, should they outsource processes that are critical to their functioning? This question is misleading because it assumes that firms know which processes are critical and which are not. For instance, while most firms would agree that the finance function is 'critical' to the firm, there is considerable debate over exactly which processes within the finance function are critical. Most managers would agree that working capital management, budgeting and capital planning are critical, but there is little agreement about the criticality of functions such as managing accounts receivable and payable or billing. There may be some activities that are at the core of the firm; these may be supported by critical functions, which in turn may draw on the information feeds of high-volume, low-impact processes. This observation leads us to a way of determining the criticality of a function based on what we call the threshold of criticality of a task.

 

Thresholds of Criticality

Some processes are critical in the aggregate – i.e. the end result of several repeated executions of the same process is critical to the firm, while a single execution of the process is of low consequence. These processes are said to have a high threshold of criticality. Some examples of such processes include transaction processing, accounts payable, and customer contact, including outbound and inbound calls.

 

In contrast, there are processes where even a single execution of the process has significant risk and/or opportunity cost implications for the firm. We argue that such processes have a low threshold of criticality. Some examples of such processes include customer data analytics, equity research or MIS reporting.

 

Processes with a high threshold of criticality typically represent lower risk; execution failures in such processes almost always result only in direct errors. Processes with a low threshold of criticality can result in both carry-forward errors and direct errors, and with such processes it may be necessary to resort to secondary monitoring. Depending on the extent of risk associated with the process and its threshold of criticality, the extent of control and the pre-emptive sampling rate for the process can be determined and specified as part of the SLAs. The techniques used toward this end are discussed at greater length, with illustrations from field research, in a research paper by the authors.
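A minimal sketch of how a process's threshold of criticality might translate into the monitoring regime written into the SLA follows. The sampling rates are assumed figures for illustration only.

```python
def monitoring_regime(threshold_of_criticality):
    """Map a process's threshold of criticality to an illustrative monitoring regime."""
    if threshold_of_criticality == "high":
        # Critical only in the aggregate: light pre-emptive sampling,
        # direct errors tracked over volume.
        return {"pre_emptive_sampling_rate": 0.01, "secondary_monitoring": False}
    # Low threshold: a single execution can hurt, so sample heavily
    # and monitor the monitors of the pivot elements.
    return {"pre_emptive_sampling_rate": 0.10, "secondary_monitoring": True}

print(monitoring_regime("high"))  # e.g. transaction processing, accounts payable
print(monitoring_regime("low"))   # e.g. customer data analytics, equity research
```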

 

In conclusion, what a firm does not measure it cannot monitor, and what it cannot monitor will decay. It is therefore extremely important to formulate a set of metrics before making the BPO decision, and it is crucial to apply those metrics in-house first to get a realistic estimate of how well the firm is executing its processes. Depending on the kind of process and its risk profile, there are appropriate techniques for measuring the quality of the process's output. Once these measurements are in place, a firm can go forth and negotiate a BPO contract that specifies the service-level agreements and the means of tracking them unambiguously.