
Most AI rollouts begin with a seemingly rational question: Which jobs are most costly, and therefore most worth automating? In many organizations, this logic translates into automating the highest-paid person’s job in a workflow first. New research suggests that’s often the wrong starting point.

In a sequential workflow, where work passes from one person to the next, AI doesn’t just substitute for workers. It rewires how team members monitor one another, how effort is sustained, and what managers must pay them to keep performance high. In other words, automation decisions are also organizational design decisions, and focusing on cost cutting alone can backfire.

A recent study by Wharton’s Pinar Yildirim and co-authors Xienan Cheng and Mustafa Dogan models how managers should deploy limited AI capacity in teams, which positions are most vulnerable to being displaced by AI, and what happens to wages and pay inequality when AI is adopted in team settings.

How AI Disrupts Peer Monitoring

Many teams rely on an informal but powerful discipline mechanism: peer monitoring. In sequential work, each worker can infer whether the prior step was done properly and calibrate their own effort accordingly. The authors call this a “domino effect”: If one person shirks, downstream teammates respond by shirking too, because the project is now less likely to succeed and their extra effort is costly. This cascade is what deters shirking in the first place: workers do not want to trigger a collapse.
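To see the cascade mechanically, consider a minimal Python sketch. It is a stylized chain, not the paper’s formal model: each worker is assumed to observe the prior step perfectly and to shirk after a botched handoff.

```python
# A stylized sequential chain: each worker exerts effort only if the handoff
# they receive looks properly done, so one shirker cascades downstream.

def run_chain(willing_to_work):
    """willing_to_work[i]: would worker i exert effort given a clean handoff?"""
    outcomes = []
    prior_ok = True
    for willing in willing_to_work:
        step_done = willing and prior_ok  # shirk after observing a botched step
        outcomes.append(step_done)
        prior_ok = step_done              # the next worker observes this outcome
    return outcomes

print(run_chain([True, True, True, True]))   # all work -> project succeeds
print(run_chain([True, False, True, True]))  # one shirker -> [True, False, False, False]
```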

Now, consider introducing AI into this team environment. In the model, an AI agent always exerts effort (it doesn’t get tired, bored, or opportunistic), and workers can observe whether the previous step “worked,” but can’t tell whether it was done by a human or AI. That last detail matters: If people can’t tell where AI is in the chain, they have to form beliefs, and those beliefs change incentives.
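A back-of-the-envelope calculation shows how that uncertainty dilutes the signal. The probabilities below are illustrative assumptions, not estimates from the study: once a worker thinks the prior step might be AI, a clean handoff says less about whether human teammates are actually working.

```python
# Assumed beliefs, for illustration only (not the paper's calibration).
p_ai = 0.5           # worker's belief that the prior step is covered by AI
p_human_works = 0.8  # believed chance a human predecessor exerted effort

# Chance the handoff looks fine whoever did it (the AI always exerts effort).
p_handoff_ok = p_ai * 1.0 + (1 - p_ai) * p_human_works
print(p_handoff_ok)  # 0.9: AI coverage partially masks human shirking upstream
```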

Why Replacing the Highest-Paid Worker Can Be a Trap

The study highlights three effects leaders must weigh when replacing a role with AI:

  1. Direct cost savings: You no longer pay (or need) the human wage for that position.
  2. Direct incentive cost: If a worker thinks AI might replace them (or later roles), shirking becomes more tempting — so you must pay more to keep them motivated.
  3. Indirect incentive cost: Changing the probability that someone later in the chain is AI can weaken incentives for the people before them, raising wage/bonus needs upstream.

The two “incentive costs” are not immediately obvious, but they are predictable in sequential work, where each worker’s effort depends on what they think happens next. The toy tally below puts illustrative numbers on the trade-off.
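The figures here are made up; the point is the structure of the accounting, not the numbers.

```python
# Illustrative numbers, not from the study.
wage_saved = 120_000               # 1. direct cost saving: the replaced wage
direct_incentive_cost = 30_000     # 2. extra pay for workers now at risk
indirect_incentive_cost = 45_000   # 3. extra pay upstream as monitoring weakens

net_saving = wage_saved - direct_incentive_cost - indirect_incentive_cost
print(f"net saving: ${net_saving:,}")  # $45,000, far below the headline wage
```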


What Do Managers Need to Keep in Mind When Deploying AI?

The key takeaway is that leaders must treat AI as a system-wide redesign rather than a localized change; those who do will capture more value from automation. Replacing one person with AI can change everyone else’s motivation, even if their tasks are untouched by AI.

Here are four rules for deploying AI in sequential teams:

1) Don’t “hard‑wire” AI into one position — randomize it

One of the study’s results is that the optimal AI strategy is typically stochastic: Rather than permanently replacing a single role, the manager does better by randomly assigning AI to cover certain positions across projects, shifts, or cycles.

What does this look like in practice? AI handles the first step on some tickets; humans handle it on others. AI takes the last step on some customer cases; humans close the rest. Or teams rotate “AI coverage” by day, by queue, or by project sprint.

This matters because a deterministic “AI is always here” policy can create stable beliefs that unintentionally weaken peer discipline. Randomization preserves uncertainty in a way that can reduce the wage premium you’d otherwise need to pay to maintain effort.
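As a minimal sketch of what such a policy could look like, with an assumed coverage share (the study does not prescribe specific numbers), each ticket draws who handles a given step:

```python
import random

AI_COVERAGE_PROB = 0.4  # assumed share of tickets where AI takes this step

def assign_step() -> str:
    """Randomly decide who handles this step for the current ticket."""
    return "AI" if random.random() < AI_COVERAGE_PROB else "human"

for ticket in range(5):
    print(f"ticket {ticket}: first step handled by {assign_step()}")
```

Because the draw repeats every cycle, no worker can settle into a stable belief about where the AI sits.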

2) Protect the connector roles in teams

Workers assigned to the mid-stage tasks of a project are unique because they both observe upstream teammates and are observed by downstream teammates. That makes them the information connectors who keep peer monitoring alive in a team.

The study finds that in the optimal AI strategy, mid-stage workers should face zero replacement risk. Upstream and downstream positions both face positive replacement risk, with downstream positions typically at higher risk.

Before automating tasks, managers should identify which roles are doing this invisible coordination and monitoring work and protect them from automation.


3) You might be better off leaving some AI capacity unused

Perhaps the most surprising result is that even when the manager has AI capacity available, it can be optimal not to use all of it. Why would underutilizing AI ever make sense? When workers believe AI is always in the system, project success becomes less sensitive to any single person’s effort, so shirking feels less costly, and managers must pay more to deter it. Underutilization adds uncertainty about whether AI is present at all, which can strengthen incentives and lower required compensation.
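A back-of-the-envelope incentive check makes the logic concrete. The incentive-compatibility-style constraint below (the bonus times the success gap between working and shirking must cover the cost of effort) and all numbers are illustrative assumptions, not the paper’s calibration.

```python
EFFORT_COST = 10  # assumed personal cost of exerting effort

def required_bonus(p_win_effort: float, p_win_shirk: float) -> float:
    """Smallest success bonus that makes effort worthwhile."""
    return EFFORT_COST / (p_win_effort - p_win_shirk)

print(required_bonus(0.9, 0.3))  # AI maybe absent: shirking is risky -> ~16.7
print(required_bonus(0.9, 0.6))  # AI surely present: gap narrows -> ~33.3
```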

In practical terms, this argues against a simplistic adoption KPI like “maximize AI utilization.” A better KPI is “maximize team output per dollar of total cost,” including incentive and coordination costs.
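For instance, with made-up figures, full utilization can lose to partial utilization on the better KPI:

```python
# Made-up figures for illustration only.
def output_per_dollar(output, wages, incentive_pay, ai_cost):
    return output / (wages + incentive_pay + ai_cost)

full = output_per_dollar(100, 300_000, 80_000, 50_000)    # all AI capacity used
partial = output_per_dollar(95, 310_000, 25_000, 30_000)  # some capacity idle
print(f"full: {full:.6f}  partial: {partial:.6f}")  # partial wins per dollar
```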

4) Expect wage ripple effects and a shift in inequality

AI creates winners and losers, even inside a seemingly homogeneous team. The study predicts a wage pattern that leaders often miss: after optimal AI adoption, wages rise for workers at the upstream and middle stages of a project, while workers typically assigned downstream tasks see no change in their wages, even though they are the most likely to be replaced. Put differently, AI savings introduced in one part of the workflow may show up as compensation pressure elsewhere in the system.

This article was partially generated by AI with additional writing and editing by Pinar Yildirim and Knowledge at Wharton staff. Read our AI policy here.