
As media reports about shortages of ventilators and hospital beds show, the COVID-19 pandemic will most probably lead to rationing of care. In this opinion piece, Gregory P. Shea, Krzysztof “Kris” Laudanski and Cassie A. Solomon explore the likely impact of rationing care without the best possible information: on decision quality, on patients and on care providers. They also consider the potential benefits of artificial intelligence (AI) in guiding decisions about how care can be rationed. Shea and Solomon are co-authors of Leading Successful Change, published by Wharton School Press. Laudanski is a faculty member at the University of Pennsylvania, focusing on anesthesiology and critical care.

Now how many steps behind are we? That is perhaps the most feared question for any leader in a crisis, and one that has proved to be an ongoing issue in the management of COVID-19. People in many quarters continue to labor mightily to catch up, and yet the question persists. Late to contain the virus and delayed in converting to mitigation, we have yet to embrace the next step — care rationing. Thinking through this question could benefit us now, and it could benefit anyone considering artificial intelligence (AI) today or tomorrow.

Let us work some numbers on the back of an envelope. The estimated percentage of the population that the novel coronavirus is likely to infect has remained in the 40% to 70% range for several months. Let us be conservative and say 50%. That means the virus will infect some 165 million Americans. Of that total, data suggest that about 5% will need hospitalization, which adds up to more than 8 million people. Data also suggest that about 2% of those infected will need an ICU bed and about 1% will need ventilator support. That means about 1.65 million people will require ventilators. The United States has about 200,000 ventilators, according to the Society of Critical Care Medicine.
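For readers who want to check the napkin math, a minimal sketch in Python follows. The population figure of roughly 330 million is our assumption; the rates are the ones cited above, and the output illustrates the arithmetic, not a forecast.

```python
# Back-of-envelope estimate of U.S. ventilator demand, using the rates
# cited above. Figures are illustrative, not predictions.
US_POPULATION = 330_000_000   # approximate U.S. population (assumption)
ATTACK_RATE = 0.50            # conservative end of the 40%-70% range
HOSPITALIZATION_RATE = 0.05   # share of the infected needing a hospital bed
ICU_RATE = 0.02               # share of the infected needing an ICU bed
VENTILATOR_RATE = 0.01        # share of the infected needing a ventilator
VENTILATOR_SUPPLY = 200_000   # Society of Critical Care Medicine estimate

infected = US_POPULATION * ATTACK_RATE
ventilators_needed = infected * VENTILATOR_RATE
print(f"Infected:           {infected:>13,.0f}")                         # ~165,000,000
print(f"Hospitalized:       {infected * HOSPITALIZATION_RATE:>13,.0f}")  # ~8,250,000
print(f"ICU beds needed:    {infected * ICU_RATE:>13,.0f}")              # ~3,300,000
print(f"Ventilators needed: {ventilators_needed:>13,.0f}")               # ~1,650,000
print(f"Shortfall:          {ventilators_needed - VENTILATOR_SUPPLY:>13,.0f}")
```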

Such a large mismatch means that only massive changes in the napkin math would matter. What’s more, these numbers mean that intensive care units, the places most likely to employ ventilators — which normally run close to capacity with the gravely ill — could well find demand from newly arrived COVID-19 patients filling all beds — not just open beds, but all beds — every day. In other words, an ICU could disgorge all its patients at 8 a.m. and refill during the day by only admitting COVID-19 patients, leaving no room for a patient with a heart attack or stroke or acute sepsis or pulmonary embolism.

So far the virus seems to run its idiosyncratic course, overpowering, at least for now, most treatment. Data from China suggest mortality rates for COVID-19 ventilator patients running as high as 86%. The numbers were not too different for patients requiring oxygen delivered by other means (79%). Again, let’s be conservative and say that the percentage is 80. Can we, given the likely pronounced and prolonged deficit in available ICU beds and ventilators, identify the 20% who will most likely benefit from being in an ICU breathing through a ventilator? Not yet. Do we therefore risk lives through the misallocation of resources? Absolutely. Can we accelerate our ability to decrease, perhaps daily, that risk? Probably. That brings us to the use of AI to try to catch up with this pandemic.


In recent days, several articles have covered the various ways of deploying AI in order to ameliorate the current pandemic: to forecast the spread of the virus, fight misinformation, scan through existing drugs to see if any can be repurposed, and speed design of anti-viral treatments and a vaccine. We see another critically important application of this technology — to augment physician decision making in the all-too-likely event of care rationing portrayed above.

Recent articles have also detailed the way that care for COVID-19 patients has had to be rationed in Italy, where the healthcare system has been overwhelmed by the need. Some three weeks ago, Italy had 2,502 cases of the virus. A week later, Italy had 10,149 cases — too many patients for each one to receive adequate care. The Italian College of Anesthesia, Analgesia, Resuscitation and Intensive Care (SIAARTI) published guidelines for the criteria that doctors should follow under these extraordinary circumstances. The document compares the choices Italian doctors make to the forms of wartime triage required in the field of “catastrophic medicine,” according to an opinion piece published in The New York Times.

Care cannot be provided to all patients who need it, so it becomes necessary to accept that “agonizing choices may be required to determine which patients get lifesaving treatments and which do not,” the article noted. Pause and consider the profundity of this statement, the courage to utter it and its jarring applicability to the U.S. (and elsewhere) today, especially since the U.S. now leads the world in confirmed coronavirus cases, according to The New York Times.

Critical Questions

Clinicians will face several questions as COVID-19 patients come looking for care. These questions qualify as only marginally medical when applied to the seriously ill. Supply and demand prompt them, not acuity of need. The supply and demand realities will occur at various points along a patient’s journey from the ER to the ICU. The questions include:

  • Who should be admitted to a hospital? Who should be turned away?
  • Who can be accommodated in the ICU? Who should be placed on ventilation support?
  • Who should be withdrawn from ventilation support to make a place for someone whose chances of survival are greater?

And then, depending on the answers to any of questions one to three,

  • Who should be provided only with palliative care?

Answering these questions well will likely determine whether we apply scarce resources effectively or squander them through ill-informed or even random distribution.

There’s something else at stake here. Taken together, recent articles note both the all-too-likely coming need to ration care and the impact of that rationing on the providers who must carry it out. We run the risk of damaging those providers for life even as we speak increasingly of our dependence on and our gratitude to them. Let’s take a moment to try to convey that reality before offering a way both to lessen its likelihood and to enhance our ability to ration care.

View from an ER

Let’s begin with a fictitious but all-too-possible scene to show how a reality of sickness and scarcity, created by policy and system failure, could play out in very personal and long-lasting fashion for care recipients and providers alike:

A bone-tired ER physician pauses amidst the near chaos to wash her hands for seemingly the thousandth time today … and to collect herself. At an epidemiological level, she knows that she staffs the front lines of a pandemic. At an individual level, she knows that she is performing battlefield triage. She chokes back a gasp. She had not signed up to make bed allocation choices to the ICU based on her best estimate of likely survival rates. Where was the objective data? Who was reviewing it, converting it to information, and then updating care protocols, let alone triage protocols? Where was the protection against common decision-making biases? How is she supposed to function in these conditions, especially given her own exhaustion, anxiety and ever-narrowing cognitive abilities, propelled as she was by high-test caffeine and, perhaps soon, by the Ritalin she had stashed in her white coat?

The physical burnout does not faze her much. For better or worse, she had experience with that well before the pandemic. Endless preaching about work-life balance or integration, combined with resilience training, had yielded some benefit. No, it is her anticipation of long shadows across the trail ahead that worries her — shadows born of repetitive, traumatic choices and of a mounting number of fate-making but best-guessed rationing decisions, the substance of the memories, flashbacks, and perhaps even PTSD that would reach out from those shadows, perhaps for the rest of her life.

Nothing theoretical here … these are her decisions. Did she do harm in holding that ICU bed, in not allotting it to someone who, perhaps in her ignorance, she believed would die regardless? How likely was it that this person would die regardless of care received? Should she factor the possibility of a miracle into her triage? Was a 95% likelihood good enough … or 85% or 75%? How should she factor in patient age, number of kids, race, gender, ethnicity or socioeconomic status? What about the person she denied a bed that she guessed — yes, guessed — would soon fill with a patient more likely to benefit from now-rationed ICU care? Was this choice the lesser of two evils, or were both options equally bad? Who will second-guess her, and to what effect?

Seemingly long ago in medical school, they had covered such scenarios, albeit in a somewhat otherworldly way. That was long before COVID-19. Today, however, as yesterday and for as far ahead as she could see, these questions were vividly and starkly hers. She owned them and they possessed her. She knew that her answers would stay with her, perhaps for the remainder of her life and the lives of all whom they affected. She straightens her white lab coat as if she were straightening herself, smiles softly at a red-eyed nurse who wipes down her visor, and whispers words of support to a tech who mists her hazmat suit with disinfectant. She changes exam gloves as she enters the next ER bay: “Hi, I’m doctor….”


How likely is it that such a scene will unfold not once but regularly over the days ahead? How real and how deep is the struggle portrayed in the scene?

One of the co-authors of this article, Krzysztof (“Kris”) Laudanski, a critical care intensivist at the University of Pennsylvania, explains: “I decide to withdraw ventilation support in the ICU maybe once a week, always in consultation with my colleagues and, of course, with the family. I have time to think and to collaborate and to prepare. The family and I reach that decision together. I need time to guide them compassionately through the process of letting a loved one die in dignity and without haste. It takes time.”

Whatever a family’s values, “decisions like these mean we are allowing their loved one to die. But with COVID-19, we are looking at a situation where physicians will be asked to make this kind of decision in the ED and in the ICU at least hourly. We won’t have time for our usual careful and consensual process. The family may not even be available. The medical staff is not trained for this kind of decision making or to manage the price it will extract. The consequences for everyone will be devastating.”

What is to be done? In a pandemic, masses of data emerge rapidly, too much, too varied, and too fast for humans to process into information. AI can mine that data moment by moment, looking for information such as the impact upon recovery of underlying medical conditions, age, and frailty, and it can generate a prognosis far more comprehensively and with greater precision than any exhausted, front-line physician. AI could also, potentially, sort through the effect of practice biases, such as what ventilation pressure physicians employ, a practice that varies, for example, by country. AI offers the prospect of improved and improving decision-making, not perfect decision-making, not at all.
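To make the idea concrete, here is a minimal sketch of the kind of prognostic scoring described above: a logistic regression over a few of the factors named (age, underlying conditions, frailty). The feature set, the synthetic training data and the coefficients are illustrative assumptions on our part, not a clinical tool and not the model being built at Penn.

```python
# Minimal sketch of a survival-prognosis model of the kind described
# above. Features, coefficients, and data are synthetic illustrations,
# not a clinical instrument.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training records: [age, number of comorbidities, frailty score]
n = 500
X = np.column_stack([
    rng.integers(20, 95, n),   # age in years
    rng.integers(0, 6, n),     # count of underlying conditions
    rng.uniform(1, 9, n),      # clinical frailty scale (1 = fit, 9 = terminal)
])
# Synthetic outcome: survival odds fall with age, comorbidities, and frailty.
logit = 6.0 - 0.05 * X[:, 0] - 0.4 * X[:, 1] - 0.3 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = survived

model = LogisticRegression().fit(X, y)

# Score an incoming patient: estimated probability of surviving on ventilation.
patient = np.array([[72, 2, 5.0]])
print(f"Estimated survival probability: {model.predict_proba(patient)[0, 1]:.2f}")
```

A simple, transparent model like this is easier for physicians to interrogate than a black box, which matters when the score informs rationing rather than routine care.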

AI and Human Judgment

Properly trained, an AI algorithm can augment physician judgment about when to offer or withdraw life-saving care. Human judgment will remain important and will remain final, but it can be supported with the dispassionate, independent score-keeping capabilities of AI. AI is a logical extension of the risk-assessment tools used intermittently in medicine today. AI can assess risk and illuminate a set of guidelines that supports clinicians as they decide who receives care and who does not, and it can become even more accurate and “smart” as new data are added. AI cannot express compassion, but its potential impartiality may better allow us to apply ours. AI cannot hold our hand, but it may well direct us to whose hand to hold by telling us who can likely heal and who most likely cannot.

How quickly can we develop such AI tools, time being of the essence? Training an AI algorithm requires data — but not as much as one might think. We could have access to data from China (though we may not trust its applicability or veracity), and other countries are collecting data too. At Penn, Kris is developing an effective AI tool with a small data set, but training AI on a bigger data set would yield greater accuracy and less bias. The Veterans Administration database will soon have enough patients to use to create this kind of AI algorithm; the National Health Service in England will undoubtedly soon have high-quality data too.
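How much data is enough? One hedged way to find out is a learning curve: cross-validated performance measured as the training set grows. The sketch below reuses the synthetic X and y from the earlier example; a curve that has flattened suggests a small data set may already suffice.

```python
# Sketch: gauging whether a small data set is adequate via a learning
# curve (cross-validated score vs. training-set size). Reuses the
# synthetic X and y from the prognosis sketch above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

sizes, _, val_scores = learning_curve(
    LogisticRegression(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),  # 10% ... 100% of the training folds
    cv=5,                                  # 5-fold cross-validation
    scoring="roc_auc",                     # discrimination, not raw accuracy
)
for size, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{size:4d} training records -> mean AUC {score:.3f}")
# A flat tail suggests the small set already supports the model; a curve
# still climbing suggests bigger registries (VA, NHS) would pay off.
```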


AI should not supplant the judgment of a human doctor. Even with an AI prognosis augmenting their capability, physicians will ultimately make the final choices. Humans will supervise the inputs that create the original learning algorithm, and they will check on what it is learning as it evolves. Humans will discern whether the AI decision tool is confusing artifact with finding. Ascribing sickness on February 1, 2020, to living in Wuhan would be accurate but of precious little use.
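One simple form of that human supervision is to audit the model for features that merely proxy for time or place rather than for clinical state. The feature names, weights and threshold below are hypothetical; this is a sketch of the check, not an established method.

```python
# Sketch of a human-in-the-loop audit: flag model features that proxy
# for time or place (like "lives in Wuhan" in early 2020) rather than
# clinical state. Feature names and threshold are hypothetical.
CLINICAL_FEATURES = {"age", "comorbidity_count", "frailty_score", "spo2"}
CONTEXT_FEATURES = {"home_region", "admission_week", "hospital_id"}

def audit_features(model_weights: dict[str, float], threshold: float = 0.2) -> list[str]:
    """Return context features whose learned weight is large enough to
    suggest the model is keying on where/when rather than how sick."""
    flagged = []
    for name, weight in model_weights.items():
        if name in CONTEXT_FEATURES and abs(weight) > threshold:
            flagged.append(f"{name} (weight {weight:+.2f}) may be artifact, not finding")
    return flagged

# Hypothetical learned weights, for illustration only:
weights = {"age": -0.05, "frailty_score": -0.30, "home_region": -0.85}
for warning in audit_features(weights):
    print("REVIEW:", warning)
```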

Amidst a pandemic and shortages of medical resources, triage will occur. It is the cold and undeniably heartless consequence of supply, however stretched and pulled, meeting demand. AI can serve humans in ameliorating this hateful reality, but triage will occur. Care will be rationed. Only the question of how remains. Humans — physicians — would still be the ones to tell a patient (and their family) that the patient will not be, or will no longer be, afforded ventilation support or perhaps even hospitalization. But the physician could do so based on the most informed protocols possible, resting atop the most current data available, probed and analyzed in the most sophisticated manner possible. By looking backwards at the real data about who survives on ventilation support and who does not, we believe the AI can be built on those facts and kept (relatively) free of the bias inherent in much human medical judgment.


Employing an AI tool to aid physician decision-making, as a pandemic spits out not just the infected but also data, can mean both higher-quality decisions, however knee-buckling, and greater assurance for all involved regarding the quality of those unwanted decisions. Small solace? To be sure. But likely solace nonetheless while angst and pain bathe all involved — solace to the family that the decision was indeed as skillfully approached as possible, to the society that scarce, oh-so-dear resources were employed as effectively as possible, and to the physician that he or she can take greater surety in the judgment made. The evolving algorithm should help afford increased emotional and psychological well-being for a struggling patient (and their family), for a society already gnashing its collective teeth, and for several generations of those who provide care and comfort to the sickest among us.

The idea of an algorithm helping humans both cognitively and emotionally to deal with a crisis may at this moment seem novel and perhaps outright unsettling. We humans like not just being in the loop but being the loop. With COVID-19, however, our loop just isn’t fast enough.

We “accept” AI in other facets of our lives, especially in aspects of business. It is time to put our pride aside and step into the future of the interface and interaction between humans and AI. We must move to develop this kind of clinical-support AI as soon as humanly possible. Otherwise, people will die as we misapply essential resources and scar our healthcare providers, particularly our physicians. Time’s a-wastin’.