Last month, Amazon unveiled a service based on AI and machine-learning technology that could comb through patient medical records and extract valuable insights. It was seen as a game changer that could alleviate the administrative burden of doctors, introduce new treatments, empower patients and potentially lower health care costs. But it also carries risks to patient data privacy that call for appropriate regulation, according to Wharton and other experts.
Branded Comprehend Medical, the Amazon Web Services offering aims “to understand and analyze the information that is often trapped in free-form, unstructured medical text, such as hospital admission notes or patient medical histories.” Essentially, it is a natural language processing service that pores through medical text for insights into disease conditions, medications and treatment outcomes from patient notes and other electronic health records. Among its users are Beth Israel Deaconess Medical Center (an affiliate of Harvard Medical School) in Boston, and the Fred Hutchinson Cancer Research Center in Seattle, home to three Nobel laureates.
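To make the idea of entity extraction concrete, here is a purely illustrative Python sketch of the kind of task the service performs: pulling medication and condition mentions out of free-form clinical text. The real service uses trained NLP models, not keyword lists, and the term sets below are invented for the example.

```python
import re

# Illustrative stand-in for medical entity extraction. A production
# NLP service learns these categories from data; this toy version just
# matches against small hand-picked vocabularies (assumptions, not the
# actual Comprehend Medical behavior).
MEDICATIONS = {"metformin", "lisinopril", "aspirin"}
CONDITIONS = {"hypertension", "diabetes", "asthma"}

def extract_entities(note: str) -> dict:
    """Pull medication and condition mentions out of free-form text."""
    words = {w.lower() for w in re.findall(r"[a-zA-Z]+", note)}
    return {
        "medications": sorted(words & MEDICATIONS),
        "conditions": sorted(words & CONDITIONS),
    }

note = "Pt with hypertension and diabetes, started on metformin 500mg."
print(extract_entities(note))
# {'medications': ['metformin'], 'conditions': ['diabetes', 'hypertension']}
```

The point of the sketch is the input/output shape: unstructured admission-note text goes in, structured categories come out, which is what makes the "trapped" information usable downstream.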
The new service is Amazon’s latest foray into the health care sector. In June, the company paid $1 billion to buy online pharmacy PillPack, a Boston-based startup that specializes in packing monthly supplies of medicines to chronically ill patients. In January, Amazon teamed up with Berkshire Hathaway and JPMorgan Chase to form a health care alliance that aims to lower costs and improve the quality of medical care for their employees.
“Health care, like everything else, is becoming more of an information-based industry, and data is the gold standard — and Amazon knows as well as anyone how to handle and analyze data,” said Robert Field, Wharton lecturer in health care management who is also professor of health management and policy at Drexel University. “It’s a $3.5 trillion industry and 18% of our economy, so who wouldn’t want a piece of that?”
Katherine Hempstead, senior policy adviser at the Robert Wood Johnson Foundation, said the biggest contribution of Amazon’s new Comprehend Medical service is that it significantly advances patient empowerment. “It has the potential to put consumers much more in the driver’s seat with their own health,” she said, adding that such a move nevertheless has “good and bad implications.”
“It’s like a giant haystack and there are … needles of gold in there that are going to unlock treatments and diagnoses and diseases and correlations.” –Robert Field
But Arnold Rosoff, a Wharton professor emeritus of legal studies and health care management, and a senior fellow at Penn’s Leonard Davis Institute of Health Economics, said he is “concerned about personal health information privacy.” Rosoff, Field and Hempstead discussed the pros and cons of using AI in health care on the Knowledge@Wharton radio show on SiriusXM.
“We have data, data all around but not a drop of information,” said Field, paraphrasing the famous line from Samuel Taylor Coleridge’s “The Rime of the Ancient Mariner.” “It’s like a giant haystack and there are … needles of gold in there that are going to unlock treatments and diagnoses and diseases and correlations. It’s beyond a mortal human’s ability to do those calculations.” Amazon’s new service, he said, “opens the door to the possibility of finding those needles. And that’s the exciting possibility.”
AI offers “enormous” promise when it comes to bringing in new and improved treatments for patient conditions, such as in the area of radiology, added Hempstead. Machine learning also potentially enables the continual improvement of treatment models, such as identifying people who could participate in clinical trials. Moreover, Amazon’s service could “empower a consumer to be more in charge of their own health and maybe be [a] more active consumer of medical services that might be beneficial to their health,” she said.
On the flip side, it also could enable insurers to refuse to enroll patients that they might see as too risky, Hempstead said. Insurers are already accessing medical data and using technology in pricing their products for specific markets, and the Amazon service might make it easier for them to have access to such data, she noted.
For doctors, the service could lighten their administrative burdens. Much of their workload has become computerized and digitized, Hempstead said, but providers find those processes “burdensome” and a lot of patient data remain in fragmented and non-standardized forms. “So the idea of being able to use machine learning to recognize patterns and non-standard information is enormously labor-saving and productive.”
But there are “huge business implications” of the Amazon service in this arena. For example, patients who are better equipped with information about their health conditions might make fewer visits to their doctors. “It could erode loyalty to physicians because consumers could be managing their own medical information more and deciding what they want to do and when,” Hempstead said. However, if patient health information gets on open platforms with the requisite permissions, it could open opportunities for retail clinics “to do much more in terms of managing someone’s care, knowing more about patients’ medical history,” she added.
“When you start aggregating information from these different sources, you’ve got to be careful that the information is good enough.” –Arnold Rosoff
The service also could transform the way patients go about managing their health. “It could be disruptive if patients felt much more motivated to be the owners of their own medical records and might actually use this software to … monitor trends in their own health, maybe coupled with the development of direct-to-consumer labs or drugstore kiosks where they could get health readings about themselves,” Hempstead said. “The software could search through the data and identify patterns to alert consumers — ‘Hey, your iron is low and this is the third [occurrence] in the last six months.’”
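The consumer-alert idea Hempstead describes — flagging a lab value that has been low repeatedly within a recent window — can be sketched in a few lines. Everything here (the function name, the 180-day window, the threshold, the sample readings) is a hypothetical illustration, not a description of any actual product.

```python
from datetime import date, timedelta

def should_alert(readings, threshold, today, window_days=180, min_hits=3):
    """Flag a lab metric that dipped below `threshold` at least
    `min_hits` times within the last `window_days` days.

    readings: list of (date, value) pairs for one metric.
    """
    cutoff = today - timedelta(days=window_days)
    low = [d for d, v in readings if d >= cutoff and v < threshold]
    return len(low) >= min_hits

# Hypothetical iron readings: three low values inside a six-month window.
iron = [
    (date(2018, 7, 1), 45),
    (date(2018, 9, 15), 48),
    (date(2018, 11, 20), 44),
]
print(should_alert(iron, threshold=60, today=date(2018, 12, 1)))  # True
```

A real system would pull these readings from the kind of direct-to-consumer labs or drugstore kiosks mentioned above; the logic of "third occurrence in the last six months" is just a window-and-count check like this one.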
In addition, Amazon could provide uniformity in a field with a rising number of providers — and replace them. It “could eat the lunch of a whole army of developers and consultants that do one-off kinds of predictive analytics and software that’s compartmentalized into different types of clinical conditions,” Hempstead said. “To the extent this software is able to democratize some of that utility and make it more widely usable by a broader base of stakeholders, it could cut into a lot of that consulting revenue stream.”
Privacy and Other Concerns
Field agreed that it “would be great” to empower consumers with more information on their health, but raised a crucial question. “Who’s paying the bill here?” he asked. “It’s going to be insurance companies, health systems and perhaps other large corporate entities, and they’re the ones who are going to want first dibs on the data. We have to keep an eye on how they’re going to use the data.” Among the risks is that those entities could use the patient data for purposes like marketing their products, he added.
Field also was skeptical of claims that the use of technology to extract finer insights into patient conditions could bring efficiencies such as cost savings. “They’ve said that about every aspect of computerization of medicine and every other industry,” he said. “Each time it adds layers of cost and those savings are yet to be realized. So, I’m not an optimist on that.”
Another big concern is the accuracy of the data if patients have a greater ability to manage their information on electronic databases, according to Rosoff. He noted that thus far, providers such as hospitals, physicians and pharmacies have entered the information about patients into electronic health records. HIPAA, or the Health Insurance Portability and Accountability Act, protects the privacy of patients’ data, but that is based on the premise that health care providers supply the information, he said.
The quality of such data is at risk if patients have the ability to add or remove information about themselves from databases. “When you start aggregating information from these different sources, you’ve got to be careful that the information is good enough,” Rosoff said. For example, when a mother keeps records of whether her kids had measles and mumps, and whether they’ve had vaccines, and doctors have to decide whether to give a certain drug or not, they have to know whether the mother’s record is accurate in terms of timing or what action was taken. “There’s tremendous potential for it to be better,” he said, “but there’s enough potential for it to go off the rails.”
The potential for data breaches is another risk, added Rosoff, citing examples from other firms. “In this case, it might be Amazon. Are they going to tell you right away if there’s been a data breach, and then they’re going to help you sort that out?” he asked. “There’s no government agency at this point that’s taking the lead and saying this is what people need to know before they start down this path.”
Field said companies such as the now-defunct Cambridge Analytica were able to use new technologies “to micro-target” voters, and it might be possible to do the same in health care “to figure out — are you going to get asthma or cancer or depression?” On the other hand, such data could fall into the hands of hackers or be leaked in breaches. “As important as our banking records or even our political records are, health care gets at who we are and what our life is going to be like,” he said. As such, using AI to comb health data provides both “tremendous risk, tremendous reward.”
“There is this worry about being seduced by these algorithms and these decision engines.” –Katherine Hempstead
When it comes to data privacy regulation, Rosoff said the U.S. lags the European Union. In May 2018, the EU put into place its General Data Protection Regulation, or GDPR. Under GDPR, for example, organizations have to alert regulators within 72 hours of a breach. In the U.S., sometimes months go by before regulators are made aware of breaches, Sinan Aral, management professor at MIT who co-leads the university’s Initiative on the Digital Economy, noted in a Knowledge@Wharton article.
But the U.S. is making strides: The 21st Century Cures Act, which was signed into law in 2016, aims to use new technologies to expedite review processes for biologics and medical devices. “It wanted to back off on burdensome regulations that would keep the future from coming forward,” Rosoff said. But it’s not clear how it will protect the data. “Smartwatches and biometric monitoring tell me that maybe I’m on the edge of a heart attack and I ought to take some medicine or do something different. That is put up in the cloud and somebody could get it there, and I’m concerned about that.” [The Food and Drug Administration recently refined regulations for digital health, covering wearables, telemedicine and personalized medicine.]
Finally, there’s the concern that medical professionals would become over-reliant on these new technologies to the detriment of their own learning. “There is this worry about being seduced by these algorithms and these decision engines,” Hempstead said. “You worry about what if clinicians, patients and others started to [unquestioningly] rely on these models? How do they keep learning and improving? Is there a point where that actually isn’t the best opportunity for the best clinical care?”
But the AI train has left the station and will not be coming back. An increasing number of companies already are using AI and machine learning to find new uses in health care, Field said. Apple, for example, is using its smartwatch as a medical device. “Everyone is trying to get a piece of this, and it’s only a matter of time before the whole industry is on the bandwagon.”