AI and Digital Health

Critical Questions About AI: A Researcher Takes Pulse of Clinicians, Patients

Like many emerging technological advances, artificial intelligence (AI) promises to change the way medicine is practiced. Computer algorithms can discern patterns in electronic health records that might have eluded human recognition, leading to earlier diagnosis and more effective treatment. But putting AI to use depends on buy-in, both from clinicians and the public. And depending on the way AI is designed and deployed, it may run the risk of injecting bias into the system.

Richard Sharp, Ph.D., director of Mayo Clinic’s Biomedical Ethics Research Program, is heading a team studying what people think about medical innovations like genetic testing, neuromodulation and now AI.


"We want to anticipate people's concerns and proactively try to address them," says Dr. Sharp. "We don't want AI developers to go down a path of pursuing a particular product that's likely to generate a lot of pushback from patients and providers."

Ideally, AI tools will meet the needs of patients and be seamlessly integrated into clinical practice.

One of the trickiest parts of conducting this kind of anticipatory research is framing questions in ways that are not loaded, Dr. Sharp says. To study attitudes about AI, "we really had to work hard as researchers to figure out how to cue this up as a discussion that was neutral, but also sufficiently rich in content to allow patients to engage with us," he explains.

For example, if the researchers present patients with a long list of wonderful endeavors that Mayo Clinic is undertaking in the digital health space — more than 200 AI-based projects are underway in a wide range of fields — then patient responses may be uniformly positive. In contrast, a more open-ended approach that asks patients what they know about AI may lead to responses full of science fiction tropes and catastrophic scenarios from popular culture, where machines become sentient and controlling.

Ultimately, the team was able to pose neutral questions by presenting patients with real-world case studies illustrating future applications of AI and then discussing with respondents how they felt about those case studies.

Dr. Sharp and his colleagues found that because patients already recognized that digital tools were transforming many aspects of their lives, they were not surprised to hear that these tools would affect the delivery of health care, too.

In general, patients were enthusiastic about AI, but they voiced several concerns. One was about cost and whether AI tools might make clinical care even more expensive. Another was whether humans would still be involved in medical care. Patients wanted assurance that AI would get information right and that doctors would continue to have one-on-one interactions with them.

"I think for many folks, they don't want to turn over such a personal, intimate activity as providing health care to a machine," says Dr. Sharp.

Patients also offered insights into potential pitfalls of AI. For one, they worried that physicians might become too reliant on AI and perhaps wouldn't be as good at diagnosing disease or prescribing treatments in the future.

"Today, most of us are not nearly as good at spelling as we were 10 years ago because of the fact that spell check algorithms do it for us," says Dr. Sharp. "When patients raised parallel concerns about the 'de-skilling' of doctors, we found that very interesting because it suggests that patients may be more familiar with these technologies than we might have anticipated."

Another nuanced take had to do with access and health disparities. Even though patients recognized that AI tools may increase access to health-related expertise in rural areas, they expressed concern that these tools could also create barriers that would make health care delivery even less equitable. For example, if AI algorithms are developed based on a largely white, English-speaking, insured population, the insights they generate might not benefit disadvantaged populations.

"They felt that this may be another advance in health care that further exacerbates the divide between rich and poor," says Dr. Sharp. He and his colleagues published their findings in Digital Medicine.

Dr. Sharp's team is beginning to consider specific applications of AI in clinical practice. For example, they are looking at the potential impact of tools to diagnose and recommend an initial course of treatment for major depressive disorder. The researchers have been conducting interviews with clinicians in family medicine and primary care, as well as psychiatry, to determine what they think about those kinds of additions to their diagnostic tool set.

They are finding that physicians’ perspectives vary according to their fields. So far, respondents from family medicine have expressed enthusiasm about these tools and the potential to help them make a critical diagnosis. Psychiatrists, in contrast, have expressed skepticism that their expertise in behavioral health conditions could be captured by an AI tool. They worry that an algorithm might miss additional considerations relevant to a diagnosis.

"It's interesting," says Dr. Sharp. "Now we're trying to get a better handle on what's driving those different reactions. Why is it that some clinicians are excited about these tools and others are not?"

As Dr. Sharp and his team continue to explore the reactions of patients and providers to AI, his biggest concerns lie not with the technology or the end user but with the health care system itself.

"We recognize that historically health care has been delivered in ways that have resulted in inequities," says Dr. Sharp. "AI is emerging in a context in which there are deep structural disparities in health care — and we don't want these tools to replicate the injustices of the past. I think the biggest challenge we have is how to develop solutions to ensure that health care AI is not biased in the way it is delivered."  

— Marla Broadfoot, Ph.D.
