Artificial intelligence (AI) is changing the field and practice of medicine, including legal liability and the perception of who is responsible when patients suffer harm.
"While AI holds promise to improve the quality and safety of healthcare and to reduce errors and harm to patients, legal liability risks are a potential barrier to investment in and development of this technology, as well as to the quality of healthcare," said Michael Bruno, professor of radiology and medicine at the Penn State College of Medicine.
Bruno, working with a team of researchers from Brown University and Seton Hall University School of Law, found that perceptions of a physician's responsibility are influenced by how AI is integrated into the clinical workflow. The study was published today (March 10) in the journal Nature Health.
Researchers presented mock jurors with a hypothetical medical malpractice case in which a patient suffered irreversible brain damage because a radiologist failed to detect a cerebral hemorrhage on a computed tomography (CT) scan, even though the AI had correctly flagged the scan as abnormal.
They found that mock jurors were almost 50% more likely to side with the plaintiff and find against the radiologist when the radiologist reviewed the CT scan just once, after the AI flagged it, than when the radiologist read the scan twice: once before receiving the AI's feedback and once after.
Almost a year ago, Bruno hosted a two-day research summit on “Human Factors and Artificial Intelligence in Healthcare” on the Penn State College of Medicine campus, bringing together an international group of multidisciplinary experts from academia and industry to establish future research priorities in the field of human-AI collaboration.
“This type of information is critical for stakeholders who are trying to decide whether their hospital should purchase an AI product, tell a doctor to follow a certain workflow, or settle a lawsuit because an error has already occurred, because it allows them to weigh the costs and benefits in a more informed way,” said Brian Shepherd, a Seton Hall University law professor and co-author of the paper.
The researchers explained that they chose to focus on radiology-based cases because the integration of AI into radiology practice is more advanced than in other medical fields, making physician-AI interaction a plausible scenario. Most medical malpractice cases are resolved out of court and outside the public record, and those that do go to trial can take years to litigate, so using fictional cases allows researchers to gather information that would otherwise be unavailable.
For the study, the team recruited 282 participants, who were randomized to read one of two scenarios. In the first scenario, the AI flags the case as abnormal, and a radiologist reviews the images once and concludes there is no evidence of bleeding in the brain.
In the second scenario, the radiologist reviews and interprets the CT scan twice: first before receiving feedback from the AI system, and again after the AI flags the case as abnormal. In both scenarios, the radiologist concludes that there is no evidence of cerebral hemorrhage. After reading the case, participants were asked whether the radiologist had fulfilled his or her duty of care to the patient.
Almost 75% of mock jurors found that the radiologist failed in his duty of care when he reviewed the CT scan only once. When the radiologist reviewed the scan twice, that rate dropped to 53%. The findings suggest that legal risk could be reduced by changing radiologists' workflows, that is, when and how often imaging studies are reviewed and interpreted when AI is involved, the researchers explained. However, these changes come at a cost.
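As a rough check of the "almost 50% more likely" figure cited earlier, the sketch below works through the arithmetic using the rounded rates reported above (roughly 75% versus 53%); the exact percentages in the paper may differ slightly.

```python
# Sketch of the relative-likelihood arithmetic, assuming the article's
# rounded rates; the paper's exact figures may differ slightly.
single_read_rate = 0.75  # share of jurors finding a breach of duty after one review
double_read_rate = 0.53  # share of jurors finding a breach of duty after two reviews

# Relative increase in the chance of a finding against the radiologist
relative_increase = (single_read_rate - double_read_rate) / double_read_rate
print(f"Relative increase: {relative_increase:.0%}")  # prints "Relative increase: 42%"
```

A relative increase of about 42% on these rounded figures is consistent with the article's "almost 50% more likely" characterization.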
"There are all these biases that motivate radiologists to buy into AI, because the cost of not buying into AI is too high. If you don't buy into AI and you're wrong, that's going to be used against you," said co-author Grayson Baird, associate professor of radiology at Brown University and director of the Brown Radiology Human Factors Institute. "That cost is passed on to patients, who then have to deal with the anxiety and discomfort of subsequent treatments, imaging, and testing. We all pay for it, too, because healthcare costs increase."
Although the study did not explore the underlying reasons behind the relationship between AI and perceptions of legal liability, the researchers explained that the findings show that how people judge fault when AI systems are used depends on the context.
The study builds on previous research by the same team, which used the same hypothetical case and found that mock jurors were less likely to find radiologists at fault when they agreed with the AI's interpretation than when they disagreed. Perceptions of liability were also reduced when mock jurors were told the AI's error rate, compared to when that rate was unknown to them. In another study, other researchers found that AI can influence doctors' decision-making and change their minds about treatment decisions.
"How people perceive AI, and how that perception affects human responsibilities, is rapidly evolving along with the technology, and we need to pay close attention to this," said corresponding author Michael Bernstein, associate professor of radiology at Brown University and associate director of the Brown Radiology Human Factors Institute.
Source:
Pennsylvania State University
Journal reference:
Bernstein, M.H., et al. (2026). Radiologist-AI workflow and malpractice claim risk. Nature Health. DOI: 10.1038/s44360-026-00085-2. https://www.nature.com/articles/s44360-026-00085-2

