A recent Office of Inspector General (OIG) investigation revealed an alarming statistic: approximately half of all patient harm events in U.S. hospitals go undetected. While this number certainly indicates a failure, it is not a deliberate one, nor a failure of individual providers. The bottom line is that our systems, as built today, do not provide a complete picture of the safety risks facing patients.
At the heart of this report is a simple fact: If we can’t see harm happening, we can’t prevent it. Although healthcare has made significant advances in recent years in reducing infections, preventing falls, and minimizing medication errors, we still operate with a fundamentally incomplete understanding of the extent of patient harm.
The limits of human-dependent reporting
For decades, improving health care safety has relied primarily on voluntary incident reporting systems. The provider identifies and documents safety events and submits a report for entry into the risk management database.
In many ways, this approach has served us well and has driven significant improvements in measurable areas of patient safety. However, humans cannot notice, and cannot report, everything. We are limited by time pressures, competing priorities, and the inherent difficulty of recognizing patterns across thousands of data points and daily interactions. Because traditional medical coding systems capture only what healthcare providers consciously document, countless safety events slip through the cracks, buried in clinical notes and discharge summaries.
Some in the patient safety field have proposed abandoning safety event reporting altogether, arguing that the current system is too inefficient to justify the investment. This perspective, however, misses an important point: the problem is not the act of reporting itself, but how reporting is done.
Why AI is the missing link in safety reporting
AI continues to reshape the way we live, work, and process information, delivering two innovative capabilities that will revolutionize how we report and respond to safety events.
First, it helps us find the harm we are missing. By analyzing millions of clinical documents, AI can identify patterns and safety events in seconds that months of manual review could not uncover. It can also scan unstructured data such as nursing notes, narrative reports, and clinical documentation for subtle markers that may suggest complications, treatment delays, or system failures that medical staff may not have noticed, much less formally reported.
I have seen a glimpse of this potential firsthand in my work with patient safety organizations at Press Ganey. For example, examining written reports for disability-related markers (terms such as “wheelchair,” “interpreter,” and “mobility aid”) revealed previously invisible safety challenges unique to patients with disabilities. These challenges had gone unseen not because the people doing manual reporting were negligent, but because human attention and time are finite.
Second, AI can dramatically improve the effectiveness of existing reporting systems. Rather than abandoning safety event reporting entirely, AI offers an opportunity to transform these systems into intelligent, practical tools that improve care at scale. Tasks that used to be tedious, such as drafting follow-up communications, clustering similar events, and prioritizing cases that require immediate attention, can now be completed in seconds instead of hours, and with remarkable accuracy. In fact, innovative event reporting and learning systems are already starting to include AI-powered features, helping safety teams work more efficiently and identify life-saving trends faster than ever before.
Healthcare leaders: Now is the time to act
For healthcare leaders around the world, the finding that hospitals miss half of the events that harm patients should serve as a wake-up call: strengthen safety reporting rather than eliminate it, reconsider it, and innovate until the systems we rely on work for us and our patients. The goal of zero harm cannot be achieved without a complete understanding of the safety risks patients face.
Now that tools like AI are available to us, it is our responsibility to accelerate the adoption of systems built to fill in our blind spots, detect unreported safety events, and expand our ability to catch what human-dependent systems miss. This requires investment not only in technology, but also in the infrastructure and training needed to enable teams to work with AI and act on the insights it provides.
No technology alone is sufficient to eliminate patient harm. Medical care will always require human judgment, empathy, and a commitment to continuous improvement. AI, however, can amplify those human abilities in remarkable ways.
Safety is a prerequisite for providing comprehensive care, and for the first time we have the tools to revolutionize how we approach it. The question is not whether we can afford to invest in this change, but whether we can afford not to. The era of accepting incomplete safety data is over. Now is the time for comprehensive patient protection powered by AI.
Dr. Tejal Gandhi is Chief Safety and Transformation Officer at Press Ganey.