Evidence of results is key to improving healthcare
The recent trend of rapidly adopting interventions to improve quality and safety in healthcare has not been matched by efforts to produce evidence of results, according to a UCSF researcher who studies patient outcomes. He emphasizes that new interventions should undergo rigorous testing to determine whether, how and where they are effective, just as such testing is required for the adoption of all medical technologies.
Andrew Auerbach, MD, MPH, associate professor of medicine at the University of California, San Francisco, explores the tensions between needing to improve care and knowing how to do so effectively in the August 9, 2007 issue of “The New England Journal of Medicine.” Auerbach is the lead author of a “Sounding Board” piece on this topic.
“In the rest of the medical field, innovation begins with scientific experimentation and proceeds through evaluative trials in successive phases,” says Auerbach. “These principles are safeguards that prevent harm, and when proven effective, lead to clear benefits for patients.”
Each year, thousands of patients are harmed by misuse of medical therapies, physician error and other failures of care, he says. This has fueled an urgent movement within the healthcare industry to improve quality and safety standards, according to Auerbach and colleagues Seth Landefeld, MD, professor of medicine and chief of the division of geriatrics at UCSF, and Kaveh Shojania, MD, of the Ottawa Health Research Institute.
While the researchers strongly agree with the moral imperative to improve care, they note that many new quality and safety measures are inadequately tested or not tested at all, and that few new initiatives include any plan to determine whether the strategies are working. Evaluating the effectiveness of quality and safety interventions is critically important, they say, because putting new systems into place consumes large amounts of resources, including staff, time and money, yet often yields only small or unclear benefits.
“Whether or not quality-improvement interventions actually improve patient outcomes depends on multiple factors related to patients, providers and organizations, and these factors remain poorly understood,” explains Auerbach. “Given the complexity of quality and safety problems, the complexity of their causes, and how little we understand them, the imperative to choose interventions carefully and monitor their success may be at least as important as the imperative to proceed quickly.”
In the article, Auerbach and colleagues consider several common arguments the healthcare industry uses to justify rapid dissemination of new quality and safety interventions, and offer counterpoints to each. They write that the urgency institutions feel to improve patient care should not override the need to understand how to identify and fix problems effectively. The need to improve the treatment of diseases such as cancer, AIDS and heart disease is equally urgent, they note, yet in those areas healthcare demands evidence that a therapy or treatment works before recommending it to a wide patient population. To date, the rapid dissemination of quality and safety interventions has proceeded with far less proof.
The authors also consider the common argument that some solutions appear so obviously beneficial that requiring evidence is a waste of valuable time and resources. Hand washing to reduce the transmission of infections is often cited in support of this argument. Yet although healthcare institutions across the nation have implemented numerous strategies to promote hand washing, it remains unestablished whether these strategies actually increase hand washing by medical staff, and effective ways of translating such solutions into practice continue to elude many institutions, they note.
“Even if an intervention is known to be effective and easily implemented in one setting, reproducing those results in a number of different places is a huge challenge,” says Auerbach. “It is very likely that many quality and safety interventions will help patients, but it is unlikely that a particular intervention will work everywhere or that every site needs to implement it. The problem with proceeding quickly and not having a plan to evaluate our progress is that we won’t know which innovations are effective, which are ineffective, which need to be refined and which could be the basis for broad recommendations.”
Auerbach and colleagues propose a framework for evaluating interventions rigorously whenever feasible. When a randomized trial is not possible, they point out, other forms of evaluation can still yield robust answers about an intervention’s effectiveness.
“We think it is critical to begin a dialogue and to try to come to some better solutions about how we can fix systems, improve quality and safety, and show real outcomes that benefit patients and institutions,” concludes Auerbach. “It is crucial that we pursue solutions that do not blind us to potential new problems, squander scarce resources or delude us about the effectiveness of our efforts.”
UCSF is a leading university that advances health worldwide by conducting advanced biomedical research, educating graduate students in the life sciences and health professions, and providing complex patient care.