Twelve years ago, Dr. Anthony Weiss and I presented a paper to the Healthcare Division of the American Society for Quality. We made a call for quality improvement in behavioral health and argued that, in general, what passed for quality improvement was really about compliance. We were echoing a call for quality improvement made by the Institute of Medicine in its reports Crossing the Quality Chasm (2001) and Improving the Quality of Health Care for Mental and Substance-Use Conditions: Quality Chasm Series (2006). The latter report called for action from “clinicians, health care organizations, purchasers, health plans, quality oversight organizations, researchers, public policy makers, and others to ensure that individuals with mental and substance-use health conditions receive the care that they need to recover.” The report also observed that many studies demonstrate a gap between what is known to be effective and what is actually delivered, as well as a research gap between what is efficacious and what is effective. As Reese et al. (2014) point out, however, “continuous outcome feedback may be a viable means both to improve outcomes and to narrow the gap between research and practice.”
However, it does not appear that much has changed since then. Kilbourne et al. (2015) found that “the overall quality of mental health care has hardly improved since publication of these reports and, in some cases, has worsened over time.” To be sure, most organizations now have some sort of quality improvement plan in place, as called for by a myriad of stakeholders. Yet stakeholders, for the most part, are still measuring compliance. A typical quality audit looks for process measures; Patel et al. (2015), in their analysis of quality measures, found that seventy-two percent were process measures. Kilbourne et al. also stated, “Only a few studies have linked quality of care process measures to improvements in patient functioning and clinical outcomes, calling into question the validity of these measures.” The gap noted above, between what is known to be effective and what is actually delivered, is itself a process measure; we need to take it a step further and ask whether what is delivered achieves the desired results. As Funk et al. (2009) point out, failure to deliver quality services is essentially a violation of basic human rights. However, the same authors also note that “poor quality of care can be substantially redressed through concerted and systematic quality improvement strategies.”
At the same time, we see many missed opportunities for quality improvement; seclusion and restraint, for example. It is generally agreed that the use of these interventions must be reduced, if not eliminated, and their use lends itself perfectly to root cause analysis. Root cause analysis can assess whether the intervention was called for and what could have been done differently; it can inform training efforts and drive reductions in the use of seclusion and restraint. We recently participated in a quality audit of the use of seclusion and restraint in children’s programs. The audit checked whether the proper forms were filled out and communicated to all stakeholders in a timely manner. There was no attempt to examine whether the intervention was used appropriately, or whether any effort was being made to reduce its use.
The Centers for Medicare and Medicaid Services (CMS) announced a video series titled “Teach me clinical quality language.” While it sounded promising, it was simply about more process measures that could readily be gleaned from electronic health records. CMS also provides guides for quality measures in inpatient psychiatric hospitals; again, mostly process measures based on claims data. The Substance Abuse and Mental Health Services Administration announced quality measures to “help states and behavioral health clinics assess treatment and document performance.” Again, they were mostly process measures. For example, one measure looked at how many people with schizophrenia were prescribed and remained on antipsychotics, with nothing whatsoever on how antipsychotics were combined with other treatment modalities or what progress toward recovery was made by those prescribed them.
We must pay more attention to Donabedian’s (2005) model for improvement. He identifies three components of improvement: structure, process, and outcomes. Quality improvement based on outcomes is primarily what is missing in our current system. The ACT Academy (2021) pointed out that process measures “reflect the way your systems and processes work to deliver the desired outcome.” In other words, process measures are not a bad place to start, but we still need to measure whether they are actually producing the outcomes we are looking for. The ACT Academy also pointed out, “According to Donabedian, outcome measures remain the ‘ultimate validators’ of the effectiveness and quality of healthcare.” The ACT Academy explains further:
It is important to have both process and outcome measures as they connect the theory of change to your expected outcomes. If you measure just outcomes, you cannot be sure the changes actually occurred in practice and therefore cannot link the improvements to outcomes. If you measure just process, you cannot be sure if the outcomes have changed, and the aim(s) achieved and therefore there is the risk that the process improved but the outcomes did not.
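To make the distinction concrete, here is a minimal sketch, in Python, of how both kinds of measures might be computed from the same episode-of-care records. The field names, the toy data, and the 25 percent improvement threshold are illustrative assumptions, not drawn from any of the frameworks cited here.

```python
# Hypothetical sketch: a process measure vs. an outcome measure computed
# from the same records. Field names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class Episode:
    treatment_delivered: bool   # process: was the indicated care provided?
    intake_score: float         # outcome: symptom severity at intake
    discharge_score: float      # outcome: symptom severity at discharge

episodes = [
    Episode(True, 28.0, 14.0),
    Episode(True, 30.0, 29.0),
    Episode(False, 22.0, 21.0),
]

# Process measure: proportion of episodes where the indicated care was delivered.
process_rate = sum(e.treatment_delivered for e in episodes) / len(episodes)

# Outcome measure: proportion of episodes with clinically meaningful improvement
# (here, an assumed 25% reduction in symptom score).
improved = sum(
    (e.intake_score - e.discharge_score) / e.intake_score >= 0.25
    for e in episodes
)
outcome_rate = improved / len(episodes)

print(f"Process measure (care delivered): {process_rate:.0%}")
print(f"Outcome measure (meaningful improvement): {outcome_rate:.0%}")
```

In this toy example the process measure looks healthy while the outcome measure does not, which is precisely the blind spot the ACT Academy and Donabedian warn against: the process can improve while the outcomes do not.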
Are there any attempts to measure outcomes? We see annual reports on satisfaction with behavioral health services. Year after year, individuals report that they like their clinician but are not satisfied with their outcomes. Such consistent results suggest an obvious area for quality improvement, yet we see no effort to dig deeper into the meaning of these survey results. Many states, as well as Canada, use the Adult Needs and Strengths Assessment (ANSA), which could readily lend itself to outcome measurement. However, when we see the same narrative copied from year to year, we have to question the validity, if not the reliability, of this instrument.
The Substance Abuse and Mental Health Services Administration (SAMHSA), building on the National Quality Strategy, created the Behavioral Health Quality Framework (2021). It too calls for “a quality measurement framework that can be used to guide and hold entities jointly accountable for improving care access and outcomes.” The framework’s developers identified 1,400 different quality measures in use by federal programs. However, they also found that “those focused on BH care, rely heavily on metrics and non-standardized quality measures, limiting use for benchmarking and value-based payment models,” and that “Current BH quality reporting efforts are burdensome and limit resources for improving and measuring aspects of BH care most meaningful to different levels of the delivery system.”
It seems that we are not alone in recognizing the need for a renewed call for quality improvement in behavioral health.
Patel et al. (2015) suggest that the focus on process measures persists because most “quality” measures are based on administrative claims, which are easy to gather and less burdensome to report. These process measures, however, do not capture the clinical outcomes necessary for true quality improvement; Donabedian’s model calls for process measures but also for outcomes. Australia has mandated the use of standardized outcome measures since 2000. Is gathering outcomes more difficult than gathering process measures? Yes, but it can be done, and it must be done for true quality improvement.
Thomas Grinley, MBA, CMQ/OE, is Program Planning and Review Specialist at the Bureau of Program Quality, Health Services Assessment Unit, New Hampshire Department of Health and Human Services. The author can provide references upon request: Thomas.Grinley@dhhs.nh.gov.