
Operations, Satisfaction and Logistics as Clinical Quality Measures

Written by Matt Doogan | Aug 27, 2024 5:46:23 PM

This is a guest blog from Dr. Steve Tierney, SCF's Senior Medical Director of Quality Improvement.

 

We have all seen the lists of the ills plaguing modern health care: all the categories of unmet need, from housing to social contact and food insecurity. We have added these to the lists of things previously identified, like mood disorders, substance and tobacco use, screen time, exercise and so on. The quality measures for how well these things can be charted during health care operations are almost too numerous to count. Of course, every category is stridently defended as the “most important” item that must at all costs be diligently collected, because these things “must be addressed.”

All these things are important to the lives of every person who interacts with a health care setting. Their collection can all be measured, and many have standardized sets of questions to elicit details. These, or similar, screening tools have been baked into many quality review organizations’ lists of must-perform items. But what’s the goal, exactly?

If you review the chart, all these things can be quantified as performed or not. But how does that relate to the lives of the people these charts refer to? Is the goal extremely complete charting, or is it more about how many lives were changed? Does the act of asking, and then collecting the answer, fix anything?

I was reviewing one of these exhaustive lists recently and was reminded of the movie “A Christmas Story”. I remember myself at a similar age, almost exploding at the thought of finally getting to sit on the department store Santa’s lap and detail all the things I had wished and dreamed about over the past year. The breathless anticipation of finally having the opportunity to express all of my unmet hopes and desires to someone who I believed could “fix it”. How sad I was when my older brother finally told me, at age 6, that Santa was not real, and that the people who really paid attention to these things were mom and dad.

What was the purpose of the department store Santa, then? Was it just to have millions of children worldwide sit down and, for a moment, allow these things to be asked for, with absolutely no intention of ever delivering? I was struck by how similar the health care screening process was. We can screen you for depression or substance use and refer you, but the referral provider may not be able to see you for six or more months, and in some cases, ever. We can ask about employment or education, but since most of the programs to address these things are either far over capacity or barely existent, are we just the newest version of the department store Santa? Or should we just be satisfied that we asked, and charted it, but that delivering is too much to consider?

Today’s modern consumer has different expectations. When we go on to Amazon, Netflix or some other similar platform, how would consumers react to being allowed to search for what they want but never receive it? Is simply documenting that “they were asked” enough? Why is measuring “asking” considered clinical quality at all if nothing happens? What do we want clinically trained staff to do each day? Call it good because they asked?

When we (SCF) considered “quality,” we had to decide what the measure was. Was it just asking, or delivering? Years ago we looked at how our organization was designed. We had created lists of what to ask, but aside from basic medical care we referred everything else out, and we worked hard on creating the workflows for the questions.

We began to think of not the asking, but the doing, as another way to quantify clinical quality. We realized that unless we brought the people we had been referring closer to those who could actually do something about their needs, we were no more effective than the department store Santas were.

We started to shift away from asking and referring, hoping that this would create what could now be called an Integrated Adaptive Environment. As much as possible for Behavioral Health, Specialized Pharmacy consulting, Health Education and Dietary advice, reproductive care, and Social and Community Services, we moved these staff inside legacy medical practices in settings like OB/GYN, Primary Care, and Pediatrics. The strategy was not only to create rapid access to care, but also, once staff were in place, to allow the encounter to happen more organically, putting the specially skilled staff we used to refer out to directly in the clinic setting and available in real time. The measure shifted to the number of special-skills staff used by each provider per encounter. Once any issue was identified, it was immediately placed with a special-skills role that could directly address it. We moved to much more doing and much less asking.

This of course had limits. If a special architectural footprint was required, such as for Audiology or Optometry, then the referral and transfer still needed to occur, but the measure still focused on ‘seen’ rather than ‘referred.’ Where integration was logistically not feasible given the small number of available referral providers (for example, Cardiology, where both clinic footprint and staff size were limited), we had to keep the legacy refer-and-send approach. But as much as possible, Midwives, Pharmacists, Behavioral Health Providers, and Community Resource Specialists were fully integrated, and workflows were adapted to flex to most demands in real time. Hence the “Integrated Adaptive Model”: it was able to adapt in real time and was integrated into usual care. The focus on asking was vastly reduced and instead shifted to what was accommodated in real time. This did mean we had to integrate previously separate departments and locations and adapt new workspace to allow this wider range of skill sets to coexist. It also meant we had to rethink training and orientation for staff to maximize these new opportunities. But the savings from reduced redundancy, waits and delays, waste, and staff turnover more than offset the costs.

It became clear that if we allowed people to express what frustrated them in real time and designed (as much as allowable) for a real-time response, large-scale, population-based standardized inventories became waste that added value only on rare occasions. Over time, our customers began to recognize this new capacity, and rather than needing to be prompted with a questionnaire they would clearly state the “I would like to see the…” request that they knew was now possible.

As new measures, we began to count the total seen per special-skills role as a function of the legacy clinic design roles, the total demand for special-skills roles (seen per day per special-skills and primary-skills role), and the total diversion to acute care settings like the hospital and ER.
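As a rough illustration only, the sketch below shows how counts like these might be tallied from encounter records. The record structure, field names, and role labels are hypothetical, invented for the example rather than drawn from SCF’s actual systems.

```python
from collections import Counter

# Hypothetical encounter records; field names and role labels are invented
# for illustration and are not SCF's actual data model.
encounters = [
    {"provider": "PCP-01", "special_roles": ["behavioral_health", "pharmacist"], "diverted_to_acute": False},
    {"provider": "PCP-01", "special_roles": [], "diverted_to_acute": True},
    {"provider": "PCP-02", "special_roles": ["community_resource_specialist"], "diverted_to_acute": False},
]

role_demand = Counter()   # times each special-skills role was actually seen
diversions = 0            # encounters that ended in a hospital/ER diversion

for enc in encounters:
    role_demand.update(enc["special_roles"])
    diversions += enc["diverted_to_acute"]

total = len(encounters)
print({role: seen / total for role, seen in role_demand.items()})  # seen per encounter, by role
print(diversions / total)                                          # acute care diversion rate
```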

Over time, other surrogate measures were selected, like the total screened for cancer, the proportion controlled for chronic conditions, next available encounter, employee and consumer satisfaction, and clinical provider turnover.

What we found was that the more the clinical workforce believed the majority of needs were being met for the people they interacted with, the higher their satisfaction rates were, and similarly, the higher our customer satisfaction rates were. If satisfaction was very high, next available appointment was low, acute care diversion was low, and basic population health outcome performance was high, we were “getting it done,” and we shifted to using customer and employee satisfaction as surrogates. When we failed to deliver, satisfaction in both the workforce and consumers dropped and turnover went up. The surrogate metric became: if people felt well supported as staff and well cared for as customers, we could be confident we “got it done.”
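As a hedged illustration of how these surrogates could be rolled into a single “are we getting it done” check, here is a minimal sketch; the thresholds and field names are invented placeholders, not SCF’s real targets.

```python
# A rough sketch of combining the surrogate measures into one check.
# Thresholds and field names below are invented placeholders.
def getting_it_done(metrics: dict) -> bool:
    """True when the surrogates suggest most needs are actually being met."""
    return (
        metrics["customer_satisfaction"] >= 0.90     # satisfaction very high
        and metrics["employee_satisfaction"] >= 0.90
        and metrics["days_to_next_available"] <= 1   # next available appointment low
        and metrics["acute_diversion_rate"] <= 0.05  # acute care diversion low
        and metrics["population_health_score"] >= 0.80
    )

print(getting_it_done({
    "customer_satisfaction": 0.93,
    "employee_satisfaction": 0.91,
    "days_to_next_available": 1,
    "acute_diversion_rate": 0.03,
    "population_health_score": 0.85,
}))  # True
```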

Our root lesson was to trust the workforce and customers to teach you how effective your organization is at getting it done, and to spend less time depending upon charting as the way you evaluate the system’s performance. Much like Amazon or Netflix, if you fail, everyone will vote with their feet.