I know it’s been quiet around these parts lately, but a recent article caught my attention this morning.

High Performance EMS posted “Does Response Time Matter?” and it got me thinking.

The author cites an example of a patient being “treated” by fellow citizens at an airport while waiting 20 minutes for an ambulance to arrive, then goes on to argue that we need to arrive quickly to save the public from themselves.  After 30 years of telling people to call 911 for anything and convincing them that “seconds count!”, what did we expect?  While I agree that a delayed response to certain patient presentations could result in an adverse outcome, the example exposes a glaring omission: the patient outcome.  Only the outcome lets us marry all the data from the response and answer the question in the author’s headline.

The short answer is no, response times don’t matter.  And no, I don’t have to pee.  I have data showing no correlation between quality of treatment, outcome, and response time.  From my perch here at the data hub of a quite busy EMS system, we have been trying to determine the quality of our EMS system, and we rarely look at response times.

Don’t get me wrong, we do look: our Department statistician collects, quantifies, qualifies, and reports to regulators the 90th percentile of all code 2 and code 3 calls to meet their requirements.  We report it, they receive it.  That document says nothing about the quality of care or the patient outcome, because we cannot guarantee a positive patient outcome, but we can measure when we left and when we arrived.  Imagine if we had to treat 90% of symptomatic asthmatics with oxygen within 5 minutes of arrival and document an improvement in condition.  Can your system guarantee that?  Why aren’t EMS systems measured by the quality of their care instead of the quality of their response?
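For readers who haven’t seen it computed, here is a minimal sketch of the kind of 90th-percentile response metric described above.  The call times and the nearest-rank method are illustrative assumptions, not our Department’s actual data or the wording of any specific regulation.

```python
import math

def fractile_response(times_min, fractile=0.90):
    """Return the response time at or below which `fractile`
    of calls fall, using the nearest-rank method."""
    ranked = sorted(times_min)
    rank = math.ceil(fractile * len(ranked))  # 1-indexed rank
    return ranked[rank - 1]

# Ten hypothetical code 3 responses, in minutes
responses = [4.2, 5.1, 6.0, 6.4, 7.0, 7.3, 7.9, 8.5, 9.1, 12.6]
print(fractile_response(responses))  # 9.1
```

Note what the number reported to the regulator contains: when we left and when we arrived, and nothing else.  Nothing in that calculation knows whether any of those ten patients improved.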

Apply this metric to any other industry and it fails.  Industry is measured by quality and efficiency, not the speed with which tasks are completed.  So long as we look at only one metric with any regularity, we will continue to shuffle ambulances 2 blocks at 5-minute intervals to meet an average, instead of realizing that leaving them still would bring the same outcome.

That’s where I come in.  My Medical Director and I, unhappy with the lack of actual patient care quality metrics, created our own in an effort to determine the quality of care being provided.  We learned very quickly that our ambulances do not respond in a vacuum.  Each patient gets a call taker, a dispatcher, a first response, an ambulance response, an assessment, and treatment, and some get transported.  Once at the hospital they receive a whole new level of care and review until they are finally sent home.  It is hard to argue that the time it took to get an ambulance from point A to point B has an impact on this outcome without any review of the call taker’s coding of the call, the dispatcher’s assignment of the ambulance, and everything through to the destination hospital’s capabilities and location.

We can all sit at the Pratt Street Ale House in Baltimore and discuss short times that had a bad outcome and long times that had a good outcome, but the worst part of all of this discussion is that so few systems measure anything more than response time.

If you consider response time your metric of success, you have already failed.  You have failed the patient who improves when you arrive “late” by discounting that response as a failure, while trading high-fives when a 2-minute response ends with a call to the Medical Examiner’s Office.

We all know the stories of companies staffing ghost cars near the end of the month to bring down the monthly response metric to meet guidelines.  It happens.  But I also wonder if that flood of ambulances to help more people had any other impact.

The complication in tracking outcomes is the relationship your agency has with local hospitals.  We may never have a seamless transfer of data, but what we can do is pull data from the PCR to determine whether the patient received the indicated treatments for the recorded chief complaint and observed complications.  By reviewing your policies and protocols, as well as your patient demographics, you can quickly spot your core performance indicators and design tools to track them.
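The PCR-review idea above can be sketched in a few lines.  The complaint-to-treatment map, field names, and sample charts here are invented for illustration; a real tool would be driven by your own protocols, not this toy table.

```python
# Hypothetical map of chief complaints to treatments indicated by
# protocol -- illustrative only, not a clinical reference.
INDICATED = {
    "asthma": {"albuterol", "oxygen"},
    "chest pain": {"aspirin", "12-lead ECG"},
}

def flag_missed_care(pcrs):
    """Return the ids of charts where no indicated treatment was
    documented for the recorded chief complaint."""
    flags = []
    for pcr in pcrs:
        required = INDICATED.get(pcr["complaint"], set())
        if required and not (required & set(pcr["treatments"])):
            flags.append(pcr["id"])
    return flags

charts = [
    {"id": 1, "complaint": "asthma", "treatments": ["albuterol"]},
    {"id": 2, "complaint": "chest pain", "treatments": ["oxygen"]},
]
print(flag_missed_care(charts))  # [2]
```

Even a crude screen like this measures something response times never can: whether the patient received the care the complaint called for.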

It may be nice to know that we make our 90th percentile in 8 of 10 districts on a regular basis, but what if those other 2 districts happen to have the highest rate of cardiac arrest survival to discharge?  Are they still a failure?

Widen your view to include more than how quickly you can put the ambulance in park.  This goes far beyond the lights-and-sirens System Status Management debate and speaks to the core of the reason we’re out there to begin with:

To make someone’s bad day better

Delays can hurt, but you won’t know whether they do until you look deeper into your system to find out if that is the case…or not.
