In our last blog post, Jennifer mentioned that many of our clients are implementing multiple simulations on a single topic to assess performance improvement over time. In this post, I’d like to discuss one of those multi-simulation projects. It is a three-component initiative to enhance emergency physicians’ ability to diagnose patients who present with undifferentiated dizziness.
A primary goal of this project is to increase the use of the HINTS (Head Impulse, Nystagmus, and Test of Skew) physical exam. By performing this exam, physicians can lessen their reliance on MRI and thereby reduce the chance of a delayed or missed diagnosis. To meet its goals, this project includes two simulations and a follow-up survey to measure any changes in job performance.
The first simulation begins with assessment and then transitions to personalized feedback and training. At the beginning of the simulation, physicians are presented with a patient and are challenged to treat her much as they would a real patient in the Emergency Department (ED). While selecting questions to ask or exams to perform, physicians receive outcomes but are not given any judgment-based feedback. That is, they receive only information about the effect of a treatment or the results of labs; they are not told whether a decision was wise. Why? Because providing judgment during assessment would bias our results.
However, after physicians make their final decision, the simulation does provide personalized feedback from an online mentor who reviews the effectiveness of each major decision that the physicians made while treating their patient. Then the simulation provides video-based instruction on how to perform and interpret the HINTS exam.
The second simulation assesses retention from the first simulation. The goal is to measure any changes in diagnostic accuracy, test utilization, and the use of the HINTS exam. Once again, physicians diagnose a patient who reports to the ED with dizziness. After observing their patient’s response to treatment, physicians are debriefed by an online mentor who provides personalized feedback on key decisions.
In the future, there are plans to send a follow-up survey several months after physicians complete the second simulation to measure how often they performed a HINTS exam while diagnosing real patients who present to the ED with dizziness.
Preliminary data from the two simulations are insightful. For example, diagnostic accuracy increased from 17% on the first simulation to 71% on the second simulation. Some of this may be attributable to a more difficult case on the first simulation. Test utilization also improved, with the share of physicians selecting the optimal tests rising from 37% to 86%. Lastly, the use of the HINTS exam improved from 88% to 100%. It will be very interesting to see new data emerge as this project continues.
What types of data are you collecting in your simulation? And how can you create a series of simulations to capture any changes in performance?
To learn more about measuring performance improvement, go to: http://www.mindtools.com/pages/article/kirkpatrick.htm.